v3: https://lore.kernel.org/linux-mm/aIQnvFTkQGieHfEh@hyeyoo/

To x86 folks: It's not clear whether this should go through the MM tree
or the x86 tree as it changes both. We could send it to the MM tree with
Acks from the x86 folks, or we could send it through the x86 tree
instead. What do you think?

To MM maintainers: I'll add include/linux/pgalloc.h to "MEMORY
MANAGEMENT - CORE" in the follow-up series, if there's no objection.

v3 -> v4:
- Updated the subject line to emphasize that this is a bug fix rather
  than just an improvement.
  (was: a more robust approach to sync top level kernel page tables)
- Added include/linux/pgalloc.h and moved p*d_populate_kernel() to the
  file (fixed sparc64 build error).
- Added Fixes: tags to patch 1 and 2 to clarify which -stable versions
  they should be backported to (Andrew).
- Dropped patch 4 and 5 because they don't fix bugs but are
  improvements. They are planned as follow-ups (Andrew).
- Rebased onto the latest mm-hotfixes-unstable (f1f0068165a4), but also
  applies to the latest mm-unstable (c2144e09b922)

This patch series includes only the minimal changes necessary for
backporting the fix to -stable. Planned follow-up patches:
- treewide: include linux/pgalloc.h instead of asm/pgalloc.h in common
  code
- MAINTAINERS: add include/linux/pgalloc.h to MM CORE
- x86/mm/64: convert p*d_populate{,_init} to _kernel variants
- x86/mm/64: drop unnecessary calls to sync_global_pgds() and fold it
  into its sole user

# The problem: It is easy to miss/overlook page table synchronization

Hi all,

During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount of
persistent memory:

  BUG: unable to handle page fault for address: ffffe70000000034
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0
  Oops: 0002 [#1] SMP NOPTI
  RIP: 0010:__init_single_page+0x9/0x6d
  Call Trace:
   <TASK>
   __init_zone_device_page+0x17/0x5d
   memmap_init_zone_device+0x154/0x1bb
   pagemap_range+0x2e0/0x40f
   memremap_pages+0x10b/0x2f0
   devm_memremap_pages+0x1e/0x60
   dev_dax_probe+0xce/0x2ec [device_dax]
   dax_bus_probe+0x6d/0xc9
   [... snip ...]
   </TASK>

It turns out that the kernel panics while initializing vmemmap (the
struct page array) when the vmemmap region spans two PGD entries,
because the new PGD entry is only installed in init_mm.pgd, but not in
the page tables of other tasks.

And looking at __populate_section_memmap():

  if (vmemmap_can_optimize(altmap, pgmap))
          // does not sync top level page tables
          r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
  else
          // syncs top level page tables on x86
          r = vmemmap_populate(start, end, nid, altmap);

In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top level page table (see commit 9b861528a801
("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
changes")) so that all tasks in the system can see the new vmemmap area.

However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.
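[Editor's note: for reference, the normal path's sync looks roughly
like this, condensed from x86's vmemmap_populate() in
arch/x86/mm/init_64.c; the basepages/hugepages selection and error
handling are elided here:]

  int __meminit vmemmap_populate(unsigned long start, unsigned long end,
                                 int node, struct vmem_altmap *altmap)
  {
          /* basepages vs. hugepages selection elided for brevity */
          int err = vmemmap_populate_basepages(start, end, node, NULL);

          if (!err)
                  /* propagate new top-level entries to all page tables */
                  sync_global_pgds(start, end - 1);
          return err;
  }

The optimized path never reaches sync_global_pgds(), which is exactly
the hole described above.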
We're not the first party to encounter a crash caused by unsynchronized
top level page tables: earlier this year, Gwan-gyeong Mun attempted to
address the issue [1] [2] after hitting a kernel panic when x86 code
accessed the vmemmap area before the corresponding top-level entries
were synced. At that time, the issue was believed to be triggered only
when struct page was enlarged for debugging purposes, and the patch did
not get further updates.

It turns out that the current approach of relying on each arch to
handle the page table sync manually is fragile because 1) it's easy to
forget to sync the top level page table, and 2) it's also easy to
overlook that the kernel should not access the vmemmap and direct
mapping areas before the sync.

# The solution: Make page table sync code more robust and harder to miss

To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion of the page
tables and allowing each architecture to explicitly perform
synchronization when installing top-level entries. With this approach,
we no longer need to worry about missing the sync step, reducing the
risk of future regressions.

The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
vmalloc and ioremap to synchronize page tables.

pgd_populate_kernel() looks like this:

  static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
                                         p4d_t *p4d)
  {
          pgd_populate(&init_mm, pgd, p4d);
          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
                  arch_sync_kernel_mappings(addr, addr);
  }

It is worth noting that vmalloc() and apply_to_page_range() carefully
synchronize page tables by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by this
patch series.

This patch series was hugely inspired by Dave Hansen's suggestion and
hence added Suggested-by: Dave Hansen.

Cc stable because the lack of this series opens the door to
intermittent boot failures.

[1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
[2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
[3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
[4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com

Harry Yoo (3):
  mm: move page table sync declarations to linux/pgtable.h
  mm: introduce and use {pgd,p4d}_populate_kernel()
  x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and
    arch_sync_kernel_mappings()

 arch/x86/include/asm/pgtable_64_types.h |  3 +++
 arch/x86/mm/init_64.c                   |  5 +++++
 include/linux/pgalloc.h                 | 24 ++++++++++++++++++++++++
 include/linux/pgtable.h                 | 16 ++++++++++++++++
 include/linux/vmalloc.h                 | 16 ----------------
 mm/kasan/init.c                         | 12 ++++++------
 mm/percpu.c                             |  6 +++---
 mm/sparse-vmemmap.c                     |  6 +++---
 8 files changed, 60 insertions(+), 28 deletions(-)
 create mode 100644 include/linux/pgalloc.h

-- 
2.43.0
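[Editor's note: going by the subject of patch 3, the x86-64 hookup
presumably amounts to defining ARCH_PAGE_TABLE_SYNC_MASK and routing
arch_sync_kernel_mappings() to the existing sync_global_pgds(). A
minimal sketch follows; the exact mask expression and placement are
assumptions, not quoted from the patch:]

  /* arch/x86/include/asm/pgtable_64_types.h (assumed) */
  #define ARCH_PAGE_TABLE_SYNC_MASK \
          (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)

  /* arch/x86/mm/init_64.c (assumed) */
  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
  {
          sync_global_pgds(start, end);
  }

The two-way mask reflects x86-64 paging: with 5-level paging the PGD is
the top level, while with 4-level paging the p4d is folded into the
pgd, so installing a "p4d" entry is really a top-level modification.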
On Mon, Aug 11, 2025 at 02:34:17PM +0900, Harry Yoo wrote:
> # The solution: Make page table sync code more robust and harder to miss
>
> To address this, Dave Hansen suggested [3] [4] introducing
> {pgd,p4d}_populate_kernel() for updating the kernel portion of the page
> tables and allowing each architecture to explicitly perform
> synchronization when installing top-level entries. With this approach,
> we no longer need to worry about missing the sync step, reducing the
> risk of future regressions.

Looks sane:

Acked-by: Kiryl Shutsemau <kas@kernel.org>

> The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
> PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
> vmalloc and ioremap to synchronize page tables.
>
> pgd_populate_kernel() looks like this:
>
>   static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
>                                          p4d_t *p4d)
>   {
>           pgd_populate(&init_mm, pgd, p4d);
>           if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
>                   arch_sync_kernel_mappings(addr, addr);
>   }
>
> It is worth noting that vmalloc() and apply_to_page_range() carefully
> synchronize page tables by calling p*d_alloc_track() and
> arch_sync_kernel_mappings(), and thus they are not affected by this
> patch series.

Well, except ARCH_PAGE_TABLE_SYNC_MASK is not defined on x86-64 until
now. So I think it is affected.

-- 
Kiryl Shutsemau / Kirill A. Shutemov
On Mon, Aug 11, 2025 at 07:46:13AM +0100, Kiryl Shutsemau wrote:
> On Mon, Aug 11, 2025 at 02:34:17PM +0900, Harry Yoo wrote:
> > # The solution: Make page table sync code more robust and harder to miss
> >
> > To address this, Dave Hansen suggested [3] [4] introducing
> > {pgd,p4d}_populate_kernel() for updating the kernel portion of the page
> > tables and allowing each architecture to explicitly perform
> > synchronization when installing top-level entries. With this approach,
> > we no longer need to worry about missing the sync step, reducing the
> > risk of future regressions.
>
> Looks sane:
>
> Acked-by: Kiryl Shutsemau <kas@kernel.org>

Thanks a lot, Kiryl!

> > The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
> > PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
> > vmalloc and ioremap to synchronize page tables.
> >
> > pgd_populate_kernel() looks like this:
> >
> >   static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> >                                          p4d_t *p4d)
> >   {
> >           pgd_populate(&init_mm, pgd, p4d);
> >           if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
> >                   arch_sync_kernel_mappings(addr, addr);
> >   }
> >
> > It is worth noting that vmalloc() and apply_to_page_range() carefully
> > synchronize page tables by calling p*d_alloc_track() and
> > arch_sync_kernel_mappings(), and thus they are not affected by this
> > patch series.
>
> Well, except ARCH_PAGE_TABLE_SYNC_MASK is not defined on x86-64 until
> now. So I think it is affected.

Oh, you are right. Although they don't use the p*d_populate_kernel()
API, changing ARCH_PAGE_TABLE_SYNC_MASK affects their behavior.

PGD entries for vmalloc are always pre-populated, so it shouldn't be
affected much. But apply_to_page_range() is, though I'm not aware of
any bug caused by it spanning multiple PGD ranges and missing the page
table sync.

By the way, I think it may be better in the future to unify them under
the same logic for synchronizing kernel mappings. With this series,
there are two ways (a sketch of the second follows below):
1. p*d_populate_kernel()
2. p*d_alloc_track() + arch_sync_kernel_mappings()

-- 
Cheers,
Harry / Hyeonggon

> -- 
> Kiryl Shutsemau / Kirill A. Shutemov
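[Editor's note: to illustrate the second way Harry mentions, here is a
condensed sketch of the p*d_alloc_track() pattern used by vmalloc and
apply_to_page_range(). example_populate() is a hypothetical caller; the
track/sync helpers are the existing mm interfaces:]

  static int example_populate(unsigned long addr, unsigned long end)
  {
          pgtbl_mod_mask mask = 0;
          pgd_t *pgd = pgd_offset_k(addr);
          /* records PGTBL_PGD_MODIFIED in mask if a new p4d is installed */
          p4d_t *p4d = p4d_alloc_track(&init_mm, pgd, addr, &mask);

          if (!p4d)
                  return -ENOMEM;

          /* ... allocate and populate the lower levels, updating mask ... */

          /* one sync at the end, only if a tracked level was modified */
          if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
                  arch_sync_kernel_mappings(addr, end);
          return 0;
  }

Here the sync responsibility sits with the caller and is batched, while
p*d_populate_kernel() syncs at the point each top-level entry is
installed, which is what makes it harder to miss.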