v2: https://lore.kernel.org/linux-mm/20250720234203.9126-1-harry.yoo@oracle.com/

v2 -> v3:
- Rebased onto mm-hotfixes-unstable (e89f90f1a588 ("sprintf.h requires
  stdarg.h")).
- Fixed kernel test robot reports.
- Moved the arch-independent ARCH_PAGE_TABLE_SYNC_MASK and
  arch_sync_kernel_mappings() declarations to <linux/pgtable.h>, and
  moved the x86-64 version of ARCH_PAGE_TABLE_SYNC_MASK from
  <asm/pgalloc.h> to arch/x86/include/asm/pgtable_64_types.h. Now, any
  code that wants to use ARCH_PAGE_TABLE_SYNC_MASK and
  arch_sync_kernel_mappings() includes <linux/pgtable.h>.
- Dropped Cc: stable from patches 4-5, as technically they are not
  fixing bugs.

# The problem: It is easy to miss/overlook page table synchronization

Hi all,

During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount of
persistent memory:

  BUG: unable to handle page fault for address: ffffe70000000034
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0
  Oops: 0002 [#1] SMP NOPTI
  RIP: 0010:__init_single_page+0x9/0x6d
  Call Trace:
   <TASK>
   __init_zone_device_page+0x17/0x5d
   memmap_init_zone_device+0x154/0x1bb
   pagemap_range+0x2e0/0x40f
   memremap_pages+0x10b/0x2f0
   devm_memremap_pages+0x1e/0x60
   dev_dax_probe+0xce/0x2ec [device_dax]
   dax_bus_probe+0x6d/0xc9
   [... snip ...]
   </TASK>

It turns out that the kernel panics while initializing the vmemmap
(struct page array) when the vmemmap region spans two PGD entries,
because the new PGD entry is only installed in init_mm.pgd, but not in
the page tables of other tasks.

And looking at __populate_section_memmap():

  if (vmemmap_can_optimize(altmap, pgmap))
          // does not sync top level page tables
          r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
  else
          // syncs top level page tables on x86
          r = vmemmap_populate(start, end, nid, altmap);

In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top-level page table (see commit 9b861528a801
("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
changes")) so that all tasks in the system can see the new vmemmap area.

However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of the top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.
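To make the missing step concrete, here is a simplified sketch of the
synchronization x86-64 performs, loosely modeled on sync_global_pgds()
in arch/x86/mm/init_64.c (locking and the 4- vs 5-level paging
distinction are elided, and the function name here is illustrative):

  /*
   * Illustrative sketch only: copy each newly installed top-level
   * kernel entry from init_mm's page table into every other PGD in
   * the system (x86-64 keeps them all on pgd_list), so that every
   * task sees the new kernel mapping.
   */
  static void sync_global_pgds_sketch(unsigned long start, unsigned long end)
  {
          unsigned long addr;

          for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
                  pgd_t *pgd_ref = pgd_offset_k(addr);    /* init_mm's entry */
                  struct page *page;

                  if (pgd_none(*pgd_ref))
                          continue;

                  list_for_each_entry(page, &pgd_list, lru) {
                          pgd_t *pgd = (pgd_t *)page_address(page) +
                                       pgd_index(addr);

                          if (pgd_none(*pgd))
                                  set_pgd(pgd, *pgd_ref);
                  }
          }
  }

The optimized vmemmap path never runs anything like this, so a task
whose PGD was populated before the new entry was installed faults as
soon as it touches the new vmemmap range; that is exactly the oops
above.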
We're not the first party to encounter a crash caused by unsynchronized
top-level page tables: earlier this year, Gwan-gyeong Mun attempted to
address the issue [1] [2] after hitting a kernel panic when x86 code
accessed the vmemmap area before the corresponding top-level entries
were synced. At the time, the issue was believed to be triggered only
when struct page was enlarged for debugging purposes, and the patch did
not get further updates.

It turns out that the current approach of relying on each arch to handle
the page table sync manually is fragile because 1) it's easy to forget
to sync the top-level page table, and 2) it's also easy to overlook that
the kernel should not access the vmemmap and direct mapping areas before
the sync.

# The solution: Make the page table sync code more robust

To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion of the page
tables, allowing each architecture to explicitly perform synchronization
when installing top-level entries.

With this approach, we no longer need to worry about missing the sync
step, reducing the risk of future regressions.

The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
vmalloc and ioremap to synchronize page tables.

pgd_populate_kernel() looks like this:

  #define pgd_populate_kernel(addr, pgd, p4d)                     \
  do {                                                            \
          pgd_populate(&init_mm, pgd, p4d);                       \
          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)     \
                  arch_sync_kernel_mappings(addr, addr);          \
  } while (0)

It is worth noting that vmalloc() and apply_to_range() carefully
synchronize page tables by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by this
patch series.

This patch series was hugely inspired by Dave Hansen's suggestion and
hence adds Suggested-by: Dave Hansen.

Cc'ing stable because the lack of this series opens the door to
intermittent boot failures.

[1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
[2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
[3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
[4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com

Harry Yoo (5):
  mm: move page table sync declarations to linux/pgtable.h
  mm: introduce and use {pgd,p4d}_populate_kernel()
  x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and
    arch_sync_kernel_mappings()
  x86/mm/64: convert p*d_populate{,_init} to _kernel variants
  x86/mm: drop unnecessary calls to sync_global_pgds() and fold into
    its sole user

 arch/x86/include/asm/pgalloc.h          | 20 +++++++++++++
 arch/x86/include/asm/pgtable_64_types.h |  3 ++
 arch/x86/mm/init_64.c                   | 37 ++++++++++++++-----------
 arch/x86/mm/kasan_init_64.c             |  8 +++---
 include/asm-generic/pgalloc.h           | 16 +++++++++++
 include/linux/pgtable.h                 | 17 ++++++++++++
 include/linux/vmalloc.h                 | 16 ----------
 mm/kasan/init.c                         | 10 +++---
 mm/percpu.c                             |  4 +--
 mm/sparse-vmemmap.c                     |  4 +--
 10 files changed, 90 insertions(+), 45 deletions(-)

-- 
2.43.0
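To illustrate how callers pick up the new helper described in the cover
letter above, here is a sketch in the spirit of the conversion patch 2
makes in mm/sparse-vmemmap.c (paraphrased, not the literal diff): the
p*d_populate(&init_mm, ...) call is simply replaced with the _kernel
variant, which performs the sync when the architecture asks for it.

  /* Illustrative: how a vmemmap populate helper picks up the sync. */
  p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr,
                                         int node)
  {
          p4d_t *p4d = p4d_offset(pgd, addr);

          if (p4d_none(*p4d)) {
                  void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);

                  if (!p)
                          return NULL;
                  /* was: p4d_populate(&init_mm, p4d, p); */
                  p4d_populate_kernel(addr, p4d, p);
          }
          return p4d;
  }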
On Fri, 25 Jul 2025 10:21:01 +0900 Harry Yoo <harry.yoo@oracle.com> wrote:

> During our internal testing, we started observing intermittent boot
> failures when the machine uses 4-level paging and has a large amount
> of persistent memory:
>
>   BUG: unable to handle page fault for address: ffffe70000000034
>   #PF: supervisor write access in kernel mode
>   #PF: error_code(0x0002) - not-present page
>   PGD 0 P4D 0
>   Oops: 0002 [#1] SMP NOPTI
>   RIP: 0010:__init_single_page+0x9/0x6d
>   Call Trace:
>    <TASK>
>    __init_zone_device_page+0x17/0x5d
>    memmap_init_zone_device+0x154/0x1bb
>    pagemap_range+0x2e0/0x40f
>    memremap_pages+0x10b/0x2f0
>    devm_memremap_pages+0x1e/0x60
>    dev_dax_probe+0xce/0x2ec [device_dax]
>    dax_bus_probe+0x6d/0xc9
>    [... snip ...]
>    </TASK>
>
> ...
>
>  arch/x86/include/asm/pgalloc.h          | 20 +++++++++++++
>  arch/x86/include/asm/pgtable_64_types.h |  3 ++
>  arch/x86/mm/init_64.c                   | 37 ++++++++++++++-----------
>  arch/x86/mm/kasan_init_64.c             |  8 +++---
>  include/asm-generic/pgalloc.h           | 16 +++++++++++
>  include/linux/pgtable.h                 | 17 ++++++++++++
>  include/linux/vmalloc.h                 | 16 ----------
>  mm/kasan/init.c                         | 10 +++---
>  mm/percpu.c                             |  4 +--
>  mm/sparse-vmemmap.c                     |  4 +--
>  10 files changed, 90 insertions(+), 45 deletions(-)

Are any other architectures likely to be affected by this flaw?

It's late for 6.16. I'd propose that this series target 6.17 and once
merged, the cc:stable tags will take care of 6.16.x and earlier.

It's regrettable that the series contains some patches which are
cc:stable and some which are not. Because 6.16.x and earlier will end
up getting only some of these patches, we're backporting an untested
patch combination. It would be better to prepare all this as two
series: one for backporting and the other not.

It's awkward that some of the cc:stable patches have a Fixes: and
others do not. Exactly which kernel version(s) are we asking the
-stable maintainers to merge these patches into?

This looks somewhat more like an x86 series than an MM one. I can take
it via mm.git with suitable x86 acks, or drop it from mm.git if it goes
into the x86 tree. We can discuss that.

For now, I'll add this to mm.git's mm-new branch. There it will get a
bit of exposure, but it will be withheld from linux-next. Once 6.17-rc1
is released I can move this into mm.git's mm-unstable branch to expose
it to linux-next testers.

Thanks. I'll suppress the usual added-to-mm emails and save a few
electrons.
On Fri, Jul 25, 2025 at 04:51:01PM -0700, Andrew Morton wrote:
> On Fri, 25 Jul 2025 10:21:01 +0900 Harry Yoo <harry.yoo@oracle.com> wrote:
>
> > During our internal testing, we started observing intermittent boot
> > failures when the machine uses 4-level paging and has a large amount
> > of persistent memory:
> >
> >   BUG: unable to handle page fault for address: ffffe70000000034
> >   #PF: supervisor write access in kernel mode
> >   #PF: error_code(0x0002) - not-present page
> >   PGD 0 P4D 0
> >   Oops: 0002 [#1] SMP NOPTI
> >   RIP: 0010:__init_single_page+0x9/0x6d
> >   Call Trace:
> >    <TASK>
> >    __init_zone_device_page+0x17/0x5d
> >    memmap_init_zone_device+0x154/0x1bb
> >    pagemap_range+0x2e0/0x40f
> >    memremap_pages+0x10b/0x2f0
> >    devm_memremap_pages+0x1e/0x60
> >    dev_dax_probe+0xce/0x2ec [device_dax]
> >    dax_bus_probe+0x6d/0xc9
> >    [... snip ...]
> >    </TASK>
> >
> > ...
> >
> >  arch/x86/include/asm/pgalloc.h          | 20 +++++++++++++
> >  arch/x86/include/asm/pgtable_64_types.h |  3 ++
> >  arch/x86/mm/init_64.c                   | 37 ++++++++++++++-----------
> >  arch/x86/mm/kasan_init_64.c             |  8 +++---
> >  include/asm-generic/pgalloc.h           | 16 +++++++++++
> >  include/linux/pgtable.h                 | 17 ++++++++++++
> >  include/linux/vmalloc.h                 | 16 ----------
> >  mm/kasan/init.c                         | 10 +++---
> >  mm/percpu.c                             |  4 +--
> >  mm/sparse-vmemmap.c                     |  4 +--
> >  10 files changed, 90 insertions(+), 45 deletions(-)
>
> Are any other architectures likely to be affected by this flaw?

In theory, any architecture that does not share the kernel page tables
between tasks can be affected if it forgets to sync them properly. For
example, arm64 uses a single page table for the kernel address space
that is shared between tasks, so it should not be affected.

But I'm not aware of any other architectures that are _actually_ known
to have this flaw. Even on x86, it was quite hard to trigger without
hot-plugging a large amount of memory. If it turns out other
architectures are affected, they can be fixed later in the same way as
x86-64, roughly as sketched below.
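To make "the same way as x86-64" concrete, this is roughly the opt-in
an architecture provides (a sketch mirroring what patch 3 adds for
x86-64; treat the exact definitions below as illustrative rather than
the literal patch):

  /* In the arch's page table header (x86-64: pgtable_64_types.h):
   * declare which levels of kernel page table changes need syncing. */
  #define ARCH_PAGE_TABLE_SYNC_MASK \
          (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)

  /*
   * Called by {pgd,p4d}_populate_kernel() (and vmalloc/ioremap) after
   * kernel entries at the levels named in the mask were installed in
   * [start, end]; the arch propagates them to all other page tables.
   */
  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
  {
          sync_global_pgds(start, end);
  }

An architecture with a single, shared kernel page table simply leaves
ARCH_PAGE_TABLE_SYNC_MASK at its default of 0, and the sync calls
compile away.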
> It's late for 6.16. I'd propose that this series target 6.17 and once
> merged, the cc:stable tags will take care of 6.16.x and earlier.

Yes, it's quite late; that makes sense.

> It's regrettable that the series contains some patches which are
> cc:stable and some which are not. Because 6.16.x and earlier will end
> up getting only some of these patches, we're backporting an untested
> patch combination. It would be better to prepare all this as two
> series: one for backporting and the other not.

Yes, that makes sense. I'll post it as two series (one for backporting,
and the other not for backporting but as a follow-up) unless someone
speaks up and argues that it should be backported as a whole.

> It's awkward that some of the cc:stable patches have a Fixes: and
> others do not. Exactly which kernel version(s) are we asking the
> -stable maintainers to merge these patches into?

I thought that, technically, patches 1 and 2 are not fixing any bugs but
are prerequisites of patch 3. But I think you're right that this only
confuses the -stable maintainers. I'll add Fixes: tags (the same one as
patch 3) to patches 1 and 2 in future revisions.

> This looks somewhat more like an x86 series than an MM one. I can take
> it via mm.git with suitable x86 acks, or drop it from mm.git if it goes
> into the x86 tree. We can discuss that.

It touches both x86/mm and generic mm code, so I was unsure which tree
is the right one :) I don't have a strong opinion and am fine with
either. Let's wait to hear opinions from the x86/mm maintainers.

> For now, I'll add this to mm.git's mm-new branch. There it will get a
> bit of exposure, but it will be withheld from linux-next. Once 6.17-rc1
> is released I can move this into mm.git's mm-unstable branch to expose
> it to linux-next testers.
>
> Thanks. I'll suppress the usual added-to-mm emails and save a few
> electrons.

Yeah, the Cc list got quite long since it touches many files...

Thanks a lot, Andrew!

-- 
Cheers,
Harry / Hyeonggon