Andrew,
Can you please stick this between patch 3 (arm: introduce
arch_zone_limits_init()) and patch 4 (arm64: introduce
arch_zone_limits_init())?
From 35d016bbf5da7c08cc5c5547c85558fc50cb63aa Mon Sep 17 00:00:00 2001
From: Klara Modin <klarasmodin@gmail.com>
Date: Sat, 3 Jan 2026 20:40:09 +0200
Subject: [PATCH] arm: make initialization of zero page independent of the
memory map
Unlike most architectures, arm keeps a struct page pointer to the
empty_zero_page, and initializing it requires converting a virtual
address to a page, which makes it necessary to have the memory map
initialized before creating the empty_zero_page.
Make empty_zero_page a static array in BSS to decouple its
initialization from the initialization of the memory map.
This also aligns arm with the vast majority of architectures.
Signed-off-by: Klara Modin <klarasmodin@gmail.com>
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/arm/include/asm/pgtable.h | 4 ++--
arch/arm/mm/mmu.c | 10 +---------
arch/arm/mm/nommu.c | 10 +---------
3 files changed, 4 insertions(+), 20 deletions(-)
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 86378eec7757..6fa9acd6a7f5 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -15,8 +15,8 @@
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..
*/
-extern struct page *empty_zero_page;
-#define ZERO_PAGE(vaddr) (empty_zero_page)
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
#endif
#include <asm-generic/pgtable-nopud.h>
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 8bac96e205ac..518def8314e7 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -45,7 +45,7 @@ extern unsigned long __atags_pointer;
* empty_zero_page is a special page that is used for
* zero-initialized data and COW.
*/
-struct page *empty_zero_page;
+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
/*
@@ -1754,8 +1754,6 @@ static void __init early_fixmap_shutdown(void)
*/
void __init paging_init(const struct machine_desc *mdesc)
{
- void *zero_page;
-
#ifdef CONFIG_XIP_KERNEL
/* Store the kernel RW RAM region start/end in these variables */
kernel_sec_start = CONFIG_PHYS_OFFSET & SECTION_MASK;
@@ -1781,13 +1779,7 @@ void __init paging_init(const struct machine_desc *mdesc)
top_pmd = pmd_off_k(0xffff0000);
- /* allocate the zero page. */
- zero_page = early_alloc(PAGE_SIZE);
-
bootmem_init();
-
- empty_zero_page = virt_to_page(zero_page);
- __flush_dcache_folio(NULL, page_folio(empty_zero_page));
}
void __init early_mm_init(const struct machine_desc *mdesc)
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index d638cc87807e..7e42d8accec6 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -31,7 +31,7 @@ unsigned long vectors_base;
* empty_zero_page is a special page that is used for
* zero-initialized data and COW.
*/
-struct page *empty_zero_page;
+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
#ifdef CONFIG_ARM_MPU
@@ -156,18 +156,10 @@ void __init adjust_lowmem_bounds(void)
*/
void __init paging_init(const struct machine_desc *mdesc)
{
- void *zero_page;
-
early_trap_init((void *)vectors_base);
mpu_setup();
- /* allocate the zero page. */
- zero_page = (void *)memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
-
bootmem_init();
-
- empty_zero_page = virt_to_page(zero_page);
- flush_dcache_page(empty_zero_page);
}
/*
--
2.51.0
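For quick reference, a condensed before/after view of the MMU path, restating the hunks above (the nommu.c path is analogous, using memblock_alloc_or_panic() and flush_dcache_page()):

	/* before: the zero page had to wait for the memory map */
	zero_page = early_alloc(PAGE_SIZE);
	bootmem_init();
	empty_zero_page = virt_to_page(zero_page);
	__flush_dcache_folio(NULL, page_folio(empty_zero_page));

	/* after: a page-aligned BSS array; no struct page is needed to create it */
	unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
	#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))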
On Sun, 4 Jan 2026 14:01:40 +0200 Mike Rapoport <rppt@kernel.org> wrote:

> Can you please stick this between patch 3 (arm: introduce
> arch_zone_limits_init()) and patch 4 (arm64: introduce
> arch_zone_limits_init())?

Did, thanks. I made this a standalone patch rather than a squashable -fix.

> From 35d016bbf5da7c08cc5c5547c85558fc50cb63aa Mon Sep 17 00:00:00 2001
> From: Klara Modin <klarasmodin@gmail.com>
> Date: Sat, 3 Jan 2026 20:40:09 +0200
> Subject: [PATCH] arm: make initialization of zero page independent of the
> memory map
>
> Unlike most architectures, arm keeps a struct page pointer to the
> empty_zero_page, and initializing it requires converting a virtual
> address to a page, which makes it necessary to have the memory map
> initialized before creating the empty_zero_page.
>
> Make empty_zero_page a static array in BSS to decouple its
> initialization from the initialization of the memory map.
>
> This also aligns arm with the vast majority of architectures.

Russell, can you please update us on your concerns with this change?
On Sun, Jan 04, 2026 at 02:01:40PM +0200, Mike Rapoport wrote:
> From 35d016bbf5da7c08cc5c5547c85558fc50cb63aa Mon Sep 17 00:00:00 2001
> From: Klara Modin <klarasmodin@gmail.com>
> Date: Sat, 3 Jan 2026 20:40:09 +0200
> Subject: [PATCH] arm: make initialization of zero page independent of the
> memory map
>
> Unlike most architectures, arm keeps a struct page pointer to the
> empty_zero_page, and initializing it requires converting a virtual
> address to a page, which makes it necessary to have the memory map
> initialized before creating the empty_zero_page.
>
> Make empty_zero_page a static array in BSS to decouple its
> initialization from the initialization of the memory map.

I see you haven't considered _why_ ARM does this.

You are getting rid of the flush_dcache_page() call, which ensures
that the zeroed contents of the page are pushed out of the cache
into memory. This is necessary.

BSS is very similar. It's memset() during the kernel boot _after_
the caches are enabled. Without an explicit flush, nothing
guarantees that those writes will be visible to userspace.

To me, this seems like a bad idea, which will cause userspace to
break.

We need to call flush_dcache_page(), and _that_ requires a struct
page.

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!
On Sun, Jan 04, 2026 at 08:56:45PM +0000, Russell King (Oracle) wrote:
> On Sun, Jan 04, 2026 at 02:01:40PM +0200, Mike Rapoport wrote:
> > From 35d016bbf5da7c08cc5c5547c85558fc50cb63aa Mon Sep 17 00:00:00 2001
> > From: Klara Modin <klarasmodin@gmail.com>
> > Date: Sat, 3 Jan 2026 20:40:09 +0200
> > Subject: [PATCH] arm: make initialization of zero page independent of the
> > memory map
> >
> > Unlike most architectures, arm keeps a struct page pointer to the
> > empty_zero_page, and initializing it requires converting a virtual
> > address to a page, which makes it necessary to have the memory map
> > initialized before creating the empty_zero_page.
> >
> > Make empty_zero_page a static array in BSS to decouple its
> > initialization from the initialization of the memory map.
>
> I see you haven't considered _why_ ARM does this.
>
> You are getting rid of the flush_dcache_page() call, which ensures
> that the zeroed contents of the page are pushed out of the cache
> into memory. This is necessary.
>
> BSS is very similar. It's memset() during the kernel boot _after_
> the caches are enabled. Without an explicit flush, nothing
> guarantees that those writes will be visible to userspace.

There's a call to flush_cache_all() in paging_init()->devicemaps_init()
that will guarantee that those writes are flushed long before userspace
starts.

> To me, this seems like a bad idea, which will cause userspace to
> break.
>
> We need to call flush_dcache_page(), and _that_ requires a struct
> page.

Right now there's a __flush_dcache_folio() call that will break anyway
when folio is divorced from struct page.

--
Sincerely yours,
Mike.
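To make the ordering in the exchange above easier to follow, here is a rough sketch of the boot sequence being discussed (paraphrased, not quoted from arch/arm/mm/mmu.c; the placement of devicemaps_init() relative to bootmem_init() is an assumption based on the current function layout):

	/* BSS, including the new empty_zero_page[], is zeroed early in boot
	 * with the caches already enabled, so the zeroes may still sit dirty
	 * in the D-cache (Russell's concern). */
	void __init paging_init(const struct machine_desc *mdesc)
	{
		/* ... page table setup ... */
		devicemaps_init(mdesc);	/* contains the flush_cache_all() that
					 * Mike refers to, which writes the
					 * zeroed BSS back to memory */
		/* ... */
		bootmem_init();		/* memory map is built here; with the
					 * patch the zero page no longer
					 * depends on it */
	}
	/* Userspace starts long after paging_init() returns, so on Mike's
	 * argument the zero page contents are already in memory by then. */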