[PATCH v2 01/10] arm64: Move the zero page to rodata
Posted by Ard Biesheuvel 1 week, 6 days ago
From: Ard Biesheuvel <ardb@kernel.org>

The zero page should contain only zero bytes, and so mapping it
read-write is unnecessary. Combine it with reserved_pg_dir, which lives
in the read-only region of the kernel, and already serves a similar
purpose.
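
For context (not part of this change): the consumer-facing declaration is
untouched, and arch/arm64/include/asm/pgtable.h continues to expose the
symbol roughly as below, so only the definition moves into the linker
script.

/* Sketch of the existing declaration, unchanged by this patch */
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr)	phys_to_page(__pa_symbol(empty_zero_page))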

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/vmlinux.lds.S | 1 +
 arch/arm64/mm/mmu.c             | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ad6133b89e7a..b2a093f5b3fc 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -229,6 +229,7 @@ SECTIONS
 #endif
 
 	reserved_pg_dir = .;
+	empty_zero_page = .;
 	. += PAGE_SIZE;
 
 	swapper_pg_dir = .;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9ae7ce00a7ef..c36422a3fae2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
 
 /*
  * Empty_zero_page is a special page that is used for zero-initialized data
- * and COW.
+ * and COW. Defined in the linker script.
  */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
 
 static DEFINE_SPINLOCK(swapper_pgdir_lock);
-- 
2.52.0.457.g6b5491de43-goog
Re: [PATCH v2 01/10] arm64: Move the zero page to rodata
Posted by Ryan Roberts 1 week, 5 days ago
On 26/01/2026 09:26, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> The zero page should contain only zero bytes, and so mapping it
> read-write is unnecessary. Combine it with reserved_pg_dir, which lives
> in the read-only region of the kernel, and already serves a similar
> purpose.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kernel/vmlinux.lds.S | 1 +
>  arch/arm64/mm/mmu.c             | 3 +--
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index ad6133b89e7a..b2a093f5b3fc 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -229,6 +229,7 @@ SECTIONS
>  #endif
>  
>  	reserved_pg_dir = .;
> +	empty_zero_page = .;
>  	. += PAGE_SIZE;
>  
>  	swapper_pg_dir = .;

Isn't there a magic macro for getting from swapper to reserved? That will need
updating?

/*
 *  Open-coded (swapper_pg_dir - reserved_pg_dir) as this cannot be calculated
 *  until link time.
 */
#define RESERVED_SWAPPER_OFFSET	(PAGE_SIZE)


> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9ae7ce00a7ef..c36422a3fae2 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
>  
>  /*
>   * Empty_zero_page is a special page that is used for zero-initialized data
> - * and COW.
> + * and COW. Defined in the linker script.
>   */
> -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
>  EXPORT_SYMBOL(empty_zero_page);

What's the benefit of giving it its own place in the linker script vs just
declaring it as const and having it placed in the rodata?

Thanks,
Ryan

>  
>  static DEFINE_SPINLOCK(swapper_pgdir_lock);
Re: [PATCH v2 01/10] arm64: Move the zero page to rodata
Posted by Ard Biesheuvel 1 week, 5 days ago
On Tue, 27 Jan 2026 at 10:34, Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 26/01/2026 09:26, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > The zero page should contain only zero bytes, and so mapping it
> > read-write is unnecessary. Combine it with reserved_pg_dir, which lives
> > in the read-only region of the kernel, and already serves a similar
> > purpose.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/kernel/vmlinux.lds.S | 1 +
> >  arch/arm64/mm/mmu.c             | 3 +--
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> > index ad6133b89e7a..b2a093f5b3fc 100644
> > --- a/arch/arm64/kernel/vmlinux.lds.S
> > +++ b/arch/arm64/kernel/vmlinux.lds.S
> > @@ -229,6 +229,7 @@ SECTIONS
> >  #endif
> >
> >       reserved_pg_dir = .;
> > +     empty_zero_page = .;
> >       . += PAGE_SIZE;
> >
> >       swapper_pg_dir = .;
>
> Isn't there a magic macro for getting from swapper to reserved? That will need
> updating?
>

Why? This just adds an alias to refer to the same allocation.
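
To illustrate with a hypothetical sanity check (not something this patch
adds; check_zero_page_alias() is made up, and it assumes the usual
declarations of reserved_pg_dir, swapper_pg_dir, empty_zero_page and
RESERVED_SWAPPER_OFFSET are in scope):

/*
 * Both symbols label the same page, so the existing swapper/reserved
 * offset still holds and the macro needs no update.
 */
static void __init check_zero_page_alias(void)
{
	WARN_ON((unsigned long)empty_zero_page !=
		(unsigned long)reserved_pg_dir);
	WARN_ON((unsigned long)swapper_pg_dir !=
		(unsigned long)reserved_pg_dir + RESERVED_SWAPPER_OFFSET);
}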

> /*
>  *  Open-coded (swapper_pg_dir - reserved_pg_dir) as this cannot be calculated
>  *  until link time.
>  */
> #define RESERVED_SWAPPER_OFFSET (PAGE_SIZE)
>
>
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 9ae7ce00a7ef..c36422a3fae2 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
> >
> >  /*
> >   * Empty_zero_page is a special page that is used for zero-initialized data
> > - * and COW.
> > + * and COW. Defined in the linker script.
> >   */
> > -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
> >  EXPORT_SYMBOL(empty_zero_page);
>
> What's the benefit of giving it its own place in the linker script vs just
> declaring it as const and having it placed in the rodata?
>

Because it collapses the two into one.
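
For comparison, the alternative you suggest would look roughly like the
sketch below (illustration only, not what the patch does). It would also
end up read-only, but as its own page-aligned object in .rodata, i.e. one
more page in the image instead of reusing the page already set aside for
reserved_pg_dir:

/*
 * Const-in-.rodata alternative, sketched for comparison; the extern
 * declaration in asm/pgtable.h would need a matching const as well.
 */
const unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
	__aligned(PAGE_SIZE);
EXPORT_SYMBOL(empty_zero_page);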
Re: [PATCH v2 01/10] arm64: Move the zero page to rodata
Posted by Ryan Roberts 1 week, 5 days ago
On 27/01/2026 09:49, Ard Biesheuvel wrote:
> On Tue, 27 Jan 2026 at 10:34, Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 26/01/2026 09:26, Ard Biesheuvel wrote:
>>> From: Ard Biesheuvel <ardb@kernel.org>
>>>
>>> The zero page should contain only zero bytes, and so mapping it
>>> read-write is unnecessary. Combine it with reserved_pg_dir, which lives
>>> in the read-only region of the kernel, and already serves a similar
>>> purpose.
>>>
>>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>>> ---
>>>  arch/arm64/kernel/vmlinux.lds.S | 1 +
>>>  arch/arm64/mm/mmu.c             | 3 +--
>>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
>>> index ad6133b89e7a..b2a093f5b3fc 100644
>>> --- a/arch/arm64/kernel/vmlinux.lds.S
>>> +++ b/arch/arm64/kernel/vmlinux.lds.S
>>> @@ -229,6 +229,7 @@ SECTIONS
>>>  #endif
>>>
>>>       reserved_pg_dir = .;
>>> +     empty_zero_page = .;
>>>       . += PAGE_SIZE;
>>>
>>>       swapper_pg_dir = .;
>>
>> Isn't there a magic macro for getting from swapper to reserved? That will need
>> updating?
>>
> 
> Why? This just adds an alias to refer to the same allocation.

Oh yes, sorry I completely missed that. And you've even stated it in the commit
log...

I'm struggling to see where this gets zeroed, though. I assume it must be
zeroed at least as early as the old BSS-resident empty_zero_page would have
been, so everything works fine?

Assuming yes, then:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> 
>> /*
>>  *  Open-coded (swapper_pg_dir - reserved_pg_dir) as this cannot be calculated
>>  *  until link time.
>>  */
>> #define RESERVED_SWAPPER_OFFSET (PAGE_SIZE)
>>
>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 9ae7ce00a7ef..c36422a3fae2 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
>>>
>>>  /*
>>>   * Empty_zero_page is a special page that is used for zero-initialized data
>>> - * and COW.
>>> + * and COW. Defined in the linker script.
>>>   */
>>> -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
>>>  EXPORT_SYMBOL(empty_zero_page);
>>
>> What's the benefit of giving it its own place in the linker script vs just
>> declaring it as const and having it placed in the rodata?
>>
> 
> Because it collapses the two into one.
Re: [PATCH v2 01/10] arm64: Move the zero page to rodata
Posted by Ard Biesheuvel 1 week, 5 days ago
On Tue, 27 Jan 2026 at 11:04, Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 27/01/2026 09:49, Ard Biesheuvel wrote:
> > On Tue, 27 Jan 2026 at 10:34, Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 26/01/2026 09:26, Ard Biesheuvel wrote:
> >>> From: Ard Biesheuvel <ardb@kernel.org>
> >>>
> >>> The zero page should contain only zero bytes, and so mapping it
> >>> read-write is unnecessary. Combine it with reserved_pg_dir, which lives
> >>> in the read-only region of the kernel, and already serves a similar
> >>> purpose.
> >>>
> >>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> >>> ---
> >>>  arch/arm64/kernel/vmlinux.lds.S | 1 +
> >>>  arch/arm64/mm/mmu.c             | 3 +--
> >>>  2 files changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> >>> index ad6133b89e7a..b2a093f5b3fc 100644
> >>> --- a/arch/arm64/kernel/vmlinux.lds.S
> >>> +++ b/arch/arm64/kernel/vmlinux.lds.S
> >>> @@ -229,6 +229,7 @@ SECTIONS
> >>>  #endif
> >>>
> >>>       reserved_pg_dir = .;
> >>> +     empty_zero_page = .;
> >>>       . += PAGE_SIZE;
> >>>
> >>>       swapper_pg_dir = .;
> >>
> >> Isn't there a magic macro for getting from swapper to reserved? That will need
> >> updating?
> >>
> >
> > Why? This just adds an alias to refer to the same allocation.
>
> Oh yes, sorry I completely missed that. And you've even stated it in the commit
> log...
>
> I'm struggling to see where this gets zeroed, though. I assume it must be
> zeroed at least as early as the old BSS-resident empty_zero_page would have
> been, so everything works fine?
>

It is statically zero initialized in the image.
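
The ". += PAGE_SIZE" gap in the linker script is emitted as zero bytes
(the linker's default fill, assuming no explicit fill pattern is set for
that output section), so the page already reads as zeroes before any C
code runs. For anyone who wants to double-check, a throwaway test along
these lines would do (hypothetical, not part of the series; assumes
linux/init.h, linux/string.h and asm/pgtable.h):

/* Debug-only check: the shared page must read back as all zeroes. */
static int __init zero_page_selftest(void)
{
	WARN_ON(memchr_inv(empty_zero_page, 0, PAGE_SIZE));
	return 0;
}
early_initcall(zero_page_selftest);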

> Assuming yes, then:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>

Thanks!