[PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Pankaj Raghav 4 months ago
There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.

This concern was raised during the review of adding Large Block Size support
to XFS[1][2].

This is especially annoying for block devices and filesystems, where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out a
larger zero page as part of a single bvec.
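As a rough illustration (this is not code from the series; error handling
and bio construction details are elided), the difference boils down to:

/* Sketch only. Today: one bvec per ZERO_PAGE, each capped at PAGE_SIZE. */
while (nbytes) {
	unsigned int len = min_t(unsigned int, nbytes, PAGE_SIZE);

	bio_add_page(bio, ZERO_PAGE(0), len, 0);
	nbytes -= len;
}

/*
 * With a PMD-sized zero folio, a single multipage bvec can cover up to
 * PMD_SIZE at once.
 */
bio_add_folio(bio, huge_zero_folio, min_t(size_t, nbytes, PMD_SIZE), 0);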

Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...

We already have huge_zero_folio, which is allocated on demand and
deallocated by the shrinker once there are no users of it left.

But to use huge_zero_folio, we need to pass an mm struct, and put_folio
needs to be called in the destructor. This makes sense for systems with
memory constraints, but bigger servers simply do not care, as long as the
PMD size is reasonable (as it is on x86).

Add a config option STATIC_PMD_ZERO_PAGE that always places the
huge_zero_folio in .bss, so that it is never freed.

The static PMD page is reused by huge_zero_folio when this config
option is enabled.
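
To give an idea of the shape of this (the actual reservation in the series
is done in arch code, e.g. x86's head_64.S, since PMD-sized alignment in
.bss is arch-specific; the names below are only illustrative):

/* Illustrative sketch only -- the real reservation lives in arch code. */
#ifdef CONFIG_STATIC_PMD_ZERO_PAGE
static char static_pmd_zero_page[PMD_SIZE]
	__aligned(PMD_SIZE) __section(".bss..page_aligned");

static int __init static_huge_zero_init(void)
{
	/* Point huge_zero_folio at the static region; it is never freed. */
	huge_zero_folio = page_folio(virt_to_page(static_pmd_zero_page));
	return 0;
}
early_initcall(static_huge_zero_init);
#endif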

As an example, I have converted blkdev_issue_zero_pages() as part of
this series.

I will send patches to individual subsystems using the huge_zero_folio
once this gets upstreamed.

Looking forward to some feedback.

[1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Changes since RFC:
- Added the config option based on the feedback from David.
- Encode more info in the header to avoid dead code (Dave Hansen's
  feedback).
- The static part of huge_zero_folio goes in memory.c and the dynamic
  part stays in huge_memory.c.
- Split the patches to make them easier to review.

Pankaj Raghav (5):
  mm: move huge_zero_page declaration from huge_mm.h to mm.h
  huge_memory: add huge_zero_page_shrinker_(init|exit) function
  mm: add static PMD zero page
  mm: add mm_get_static_huge_zero_folio() routine
  block: use mm_huge_zero_folio in __blkdev_issue_zero_pages()

 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/pgtable.h |  8 +++++
 arch/x86/kernel/head_64.S      |  8 +++++
 block/blk-lib.c                | 17 +++++----
 include/linux/huge_mm.h        | 31 ----------------
 include/linux/mm.h             | 64 ++++++++++++++++++++++++++++++++++
 mm/Kconfig                     | 13 +++++++
 mm/huge_memory.c               | 62 ++++++++++++++++++++++++--------
 mm/memory.c                    | 19 ++++++++++
 9 files changed, 170 insertions(+), 53 deletions(-)


base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
-- 
2.49.0
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Dave Hansen 4 months ago
On 6/12/25 03:50, Pankaj Raghav wrote:
> But to use huge_zero_folio, we need to pass a mm struct and the
> put_folio needs to be called in the destructor. This makes sense for
> systems that have memory constraints but for bigger servers, it does not
> matter if the PMD size is reasonable (like in x86).

So, what's the problem with calling a destructor?

In your last patch, surely bio_add_folio() can put the page/folio when
it's done. Is the real problem that you don't want to call zero page
specific code at bio teardown?
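
Roughly something like this, just to illustrate what I mean by teardown
(the names are hypothetical):

/* Hypothetical sketch: drop the zero folio reference when the bio is done. */
static void zero_folio_end_io(struct bio *bio)
{
	struct folio *zero_folio = bio->bi_private;

	folio_put(zero_folio);	/* reference taken when the bio was built */
	bio_put(bio);
}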
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Pankaj Raghav (Samsung) 3 months, 4 weeks ago
On Thu, Jun 12, 2025 at 06:50:07AM -0700, Dave Hansen wrote:
> On 6/12/25 03:50, Pankaj Raghav wrote:
> > But to use huge_zero_folio, we need to pass a mm struct and the
> > put_folio needs to be called in the destructor. This makes sense for
> > systems that have memory constraints but for bigger servers, it does not
> > matter if the PMD size is reasonable (like in x86).
> 
> So, what's the problem with calling a destructor?
> 
> In your last patch, surely bio_add_folio() can put the page/folio when
> it's done. Is the real problem that you don't want to call zero page
> specific code at bio teardown?

Yeah, it feels like a lot of code on the caller's side just to use a zero
page. It would be nice to have a call similar to ZERO_PAGE() in these
subsystems, where we are guaranteed to get the huge zero page.
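Something along these lines, roughly (just a sketch; the helper added
later in this series is mm_get_static_huge_zero_folio(), and the details
differ):

/* Sketch of a ZERO_PAGE()-style helper: no mm, no get/put on the caller. */
static inline struct folio *static_huge_zero_folio(void)
{
	if (!IS_ENABLED(CONFIG_STATIC_PMD_ZERO_PAGE))
		return NULL;
	/* Never freed, so the caller does not need to take a reference. */
	return huge_zero_folio;
}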

Apart from that, these are the problems we run into if we use
mm_get_huge_zero_folio() at the moment:

- We might end up allocating a 512MB PMD on ARM systems with a 64k base
  page size, which is undesirable. With the patch series posted, we only
  enable the static huge page for sane architectures and page sizes.

- In the current implementation we always call mm_put_huge_zero_folio()
  in __mmput()[1]. I am not sure that model will work for all subsystems.
  For example, bio completions can be async, i.e., we might need a
  reference to the zero page even after the process is no longer alive.

I will try to include these motivations in the cover letter next time.

Thanks

[1] 6fcb52a56ff6 ("thp: reduce usage of huge zero page's atomic counter")

--
Pankaj
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Dave Hansen 3 months, 4 weeks ago
On 6/12/25 13:36, Pankaj Raghav (Samsung) wrote:
> On Thu, Jun 12, 2025 at 06:50:07AM -0700, Dave Hansen wrote:
>> On 6/12/25 03:50, Pankaj Raghav wrote:
>>> But to use huge_zero_folio, we need to pass a mm struct and the
>>> put_folio needs to be called in the destructor. This makes sense for
>>> systems that have memory constraints but for bigger servers, it does not
>>> matter if the PMD size is reasonable (like in x86).
>>
>> So, what's the problem with calling a destructor?
>>
>> In your last patch, surely bio_add_folio() can put the page/folio when
>> it's done. Is the real problem that you don't want to call zero page
>> specific code at bio teardown?
> 
> Yeah, it feels like a lot of code on the caller just to use a zero page.
> It would be nice just to have a call similar to ZERO_PAGE() in these
> subsystems where we can have guarantee of getting huge zero page.
> 
> Apart from that, these are the following problems if we use
> mm_get_huge_zero_folio() at the moment:
> 
> - We might end up allocating 512MB PMD on ARM systems with 64k base page
>   size, which is undesirable. With the patch series posted, we will only
>   enable the static huge page for sane architectures and page sizes.

Does *anybody* want the 512MB huge zero page? Maybe it should be an
opt-in at runtime or something.

> - In the current implementation we always call mm_put_huge_zero_folio()
>   in __mmput()[1]. I am not sure if model will work for all subsystems. For
>   example bio completions can be async, i.e, we might need a reference
>   to the zero page even if the process is no longer alive.

The mm is a nice convenient place to stick a refcount, but there are
other ways to keep an efficient refcount around. For instance, you could
just bump a per-cpu refcount and then have the shrinker sum up all the
refcounts to see if there are any outstanding on the system as a whole.

I understand that the current refcounts are tied to an mm, but you could
either replace the mm-specific ones or add something in parallel for
when there's no mm.
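
Something like this, just to sketch the idea (none of these names exist
today):

/* Illustrative only: per-CPU get/put, with a slow sum for the shrinker. */
static DEFINE_PER_CPU(long, huge_zero_refs);

static inline void huge_zero_ref_get(void)
{
	this_cpu_inc(huge_zero_refs);
}

static inline void huge_zero_ref_put(void)
{
	this_cpu_dec(huge_zero_refs);
}

static long huge_zero_refs_outstanding(void)
{
	long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(huge_zero_refs, cpu);
	return sum;	/* > 0 means someone still holds a reference */
}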
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Pankaj Raghav (Samsung) 3 months, 4 weeks ago
On Thu, Jun 12, 2025 at 02:46:34PM -0700, Dave Hansen wrote:
> On 6/12/25 13:36, Pankaj Raghav (Samsung) wrote:
> > On Thu, Jun 12, 2025 at 06:50:07AM -0700, Dave Hansen wrote:
> >> On 6/12/25 03:50, Pankaj Raghav wrote:
> >>> But to use huge_zero_folio, we need to pass a mm struct and the
> >>> put_folio needs to be called in the destructor. This makes sense for
> >>> systems that have memory constraints but for bigger servers, it does not
> >>> matter if the PMD size is reasonable (like in x86).
> >>
> >> So, what's the problem with calling a destructor?
> >>
> >> In your last patch, surely bio_add_folio() can put the page/folio when
> >> it's done. Is the real problem that you don't want to call zero page
> >> specific code at bio teardown?
> > 
> > Yeah, it feels like a lot of code on the caller just to use a zero page.
> > It would be nice just to have a call similar to ZERO_PAGE() in these
> > subsystems where we can have guarantee of getting huge zero page.
> > 
> > Apart from that, these are the following problems if we use
> > mm_get_huge_zero_folio() at the moment:
> > 
> > - We might end up allocating 512MB PMD on ARM systems with 64k base page
> >   size, which is undesirable. With the patch series posted, we will only
> >   enable the static huge page for sane architectures and page sizes.
> 
> Does *anybody* want the 512MB huge zero page? Maybe it should be an
> opt-in at runtime or something.
> 
Yeah, I think that needs to be fixed. David also pointed this out in one
of his earlier reviews[1].

> > - In the current implementation we always call mm_put_huge_zero_folio()
> >   in __mmput()[1]. I am not sure if model will work for all subsystems. For
> >   example bio completions can be async, i.e, we might need a reference
> >   to the zero page even if the process is no longer alive.
> 
> The mm is a nice convenient place to stick an mm but there are other
> ways to keep an efficient refcount around. For instance, you could just
> bump a per-cpu refcount and then have the shrinker sum up all the
> refcounts to see if there are any outstanding on the system as a whole.
> 
> I understand that the current refcounts are tied to an mm, but you could
> either replace the mm-specific ones or add something in parallel for
> when there's no mm.

But the whole idea of allocating a static PMD page for sane
architectures like x86 started with the intent of avoiding the refcounts and
shrinker.

This was the initial feedback I got[2]:

  "I mean, the whole thing about dynamically allocating/freeing it was
   for memory-constrained systems. For large systems, we just don't care."


[1] https://lore.kernel.org/linux-mm/1e571419-9709-4898-9349-3d2eef0f8709@redhat.com/
[2] https://lore.kernel.org/linux-mm/cb52312d-348b-49d5-b0d7-0613fb38a558@redhat.com/
--
Pankaj
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by David Hildenbrand 3 months, 3 weeks ago
On 13.06.25 10:58, Pankaj Raghav (Samsung) wrote:
> On Thu, Jun 12, 2025 at 02:46:34PM -0700, Dave Hansen wrote:
>> On 6/12/25 13:36, Pankaj Raghav (Samsung) wrote:
>>> On Thu, Jun 12, 2025 at 06:50:07AM -0700, Dave Hansen wrote:
>>>> On 6/12/25 03:50, Pankaj Raghav wrote:
>>>>> But to use huge_zero_folio, we need to pass a mm struct and the
>>>>> put_folio needs to be called in the destructor. This makes sense for
>>>>> systems that have memory constraints but for bigger servers, it does not
>>>>> matter if the PMD size is reasonable (like in x86).
>>>>
>>>> So, what's the problem with calling a destructor?
>>>>
>>>> In your last patch, surely bio_add_folio() can put the page/folio when
>>>> it's done. Is the real problem that you don't want to call zero page
>>>> specific code at bio teardown?
>>>
>>> Yeah, it feels like a lot of code on the caller just to use a zero page.
>>> It would be nice just to have a call similar to ZERO_PAGE() in these
>>> subsystems where we can have guarantee of getting huge zero page.
>>>
>>> Apart from that, these are the following problems if we use
>>> mm_get_huge_zero_folio() at the moment:
>>>
>>> - We might end up allocating 512MB PMD on ARM systems with 64k base page
>>>    size, which is undesirable. With the patch series posted, we will only
>>>    enable the static huge page for sane architectures and page sizes.
>>
>> Does *anybody* want the 512MB huge zero page? Maybe it should be an
>> opt-in at runtime or something.
>>
> Yeah, I think that needs to be fixed. David also pointed this out in one
> of his earlier reviews[1].
> 
>>> - In the current implementation we always call mm_put_huge_zero_folio()
>>>    in __mmput()[1]. I am not sure if model will work for all subsystems. For
>>>    example bio completions can be async, i.e, we might need a reference
>>>    to the zero page even if the process is no longer alive.
>>
>> The mm is a nice convenient place to stick an mm but there are other
>> ways to keep an efficient refcount around. For instance, you could just
>> bump a per-cpu refcount and then have the shrinker sum up all the
>> refcounts to see if there are any outstanding on the system as a whole.
>>
>> I understand that the current refcounts are tied to an mm, but you could
>> either replace the mm-specific ones or add something in parallel for
>> when there's no mm.
> 
> But the whole idea of allocating a static PMD page for sane
> architectures like x86 started with the intent of avoiding the refcounts and
> shrinker.
> 
> This was the initial feedback I got[2]:
> 
> I mean, the whole thing about dynamically allocating/freeing it was for
> memory-constrained systems. For large systems, we just don't care.

For non-mm usage we can just use the folio refcount. The per-mm 
refcounts are all combined into a single folio refcount. The way the 
global variable is managed based on per-mm refcounts is the weird thing.
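
For such a user it would just be the plain folio refcount, roughly
(sketch, assuming the folio stays around once allocated, as I suggest
below):

/* Sketch: a non-mm user only needs the plain folio refcount. */
static struct folio *pin_huge_zero_folio(void)
{
	struct folio *zf = huge_zero_folio;

	folio_get(zf);	/* paired with a folio_put() at, e.g., I/O completion */
	return zf;
}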

In some corner cases we might end up having multiple instances of huge 
zero folios right now. Just imagine:

1) Allocate huge zero folio during read fault
2) vmsplice() it
3) Unmap the huge zero folio
4) Shrinker runs and frees it
5) Repeat with 1)

As long as the folio is vmspliced(), it will not actually get freed ...

I would hope that we could remove the shrinker completely, and simply 
never free the huge zero folio once allocated. Or at least, only free it 
once it is actually no longer used.

-- 
Cheers,

David / dhildenb
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Pankaj Raghav (Samsung) 3 months, 3 weeks ago
> > > 
> > > The mm is a nice convenient place to stick an mm but there are other
> > > ways to keep an efficient refcount around. For instance, you could just
> > > bump a per-cpu refcount and then have the shrinker sum up all the
> > > refcounts to see if there are any outstanding on the system as a whole.
> > > 
> > > I understand that the current refcounts are tied to an mm, but you could
> > > either replace the mm-specific ones or add something in parallel for
> > > when there's no mm.
> > 
> > But the whole idea of allocating a static PMD page for sane
> > architectures like x86 started with the intent of avoiding the refcounts and
> > shrinker.
> > 
> > This was the initial feedback I got[2]:
> > 
> > I mean, the whole thing about dynamically allocating/freeing it was for
> > memory-constrained systems. For large systems, we just don't care.
> 
> For non-mm usage we can just use the folio refcount. The per-mm refcounts
> are all combined into a single folio refcount. The way the global variable
> is managed based on per-mm refcounts is the weird thing.
> 
> In some corner cases we might end up having multiple instances of huge zero
> folios right now. Just imagine:
> 
> 1) Allocate huge zero folio during read fault
> 2) vmsplice() it
> 3) Unmap the huge zero folio
> 4) Shrinker runs and frees it
> 5) Repeat with 1)
> 
> As long as the folio is vmspliced(), it will not get actually freed ...
> 
> I would hope that we could remove the shrinker completely, and simply never
> free the huge zero folio once allocated. Or at least, only free it once it
> is actually no longer used.
> 

Thanks for the explanation, David.

But I am still a bit confused on how to proceed with these patches.

So IIUC, our eventual goal is to get rid of the shrinker.

But do we still want to add a static PMD page in the .bss or do we take
an alternate approach here?

--
Pankaj
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Christoph Hellwig 3 months, 3 weeks ago
Just curious: why doesn't this series get rid of the iomap zero_page,
which would be really low hanging fruit?
Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Posted by Pankaj Raghav 3 months, 3 weeks ago
On 6/16/25 07:40, Christoph Hellwig wrote:
> Just curious: why doesn't this series get rid of the iomap zero_page,
> which would be really low hanging fruit?
> 

I did the conversion in my first RFC series [1]. But the implementation
and API for the zero page were still under heavy discussion, so I decided
to focus on that instead and leave the conversion for later, as I
mentioned in the cover letter. I included blkdev_issue_zero_pages() as it
is a more straightforward conversion.

--
Pankaj


[1] https://lore.kernel.org/linux-mm/20250516101054.676046-4-p.raghav@samsung.com/