From: Pankaj Raghav <p.raghav@samsung.com>

NOTE: I am resending this as an RFC again based on Lorenzo's feedback. The
old series can be found here [1].

There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.

This concern was raised during the review of adding Large Block Size support
to XFS [2][3].

This is especially annoying in block devices and filesystems where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out
larger zero pages as part of a single bvec.

Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...

Usually huge_zero_folio is allocated on demand, and it is deallocated by
the shrinker once there are no users of it left. At the moment, the
huge_zero_folio infrastructure's refcount is tied to the lifetime of the
process that created it. This does not work well for the bio layer, as
completions can be async and the process that created the huge_zero_folio
might no longer be alive. And one of the main points that came up during
the discussion is to have something bigger than the zero page as a
drop-in replacement.

Add a config option STATIC_HUGE_ZERO_FOLIO that always allocates the
huge_zero_folio and never drops its reference. This makes it possible to
use the huge_zero_folio without passing any mm struct, and it does not tie
the lifetime of the zero folio to anything, making it a drop-in
replacement for ZERO_PAGE.

I have converted blkdev_issue_zero_pages() as an example as part of this
series. I also noticed close to a 4% performance improvement just by
replacing ZERO_PAGE with the static huge_zero_folio.

I will send patches to individual subsystems using the huge_zero_folio
once this gets upstreamed.

Looking forward to some feedback.

[1] https://lore.kernel.org/linux-mm/20250707142319.319642-1-kernel@pankajraghav.com/
[2] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[3] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Changes since the last series [1]:
- Instead of allocating a new page through memblock, use the same
  infrastructure as huge_zero_folio but raise the reference and never
  drop it. (David)
- Some minor cleanups based on Lorenzo's feedback.

Pankaj Raghav (4):
  mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  mm: add static huge zero folio
  mm: add largest_zero_folio() routine
  block: use largest_zero_folio in __blkdev_issue_zero_pages()

 arch/x86/Kconfig        |  1 +
 block/blk-lib.c         | 15 ++++++------
 include/linux/huge_mm.h | 24 +++++++++++++++++++
 mm/Kconfig              | 12 ++++++++++
 mm/huge_memory.c        | 52 +++++++++++++++++++++++++++++++----------
 5 files changed, 85 insertions(+), 19 deletions(-)

base-commit: 1b0686bd18c1aa9d7f01943829faa5befe6ab3d1
--
2.49.0
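[Editorial note: to make the intended conversion more concrete, here is a
rough sketch (not the actual patch) of how a zero-out loop such as the one
in __blkdev_issue_zero_pages() could attach one large zero folio per bvec
instead of one ZERO_PAGE per bvec. largest_zero_folio() is the helper added
in patch 3 of this series; its exact signature, and the loop structure
shown here, are assumptions for illustration only.]

#include <linux/bio.h>
#include <linux/mm.h>
#include <linux/huge_mm.h>	/* assumed home of largest_zero_folio() */

/*
 * Sketch only: fill @bio with zeroes for *nr_sects sectors starting at
 * *sector, using as few bvecs as possible. With
 * CONFIG_STATIC_HUGE_ZERO_FOLIO, largest_zero_folio() is assumed to
 * return the always-present huge zero folio; otherwise it would fall
 * back to a single zero page.
 */
static void zeroout_fill_bio_sketch(struct bio *bio, sector_t *sector,
				    sector_t *nr_sects)
{
	struct folio *zero_folio = largest_zero_folio();

	while (*nr_sects) {
		/* One bvec can now cover up to folio_size() bytes of zeroes. */
		unsigned int len = min_t(sector_t, folio_size(zero_folio),
					 *nr_sects << SECTOR_SHIFT);

		/* bio_add_folio() returns false once the bio is full. */
		if (!bio_add_folio(bio, zero_folio, len, 0))
			break;

		*nr_sects -= len >> SECTOR_SHIFT;
		*sector += len >> SECTOR_SHIFT;
	}
}

On x86, a PMD-sized (2 MiB) zero folio covers in a single bvec what
previously took 512 PAGE_SIZE bvecs, which is plausibly where the
reported ~4% improvement comes from.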
On 22.07.25 11:42, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> NOTE: I am resending this as an RFC again based on Lorenzo's feedback. The
> old series can be found here [1].
>
> There are many places in the kernel where we need to zero out larger
> chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
> is limited to PAGE_SIZE.
>
> This concern was raised during the review of adding Large Block Size support
> to XFS [2][3].
>
> This is especially annoying in block devices and filesystems where we
> attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> bvec support in the block layer, it is much more efficient to send out
> larger zero pages as part of a single bvec.
>
> Some examples of places in the kernel where this could be useful:
> - blkdev_issue_zero_pages()
> - iomap_dio_zero()
> - vmalloc.c:zero_iter()
> - rxperf_process_call()
> - fscrypt_zeroout_range_inline_crypt()
> - bch2_checksum_update()
> ...
>
> Usually huge_zero_folio is allocated on demand, and it is deallocated by
> the shrinker once there are no users of it left. At the moment, the
> huge_zero_folio infrastructure's refcount is tied to the lifetime of the
> process that created it. This does not work well for the bio layer, as
> completions can be async and the process that created the huge_zero_folio
> might no longer be alive. And one of the main points that came up during
> the discussion is to have something bigger than the zero page as a
> drop-in replacement.
>
> Add a config option STATIC_HUGE_ZERO_FOLIO that always allocates the
> huge_zero_folio and never drops its reference. This makes it possible to
> use the huge_zero_folio without passing any mm struct, and it does not tie
> the lifetime of the zero folio to anything, making it a drop-in
> replacement for ZERO_PAGE.
>
> I have converted blkdev_issue_zero_pages() as an example as part of this
> series. I also noticed close to a 4% performance improvement just by
> replacing ZERO_PAGE with the static huge_zero_folio.
>
> I will send patches to individual subsystems using the huge_zero_folio
> once this gets upstreamed.
>
> Looking forward to some feedback.

Please run scripts/checkpatch.pl on your patches.

There are quite a few warnings for patches #2 and #3, in particular around
using spaces vs. tabs.

--
Cheers,

David / dhildenb
> >
> > I will send patches to individual subsystems using the huge_zero_folio
> > once this gets upstreamed.
> >
> > Looking forward to some feedback.
>
> Please run scripts/checkpatch.pl on your patches.
>
> There are quite a few warnings for patches #2 and #3, in particular around
> using spaces vs. tabs.

Ah, you are right. I usually run it as a post-commit hook, but I forgot to
add it when I changed my workflow. Thanks for pointing it out.

I also got an unused variable warning for huge_zero_static_fail_count, as
we don't use it when CONFIG_STATIC_HUGE_ZERO_FOLIO is disabled. I will
fix these in the new version.

--
Pankaj
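[Editorial note: a minimal sketch of one common way to avoid that kind of
warning, shown only for context. Only the variable and config names are
taken from the thread; the atomic_t type and the placement of the counter
are assumptions.]

/* Define the counter only when the static huge zero folio can exist. */
#ifdef CONFIG_STATIC_HUGE_ZERO_FOLIO
static atomic_t huge_zero_static_fail_count;
#endif

Alternatively, the definition could stay unconditional and be annotated
with __maybe_unused, or the readers could be converted to
IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO) so the compiler still sees a use.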