From: Pankaj Raghav <p.raghav@samsung.com>

Many places in the kernel need to zero out larger chunks, but the
maximum segment we can zero out at a time with ZERO_PAGE is limited to
PAGE_SIZE.

This concern was raised during the review of adding Large Block Size
support to XFS[2][3].

This is especially annoying in block devices and filesystems where
multiple ZERO_PAGEs are attached to the bio in different bvecs. With
multipage bvec support in the block layer, it is much more efficient to
send out larger zero pages as part of a single bvec.

Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...

Usually huge_zero_folio is allocated on demand, and it is deallocated
by the shrinker once there are no users of it left. At the moment, the
huge_zero_folio infrastructure refcount is tied to the lifetime of the
process that created it. This might not work for the bio layer, as
completions can be async and the process that created the
huge_zero_folio might no longer be alive. And one of the main points
raised during discussion is to have something bigger than the zero page
as a drop-in replacement.

Add a config option PERSISTENT_HUGE_ZERO_FOLIO that always allocates
the huge_zero_folio and disables the shrinker so that the
huge_zero_folio is never freed. This makes it possible to use the
huge_zero_folio without passing any mm struct, and does not tie the
lifetime of the zero folio to anything, making it a drop-in
replacement for ZERO_PAGE.

I have converted blkdev_issue_zero_pages() as an example as part of
this series. I also noticed close to a 4% performance improvement just
by replacing ZERO_PAGE with the persistent huge_zero_folio.

I will send patches to the individual subsystems using the
huge_zero_folio once this gets upstreamed.

Looking forward to some feedback.
[1] https://lore.kernel.org/linux-mm/20250707142319.319642-1-kernel@pankajraghav.com/
[2] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[3] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Changes since v2:
- Minor wording changes. No functional changes.
- Added RVB by Lorenzo

Changes since v1:
- Simplified the code by allocating in thp_shrinker_init() and
  disabling the shrinker when the PERSISTENT_HUGE_ZERO_FOLIO config is
  enabled.
- Reworked commit messages and config messages based on Dave's feedback
- Added RVB and Acked-by tags.

Changes since RFC v2:
- Convert get_huge_zero_page and put_huge_zero_page to *_folio.
- Convert MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO.
- Reduce the retries for huge_zero_folio from 2 to 1.
- Add an extra sanity check in shrinker scan for the static
  huge_zero_folio case.

Changes since v1:
- Fixed all warnings.
- Added a retry after a particular time.
- Added Acked-by and Signed-off-by from David.

Changes since the last series[1]:
- Instead of allocating a new page through memblock, use the same
  infrastructure as huge_zero_folio but raise the reference and never
  drop it. (David)
- Some minor cleanups based on Lorenzo's feedback.

Pankaj Raghav (5):
  mm: rename huge_zero_page to huge_zero_folio
  mm: rename MMF_HUGE_ZERO_PAGE to MMF_HUGE_ZERO_FOLIO
  mm: add persistent huge zero folio
  mm: add largest_zero_folio() routine
  block: use largest_zero_folio in __blkdev_issue_zero_pages()

 block/blk-lib.c          | 15 +++++----
 include/linux/huge_mm.h  | 38 ++++++++++++++++++++++
 include/linux/mm_types.h |  2 +-
 mm/Kconfig               | 16 +++++++++
 mm/huge_memory.c         | 70 ++++++++++++++++++++++++++--------------
 5 files changed, 108 insertions(+), 33 deletions(-)

base-commit: 53c448023185717d0ed56b5546dc2be405da92ff
--
2.49.0
On Mon, Aug 11, 2025 at 10:41:08AM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> [...]
>
> Add a config option PERSISTENT_HUGE_ZERO_FOLIO that will always allocate
> the huge_zero_folio, and disable the shrinker so that huge_zero_folio is
> never freed.
> This makes using the huge_zero_folio without having to pass any mm struct and does
> not tie the lifetime of the zero folio to anything, making it a drop-in
> replacement for ZERO_PAGE.

Why does it need to be compile-time? Maybe whoever needs huge zero page
would just call get_huge_zero_page()/folio() on initialization to get it
pinned?

--
Kiryl Shutsemau / Kirill A. Shutemov
On 11.08.25 11:43, Kiryl Shutsemau wrote:
> On Mon, Aug 11, 2025 at 10:41:08AM +0200, Pankaj Raghav (Samsung) wrote:
>> [...]
>
> Why does it need to be compile-time? Maybe whoever needs huge zero page
> would just call get_huge_zero_page()/folio() on initialization to get it
> pinned?

That's what v2 did, and this way here is cleaner.

--
Cheers,

David / dhildenb
On 11.08.25 11:49, David Hildenbrand wrote:
> On 11.08.25 11:43, Kiryl Shutsemau wrote:
>> [...]
>>
>> Why does it need to be compile-time? Maybe whoever needs huge zero page
>> would just call get_huge_zero_page()/folio() on initialization to get it
>> pinned?
>
> That's what v2 did, and this way here is cleaner.

Sorry, RFC v2 I think. It got a bit confusing with series names/versions.

--
Cheers,

David / dhildenb
> > > Why does it need to be compile-time? Maybe whoever needs huge zero page
> > > would just call get_huge_zero_page()/folio() on initialization to get it
> > > pinned?
> >
> > That's what v2 did, and this way here is cleaner.
>
> Sorry, RFC v2 I think. It got a bit confusing with series names/versions.

Another reason we made it a compile-time config is that not all machines
would want a PMD-sized folio just for zeroing. For example, Dave Hansen
said in one of the early revisions that a small x86 VM would not want
this.

So it is default N, and it will be an opt-in.

--
Pankaj
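For reference, the opt-in described above could be expressed by an mm/Kconfig entry along these lines. This is a hedged sketch, not the series' actual text: the TRANSPARENT_HUGEPAGE dependency and the help wording are assumptions; only the option name and the default N come from the discussion.

```kconfig
config PERSISTENT_HUGE_ZERO_FOLIO
	bool "Allocate the huge zero folio once and never free it"
	depends on TRANSPARENT_HUGEPAGE
	default n
	help
	  Keep the huge zero folio allocated for the lifetime of the
	  system and disable its shrinker, so it can be used as a
	  drop-in replacement for ZERO_PAGE without refcounting against
	  any mm. This permanently costs one PMD-sized page of RAM
	  (typically 2 MiB); say N on memory-constrained machines.
```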
"Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes: >> > > > Add a config option PERSISTENT_HUGE_ZERO_FOLIO that will always allocate >> > > > the huge_zero_folio, and disable the shrinker so that huge_zero_folio is >> > > > never freed. >> > > > This makes using the huge_zero_folio without having to pass any mm struct and does >> > > > not tie the lifetime of the zero folio to anything, making it a drop-in >> > > > replacement for ZERO_PAGE. >> > > > >> > > > I have converted blkdev_issue_zero_pages() as an example as a part of >> > > > this series. I also noticed close to 4% performance improvement just by >> > > > replacing ZERO_PAGE with persistent huge_zero_folio. >> > > > >> > > > I will send patches to individual subsystems using the huge_zero_folio >> > > > once this gets upstreamed. >> > > > >> > > > Looking forward to some feedback. >> > > >> > > Why does it need to be compile-time? Maybe whoever needs huge zero page >> > > would just call get_huge_zero_page()/folio() on initialization to get it >> > > pinned? >> > >> > That's what v2 did, and this way here is cleaner. >> >> Sorry, RFC v2 I think. It got a bit confusing with series names/versions. >> > > Another reason we made it a compile time config is because not all > machines would want a PMD sized folio just for zeroing. For example, > Dave Hansen told in one of the early revisions that a small x86 VM would > not want this. > > So it is a default N, and it will be an opt-in. > I looked over the patches and I liked this design. This is much simpler and cleaner compared to the initial version. Thanks! -ritesh
On Mon, Aug 11, 2025 at 11:52:11AM +0200, David Hildenbrand wrote:
> On 11.08.25 11:49, David Hildenbrand wrote:
> > On 11.08.25 11:43, Kiryl Shutsemau wrote:
> > > [...]
> > >
> > > Why does it need to be compile-time? Maybe whoever needs huge zero page
> > > would just call get_huge_zero_page()/folio() on initialization to get it
> > > pinned?
> >
> > That's what v2 did, and this way here is cleaner.
>
> Sorry, RFC v2 I think. It got a bit confusing with series names/versions.

Well, my worry is that 2M can be a high tax for smaller machines.
Compile-time might be cleaner, but it has downsides.

It is also not clear if these users actually need physical HZP or virtual
is enough. Virtual is cheap.

--
Kiryl Shutsemau / Kirill A. Shutemov
> Well, my worry is that 2M can be a high tax for smaller machines.
> Compile-time might be cleaner, but it has downsides.
>
> It is also not clear if these users actually need physical HZP or virtual
> is enough. Virtual is cheap.

We do need physical, as the main use case is block IO where we will be
DMAing. The main reason I was seeing an improvement in perf was that we
were sending bigger chunks of memory in a single bio_vec instead of
using multiple bio_vecs.

--
Pankaj
On Mon, Aug 11, 2025 at 11:07:48AM +0100, Kiryl Shutsemau wrote:
> Well, my worry is that 2M can be a high tax for smaller machines.
> Compile-time might be cleaner, but it has downsides.
>
> It is also not clear if these users actually need physical HZP or virtual
> is enough. Virtual is cheap.

The kernel config flag (default =N) literally says don't use unless you
have plenty of memory :)

So this isn't an issue.
On Mon, Aug 11, 2025 at 11:09:24AM +0100, Lorenzo Stoakes wrote:
> On Mon, Aug 11, 2025 at 11:07:48AM +0100, Kiryl Shutsemau wrote:
> > Well, my worry is that 2M can be a high tax for smaller machines.
> > Compile-time might be cleaner, but it has downsides.
> >
> > It is also not clear if these users actually need physical HZP or virtual
> > is enough. Virtual is cheap.
>
> The kernel config flag (default =N) literally says don't use unless you
> have plenty of memory :)
>
> So this isn't an issue.

Distros use a one-config-fits-all approach. Default N doesn't help
anything.

--
Kiryl Shutsemau / Kirill A. Shutemov
On 11.08.25 12:17, Kiryl Shutsemau wrote:
> On Mon, Aug 11, 2025 at 11:09:24AM +0100, Lorenzo Stoakes wrote:
>> [...]
>>
>> The kernel config flag (default =N) literally says don't use unless you
>> have plenty of memory :)
>>
>> So this isn't an issue.
>
> Distros use one-config-fits-all approach. Default N doesn't help
> anything.

You'd probably want a way to say "use the persistent huge zero folio if
your machine has more than X Gigs". That's all reasonable stuff that can
be had on top of this series.

--
Cheers,

David / dhildenb
On Mon, Aug 11, 2025 at 12:21:23PM +0200, David Hildenbrand wrote:
> On 11.08.25 12:17, Kiryl Shutsemau wrote:
> > [...]
> >
> > Distros use one-config-fits-all approach. Default N doesn't help
> > anything.
>
> You'd probably want a way to say "use the persistent huge zero folio if you
> machine has more than X Gigs". That's all reasonable stuff that can be had
> on top of this series.

We have a 'totalram_pages() < (512 << (20 - PAGE_SHIFT))' check in
hugepage_init(). It can [be abstracted out and] re-used.

--
Kiryl Shutsemau / Kirill A. Shutemov
On 11.08.25 12:36, Kiryl Shutsemau wrote:
> [...]
>
> We have 'totalram_pages() < (512 << (20 - PAGE_SHIFT))' check in
> hugepage_init(). It can [be abstracted out and] re-used.

I'll note that e.g., RHEL 10 already has a minimum RAM requirement of 2
GiB. I think for Fedora it's 1 GiB, with the recommendation of having at
least 2 GiB.

What might be reasonable is having a kconfig option where one (distro)
can define the minimum RAM size for the persistent huge zero folio, and
then checking against totalram_pages() during boot.

But again, I think this is something that goes on top of this series.
(it might also be interesting to allow for disabling the persistent huge
zero folio through a cmdline option)

--
Cheers,

David / dhildenb
On 8/11/25 12:43, David Hildenbrand wrote:
> [...]
>
> What might be reasonable is having a kconfig option where one (distro)
> can define the minimum RAM size for the persistent huge zero folio, and
> then checking against totalram_pages() during boot.
>
> But again, I think this is something that goes on top of this series.
> (it might also be interesting to allow for disabling the persistent huge
> zero folio through a cmdline option)

Please make this a kernel command-line option and don't rely on
heuristics. They have the nasty habit of doing exactly the wrong thing
at places where you really don't expect them to.
(Consider SoCs with a large CMA area for video grabbing or similar stuff
and very little main memory ...)

A kernel option will give distros and/or admins the flexibility they
need without having to rebuild the kernel and also not having to worry
about heuristics going wrong.

Cheers,

Hannes
--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
On Mon, Aug 11, 2025 at 11:36:29AM +0100, Kiryl Shutsemau wrote:
> We have 'totalram_pages() < (512 << (20 - PAGE_SHIFT))' check in
> hugepage_init(). It can [be abstracted out and] re-used.

We can decide to do that at a later date with whatever heuristics we
like. Let's just get this merged; it's behind a config flag, there's no
harm being done here.