Apply the same batch-freeing optimization from free_contig_range() to the
frozen page path. The previous __free_contig_frozen_range() freed each
order-0 page individually via free_frozen_pages(), which is slow for the
same reason the old free_contig_range() was: each page goes to the
order-0 pcp list rather than being coalesced into higher-order blocks.

Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
each order-0 page, then batch the prepared pages into the largest
possible power-of-2 aligned chunks via free_prepared_contig_range().
If free_pages_prepare() fails (e.g. HWPoison, bad page), the page is
deliberately not freed; it should not be returned to the allocator.
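
The idea behind "largest possible power-of-2 aligned chunks" is the usual
pfn-alignment walk. A minimal sketch, assuming free_prepared_contig_range()
splits the range roughly like this (free_in_aligned_blocks() and
__free_one_block() are illustrative names only, not functions added by this
patch):

/*
 * Illustrative sketch: carve a prepared, physically contiguous range into
 * the largest power-of-2 aligned blocks. __free_one_block() stands in for
 * handing one prepared block of the given order back to the buddy allocator.
 */
static void free_in_aligned_blocks(unsigned long pfn, unsigned long nr_pages)
{
	while (nr_pages) {
		/* largest order the start pfn's alignment allows ... */
		unsigned int order = pfn ? min_t(unsigned int, __ffs(pfn), MAX_PAGE_ORDER)
					 : MAX_PAGE_ORDER;

		/* ... capped so the block still fits in what is left */
		while ((1UL << order) > nr_pages)
			order--;

		__free_one_block(pfn_to_page(pfn), order);	/* hypothetical helper */

		pfn += 1UL << order;
		nr_pages -= 1UL << order;
	}
}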

I've tested CMA through debugfs. The test allocates 16384 pages per
allocation for several iterations. There is a 3.5x improvement.

Before: 1406 usec per iteration
After: 402 usec per iteration

Before:

70.89% 0.69% cma [kernel.kallsyms] [.] free_contig_frozen_range
|
|--70.20%--free_contig_frozen_range
| |
| |--46.41%--__free_frozen_pages
| | |
| | --36.18%--free_frozen_page_commit
| | |
| | --29.63%--_raw_spin_unlock_irqrestore
| |
| |--8.76%--_raw_spin_trylock
| |
| |--7.03%--__preempt_count_dec_and_test
| |
| |--4.57%--_raw_spin_unlock
| |
| |--1.96%--__get_pfnblock_flags_mask.isra.0
| |
| --1.15%--free_frozen_page_commit
|
--0.69%--el0t_64_sync

After:

23.57% 0.00% cma [kernel.kallsyms] [.] free_contig_frozen_range
|
---free_contig_frozen_range
|
|--20.45%--__free_contig_frozen_range
| |
| |--17.77%--free_pages_prepare
| |
| --0.72%--free_prepared_contig_range
| |
| --0.55%--__free_frozen_pages
|
--3.12%--free_pages_prepare

Suggested-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
mm/page_alloc.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a9430f720579..2e99fa85cdc8e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7020,8 +7020,22 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
 
 static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
 {
-	for (; nr_pages--; pfn++)
-		free_frozen_pages(pfn_to_page(pfn), 0);
+	struct page *page = pfn_to_page(pfn);
+	struct page *start = NULL;
+	unsigned long i;
+
+	for (i = 0; i < nr_pages; i++, page++) {
+		if (free_pages_prepare(page, 0)) {
+			if (!start)
+				start = page;
+		} else if (start) {
+			free_prepared_contig_range(start, page - start);
+			start = NULL;
+		}
+	}
+
+	if (start)
+		free_prepared_contig_range(start, page - start);
 }
 
 /**
--
2.47.3

On 16 Mar 2026, at 7:31, Muhammad Usama Anjum wrote:
> Apply the same batch-freeing optimization from free_contig_range() to the
> frozen page path. The previous __free_contig_frozen_range() freed each
> order-0 page individually via free_frozen_pages(), which is slow for the
> same reason the old free_contig_range() was: each page goes to the
> order-0 pcp list rather than being coalesced into higher-order blocks.
>
> Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
> each order-0 page, then batch the prepared pages into the largest
> possible power-of-2 aligned chunks via free_prepared_contig_range().
> If free_pages_prepare() fails (e.g. HWPoison, bad page) the page is
> deliberately not freed; it should not be returned to the allocator.
>
> I've tested CMA through debugfs. The test allocates 16384 pages per
> allocation for several iterations. There is 3.5x improvement.
>
> Before: 1406 usec per iteration
> After: 402 usec per iteration
>
> Before:
>
> 70.89% 0.69% cma [kernel.kallsyms] [.] free_contig_frozen_range
> |
> |--70.20%--free_contig_frozen_range
> | |
> | |--46.41%--__free_frozen_pages
> | | |
> | | --36.18%--free_frozen_page_commit
> | | |
> | | --29.63%--_raw_spin_unlock_irqrestore
> | |
> | |--8.76%--_raw_spin_trylock
> | |
> | |--7.03%--__preempt_count_dec_and_test
> | |
> | |--4.57%--_raw_spin_unlock
> | |
> | |--1.96%--__get_pfnblock_flags_mask.isra.0
> | |
> | --1.15%--free_frozen_page_commit
> |
> --0.69%--el0t_64_sync
>
> After:
>
> 23.57% 0.00% cma [kernel.kallsyms] [.] free_contig_frozen_range
> |
> ---free_contig_frozen_range
> |
> |--20.45%--__free_contig_frozen_range
> | |
> | |--17.77%--free_pages_prepare
> | |
> | --0.72%--free_prepared_contig_range
> | |
> | --0.55%--__free_frozen_pages
> |
> --3.12%--free_pages_prepare
>
> Suggested-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> ---
> mm/page_alloc.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
LGTM.

Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi

On 3/16/26 12:31, Muhammad Usama Anjum wrote:
> Apply the same batch-freeing optimization from free_contig_range() to the
> frozen page path. The previous __free_contig_frozen_range() freed each
> order-0 page individually via free_frozen_pages(), which is slow for the
> same reason the old free_contig_range() was: each page goes to the
> order-0 pcp list rather than being coalesced into higher-order blocks.
>
> Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
> each order-0 page, then batch the prepared pages into the largest
> possible power-of-2 aligned chunks via free_prepared_contig_range().
> If free_pages_prepare() fails (e.g. HWPoison, bad page) the page is
> deliberately not freed; it should not be returned to the allocator.
>
> I've tested CMA through debugfs. The test allocates 16384 pages per
> allocation for several iterations. There is 3.5x improvement.
>
> Before: 1406 usec per iteration
> After: 402 usec per iteration
>
> Before:
>
> 70.89% 0.69% cma [kernel.kallsyms] [.] free_contig_frozen_range
> |
> |--70.20%--free_contig_frozen_range
> | |
> | |--46.41%--__free_frozen_pages
> | | |
> | | --36.18%--free_frozen_page_commit
> | | |
> | | --29.63%--_raw_spin_unlock_irqrestore
> | |
> | |--8.76%--_raw_spin_trylock
> | |
> | |--7.03%--__preempt_count_dec_and_test
> | |
> | |--4.57%--_raw_spin_unlock
> | |
> | |--1.96%--__get_pfnblock_flags_mask.isra.0
> | |
> | --1.15%--free_frozen_page_commit
> |
> --0.69%--el0t_64_sync
>
> After:
>
> 23.57% 0.00% cma [kernel.kallsyms] [.] free_contig_frozen_range
> |
> ---free_contig_frozen_range
> |
> |--20.45%--__free_contig_frozen_range
> | |
> | |--17.77%--free_pages_prepare
> | |
> | --0.72%--free_prepared_contig_range
> | |
> | --0.55%--__free_frozen_pages
> |
> --3.12%--free_pages_prepare
>
> Suggested-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>

LGTM.

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

> ---
> mm/page_alloc.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6a9430f720579..2e99fa85cdc8e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7020,8 +7020,22 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
>
> static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
> {
> - for (; nr_pages--; pfn++)
> - free_frozen_pages(pfn_to_page(pfn), 0);
> + struct page *page = pfn_to_page(pfn);
> + struct page *start = NULL;
> + unsigned long i;
> +
> + for (i = 0; i < nr_pages; i++, page++) {
> + if (free_pages_prepare(page, 0)) {
> + if (!start)
> + start = page;
> + } else if (start) {
> + free_prepared_contig_range(start, page - start);
> + start = NULL;
> + }
> + }
> +
> + if (start)
> + free_prepared_contig_range(start, page - start);
> }
>
> /**