[PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()

Muhammad Usama Anjum posted 3 patches 1 week, 4 days ago
Apply the same batch-freeing optimization from free_contig_range() to the
frozen page path. The previous __free_contig_frozen_range() freed each
order-0 page individually via free_frozen_pages(), which is slow for the
same reason the old free_contig_range() was: each page goes to the
order-0 pcp list rather than being coalesced into higher-order blocks.

Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
each order-0 page, then batch the prepared pages into the largest
possible power-of-2 aligned chunks via free_prepared_contig_range().
If free_pages_prepare() fails (e.g. HWPoison, bad page), the page is
deliberately not freed; it must not be returned to the allocator.

I've tested CMA through debugfs. The test allocates 16384 pages per
allocation for several iterations. There is a 3.5x improvement.

Before: 1406 usec per iteration
After:   402 usec per iteration

Before:

    70.89%     0.69%  cma              [kernel.kallsyms]      [.] free_contig_frozen_range
            |
            |--70.20%--free_contig_frozen_range
            |          |
            |          |--46.41%--__free_frozen_pages
            |          |          |
            |          |           --36.18%--free_frozen_page_commit
            |          |                     |
            |          |                      --29.63%--_raw_spin_unlock_irqrestore
            |          |
            |          |--8.76%--_raw_spin_trylock
            |          |
            |          |--7.03%--__preempt_count_dec_and_test
            |          |
            |          |--4.57%--_raw_spin_unlock
            |          |
            |          |--1.96%--__get_pfnblock_flags_mask.isra.0
            |          |
            |           --1.15%--free_frozen_page_commit
            |
             --0.69%--el0t_64_sync

After:

    23.57%     0.00%  cma              [kernel.kallsyms]      [.] free_contig_frozen_range
            |
            ---free_contig_frozen_range
               |
               |--20.45%--__free_contig_frozen_range
               |          |
               |          |--17.77%--free_pages_prepare
               |          |
               |           --0.72%--free_prepared_contig_range
               |                     |
               |                      --0.55%--__free_frozen_pages
               |
                --3.12%--free_pages_prepare

Suggested-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
Changes since v2:
- Rework the loop to check for memory sections, just like __free_contig_range()
- Didn't add Reviewed-by tags because of the rework
---
 mm/page_alloc.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 250cc07e547b8..26eac35ef73bd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7038,8 +7038,30 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
 
 static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
 {
-	for (; nr_pages--; pfn++)
-		free_frozen_pages(pfn_to_page(pfn), 0);
+	struct page *page = pfn_to_page(pfn);
+	struct page *start = NULL;
+	unsigned long start_sec;
+	unsigned long i;
+
+	for (i = 0; i < nr_pages; i++, page++) {
+		if (!free_pages_prepare(page, 0)) {
+			if (start) {
+				free_prepared_contig_range(start, page - start);
+				start = NULL;
+			}
+		} else if (start &&
+			   memdesc_section(page->flags) != start_sec) {
+			free_prepared_contig_range(start, page - start);
+			start = page;
+			start_sec = memdesc_section(page->flags);
+		} else if (!start) {
+			start = page;
+			start_sec = memdesc_section(page->flags);
+		}
+	}
+
+	if (start)
+		free_prepared_contig_range(start, page - start);
 }
 
 /**
-- 
2.47.3
Re: [PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Posted by Zi Yan 1 week, 4 days ago
On 24 Mar 2026, at 9:35, Muhammad Usama Anjum wrote:

<snip>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 250cc07e547b8..26eac35ef73bd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7038,8 +7038,30 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
>
>  static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
>  {
> -	for (; nr_pages--; pfn++)
> -		free_frozen_pages(pfn_to_page(pfn), 0);
> +	struct page *page = pfn_to_page(pfn);
> +	struct page *start = NULL;
> +	unsigned long start_sec;
> +	unsigned long i;
> +
> +	for (i = 0; i < nr_pages; i++, page++) {
> +		if (!free_pages_prepare(page, 0)) {
> +			if (start) {
> +				free_prepared_contig_range(start, page - start);
> +				start = NULL;
> +			}
> +		} else if (start &&
> +			   memdesc_section(page->flags) != start_sec) {
> +			free_prepared_contig_range(start, page - start);
> +			start = page;
> +			start_sec = memdesc_section(page->flags);
> +		} else if (!start) {
> +			start = page;
> +			start_sec = memdesc_section(page->flags);
> +		}
> +	}
> +
> +	if (start)
> +		free_prepared_contig_range(start, page - start);
>  }

This looks almost the same as __free_contig_range().

Two approaches to deduplicate the code:

1. __free_contig_range() first does put_page_testzero()
on all pages and calls __free_contig_frozen_range()
on the range; __free_contig_frozen_range() will need
to skip the pages that are not frozen. It is not ideal.

2. add a helper function
__free_contig_range_common(unsigned long pfn,
unsigned long nr_pages, bool is_page_frozen),
and
a. call __free_contig_range_common(..., /*is_page_frozen=*/ false)
in __free_contig_range(),
b. call __free_contig_range_common(..., /*is_page_frozen=*/ true)
in __free_contig_frozen_range().

But I would like to hear others’ opinions.



Best Regards,
Yan, Zi
Re: [PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Posted by David Hildenbrand (Arm) 1 week, 4 days ago
On 3/24/26 16:06, Zi Yan wrote:
> On 24 Mar 2026, at 9:35, Muhammad Usama Anjum wrote:
> 
<snip>
> 
> This looks almost the same as __free_contig_range().
> 
> Two approaches to deduplicate the code:
> 
> 1. __free_contig_range() first does put_page_testzero()
> on all pages and call __free_contig_frozen_range()
> on the range, __free_contig_frozen_range() will need
> to skip not frozen pages. It is not ideal.

Right, let's not do that.

> 
> 2. add a helper function
> __free_contig_range_common(unsigned long pfn,
> unsigned long nr_pages, bool is_page_frozen),
> and
> a. call __free_contig_range_common(..., /*is_page_frozen=*/ false)
> in __free_contig_range(),
> b. __free_contig_range_common(..., /*is_page_frozen=*/ true)
> in __free_contig_frozen_range().
> 

As long as it's an internal helper, that makes sense. I wouldn't want to
expose the bool in the external interface.

Thanks!

-- 
Cheers,

David
Re: [PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Posted by Muhammad Usama Anjum 1 week, 3 days ago
<snip>
>>
>> This looks almost the same as __free_contig_range().
>>
>> Two approaches to deduplicate the code:
>>
>> 1. __free_contig_range() first does put_page_testzero()
>> on all pages and call __free_contig_frozen_range()
>> on the range, __free_contig_frozen_range() will need
>> to skip not frozen pages. It is not ideal.
> 
> Right, let's not do that.
> 
>>
>> 2. add a helper function
>> __free_contig_range_common(unsigned long pfn,
>> unsigned long nr_pages, bool is_page_frozen),
>> and
>> a. call __free_contig_range_common(..., /*is_page_frozen=*/ false)
>> in __free_contig_range(),
>> b. __free_contig_range_common(..., /*is_page_frozen=*/ true)
>> in __free_contig_frozen_range().
>>
I'm adding the common version. After the change, I'm thinking about the current functions and
whether they can be simplified further:

free_contig_range()	- only calls __free_contig_range()
			- only visible with CONFIG_CONTIG_ALLOC
			- Exported as well

__free_contig_range()	- only calls __free_contig_range_common(is_frozen=false)
			- visible even without CONFIG_CONTIG_ALLOC as vfree() uses it
			- Exported as well (there is no user of this export at this time)

__free_contig_frozen_range()	- only calls __free_contig_range_common(is_frozen=true)

__free_contig_range_common()	- it does the actual work

vfree()->free_pages_bulk()	- calls __free_contig_range()

Should we remove both __free_contig_range() and __free_contig_frozen_range() entirely and
just use __free_contig_range_common() everywhere?
> 
> As long as it's an internal helper, that makes sense. I wouldn't want to
> expose the bool in the external interface.
> 
> Thanks!
> 

Thanks,
Usama
Re: [PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Posted by Zi Yan 1 week, 3 days ago
On 25 Mar 2026, at 12:03, Muhammad Usama Anjum wrote:

> <snip>
> I'm adding the common version. After the change, I'm thinking about the current functions and if
> they can be simplified further:
>
> free_contig_range()	- only calls __free_contig_range()
> 			- only visible with CONFIG_CONTIG_ALLOC
> 			- Exported as well
>
> __free_contig_range()	- only calls __free_contig_range_common(is_frozen=false)
> 			- visible even without CONFIG_CONTIG_ALLOC as vfree() uses it
> 			- Exported as well (there is no user of this export at this time)
>
> __free_contig_frozen_range()	- only calls __free_contig_range_common(is_frozen=true)
>
> __free_contig_range_common()	- it does the actual work
>
> vfree()->free_pages_bulk()	- calls __free_contig_range()
>
> Should we remove __free_contig_range() and __free_contig_frozen_range() both entirely and
> just use __free_contig_range_common() everywhere?

The idea of adding __free_contig_range_common() is to deduplicate code.
__free_contig_range() and __free_contig_frozen_range() improve readability,
since people do not need to see the /* is_page_frozen= */ true/false
at each __free_contig_range_common() call site.

>>
>> As long as it's an internal helper, that makes sense. I wouldn't want to
>> expose the bool in the external interface.
>>

Like what David said above.


Best Regards,
Yan, Zi