[PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Ryan Roberts 1 month ago
Hi All,

A recent change to vmalloc caused some performance benchmark regressions (see
[1]). I'm attempting to fix that (and at the same time significantly improve
beyond the baseline) by freeing a contiguous set of order-0 pages as a batch.

At the same time I observed that free_contig_range() was essentially doing the
same thing as vfree(), so I've fixed it there too.

I think I've convinced myself that free_pages_prepare() per order-0 page
followed by a single free_frozen_page_commit() or free_one_page() for the high
order block is safe/correct, but would be good if a page_alloc expert can
confirm!
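
As a sketch of the idea (kernel-style pseudocode only; the helper name is
mine, signatures are approximate, and error handling is omitted):

```c
/*
 * Illustrative sketch, not compilable kernel code: prepare each
 * order-0 page individually, then hand the whole power-of-two block
 * to the buddy in a single call, so the zone lock is taken once
 * instead of once per page.
 */
static void free_contig_pages_sketch(struct page *page, unsigned int order)
{
	unsigned long i, nr = 1UL << order;

	/* Per-page work: sanity checks, poisoning, KASAN, etc. */
	for (i = 0; i < nr; i++)
		free_pages_prepare(page + i, 0);

	/* One commit for the whole block rather than nr calls. */
	free_one_page(page_zone(page), page, page_to_pfn(page), order,
		      FPI_NONE);
}
```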

Applies against today's mm-unstable (344d3580dacd). All mm selftests run and
pass.

Thanks,
Ryan

Ryan Roberts (2):
  mm/page_alloc: Optimize free_contig_range()
  vmalloc: Optimize vfree

 include/linux/gfp.h |   1 +
 mm/page_alloc.c     | 116 +++++++++++++++++++++++++++++++++++++++-----
 mm/vmalloc.c        |  29 +++++++----
 3 files changed, 125 insertions(+), 21 deletions(-)

--
2.43.0
Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Matthew Wilcox 1 month ago
On Mon, Jan 05, 2026 at 04:17:36PM +0000, Ryan Roberts wrote:
> Hi All,
> 
> A recent change to vmalloc caused some performance benchmark regressions (see
> [1]). I'm attempting to fix that (and at the same time significantly improve

Unfortunately, there was no [1] ... I'm not sure this benchmark is
really doing anything representative.  But the performance improvement
is certainly welcome; we'd deferred work on that for later.
Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Uladzislau Rezki 1 month ago
On Tue, Jan 06, 2026 at 04:38:39AM +0000, Matthew Wilcox wrote:
> On Mon, Jan 05, 2026 at 04:17:36PM +0000, Ryan Roberts wrote:
> > Hi All,
> > 
> > A recent change to vmalloc caused some performance benchmark regressions (see
> > [1]). I'm attempting to fix that (and at the same time significantly improve
> 
> Unfortunately, there was no [1] ... I'm not sure this benchmark is
> really doing anything representative.  But the performance improvement
> is certainly welcome; we'd deferred work on that for later.
>
When the high-order preference allocation patch was discussed, I noticed
a difference in behaviour right away. Further investigation showed that
the free path also needs to be improved to fully benefit from that change.

I can document the test cases in test_vmalloc.c if it helps. I can also
add more focused benchmarks for the allocation and free paths.

--
Uladzislau Rezki
Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Ryan Roberts 1 month ago
On 06/01/2026 04:38, Matthew Wilcox wrote:
> On Mon, Jan 05, 2026 at 04:17:36PM +0000, Ryan Roberts wrote:
>> Hi All,
>>
>> A recent change to vmalloc caused some performance benchmark regressions (see
>> [1]). I'm attempting to fix that (and at the same time significantly improve
> 
> Unfortunately, there was no [1] ... 

Oops:

[1] https://lore.kernel.org/all/66919a28-bc81-49c9-b68f-dd7c73395a0d@arm.com/

(it's the same link as Closes: tag in patch 2).

> I'm not sure this benchmark is
> really doing anything representative. 

Yes, that's probably fair, but my argument is that we should either care about
the numbers or delete the tests. It seems we don't want to delete the tests.

> But the performance improvement
> is certainly welcome; we'd deferred work on that for later.

OK, let's focus on the "performance improvement" motivation instead of the
"regression fixing" part :)

Thanks,
Ryan
Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Zi Yan 1 month ago
On 5 Jan 2026, at 11:17, Ryan Roberts wrote:

> Hi All,
>
> A recent change to vmalloc caused some performance benchmark regressions (see
> [1]). I'm attempting to fix that (and at the same time significantly improve
> beyond the baseline) by freeing a contiguous set of order-0 pages as a batch.
>
> At the same time I observed that free_contig_range() was essentially doing the
> same thing as vfree() so I've fixed it there too.
>
> I think I've convinced myself that free_pages_prepare() per order-0 page
> followed by a single free_frozen_page_commit() or free_one_page() for the high
> order block is safe/correct, but would be good if a page_alloc expert can
> confirm!
>
> Applies against today's mm-unstable (344d3580dacd). All mm selftests run and
> pass.

Kefeng has a series in mm-new on using frozen pages for alloc_contig*()
which touches free_contig_range() as well. You might want to rebase on top
of that.

I like your approach of freeing multiple order-0 pages as a batch, since
they are essentially a non-compound high order page. I also pointed out
a similar optimization when reviewing Kefeng’s patchset[1] (see my comment
on __free_contig_frozen_range()).

In terms of the rebase, the conflicts should be minor for free_contig_range().
In addition, maybe your free_prepared_contig_range() could replace
__free_contig_frozen_range() in Kefeng’s version to improve performance for
both code paths.

I will take a look at the patches. Thanks.

[1] https://lore.kernel.org/linux-mm/D90F7769-F3A8-4234-A9CE-F97BC48CCACE@nvidia.com/

>
> Thanks,
> Ryan
>
> Ryan Roberts (2):
>   mm/page_alloc: Optimize free_contig_range()
>   vmalloc: Optimize vfree
>
>  include/linux/gfp.h |   1 +
>  mm/page_alloc.c     | 116 +++++++++++++++++++++++++++++++++++++++-----
>  mm/vmalloc.c        |  29 +++++++----
>  3 files changed, 125 insertions(+), 21 deletions(-)
>
> --
> 2.43.0


Best Regards,
Yan, Zi
Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by Ryan Roberts 1 month ago
On 05/01/2026 16:36, Zi Yan wrote:
> On 5 Jan 2026, at 11:17, Ryan Roberts wrote:
> 
>> Hi All,
>>
>> A recent change to vmalloc caused some performance benchmark regressions (see
>> [1]). I'm attempting to fix that (and at the same time significantly improve
>> beyond the baseline) by freeing a contiguous set of order-0 pages as a batch.
>>
>> At the same time I observed that free_contig_range() was essentially doing the
>> same thing as vfree() so I've fixed it there too.
>>
>> I think I've convinced myself that free_pages_prepare() per order-0 page
>> followed by a single free_frozen_page_commit() or free_one_page() for the high
>> order block is safe/correct, but would be good if a page_alloc expert can
>> confirm!
>>
>> Applies against today's mm-unstable (344d3580dacd). All mm selftests run and
>> pass.
> 
> Kefeng has a series in mm-new on using frozen pages for alloc_contig*()
> which touches free_contig_range() as well. You might want to rebase on top
> of that.
> 
> I like your approach of freeing multiple order-0 pages as a batch, since
> they are essentially a non-compound high order page. I also pointed out
> a similar optimization when reviewing Kefeng’s patchset[1] (see my comment
> on __free_contig_frozen_range()).
> 
> In terms of the rebase, the conflicts should be minor for free_contig_range().
> In addition, maybe your free_prepared_contig_range() could replace
> __free_contig_frozen_range() in Kefeng’s version to improve performance for
> both code paths.

OK, great! I'll hold off on the rebase until I get some code review feedback on
this version (I'd like to hear someone agree that what I'm doing is actually
sound!). Assuming feedback is positive, I'll rebase v2 onto mm-new and look at
the extra optimization opportunities as you suggest.

Thanks,
Ryan

> 
> I will take a look at the patches. Thanks.
> 
> [1] https://lore.kernel.org/linux-mm/D90F7769-F3A8-4234-A9CE-F97BC48CCACE@nvidia.com/
> 
>>
>> Thanks,
>> Ryan
>>
>> Ryan Roberts (2):
>>   mm/page_alloc: Optimize free_contig_range()
>>   vmalloc: Optimize vfree
>>
>>  include/linux/gfp.h |   1 +
>>  mm/page_alloc.c     | 116 +++++++++++++++++++++++++++++++++++++++-----
>>  mm/vmalloc.c        |  29 +++++++----
>>  3 files changed, 125 insertions(+), 21 deletions(-)
>>
>> --
>> 2.43.0
> 
> 
> Best Regards,
> Yan, Zi

Re: [PATCH v1 0/2] Free contiguous order-0 pages efficiently
Posted by David Hildenbrand (Red Hat) 1 month ago
On 1/5/26 17:17, Ryan Roberts wrote:
> Hi All,

Hi,

> 
> A recent change to vmalloc caused some performance benchmark regressions (see
> [1]). I'm attempting to fix that (and at the same time signficantly improve
> beyond the baseline) by freeing a contiguous set of order-0 pages as a batch.

I recently raised the utility of something like this for the case where we
have to split a large folio in the page cache for guestmemfd and then want
to punch a hole over the whole (original) range while making sure that the
whole thing ends up back in the buddy.

Freeing individual chunks (e.g., order-0 pages) has the problem that some
pages might get reallocated before they are merged back, fragmenting the
bigger chunk.

-- 
Cheers

David