The fallback code searches for the biggest buddy first in an attempt
to steal the whole block and encourage type grouping down the line.
The approach used to be this:
- Non-movable requests will split the largest buddy and steal the
remainder. This splits up contiguity, but it allows subsequent
requests of this type to fall back into adjacent space.
- Movable requests go and look for the smallest buddy instead. The
thinking is that movable requests can be compacted, so grouping is
less important than retaining contiguity.
c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
conversion") enforces freelist type hygiene, which restricts stealing
to either claiming the whole block or just taking the requested chunk;
no additional pages or buddy remainders can be stolen any more.
The patch mishandled when to switch to finding the smallest buddy in
that new reality. As a result, it may steal the exact request size,
but from the biggest buddy. This causes fracturing for no good reason.
Fix this by committing to the new behavior: either steal the whole
block, or fall back to the smallest buddy.
Remove single-page stealing from steal_suitable_fallback(). Rename it
to try_to_steal_block() to make the intentions clear. If this fails,
always fall back to the smallest buddy.
The following is from 4 runs of mmtest's thpchallenge. "Pollute" is
single page fallback, "steal" is conversion of a partially used block.
The numbers for free block conversions (omitted) are comparable.
vanilla patched
@pollute[unmovable from reclaimable]: 27 106
@pollute[unmovable from movable]: 82 46
@pollute[reclaimable from unmovable]: 256 83
@pollute[reclaimable from movable]: 46 8
@pollute[movable from unmovable]: 4841 868
@pollute[movable from reclaimable]: 5278 12568
@steal[unmovable from reclaimable]: 11 12
@steal[unmovable from movable]: 113 49
@steal[reclaimable from unmovable]: 19 34
@steal[reclaimable from movable]: 47 21
@steal[movable from unmovable]: 250 183
@steal[movable from reclaimable]: 81 93
The allocator appears to do a better job at keeping stealing and
polluting to the first fallback preference. As a result, the numbers
for "from movable" - the least preferred fallback option, and most
detrimental to compactability - are down across the board.
Fixes: c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block conversion")
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/page_alloc.c | 80 +++++++++++++++++++++----------------------------
1 file changed, 34 insertions(+), 46 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 16dfcf7ade74..9ea14ec52449 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1986,13 +1986,12 @@ static inline bool boost_watermark(struct zone *zone)
* can claim the whole pageblock for the requested migratetype. If not, we check
* the pageblock for constituent pages; if at least half of the pages are free
* or compatible, we can still claim the whole block, so pages freed in the
- * future will be put on the correct free list. Otherwise, we isolate exactly
- * the order we need from the fallback block and leave its migratetype alone.
+ * future will be put on the correct free list.
*/
static struct page *
-steal_suitable_fallback(struct zone *zone, struct page *page,
- int current_order, int order, int start_type,
- unsigned int alloc_flags, bool whole_block)
+try_to_steal_block(struct zone *zone, struct page *page,
+ int current_order, int order, int start_type,
+ unsigned int alloc_flags)
{
int free_pages, movable_pages, alike_pages;
unsigned long start_pfn;
@@ -2005,7 +2004,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
* highatomic accounting.
*/
if (is_migrate_highatomic(block_type))
- goto single_page;
+ return NULL;
/* Take ownership for orders >= pageblock_order */
if (current_order >= pageblock_order) {
@@ -2026,14 +2025,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
- /* We are not allowed to try stealing from the whole block */
- if (!whole_block)
- goto single_page;
-
/* moving whole block can fail due to zone boundary conditions */
if (!prep_move_freepages_block(zone, page, &start_pfn, &free_pages,
&movable_pages))
- goto single_page;
+ return NULL;
/*
* Determine how many pages are compatible with our allocation.
@@ -2066,9 +2061,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
return __rmqueue_smallest(zone, order, start_type);
}
-single_page:
- page_del_and_expand(zone, page, order, current_order, block_type);
- return page;
+ return NULL;
}
/*
@@ -2250,14 +2243,19 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
}
/*
- * Try finding a free buddy page on the fallback list and put it on the free
- * list of requested migratetype, possibly along with other pages from the same
- * block, depending on fragmentation avoidance heuristics. Returns true if
- * fallback was found so that __rmqueue_smallest() can grab it.
+ * Try finding a free buddy page on the fallback list.
+ *
+ * This will attempt to steal a whole pageblock for the requested type
+ * to ensure grouping of such requests in the future.
+ *
+ * If a whole block cannot be stolen, regress to __rmqueue_smallest()
+ * logic to at least break up as little contiguity as possible.
*
* The use of signed ints for order and current_order is a deliberate
* deviation from the rest of this file, to make the for loop
* condition simpler.
+ *
+ * Return the stolen page, or NULL if none can be found.
*/
static __always_inline struct page *
__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
@@ -2291,45 +2289,35 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
if (fallback_mt == -1)
continue;
- /*
- * We cannot steal all free pages from the pageblock and the
- * requested migratetype is movable. In that case it's better to
- * steal and split the smallest available page instead of the
- * largest available page, because even if the next movable
- * allocation falls back into a different pageblock than this
- * one, it won't cause permanent fragmentation.
- */
- if (!can_steal && start_migratetype == MIGRATE_MOVABLE
- && current_order > order)
- goto find_smallest;
+ if (!can_steal)
+ break;
- goto do_steal;
+ page = get_page_from_free_area(area, fallback_mt);
+ page = try_to_steal_block(zone, page, current_order, order,
+ start_migratetype, alloc_flags);
+ if (page)
+ goto got_one;
}
- return NULL;
+ if (alloc_flags & ALLOC_NOFRAGMENT)
+ return NULL;
-find_smallest:
+ /* No luck stealing blocks. Find the smallest fallback page */
for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
area = &(zone->free_area[current_order]);
fallback_mt = find_suitable_fallback(area, current_order,
start_migratetype, false, &can_steal);
- if (fallback_mt != -1)
- break;
- }
-
- /*
- * This should not happen - we already found a suitable fallback
- * when looking for the largest page.
- */
- VM_BUG_ON(current_order > MAX_PAGE_ORDER);
+ if (fallback_mt == -1)
+ continue;
-do_steal:
- page = get_page_from_free_area(area, fallback_mt);
+ page = get_page_from_free_area(area, fallback_mt);
+ page_del_and_expand(zone, page, order, current_order, fallback_mt);
+ goto got_one;
+ }
- /* take off list, maybe claim block, expand remainder */
- page = steal_suitable_fallback(zone, page, current_order, order,
- start_migratetype, alloc_flags, can_steal);
+ return NULL;
+got_one:
trace_mm_page_alloc_extfrag(page, order, current_order,
start_migratetype, fallback_mt);
--
2.48.1
On Mon, Feb 23, 2025 at 07:08:24PM -0500, Johannes Weiner wrote:
> The fallback code searches for the biggest buddy first in an attempt
> to steal the whole block and encourage type grouping down the line.
>
> The approach used to be this:
>
> - Non-movable requests will split the largest buddy and steal the
> remainder. This splits up contiguity, but it allows subsequent
> requests of this type to fall back into adjacent space.
>
> - Movable requests go and look for the smallest buddy instead. The
> thinking is that movable requests can be compacted, so grouping is
> less important than retaining contiguity.
>
> c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
> conversion") enforces freelist type hygiene, which restricts stealing
> to either claiming the whole block or just taking the requested chunk;
> no additional pages or buddy remainders can be stolen any more.
>
> The patch mishandled when to switch to finding the smallest buddy in
> that new reality. As a result, it may steal the exact request size,
> but from the biggest buddy. This causes fracturing for no good reason.
>
> Fix this by committing to the new behavior: either steal the whole
> block, or fall back to the smallest buddy.
>
> Remove single-page stealing from steal_suitable_fallback(). Rename it
> to try_to_steal_block() to make the intentions clear. If this fails,
> always fall back to the smallest buddy.
Nit - I think the try_to_steal_block() changes could be a separate
patch, the history might be easier to understand if it went:
[1/N] mm: page_alloc: don't steal single pages from biggest buddy
[2/N] mm: page_alloc: drop unused logic in steal_suitable_fallback()
(But not a big deal, it's not that hard to follow as-is).
> static __always_inline struct page *
> __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> @@ -2291,45 +2289,35 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> if (fallback_mt == -1)
> continue;
>
> - /*
> - * We cannot steal all free pages from the pageblock and the
> - * requested migratetype is movable. In that case it's better to
> - * steal and split the smallest available page instead of the
> - * largest available page, because even if the next movable
> - * allocation falls back into a different pageblock than this
> - * one, it won't cause permanent fragmentation.
> - */
> - if (!can_steal && start_migratetype == MIGRATE_MOVABLE
> - && current_order > order)
> - goto find_smallest;
> + if (!can_steal)
> + break;
>
> - goto do_steal;
> + page = get_page_from_free_area(area, fallback_mt);
> + page = try_to_steal_block(zone, page, current_order, order,
> + start_migratetype, alloc_flags);
> + if (page)
> + goto got_one;
> }
>
> - return NULL;
> + if (alloc_flags & ALLOC_NOFRAGMENT)
> + return NULL;
Is this a separate change? Is it a bug that we currently allow
stealing from a fallback type when ALLOC_NOFRAGMENT? (I wonder if
the second loop was supposed to start from min_order).
>
> -find_smallest:
> + /* No luck stealing blocks. Find the smallest fallback page */
> for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
> area = &(zone->free_area[current_order]);
> fallback_mt = find_suitable_fallback(area, current_order,
> start_migratetype, false, &can_steal);
> - if (fallback_mt != -1)
> - break;
> - }
> -
> - /*
> - * This should not happen - we already found a suitable fallback
> - * when looking for the largest page.
> - */
> - VM_BUG_ON(current_order > MAX_PAGE_ORDER);
> + if (fallback_mt == -1)
> + continue;
>
> -do_steal:
> - page = get_page_from_free_area(area, fallback_mt);
> + page = get_page_from_free_area(area, fallback_mt);
> + page_del_and_expand(zone, page, order, current_order, fallback_mt);
> + goto got_one;
> + }
>
> - /* take off list, maybe claim block, expand remainder */
> - page = steal_suitable_fallback(zone, page, current_order, order,
> - start_migratetype, alloc_flags, can_steal);
> + return NULL;
>
> +got_one:
> trace_mm_page_alloc_extfrag(page, order, current_order,
> start_migratetype, fallback_mt);
On Tue, Feb 25, 2025 at 01:34:32PM +0000, Brendan Jackman wrote:
> On Mon, Feb 23, 2025 at 07:08:24PM -0500, Johannes Weiner wrote:
> > The fallback code searches for the biggest buddy first in an attempt
> > to steal the whole block and encourage type grouping down the line.
> >
> > The approach used to be this:
> >
> > - Non-movable requests will split the largest buddy and steal the
> > remainder. This splits up contiguity, but it allows subsequent
> > requests of this type to fall back into adjacent space.
> >
> > - Movable requests go and look for the smallest buddy instead. The
> > thinking is that movable requests can be compacted, so grouping is
> > less important than retaining contiguity.
> >
> > c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
> > conversion") enforces freelist type hygiene, which restricts stealing
> > to either claiming the whole block or just taking the requested chunk;
> > no additional pages or buddy remainders can be stolen any more.
> >
> > The patch mishandled when to switch to finding the smallest buddy in
> > that new reality. As a result, it may steal the exact request size,
> > but from the biggest buddy. This causes fracturing for no good reason.
> >
> > Fix this by committing to the new behavior: either steal the whole
> > block, or fall back to the smallest buddy.
> >
> > Remove single-page stealing from steal_suitable_fallback(). Rename it
> > to try_to_steal_block() to make the intentions clear. If this fails,
> > always fall back to the smallest buddy.
>
> Nit - I think the try_to_steal_block() changes could be a separate
> patch, the history might be easier to understand if it went:
>
> [1/N] mm: page_alloc: don't steal single pages from biggest buddy
> [2/N] mm: page_alloc: drop unused logic in steal_suitable_fallback()
There are several ways in which steal_suitable_fallback() could end up
taking a single page, and I'd have to mirror all those conditions in
the caller if I wanted to prevent this. That would be too convoluted.
> > static __always_inline struct page *
> > __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> > @@ -2291,45 +2289,35 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> > if (fallback_mt == -1)
> > continue;
> >
> > - /*
> > - * We cannot steal all free pages from the pageblock and the
> > - * requested migratetype is movable. In that case it's better to
> > - * steal and split the smallest available page instead of the
> > - * largest available page, because even if the next movable
> > - * allocation falls back into a different pageblock than this
> > - * one, it won't cause permanent fragmentation.
> > - */
> > - if (!can_steal && start_migratetype == MIGRATE_MOVABLE
> > - && current_order > order)
> > - goto find_smallest;
> > + if (!can_steal)
> > + break;
> >
> > - goto do_steal;
> > + page = get_page_from_free_area(area, fallback_mt);
> > + page = try_to_steal_block(zone, page, current_order, order,
> > + start_migratetype, alloc_flags);
> > + if (page)
> > + goto got_one;
> > }
> >
> > - return NULL;
> > + if (alloc_flags & ALLOC_NOFRAGMENT)
> > + return NULL;
>
> Is this a separate change? Is it a bug that we currently allow
> stealing from a fallback type when ALLOC_NOFRAGMENT? (I wonder if
> the second loop was supposed to start from min_order).
No, I don't see how we could hit that right now. With NOFRAGMENT, the
first loop scans whole free blocks only, which, if present, are always
stealable. If there are no blocks, the loop continues through all the
fallback_mt == -1 cases and then the function returns NULL. Only without
NOFRAGMENT does it run into !can_steal buddies.
IOW, the control flow implicit in min_order, can_steal and the gotos
would make it honor NOFRAGMENT - albeit in a fairly non-obvious way.
The code is just a bit odd. While the function currently looks like
it's two loops following each other, this isn't how it's actually
executed. Instead, the first loop is the main sequence of the
function. The second loop is entered only from a jump in the main loop
under certain conditions, more akin to a function call.
I'm changing the sequence so that all types fall back to the smallest
buddy if stealing a block fails. The easiest way to express that is
removing the find_smallest jump and having the loops *actually* follow
each other as the main sequence of this function.
For that, I need to make that implicit NOFRAGMENT behavior explicit.
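To make the intended flow concrete, here is a stripped-down userspace toy of
the two-phase search - a sketch only, not the kernel code: the freelist
contents, the claimability rule and all names (free_buddies, can_claim_block,
fallback) are made up for illustration.

#include <stdbool.h>
#include <stdio.h>

#define NR_ORDERS 11

/* toy freelists: one free buddy of order 3 and one of order 5 */
static int free_buddies[NR_ORDERS] = { [3] = 1, [5] = 1 };

/* toy rule: only the largest orders span enough of a block to claim it */
static bool can_claim_block(int order)
{
	return order >= NR_ORDERS - 2;
}

static int fallback(int request_order, bool nofragment)
{
	int order;

	/* phase 1: largest first, whole-block claims only */
	for (order = NR_ORDERS - 1; order >= request_order; order--) {
		if (!free_buddies[order])
			continue;
		if (!can_claim_block(order))
			break;		/* everything below is smaller still */
		return order;		/* claim the whole block */
	}

	/* NOFRAGMENT requests never fall back to partial blocks */
	if (nofragment)
		return -1;

	/* phase 2: smallest suitable buddy */
	for (order = request_order; order < NR_ORDERS; order++)
		if (free_buddies[order])
			return order;

	return -1;
}

int main(void)
{
	printf("order-2 request -> stole from order %d\n", fallback(2, false));
	printf("order-2 NOFRAGMENT request -> %d\n", fallback(2, true));
	return 0;
}

With those toy freelists, the order-2 request is satisfied from the order-3
buddy rather than chipping a chunk out of the order-5 one, and the NOFRAGMENT
variant gives up once no whole block can be claimed - which is the behavior
the patch commits to.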
On 2/25/25 14:34, Brendan Jackman wrote:
> On Mon, Feb 23, 2025 at 07:08:24PM -0500, Johannes Weiner wrote:
>> The fallback code searches for the biggest buddy first in an attempt
>> to steal the whole block and encourage type grouping down the line.
>>
>> The approach used to be this:
>>
>> - Non-movable requests will split the largest buddy and steal the
>> remainder. This splits up contiguity, but it allows subsequent
>> requests of this type to fall back into adjacent space.
>>
>> - Movable requests go and look for the smallest buddy instead. The
>> thinking is that movable requests can be compacted, so grouping is
>> less important than retaining contiguity.
>>
>> c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
>> conversion") enforces freelist type hygiene, which restricts stealing
>> to either claiming the whole block or just taking the requested chunk;
>> no additional pages or buddy remainders can be stolen any more.
>>
>> The patch mishandled when to switch to finding the smallest buddy in
>> that new reality. As a result, it may steal the exact request size,
>> but from the biggest buddy. This causes fracturing for no good reason.
>>
>> Fix this by committing to the new behavior: either steal the whole
>> block, or fall back to the smallest buddy.
>>
>> Remove single-page stealing from steal_suitable_fallback(). Rename it
>> to try_to_steal_block() to make the intentions clear. If this fails,
>> always fall back to the smallest buddy.
>
> Nit - I think the try_to_steal_block() changes could be a separate
> patch, the history might be easier to understand if it went:
>
> [1/N] mm: page_alloc: don't steal single pages from biggest buddy
> [2/N] mm: page_alloc: drop unused logic in steal_suitable_fallback()
>
> (But not a big deal, it's not that hard to follow as-is).
>
>> static __always_inline struct page *
>> __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
>> @@ -2291,45 +2289,35 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
>> if (fallback_mt == -1)
>> continue;
>>
>> - /*
>> - * We cannot steal all free pages from the pageblock and the
>> - * requested migratetype is movable. In that case it's better to
>> - * steal and split the smallest available page instead of the
>> - * largest available page, because even if the next movable
>> - * allocation falls back into a different pageblock than this
>> - * one, it won't cause permanent fragmentation.
>> - */
>> - if (!can_steal && start_migratetype == MIGRATE_MOVABLE
>> - && current_order > order)
>> - goto find_smallest;
>> + if (!can_steal)
>> + break;
>>
>> - goto do_steal;
>> + page = get_page_from_free_area(area, fallback_mt);
>> + page = try_to_steal_block(zone, page, current_order, order,
>> + start_migratetype, alloc_flags);
>> + if (page)
>> + goto got_one;
>> }
>>
>> - return NULL;
>> + if (alloc_flags & ALLOC_NOFRAGMENT)
>> + return NULL;
>
> Is this a separate change? Is it a bug that we currently allow
> stealing from a fallback type when ALLOC_NOFRAGMENT? (I wonder if
> the second loop was supposed to start from min_order).
It's subtle but not a new condition. Previously ALLOC_NOFRAGMENT would
result in not taking the "goto find_smallest" path because it means
searching >=pageblock_order only and that would always be can_steal == true
if it found a fallback. And failure to find fallback would reach an
unconditional return NULL here. Now we fall through the search below
(instead of the goto), but ALLOC_NOFRAGMENT must not do it so it's now
explicit here.
>>
>> -find_smallest:
>> + /* No luck stealing blocks. Find the smallest fallback page */
>> for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
>> area = &(zone->free_area[current_order]);
>> fallback_mt = find_suitable_fallback(area, current_order,
>> start_migratetype, false, &can_steal);
>> - if (fallback_mt != -1)
>> - break;
>> - }
>> -
>> - /*
>> - * This should not happen - we already found a suitable fallback
>> - * when looking for the largest page.
>> - */
>> - VM_BUG_ON(current_order > MAX_PAGE_ORDER);
>> + if (fallback_mt == -1)
>> + continue;
>>
>> -do_steal:
>> - page = get_page_from_free_area(area, fallback_mt);
>> + page = get_page_from_free_area(area, fallback_mt);
>> + page_del_and_expand(zone, page, order, current_order, fallback_mt);
>> + goto got_one;
>> + }
>>
>> - /* take off list, maybe claim block, expand remainder */
>> - page = steal_suitable_fallback(zone, page, current_order, order,
>> - start_migratetype, alloc_flags, can_steal);
>> + return NULL;
>>
>> +got_one:
>> trace_mm_page_alloc_extfrag(page, order, current_order,
>> start_migratetype, fallback_mt);
On Tue, Feb 25, 2025 at 03:35:25PM +0100, Vlastimil Babka wrote:
> >> -	return NULL;
> >> +	if (alloc_flags & ALLOC_NOFRAGMENT)
> >> +		return NULL;
> >
> > Is this a separate change? Is it a bug that we currently allow
> > stealing from a fallback type when ALLOC_NOFRAGMENT? (I wonder if
> > the second loop was supposed to start from min_order).
>
> It's subtle but not a new condition. Previously ALLOC_NOFRAGMENT would
> result in not taking the "goto find_smallest" path because it means
> searching >=pageblock_order only and that would always be can_steal == true
> if it found a fallback. And failure to find fallback would reach an
> unconditional return NULL here. Now we fall through the search below
> (instead of the goto), but ALLOC_NOFRAGMENT must not do it so it's now
> explicit here.

Ahhhh yes, thank you for the help. The new explicit code is much better.

Reviewed-by: Brendan Jackman <jackmanb@google.com>
On 2/25/25 01:08, Johannes Weiner wrote:
> The fallback code searches for the biggest buddy first in an attempt
> to steal the whole block and encourage type grouping down the line.
>
> The approach used to be this:
>
> - Non-movable requests will split the largest buddy and steal the
> remainder. This splits up contiguity, but it allows subsequent
> requests of this type to fall back into adjacent space.
>
> - Movable requests go and look for the smallest buddy instead. The
> thinking is that movable requests can be compacted, so grouping is
> less important than retaining contiguity.
>
> c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
> conversion") enforces freelist type hygiene, which restricts stealing
> to either claiming the whole block or just taking the requested chunk;
> no additional pages or buddy remainders can be stolen any more.
>
> The patch mishandled when to switch to finding the smallest buddy in
> that new reality. As a result, it may steal the exact request size,
> but from the biggest buddy. This causes fracturing for no good reason.
>
> Fix this by committing to the new behavior: either steal the whole
> block, or fall back to the smallest buddy.
>
> Remove single-page stealing from steal_suitable_fallback(). Rename it
> to try_to_steal_block() to make the intentions clear. If this fails,
> always fall back to the smallest buddy.
>
> The following is from 4 runs of mmtest's thpchallenge. "Pollute" is
> single page fallback, "steal" is conversion of a partially used block.
> The numbers for free block conversions (omitted) are comparable.
>
> vanilla patched
>
> @pollute[unmovable from reclaimable]: 27 106
> @pollute[unmovable from movable]: 82 46
> @pollute[reclaimable from unmovable]: 256 83
> @pollute[reclaimable from movable]: 46 8
> @pollute[movable from unmovable]: 4841 868
> @pollute[movable from reclaimable]: 5278 12568
>
> @steal[unmovable from reclaimable]: 11 12
> @steal[unmovable from movable]: 113 49
> @steal[reclaimable from unmovable]: 19 34
> @steal[reclaimable from movable]: 47 21
> @steal[movable from unmovable]: 250 183
> @steal[movable from reclaimable]: 81 93
>
> The allocator appears to do a better job at keeping stealing and
> polluting to the first fallback preference. As a result, the numbers
> for "from movable" - the least preferred fallback option, and most
> detrimental to compactability - are down across the board.
>
> Fixes: c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block conversion")
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Thanks!