For allocations that are of costly order and __GFP_NORETRY (and can
perform compaction) we attempt direct compaction first. If that fails,
we continue with a single round of direct reclaim+compaction (as for
other __GFP_NORETRY allocations, except the compaction is of lower
priority), with two exceptions that fail immediately:
- __GFP_THISNODE is specified, to prevent zone_reclaim_mode-like
behavior for e.g. THP page faults
- compaction failed because it was deferred (i.e. has been failing
recently so further attempts are not done for a while) or skipped,
which means there are insufficient free base pages to defragment to
begin with
Upon closer inspection, the second condition relies on somewhat flawed
reasoning. If there are not enough base pages, we fail even though
reclaim could create them. Yet when there are enough base pages and
compaction has already run and failed, we proceed and hope that reclaim
and the subsequent compaction attempt will succeed. It's unclear why
they should, or whether the attempt will be as inexpensive as intended.
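For reference, the flow being changed is roughly the following (a condensed
sketch of the code removed by the hunk below, not the literal code):

        page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
                                            INIT_COMPACT_PRIORITY,
                                            &compact_result);
        if (page)
                goto got_pg;

        if (costly_order && (gfp_mask & __GFP_NORETRY)) {
                /* second exception above: compaction skipped or deferred */
                if (compact_result == COMPACT_SKIPPED ||
                    compact_result == COMPACT_DEFERRED)
                        goto nopage;
                /* first exception above: don't reclaim a single node for THP */
                if (gfp_mask & __GFP_THISNODE)
                        goto nopage;
                /* otherwise keep the lower (async) compaction priority */
                compact_priority = INIT_COMPACT_PRIORITY;
        }
        /* fall through to a single round of direct reclaim + compaction */
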
It might therefore make more sense to just fail unconditionally after
the initial compaction attempt, so do that instead. Costly allocations
that do want the reclaim/compaction to happen at least once can omit
__GFP_NORETRY, or even specify __GFP_RETRY_MAYFAIL for more than one
attempt.
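For illustration (a hypothetical caller, not part of this patch), a costly
order-4 request could then pick between:

        /* opportunistic: compaction only, fail fast, caller handles failure */
        page = alloc_pages(GFP_KERNEL | __GFP_NORETRY, 4);

        /* willing to pay for at least one round of reclaim/compaction */
        page = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL, 4);
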
There is a slight potential unfairness in that costly __GFP_NORETRY
allocations that can't perform direct compaction (i.e. lack __GFP_IO)
will still be allowed to direct reclaim, while those that can direct
compact will now never attempt direct reclaim. However, in cases of
memory pressure causing compaction to be skipped due to insufficient
base pages, direct reclaim was already not done before, so there should
be no functional regressions from this change.
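For context, the gating in __alloc_pages_slowpath() is roughly the
following (simplified):

        can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
        /* direct compaction additionally requires __GFP_IO */
        can_compact = gfp_compaction_allowed(gfp_mask);

and the upfront compaction-only attempt requires both to be true.
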
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
include/linux/gfp_types.h | 2 ++
mm/page_alloc.c | 47 +++--------------------------------------------
2 files changed, 5 insertions(+), 44 deletions(-)
diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 3de43b12209e..051311fdbdb1 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -218,6 +218,8 @@ enum {
* caller must handle the failure which is quite likely to happen under
* heavy memory pressure. The flag is suitable when failure can easily be
* handled at small cost, such as reduced throughput.
+ * For costly orders, only memory compaction can be attempted with no reclaim
+ * under some conditions.
*
* %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
* procedures that have previously failed if there is some indication
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e6fd1213328b..2671cbbd6375 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4763,52 +4763,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
goto got_pg;
/*
- * Checks for costly allocations with __GFP_NORETRY, which
- * includes some THP page fault allocations
+ * Compaction didn't succeed and we were told not to try hard,
+ * so fail now.
*/
if (costly_order && (gfp_mask & __GFP_NORETRY)) {
- /*
- * If allocating entire pageblock(s) and compaction
- * failed because all zones are below low watermarks
- * or is prohibited because it recently failed at this
- * order, fail immediately unless the allocator has
- * requested compaction and reclaim retry.
- *
- * Reclaim is
- * - potentially very expensive because zones are far
- * below their low watermarks or this is part of very
- * bursty high order allocations,
- * - not guaranteed to help because isolate_freepages()
- * may not iterate over freed pages as part of its
- * linear scan, and
- * - unlikely to make entire pageblocks free on its
- * own.
- */
- if (compact_result == COMPACT_SKIPPED ||
- compact_result == COMPACT_DEFERRED)
- goto nopage;
-
- /*
- * THP page faults may attempt local node only first,
- * but are then allowed to only compact, not reclaim,
- * see alloc_pages_mpol()
- *
- * compaction can fail for other reasons than those
- * checked above and we don't want such THP allocations
- * to put reclaim pressure on a single node in a
- * situation where other nodes might have plenty of
- * available memory
- */
- if (gfp_mask & __GFP_THISNODE)
- goto nopage;
-
- /*
- * Looks like reclaim/compaction is worth trying, but
- * sync compaction could be very expensive, so keep
- * using async compaction.
- */
- compact_priority = INIT_COMPACT_PRIORITY;
- }
+ goto nopage;
}
retry:
--
2.52.0
On Tue, Dec 16, 2025 at 04:54:22PM +0100, Vlastimil Babka wrote:
> It might therefore make more sense to just fail unconditionally after
> the initial compaction attempt, so do that instead. Costly allocations
> that do want the reclaim/compaction to happen at least once can omit
> __GFP_NORETRY, or even specify __GFP_RETRY_MAYFAIL for more than one
> attempt.
>
> There is a slight potential unfairness in that costly __GFP_NORETRY
> allocations that can't perform direct compaction (i.e. lack __GFP_IO)
> will still be allowed to direct reclaim, while those that can direct
> compact will now never attempt direct reclaim. However, in cases of
> memory pressure causing compaction to be skipped due to insufficient
> base pages, direct reclaim was already not done before, so there should
> be no functional regressions from this change.
Hm, kind of. There could be enough basepages for compaction_suitable()
but compaction odds are still higher with more free pages. So there
might be cases it regresses.
__GFP_NORETRY semantics say it'll try reclaim at least once. We should
be able to keep that and still simplify, no?
> if (costly_order && (gfp_mask & __GFP_NORETRY)) {
> - if (gfp_mask & __GFP_THISNODE)
> - goto nopage;
> + goto nopage;
IOW, maybe directly select for the NUMA-THP special case here?
/* Optimistic node-local huge page - only compact once */
if (costly_order &&
    ((gfp_mask & (__GFP_NORETRY|__GFP_THISNODE)) ==
     (__GFP_NORETRY|__GFP_THISNODE)))
        goto nopage;
and then let other __GFP_NORETRY fall through.
On 12/16/25 21:32, Johannes Weiner wrote:
> On Tue, Dec 16, 2025 at 04:54:22PM +0100, Vlastimil Babka wrote:
>> It might therefore make more sense to just fail unconditionally after
>> the initial compaction attempt, so do that instead. Costly allocations
>> that do want the reclaim/compaction to happen at least once can omit
>> __GFP_NORETRY, or even specify __GFP_RETRY_MAYFAIL for more than one
>> attempt.
>>
>> There is a slight potential unfairness in that costly __GFP_NORETRY
>> allocations that can't perform direct compaction (i.e. lack __GFP_IO)
>> will still be allowed to direct reclaim, while those that can direct
>> compact will now never attempt direct reclaim. However, in cases of
>> memory pressure causing compaction to be skipped due to insufficient
>> base pages, direct reclaim was already not done before, so there should
>> be no functional regressions from this change.
>
> Hm, kind of. There could be enough basepages for compaction_suitable()
> but compaction odds are still higher with more free pages. So there
> might be cases it regresses.
>
> __GFP_NORETRY semantics say it'll try reclaim at least once. We should
> be able to keep that and still simplify, no?
>
>> if (costly_order && (gfp_mask & __GFP_NORETRY)) {
>> - if (gfp_mask & __GFP_THISNODE)
>> - goto nopage;
>> + goto nopage;
>
> IOW, maybe directly select for the NUMA-THP special case here?
>
> /* Optimistic node-local huge page - only compact once */
> if (costly_order &&
> ((gfp_mask & (__GFP_NORETRY|__GFP_THISNODE)) ==
> (__GFP_NORETRY|__GFP_THISNODE)))
> goto nopage;
>
> and then let other __GFP_NORETRY fall through.
I did consider it as an alternative when realizing the potential unfairness
mentioned above, but then went with the simpler code option.
With your suggestion we keep the THP-specific check but at least remove the
arguably illogical compaction feedback.
On Wed, Dec 17, 2025 at 09:46:34AM +0100, Vlastimil Babka wrote:
> On 12/16/25 21:32, Johannes Weiner wrote:
> > On Tue, Dec 16, 2025 at 04:54:22PM +0100, Vlastimil Babka wrote:
> >> It might therefore make more sense to just fail unconditionally after
> >> the initial compaction attempt, so do that instead. Costly allocations
> >> that do want the reclaim/compaction to happen at least once can omit
> >> __GFP_NORETRY, or even specify __GFP_RETRY_MAYFAIL for more than one
> >> attempt.
> >>
> >> There is a slight potential unfairness in that costly __GFP_NORETRY
> >> allocations that can't perform direct compaction (i.e. lack __GFP_IO)
> >> will still be allowed to direct reclaim, while those that can direct
> >> compact will now never attempt direct reclaim. However, in cases of
> >> memory pressure causing compaction to be skipped due to insufficient
> >> base pages, direct reclaim was already not done before, so there should
> >> be no functional regressions from this change.
> >
> > Hm, kind of. There could be enough basepages for compaction_suitable()
> > but compaction odds are still higher with more free pages. So there
> > might be cases it regresses.
> >
> > __GFP_NORETRY semantics say it'll try reclaim at least once. We should
> > be able to keep that and still simplify, no?
> >
> >> if (costly_order && (gfp_mask & __GFP_NORETRY)) {
> >> - if (gfp_mask & __GFP_THISNODE)
> >> - goto nopage;
> >> + goto nopage;
> >
> > IOW, maybe directly select for the NUMA-THP special case here?
> >
> > /* Optimistic node-local huge page - only compact once */
> > if (costly_order &&
> > ((gfp_mask & (__GFP_NORETRY|__GFP_THISNODE)) ==
> > (__GFP_NORETRY|__GFP_THISNODE)))
> > goto nopage;
> >
> > and then let other __GFP_NORETRY fall through.
>
> I did consider it as an alternative when realizing the potential unfairness
> mentioned above, but then went with the simpler code option.
>
> With your suggestion we keep the THP-specific check but at least remove the
> arguably illogical compaction feedback.
Yes, I'm in favor of removing those either way.
Reclaim makes its own decisions around costly orders. For example, it
targets a higher number of free pages through compaction_ready() than
where compaction would return SKIPPED, to account for concurrency. I
don't think the allocator should have conflicting opinions.
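Roughly, for a costly order (a sketch with made-up helper names, condensed
from compaction_suitable() and compaction_ready(); the real code has more
cases):

        static bool sketch_compaction_skipped(struct zone *zone,
                                              unsigned int order,
                                              unsigned long free)
        {
                /* below this, compaction_suitable() fails -> COMPACT_SKIPPED */
                return free < low_wmark_pages(zone) + compact_gap(order);
        }

        static bool sketch_compaction_ready(struct zone *zone,
                                            unsigned int order,
                                            unsigned long free)
        {
                /* reclaim's compaction_ready() reclaims up to this higher mark */
                return free >= high_wmark_pages(zone) + compact_gap(order);
        }

IOW reclaim deliberately overshoots the point where compaction stops being
skipped.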
Regarding __GFP_NORETRY: I think it would just be a chance to simplify
the mental model around it again. If somebody does a NORETRY request
when memory is full of stale page cache, I think it's reasonable to
expect at least one shot at dropping some cache to make it happen.
Shortcutting directly to compaction is a good optimization when we
suspect it could succeed without requiring reclaim. But I'm not sure
it's reasonable to ONLY do that and give up.
Btw, I do wonder why that up-front compaction run is so explicit, when
we have
__alloc_pages_direct_reclaim()
__alloc_pages_direct_compact()
calls following below. Couldn't we check for conditions upfront and
set a flag to skip reclaim initially? Then handle priority adjustments
in the retry conditions? IOW, something like:
unsigned long did_some_progress = 0;

if (can_compact && costly_order)
        skip_reclaim = true;
if (can_compact && order > 0 && ac->migratetype != MIGRATE_MOVABLE)
        skip_reclaim = true;
if (gfp_thisnode_noretry(gfp_mask))
        skip_reclaim = true;

retry:
page = get_page_from_freelist(..., alloc_flags, ...);
if (page)
        goto got_pg;

if (!skip_reclaim) {
        page = __alloc_pages_direct_reclaim(..., &did_some_progress);
        if (page)
                goto got_pg;
}

page = __alloc_pages_direct_compact(...);
if (page)
        goto got_pg;

if (should_loop()) {
        skip_reclaim = false;
        compact_priority = ...;
        goto retry;
}
That would naturally get rid of the gfp_pfmemalloc_allowed() branch
for the upfront check as well, because the ALLOC_NO_WATERMARKS attempt
happens before we do the reclaim/compaction calls.
On Tue 16-12-25 16:54:22, Vlastimil Babka wrote:
> For allocations that are of costly order and __GFP_NORETRY (and can
> perform compaction) we attempt direct compaction first. If that fails,
> we continue with a single round of direct reclaim+compaction (as for
> other __GFP_NORETRY allocations, except the compaction is of lower
> priority), with two exceptions that fail immediately:
>
> - __GFP_THISNODE is specified, to prevent zone_reclaim_mode-like
> behavior for e.g. THP page faults
>
> - compaction failed because it was deferred (i.e. has been failing
> recently so further attempts are not done for a while) or skipped,
> which means there are insufficient free base pages to defragment to
> begin with
>
> Upon closer inspection, the second condition relies on somewhat flawed
> reasoning. If there are not enough base pages, we fail even though
> reclaim could create them. Yet when there are enough base pages and
> compaction has already run and failed, we proceed and hope that reclaim
> and the subsequent compaction attempt will succeed. It's unclear why
> they should, or whether the attempt will be as inexpensive as intended.
>
> It might therefore make more sense to just fail unconditionally after
> the initial compaction attempt, so do that instead. Costly allocations
> that do want the reclaim/compaction to happen at least once can omit
> __GFP_NORETRY, or even specify __GFP_RETRY_MAYFAIL for more than one
> attempt.
>
> There is a slight potential unfairness in that costly __GFP_NORETRY
> allocations that can't perform direct compaction (i.e. lack __GFP_IO)
> will still be allowed to direct reclaim, while those that can direct
> compact will now never attempt direct reclaim. However, in cases of
> memory pressure causing compaction to be skipped due to insufficient
> base pages, direct reclaim was already not done before, so there should
> be no functional regressions from this change.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
I like this because, quite honestly, us trying to over-optimize for THP
(which seems to be the only costly allocation with GFP_NORETRY) has
turned out quite tricky and hard to reason about. So simplifying this
wrt. the compaction feedback makes a lot of sense. Let's see where we
get from here.
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
> ---
> include/linux/gfp_types.h | 2 ++
> mm/page_alloc.c | 47 +++--------------------------------------------
> 2 files changed, 5 insertions(+), 44 deletions(-)
>
> diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
> index 3de43b12209e..051311fdbdb1 100644
> --- a/include/linux/gfp_types.h
> +++ b/include/linux/gfp_types.h
> @@ -218,6 +218,8 @@ enum {
> * caller must handle the failure which is quite likely to happen under
> * heavy memory pressure. The flag is suitable when failure can easily be
> * handled at small cost, such as reduced throughput.
> + * For costly orders, only memory compaction can be attempted with no reclaim
> + * under some conditions.
> *
> * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
> * procedures that have previously failed if there is some indication
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e6fd1213328b..2671cbbd6375 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4763,52 +4763,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> goto got_pg;
>
> /*
> - * Checks for costly allocations with __GFP_NORETRY, which
> - * includes some THP page fault allocations
> + * Compaction didn't succeed and we were told not to try hard,
> + * so fail now.
> */
> if (costly_order && (gfp_mask & __GFP_NORETRY)) {
> - /*
> - * If allocating entire pageblock(s) and compaction
> - * failed because all zones are below low watermarks
> - * or is prohibited because it recently failed at this
> - * order, fail immediately unless the allocator has
> - * requested compaction and reclaim retry.
> - *
> - * Reclaim is
> - * - potentially very expensive because zones are far
> - * below their low watermarks or this is part of very
> - * bursty high order allocations,
> - * - not guaranteed to help because isolate_freepages()
> - * may not iterate over freed pages as part of its
> - * linear scan, and
> - * - unlikely to make entire pageblocks free on its
> - * own.
> - */
> - if (compact_result == COMPACT_SKIPPED ||
> - compact_result == COMPACT_DEFERRED)
> - goto nopage;
> -
> - /*
> - * THP page faults may attempt local node only first,
> - * but are then allowed to only compact, not reclaim,
> - * see alloc_pages_mpol()
> - *
> - * compaction can fail for other reasons than those
> - * checked above and we don't want such THP allocations
> - * to put reclaim pressure on a single node in a
> - * situation where other nodes might have plenty of
> - * available memory
> - */
> - if (gfp_mask & __GFP_THISNODE)
> - goto nopage;
> -
> - /*
> - * Looks like reclaim/compaction is worth trying, but
> - * sync compaction could be very expensive, so keep
> - * using async compaction.
> - */
> - compact_priority = INIT_COMPACT_PRIORITY;
> - }
> + goto nopage;
> }
>
> retry:
>
> --
> 2.52.0
--
Michal Hocko
SUSE Labs