From: Matt Fleming <mfleming@cloudflare.com>
Under memory pressure it's possible for GFP_ATOMIC order-0 allocations
to fail even though free pages are available in the highatomic reserves.
GFP_ATOMIC allocations cannot trigger unreserve_highatomic_pageblock()
since it's only run from reclaim.

Given that such allocations will pass the watermarks in
__zone_watermark_unusable_free(), it makes sense to fall back to
highatomic reserves the same way that ALLOC_OOM can.
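For illustration, here is a minimal, self-contained userspace sketch of the
fallback this change enables in rmqueue_buddy(). It is not the kernel code
itself; the flag values, struct and helper names below are made up for the
example.

/* Toy model of the highatomic fallback decision in rmqueue_buddy().
 * The ALLOC_* values and the fake freelists are illustrative only,
 * not the kernel's definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define ALLOC_OOM	0x02	/* hypothetical value */
#define ALLOC_NON_BLOCK	0x08	/* hypothetical value; set for GFP_ATOMIC-style requests */

struct zone_model {
	int free_normal;	/* pages on the regular freelists */
	int free_highatomic;	/* pages held in the MIGRATE_HIGHATOMIC reserve */
};

/* Returns true if an order-0 request can be satisfied. */
static bool rmqueue_model(struct zone_model *z, unsigned int alloc_flags)
{
	if (z->free_normal > 0) {
		z->free_normal--;
		return true;
	}
	/*
	 * Regular freelists are empty. Before the patch only ALLOC_OOM
	 * could dip into the highatomic reserve here; the patch lets
	 * ALLOC_NON_BLOCK (GFP_ATOMIC) requests do the same, so order-0
	 * atomic allocations no longer fail while reserved pages sit idle.
	 */
	if ((alloc_flags & (ALLOC_OOM | ALLOC_NON_BLOCK)) &&
	    z->free_highatomic > 0) {
		z->free_highatomic--;
		return true;
	}
	return false;	/* allocation failure; warn_alloc() in the kernel */
}

int main(void)
{
	struct zone_model z = { .free_normal = 0, .free_highatomic = 16 };

	printf("atomic order-0: %s\n",
	       rmqueue_model(&z, ALLOC_NON_BLOCK) ? "ok" : "failed");
	return 0;
}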
This fixes order-0 page allocation failures observed on Cloudflare's
fleet when handling network packets:
kswapd1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC),
nodemask=(null),cpuset=/,mems_allowed=0-7
CPU: 10 PID: 696 Comm: kswapd1 Kdump: loaded Tainted: G O 6.6.43-CUSTOM #1
Hardware name: MACHINE
Call Trace:
<IRQ>
dump_stack_lvl+0x3c/0x50
warn_alloc+0x13a/0x1c0
__alloc_pages_slowpath.constprop.0+0xc9d/0xd10
__alloc_pages+0x327/0x340
__napi_alloc_skb+0x16d/0x1f0
bnxt_rx_page_skb+0x96/0x1b0 [bnxt_en]
bnxt_rx_pkt+0x201/0x15e0 [bnxt_en]
__bnxt_poll_work+0x156/0x2b0 [bnxt_en]
bnxt_poll+0xd9/0x1c0 [bnxt_en]
__napi_poll+0x2b/0x1b0
bpf_trampoline_6442524138+0x7d/0x1000
__napi_poll+0x5/0x1b0
net_rx_action+0x342/0x740
handle_softirqs+0xcf/0x2b0
irq_exit_rcu+0x6c/0x90
sysvec_apic_timer_interrupt+0x72/0x90
</IRQ>
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-mm@kvack.org
Link: https://lore.kernel.org/all/CAGis_TWzSu=P7QJmjD58WWiu3zjMTVKSzdOwWE8ORaGytzWJwQ@mail.gmail.com/
Signed-off-by: Matt Fleming <mfleming@cloudflare.com>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..0c4c359f5ba7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2898,7 +2898,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 			 * failing a high-order atomic allocation in the
 			 * future.
 			 */
-			if (!page && (alloc_flags & ALLOC_OOM))
+			if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_NON_BLOCK)))
 				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 
 			if (!page) {
--
2.34.1
On 10/11/24 14:07, Matt Fleming wrote:
> From: Matt Fleming <mfleming@cloudflare.com>
>
> Under memory pressure it's possible for GFP_ATOMIC order-0 allocations
> to fail even though free pages are available in the highatomic reserves.
> GFP_ATOMIC allocations cannot trigger unreserve_highatomic_pageblock()
> since it's only run from reclaim.
>
> Given that such allocations will pass the watermarks in
> __zone_watermark_unusable_free(), it makes sense to fall back to
> highatomic reserves the same way that ALLOC_OOM can.
>
> This fixes order-0 page allocation failures observed on Cloudflare's
> fleet when handling network packets:
>
> [...]
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: linux-mm@kvack.org
> Link: https://lore.kernel.org/all/CAGis_TWzSu=P7QJmjD58WWiu3zjMTVKSzdOwWE8ORaGytzWJwQ@mail.gmail.com/
> Signed-off-by: Matt Fleming <mfleming@cloudflare.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

(but a comment should be updated, see below)

I think we could add Cc: stable, and I believe the commit that broke it was:

Fixes: 1d91df85f399 ("mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs")

because that is where an order > 0 condition was introduced to allow
allocation from MIGRATE_HIGHATOMIC.

Commit eb2e2b425c69 ("mm/page_alloc: explicitly record high-order atomic
allocations in alloc_flags") realized there's a gap for OOM (even if the
changelog doesn't mention it), but we should allow order-0 atomic
allocations to fall back as well.

> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8afab64814dc..0c4c359f5ba7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2898,7 +2898,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>  			 * failing a high-order atomic allocation in the
>  			 * future.
>  			 */

We should also update the comment above to reflect that this is no longer
just for the OOM case?

> -			if (!page && (alloc_flags & ALLOC_OOM))
> +			if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_NON_BLOCK)))
>  				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
>
>  			if (!page) {
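To illustrate the gap described above: an order-0 GFP_ATOMIC allocation gets
ALLOC_NON_BLOCK but not ALLOC_HIGHATOMIC (that flag is only set for order > 0),
so before this patch it had no path into the MIGRATE_HIGHATOMIC reserve. The
userspace sketch below models that flag mapping; the flag values and the
helper name are made up, this is not the kernel's gfp_to_alloc_flags().

#include <stdio.h>

#define ALLOC_NON_BLOCK   0x08	/* hypothetical values */
#define ALLOC_HIGHATOMIC  0x200

/* Simplified stand-in for how atomic allocations pick up alloc_flags. */
static unsigned int atomic_alloc_flags(unsigned int order)
{
	unsigned int flags = ALLOC_NON_BLOCK;	/* can't block, may try harder */

	if (order > 0)		/* only high-order atomics are marked highatomic */
		flags |= ALLOC_HIGHATOMIC;
	return flags;
}

int main(void)
{
	/* Order-0: ALLOC_NON_BLOCK only, hence the need for this patch. */
	printf("order 0 flags: %#x\n", atomic_alloc_flags(0));
	/* Order > 0: also ALLOC_HIGHATOMIC, can already use the reserve. */
	printf("order 3 flags: %#x\n", atomic_alloc_flags(3));
	return 0;
}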