From: Qiliang Yuan
To: Andrew Morton, Axel Rasmussen, Yuanchu Xie, David Hildenbrand, Vlastimil Babka
Cc: lance.yang@linux.dev, Qiliang Yuan, kernel test robot, Qiliang Yuan, Wei Xu,
    Lorenzo Stoakes, "Liam R. Howlett", Mike Rapoport, Suren Baghdasaryan,
    Michal Hocko, Brendan Jackman, Johannes Weiner, Zi Yan,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v8] mm/page_alloc: boost watermarks on atomic allocation failure
Date: Wed, 28 Jan 2026 04:32:53 -0500
Message-ID: <20260128093258.1740809-1-realwujing@gmail.com>

Atomic allocations (GFP_ATOMIC) are prone to failure under heavy memory
pressure because they cannot enter direct reclaim. Introduce a watermark
boost mechanism to mitigate this.

When a GFP_ATOMIC request enters the slowpath, the preferred zone's
watermark_boost is increased under zone->lock protection. This triggers
kswapd to proactively reclaim memory, creating a safety buffer for future
atomic allocations. A one-second per-zone debounce prevents excessive
boosting during traffic bursts.

This approach reuses the existing watermark_boost infrastructure with
minimal overhead and proper locking to ensure thread safety.

Allocation failure logs:

[38535644.718700]  node 0: slabs: 1031, objs: 43328, free: 0
[38535644.725059]  node 1: slabs: 339, objs: 17616, free: 317
[38535645.428345] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
[38535645.436888]   cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
[38535645.447664]   node 0: slabs: 940, objs: 40864, free: 144
[38535645.454026]   node 1: slabs: 322, objs: 19168, free: 383
[38535645.556122] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
[38535645.564576]   cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
[38535649.655523] warn_alloc: 59 callbacks suppressed
[38535649.655527] swapper/100: page allocation failure: order:0, mode:0x480020(GFP_ATOMIC), nodemask=(null)
[38535649.671692] swapper/100 cpuset=/ mems_allowed=0-1

Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-lkp/202601271341.5d24a59f-lkp@intel.com
Signed-off-by: Qiliang Yuan
Signed-off-by: Qiliang Yuan
---
v8:
- Use spin_lock_irqsave() to prevent inconsistent lock state (softirq-on
  vs in-softirq) as reported by LKP.
v7:
- Use local variable for boost_amount to improve code readability
- Add zone->lock protection in boost_zones_for_atomic()
- Add lockdep assertion in boost_watermark() to prevent locking mistakes
- Remove redundant boost call at fail label due to 1-second debounce
- Link: https://lore.kernel.org/all/20260123064231.250767-1-realwujing@gmail.com/
v6:
- Replace magic number ">> 10" with ATOMIC_BOOST_SCALE_SHIFT define
- Add documentation explaining 0.1% zone size boost rationale
v5:
- Simplify to use native boost_watermark() instead of custom logic
v4:
- Add watermark_scale_boost and gradual decay via balance_pgdat
v3:
- Move debounce timer to per-zone; optimize zone selection
v2:
- Add debounce logic and zone-proportional boosting
v1:
- Initial: boost min_free_kbytes on GFP_ATOMIC failure

 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 48 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 75ef7c9f9307..8e37e4e6765b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -882,6 +882,7 @@ struct zone {
 	/* zone watermarks, access with *_wmark_pages(zone) macros */
 	unsigned long _watermark[NR_WMARK];
 	unsigned long watermark_boost;
+	unsigned long last_boost_jiffies;
 
 	unsigned long nr_reserved_highatomic;
 	unsigned long nr_free_highatomic;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c380f063e8b7..7dc1e056a082 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -218,6 +218,13 @@ unsigned int pageblock_order __read_mostly;
 static void __free_pages_ok(struct page *page, unsigned int order,
 			    fpi_t fpi_flags);
 
+/*
+ * Boost watermarks by ~0.1% of zone size on atomic allocation pressure.
+ * This provides zone-proportional safety buffers: ~1MB per 1GB of zone size.
+ * Larger zones under GFP_ATOMIC pressure need proportionally larger reserves.
+ */
+#define ATOMIC_BOOST_SCALE_SHIFT 10
+
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
  * 1G machine -> (16M dma, 800M-16M normal, 1G-800M high)
@@ -2161,6 +2168,9 @@ bool pageblock_unisolate_and_move_free_pages(struct zone *zone, struct page *pag
 static inline bool boost_watermark(struct zone *zone)
 {
 	unsigned long max_boost;
+	unsigned long boost_amount;
+
+	lockdep_assert_held(&zone->lock);
 
 	if (!watermark_boost_factor)
 		return false;
@@ -2189,12 +2199,42 @@ static inline bool boost_watermark(struct zone *zone)
 
 	max_boost = max(pageblock_nr_pages, max_boost);
 
-	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
-			max_boost);
+	boost_amount = max(pageblock_nr_pages,
+			   zone_managed_pages(zone) >> ATOMIC_BOOST_SCALE_SHIFT);
+	zone->watermark_boost = min(zone->watermark_boost + boost_amount,
+				    max_boost);
 
 	return true;
 }
 
+static void boost_zones_for_atomic(struct alloc_context *ac, gfp_t gfp_mask)
+{
+	struct zoneref *z;
+	struct zone *zone;
+	unsigned long now = jiffies;
+	bool should_wake;
+
+	for_each_zone_zonelist(zone, z, ac->zonelist, ac->highest_zoneidx) {
+		/* Rate-limit boosts to once per second per zone */
+		if (time_after(now, zone->last_boost_jiffies + HZ)) {
+			unsigned long flags;
+
+			zone->last_boost_jiffies = now;
+
+			/* Modify watermark under lock, wake kswapd outside */
+			spin_lock_irqsave(&zone->lock, flags);
+			should_wake = boost_watermark(zone);
+			spin_unlock_irqrestore(&zone->lock, flags);
+
+			if (should_wake)
+				wakeup_kswapd(zone, gfp_mask, 0, ac->highest_zoneidx);
+
+			/* Boost only the preferred zone */
+			break;
+		}
+	}
+}
+
 /*
  * When we are falling back to another migratetype during allocation, should we
  * try to claim an entire block to satisfy further allocations, instead of
@@ -4742,6 +4782,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (page)
 		goto got_pg;
 
+	/* Boost watermarks for atomic requests entering slowpath */
+	if ((gfp_mask & GFP_ATOMIC) && order == 0)
+		boost_zones_for_atomic(ac, gfp_mask);
+
 	/*
 	 * For costly allocations, try direct compaction first, as it's likely
 	 * that we have enough base pages and don't need to reclaim. For non-
-- 
2.51.0
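
[Editorial note, not part of the patch] To put ATOMIC_BOOST_SCALE_SHIFT in
perspective, the standalone userspace sketch below reproduces only the sizing
arithmetic the patch adds (zone_managed_pages >> 10, floored at one pageblock).
The 4 KiB page size and 512-page pageblock are assumptions for illustration,
and the max_boost clamp that boost_watermark() applies via
watermark_boost_factor is deliberately omitted.

	/* Illustrative sketch of the boost sizing; NOT kernel code. */
	#include <stdio.h>

	#define ATOMIC_BOOST_SCALE_SHIFT 10	/* >> 10 is roughly 0.1% of the zone */

	int main(void)
	{
		/* Assumed values: 4 KiB pages, 2 MiB (512-page) pageblocks. */
		const unsigned long page_kib = 4;
		const unsigned long pageblock_nr_pages = 512;
		const unsigned long zone_gib[] = { 1, 4, 32 };

		for (unsigned int i = 0; i < sizeof(zone_gib) / sizeof(zone_gib[0]); i++) {
			unsigned long managed = zone_gib[i] << 18;	/* GiB -> 4 KiB pages */
			unsigned long scaled = managed >> ATOMIC_BOOST_SCALE_SHIFT;
			unsigned long boost = scaled > pageblock_nr_pages ?
					      scaled : pageblock_nr_pages;

			printf("%2lu GiB zone: boost = %lu pages (~%lu KiB)\n",
			       zone_gib[i], boost, boost * page_kib);
		}
		return 0;
	}

With these assumed values, a 1 GiB zone is floored at one pageblock (2 MiB
here), while a 4 GiB zone gets ~4 MiB and a 32 GiB zone ~32 MiB, i.e. roughly
0.1% of managed pages, matching the "~1MB per 1GB" rationale in the added
comment.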