From: Qiliang Yuan <realwujing@gmail.com>
To: akpm@linux-foundation.org
Cc: david@kernel.org, mhocko@suse.com, vbabka@suse.cz, willy@infradead.org,
	lance.yang@linux.dev, hannes@cmpxchg.org, surenb@google.com,
	jackmanb@google.com, ziy@nvidia.com, weixugc@google.com, rppt@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	edumazet@google.com, jis1@chinatelecom.cn, wangh13@chinatelecom.cn,
	liyi1@chinatelecom.cn, sunshx@chinatelecom.cn, zhangzq20@chinatelecom.cn,
	zhangjn11@chinatelecom.cn, Qiliang Yuan, Qiliang Yuan
Subject: [PATCH] mm/page_alloc: boost watermarks on atomic allocation failure
Date: Wed, 21 Jan 2026 21:00:35 -0500
Message-ID: <20260122020035.227449-1-realwujing@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260121125603.47b204cc8fbe9466b25cce16@linux-foundation.org>
References: <20260121125603.47b204cc8fbe9466b25cce16@linux-foundation.org>

Atomic allocations (GFP_ATOMIC) are prone to failure under heavy memory
pressure because they cannot enter direct reclaim. Introduce a "Soft
Boost" mechanism to mitigate this.

When a GFP_ATOMIC request fails or enters the slowpath, the preferred
zone's watermark_boost is increased. This triggers kswapd to proactively
reclaim memory, creating a safety buffer for future atomic bursts.

To prevent excessive reclaim during packet storms, a 1-second debounce
timer (last_boost_jiffies) is added to each zone to rate-limit boosts.

This approach reuses the existing watermark_boost infrastructure, keeping
overhead minimal and deferring reclaim to kswapd running asynchronously in
the background.
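A minimal sketch of the slowpath hook described above (illustration only,
not part of the diff below; the helper name boost_zone_for_atomic() is
hypothetical, last_boost_jiffies is the per-zone field mentioned in the v3
changelog, and boost_watermark()/wakeup_kswapd() are the existing helpers):

static void boost_zone_for_atomic(struct zone *zone, gfp_t gfp_mask,
				  unsigned int order)
{
	/* Only react to requests that cannot enter direct reclaim. */
	if (gfp_mask & __GFP_DIRECT_RECLAIM)
		return;

	/* 1-second debounce to avoid runaway boosting during packet storms. */
	if (time_before(jiffies, zone->last_boost_jiffies + HZ))
		return;
	zone->last_boost_jiffies = jiffies;

	/* Raise watermark_boost and let kswapd rebuild the buffer in the background. */
	if (boost_watermark(zone))
		wakeup_kswapd(zone, gfp_mask, order, zone_idx(zone));
}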
Allocation failure logs:

[38535644.718700] node 0: slabs: 1031, objs: 43328, free: 0
[38535644.725059] node 1: slabs: 339, objs: 17616, free: 317
[38535645.428345] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
[38535645.436888]   cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
[38535645.447664]   node 0: slabs: 940, objs: 40864, free: 144
[38535645.454026]   node 1: slabs: 322, objs: 19168, free: 383
[38535645.556122] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
[38535645.564576]   cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
[38535649.655523] warn_alloc: 59 callbacks suppressed
[38535649.655527] swapper/100: page allocation failure: order:0, mode:0x480020(GFP_ATOMIC), nodemask=(null)
[38535649.671692] swapper/100 cpuset=/ mems_allowed=0-1

Signed-off-by: Qiliang Yuan
Signed-off-by: Qiliang Yuan
---
v6:
 - Replace magic number ">> 10" with ATOMIC_BOOST_SCALE_SHIFT define
 - Add documentation explaining 0.1% zone size boost rationale
v5:
 - Simplify to use native boost_watermark() instead of custom logic
v4:
 - Add watermark_scale_boost and gradual decay via balance_pgdat
v3:
 - Move debounce timer to per-zone; optimize zone selection
v2:
 - Add debounce logic and zone-proportional boosting
v1:
 - Initial: boost min_free_kbytes on GFP_ATOMIC failure
---
 mm/page_alloc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1faace9e2dc5..8ea2435125d5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -218,6 +218,13 @@ unsigned int pageblock_order __read_mostly;
 static void __free_pages_ok(struct page *page, unsigned int order,
 			    fpi_t fpi_flags);
 
+/*
+ * Boost watermarks by ~0.1% of zone size on atomic allocation pressure.
+ * This provides zone-proportional safety buffers: ~1MB per 1GB of zone size.
+ * Larger zones under GFP_ATOMIC pressure need proportionally larger reserves.
+ */
+#define ATOMIC_BOOST_SCALE_SHIFT 10
+
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
  *	1G machine -> (16M dma, 800M-16M normal, 1G-800M high)
@@ -2190,7 +2197,7 @@ static inline bool boost_watermark(struct zone *zone)
 	max_boost = max(pageblock_nr_pages, max_boost);
 
 	zone->watermark_boost = min(zone->watermark_boost +
-		max(pageblock_nr_pages, zone_managed_pages(zone) >> 10),
+		max(pageblock_nr_pages, zone_managed_pages(zone) >> ATOMIC_BOOST_SCALE_SHIFT),
 		max_boost);
 
 	return true;
-- 
2.51.0
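As a sanity check of the ~0.1% / ~1MB-per-1GB figure in the new comment,
here is a minimal standalone sketch (userspace only, assuming a hypothetical
1 GiB zone with 4 KiB pages; not part of the patch):

#include <stdio.h>

#define ATOMIC_BOOST_SCALE_SHIFT 10

int main(void)
{
	/* A hypothetical 1 GiB zone with 4 KiB pages has 262144 managed pages. */
	unsigned long managed_pages = 1UL << 18;
	unsigned long boost = managed_pages >> ATOMIC_BOOST_SCALE_SHIFT;

	/* Prints 256 pages = 1024 KiB, i.e. roughly 0.1% of the zone. */
	printf("boost: %lu pages (%lu KiB)\n", boost, boost * 4);
	return 0;
}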