Date: Sun, 29 Mar 2026 18:08:52 -0700 (PDT)
From: David Rientjes
To: Andrew Morton, Vlastimil Babka
cc: Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
    Zi Yan, Petr Mladek, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [patch] mm, page_alloc: reintroduce page allocation stall warning
In-Reply-To: <30945cc3-9c4d-94bb-e7e7-dde71483800c@google.com>
Message-ID: <231154f8-a3c3-229a-31a7-f91ab8ec1773@google.com>
References: <30945cc3-9c4d-94bb-e7e7-dde71483800c@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Previously, we had warnings when a single page allocation took longer
than reasonably expected.  These were introduced in commit 63f53dea0c98
("mm: warn about allocations which stall for too long") and subsequently
reverted in commit 400e22499dd9 ("mm: don't warn about allocations which
stall for too long"), but for reasons unrelated to the warning itself.

Page allocation stalls in excess of 10 seconds are always useful to
debug because they can result in severe userspace unresponsiveness.
The warning can be correlated with userspace going out to lunch and
used to understand the state of memory at the time of the stall.
There should be a reasonable expectation that this warning will never
trigger, given how passive it is: it is only emitted when a page
allocation takes longer than 10 seconds.  If it does trigger, it reveals
an issue that should be fixed: a single page allocation should never
loop for more than 10 seconds without oom killing to make memory
available.

Unlike the original implementation, this implementation only reports
stalls once for the entire system every 10 seconds.  Otherwise, many
concurrent reclaimers could spam the kernel log unnecessarily.  Stalls
are only reported when calling into direct reclaim.

Signed-off-by: David Rientjes
Acked-by: Vlastimil Babka (SUSE)
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -316,6 +316,14 @@ EXPORT_SYMBOL(nr_node_ids);
 EXPORT_SYMBOL(nr_online_nodes);
 #endif
 
+/*
+ * When page allocations stall for longer than a threshold,
+ * ALLOC_STALL_WARN_MSECS, leave a warning in the kernel log.  Only one warning
+ * will be printed during this duration for the entire system.
+ */
+#define ALLOC_STALL_WARN_MSECS	(10 * 1000UL)
+static unsigned long alloc_stall_warn_jiffies;
+
 static bool page_contains_unaccepted(struct page *page, unsigned int order);
 static bool cond_accept_memory(struct zone *zone, unsigned int order,
 			       int alloc_flags);
@@ -4706,6 +4714,40 @@ check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac)
 	return false;
 }
 
+static void check_alloc_stall_warn(gfp_t gfp_mask, nodemask_t *nodemask,
+				   unsigned int order, unsigned long alloc_start_time)
+{
+	static DEFINE_SPINLOCK(alloc_stall_lock);
+	unsigned long stall_msecs = jiffies_to_msecs(jiffies - alloc_start_time);
+
+	if (likely(stall_msecs < ALLOC_STALL_WARN_MSECS))
+		return;
+	if (time_before(jiffies, READ_ONCE(alloc_stall_warn_jiffies)))
+		return;
+	if (gfp_mask & __GFP_NOWARN)
+		return;
+
+	if (!spin_trylock(&alloc_stall_lock))
+		return;
+
+	if (time_after_eq(jiffies, alloc_stall_warn_jiffies)) {
+		WRITE_ONCE(alloc_stall_warn_jiffies,
+			   jiffies + msecs_to_jiffies(ALLOC_STALL_WARN_MSECS));
+		spin_unlock(&alloc_stall_lock);
+
+		pr_warn("%s: page allocation stall for %lu secs: order:%d, mode:%#x(%pGg) nodemask=%*pbl",
+			current->comm, stall_msecs / MSEC_PER_SEC, order,
+			gfp_mask, &gfp_mask, nodemask_pr_args(nodemask));
+		cpuset_print_current_mems_allowed();
+		pr_cont("\n");
+		dump_stack();
+		warn_alloc_show_mem(gfp_mask, nodemask);
+		return;
+	}
+
+	spin_unlock(&alloc_stall_lock);
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -4726,6 +4768,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	int reserve_flags;
 	bool compact_first = false;
 	bool can_retry_reserves = true;
+	unsigned long alloc_start_time = jiffies;
 
 	if (unlikely(nofail)) {
 		/*
@@ -4841,6 +4884,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (current->flags & PF_MEMALLOC)
 		goto nopage;
 
+	/* If allocation has taken excessively long, warn about it */
+	check_alloc_stall_warn(gfp_mask, ac->nodemask, order, alloc_start_time);
+
 	/* Try direct reclaim and then allocating */
 	if (!compact_first) {
 		page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags,