From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
    Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
    Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com,
    Dmitry Ilvokhin, Steven Rostedt
Subject: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
Date: Fri, 6 Mar 2026 16:05:35 +0000

Use the newly introduced zone_lock_irqsave lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt
Signed-off-by: Dmitry Ilvokhin
---
 include/linux/mmzone_lock.h |  9 +++++++++
 mm/page_alloc.c             | 13 +++++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmzone_lock.h b/include/linux/mmzone_lock.h
index 6bd8b026029f..fe399a4505ba 100644
--- a/include/linux/mmzone_lock.h
+++ b/include/linux/mmzone_lock.h
@@ -97,4 +97,13 @@ static inline void zone_unlock_irq(struct zone *zone)
 	spin_unlock_irq(&zone->_lock);
 }
 
+DEFINE_LOCK_GUARD_1(zone_lock_irqsave, struct zone,
+		    zone_lock_irqsave(_T->lock, _T->flags),
+		    zone_unlock_irqrestore(_T->lock, _T->flags),
+		    unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(zone_lock_irqsave,
+			   __acquires(_T), __releases(*(struct zone **)_T))
+#define class_zone_lock_irqsave_constructor(_T) \
+	WITH_LOCK_GUARD_1_ATTRS(zone_lock_irqsave, _T)
+
 #endif /* _LINUX_MMZONE_LOCK_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 75ee81445640..260fb003822a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3407,7 +3407,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 					  struct zone *zone)
 {
 	int mt;
-	unsigned long max_managed, flags;
+	unsigned long max_managed;
 
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
@@ -3421,29 +3421,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
-		goto out_unlock;
+		return;
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (!migratetype_is_mergeable(mt))
-		goto out_unlock;
+		return;
 
 	if (order < pageblock_order) {
 		if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
-			goto out_unlock;
+			return;
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 	} else {
 		change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
 		zone->nr_reserved_highatomic += 1 << order;
 	}
-
-out_unlock:
-	zone_unlock_irqrestore(zone, flags);
 }
 
 /*
-- 
2.47.3