From: Dmitry Ilvokhin
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com, Dmitry Ilvokhin, Steven Rostedt
Subject: [PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
Date: Fri, 27 Mar 2026 16:14:41 +0000

Use the spinlock_irqsave zone lock guard in reserve_highatomic_pageblock()
to replace the explicit lock/unlock calls and the goto out_unlock pattern
with automatic scope-based cleanup: the lock is released when the guard
goes out of scope, so each early exit becomes a plain return.
Suggested-by: Steven Rostedt
Signed-off-by: Dmitry Ilvokhin
---
 mm/page_alloc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f11f38ba2e12..c7b9b82b5956 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3403,7 +3403,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 					 struct zone *zone)
 {
 	int mt;
-	unsigned long max_managed, flags;
+	unsigned long max_managed;
 
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
@@ -3417,29 +3417,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
-		goto out_unlock;
+		return;
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (!migratetype_is_mergeable(mt))
-		goto out_unlock;
+		return;
 
 	if (order < pageblock_order) {
 		if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
-			goto out_unlock;
+			return;
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 	} else {
 		change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
 		zone->nr_reserved_highatomic += 1 << order;
 	}
-
-out_unlock:
-	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 /*
-- 
2.52.0