From nobody Thu Apr 2 17:43:09 2026
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
 Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Brendan Jackman,
 Johannes Weiner, Zi Yan, Oscar Salvador, Qi Zheng, Shakeel Butt,
 Axel Rasmussen, Yuanchu Xie, Wei Xu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-cxl@vger.kernel.org,
 kernel-team@meta.com, Dmitry Ilvokhin <d@ilvokhin.com>
Subject: [PATCH 1/4] mm: introduce zone lock wrappers
Date: Wed, 11 Feb 2026 15:22:13 +0000
Message-ID: <3826dd6dc55a9c5721ec3de85f019764a6cf3222.1770821420.git.d@ilvokhin.com>

Add thin wrappers around zone lock acquire/release operations.
Centralizing these operations behind wrappers allows future
instrumentation or debugging hooks (for example, tracepoints) to be
added in one place, without modifying individual call sites.

No functional change intended. The wrappers are introduced in
preparation for the subsequent patches and are not yet used.
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Shakeel Butt
---
 MAINTAINERS               |  1 +
 include/linux/zone_lock.h | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)
 create mode 100644 include/linux/zone_lock.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b4088f7290be..680c9ae02d7e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16498,6 +16498,7 @@ F: include/linux/pgtable.h
 F: include/linux/ptdump.h
 F: include/linux/vmpressure.h
 F: include/linux/vmstat.h
+F: include/linux/zone_lock.h
 F: kernel/fork.c
 F: mm/Kconfig
 F: mm/debug.c
diff --git a/include/linux/zone_lock.h b/include/linux/zone_lock.h
new file mode 100644
index 000000000000..c531e26280e6
--- /dev/null
+++ b/include/linux/zone_lock.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_ZONE_LOCK_H
+#define _LINUX_ZONE_LOCK_H
+
+#include <linux/mmzone.h>
+#include <linux/spinlock.h>
+
+static inline void zone_lock_init(struct zone *zone)
+{
+	spin_lock_init(&zone->lock);
+}
+
+#define zone_lock_irqsave(zone, flags)			\
+do {							\
+	spin_lock_irqsave(&(zone)->lock, flags);	\
+} while (0)
+
+#define zone_trylock_irqsave(zone, flags)		\
+({							\
+	spin_trylock_irqsave(&(zone)->lock, flags);	\
+})
+
+static inline void zone_unlock_irqrestore(struct zone *zone, unsigned long flags)
+{
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+static inline void zone_lock_irq(struct zone *zone)
+{
+	spin_lock_irq(&zone->lock);
+}
+
+static inline void zone_unlock_irq(struct zone *zone)
+{
+	spin_unlock_irq(&zone->lock);
+}
+
+#endif /* _LINUX_ZONE_LOCK_H */
-- 
2.47.3

From nobody Thu Apr 2 17:43:09 2026
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
 Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Brendan Jackman,
 Johannes Weiner, Zi Yan, Oscar Salvador, Qi Zheng, Shakeel Butt,
 Axel Rasmussen, Yuanchu Xie, Wei Xu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-cxl@vger.kernel.org,
 kernel-team@meta.com, Dmitry Ilvokhin <d@ilvokhin.com>
Subject: [PATCH 2/4] mm: convert zone lock users to wrappers
Date: Wed, 11 Feb 2026 15:22:14 +0000
Message-ID: <7d1ee95201a8870445556e61e47161f46ade8b3b.1770821420.git.d@ilvokhin.com>

Replace direct zone lock acquire/release operations with the newly
introduced wrappers. The changes are purely mechanical substitutions:
locking semantics and ordering remain unchanged, and no functional
change is intended.

The compaction path is left unchanged for now and is converted
separately in the following patch, since it requires additional
non-trivial modifications.
Signed-off-by: Dmitry Ilvokhin Acked-by: Shakeel Butt --- mm/memory_hotplug.c | 9 +++--- mm/mm_init.c | 3 +- mm/page_alloc.c | 73 +++++++++++++++++++++++---------------------- mm/page_isolation.c | 19 ++++++------ mm/page_reporting.c | 13 ++++---- mm/show_mem.c | 5 ++-- mm/vmscan.c | 5 ++-- mm/vmstat.c | 9 +++--- 8 files changed, 72 insertions(+), 64 deletions(-) diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index bc805029da51..cfc0103fa50e 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -36,6 +36,7 @@ #include #include #include +#include =20 #include =20 @@ -1190,9 +1191,9 @@ int online_pages(unsigned long pfn, unsigned long nr_= pages, * Fixup the number of isolated pageblocks before marking the sections * onlining, such that undo_isolate_page_range() works correctly. */ - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); zone->nr_isolate_pageblock +=3D nr_pages / pageblock_nr_pages; - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 /* * If this zone is not populated, then it is not in zonelist. @@ -2041,9 +2042,9 @@ int offline_pages(unsigned long start_pfn, unsigned l= ong nr_pages, * effectively stale; nobody should be touching them. Fixup the number * of isolated pageblocks, memory onlining will properly revert this. 
*/ - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); zone->nr_isolate_pageblock -=3D nr_pages / pageblock_nr_pages; - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 lru_cache_enable(); zone_pcp_enable(zone); diff --git a/mm/mm_init.c b/mm/mm_init.c index 1a29a719af58..426e5a0256f9 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -1425,7 +1426,7 @@ static void __meminit zone_init_internals(struct zone= *zone, enum zone_type idx, zone_set_nid(zone, nid); zone->name =3D zone_names[idx]; zone->zone_pgdat =3D NODE_DATA(nid); - spin_lock_init(&zone->lock); + zone_lock_init(zone); zone_seqlock_init(zone); zone_pcp_init(zone); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e4104973e22f..2c9fe30da7a1 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -54,6 +54,7 @@ #include #include #include +#include #include #include "internal.h" #include "shuffle.h" @@ -1494,7 +1495,7 @@ static void free_pcppages_bulk(struct zone *zone, int= count, /* Ensure requested pindex is drained first. */ pindex =3D pindex - 1; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); =20 while (count > 0) { struct list_head *list; @@ -1527,7 +1528,7 @@ static void free_pcppages_bulk(struct zone *zone, int= count, } while (count > 0 && !list_empty(list)); } =20 - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 /* Split a multi-block free page into its individual pageblocks. 
*/ @@ -1571,12 +1572,12 @@ static void free_one_page(struct zone *zone, struct= page *page, unsigned long flags; =20 if (unlikely(fpi_flags & FPI_TRYLOCK)) { - if (!spin_trylock_irqsave(&zone->lock, flags)) { + if (!zone_trylock_irqsave(zone, flags)) { add_page_to_zone_llist(zone, page, order); return; } } else { - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); } =20 /* The lock succeeded. Process deferred pages. */ @@ -1594,7 +1595,7 @@ static void free_one_page(struct zone *zone, struct p= age *page, } } split_large_buddy(zone, page, pfn, order, fpi_flags); - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 __count_vm_events(PGFREE, 1 << order); } @@ -2547,10 +2548,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned= int order, int i; =20 if (unlikely(alloc_flags & ALLOC_TRYLOCK)) { - if (!spin_trylock_irqsave(&zone->lock, flags)) + if (!zone_trylock_irqsave(zone, flags)) return 0; } else { - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); } for (i =3D 0; i < count; ++i) { struct page *page =3D __rmqueue(zone, order, migratetype, @@ -2570,7 +2571,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned i= nt order, */ list_add_tail(&page->pcp_list, list); } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 return i; } @@ -3235,10 +3236,10 @@ struct page *rmqueue_buddy(struct zone *preferred_z= one, struct zone *zone, do { page =3D NULL; if (unlikely(alloc_flags & ALLOC_TRYLOCK)) { - if (!spin_trylock_irqsave(&zone->lock, flags)) + if (!zone_trylock_irqsave(zone, flags)) return NULL; } else { - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); } if (alloc_flags & ALLOC_HIGHATOMIC) page =3D __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); @@ -3257,11 +3258,11 @@ struct page *rmqueue_buddy(struct zone *preferred_z= one, struct zone *zone, page =3D __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); =20 if 
(!page) { - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return NULL; } } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } while (check_new_pages(page, order)); =20 __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order); @@ -3448,7 +3449,7 @@ static void reserve_highatomic_pageblock(struct page = *page, int order, if (zone->nr_reserved_highatomic >=3D max_managed) return; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); =20 /* Recheck the nr_reserved_highatomic limit under the lock */ if (zone->nr_reserved_highatomic >=3D max_managed) @@ -3470,7 +3471,7 @@ static void reserve_highatomic_pageblock(struct page = *page, int order, } =20 out_unlock: - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 /* @@ -3503,7 +3504,7 @@ static bool unreserve_highatomic_pageblock(const stru= ct alloc_context *ac, pageblock_nr_pages) continue; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); for (order =3D 0; order < NR_PAGE_ORDERS; order++) { struct free_area *area =3D &(zone->free_area[order]); unsigned long size; @@ -3551,11 +3552,11 @@ static bool unreserve_highatomic_pageblock(const st= ruct alloc_context *ac, */ WARN_ON_ONCE(ret =3D=3D -1); if (ret > 0) { - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return ret; } } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 return false; @@ -6435,7 +6436,7 @@ static void __setup_per_zone_wmarks(void) for_each_zone(zone) { u64 tmp; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); tmp =3D (u64)pages_min * zone_managed_pages(zone); tmp =3D div64_ul(tmp, lowmem_pages); if (is_highmem(zone) || zone_idx(zone) =3D=3D ZONE_MOVABLE) { @@ -6476,7 +6477,7 @@ static void __setup_per_zone_wmarks(void) zone->_watermark[WMARK_PROMO] =3D high_wmark_pages(zone) + 
tmp; trace_mm_setup_per_zone_wmarks(zone); =20 - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 /* update totalreserve_pages */ @@ -7246,7 +7247,7 @@ struct page *alloc_contig_frozen_pages_noprof(unsigne= d long nr_pages, zonelist =3D node_zonelist(nid, gfp_mask); for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nodemask) { - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); =20 pfn =3D ALIGN(zone->zone_start_pfn, nr_pages); while (zone_spans_last_pfn(zone, pfn, nr_pages)) { @@ -7260,18 +7261,18 @@ struct page *alloc_contig_frozen_pages_noprof(unsig= ned long nr_pages, * allocation spinning on this lock, it may * win the race and cause allocation to fail. */ - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); ret =3D alloc_contig_frozen_range_noprof(pfn, pfn + nr_pages, ACR_FLAGS_NONE, gfp_mask); if (!ret) return pfn_to_page(pfn); - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); } pfn +=3D nr_pages; } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } /* * If we failed, retry the search, but treat regions with HugeTLB pages @@ -7425,7 +7426,7 @@ unsigned long __offline_isolated_pages(unsigned long = start_pfn, =20 offline_mem_sections(pfn, end_pfn); zone =3D page_zone(pfn_to_page(pfn)); - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); while (pfn < end_pfn) { page =3D pfn_to_page(pfn); /* @@ -7455,7 +7456,7 @@ unsigned long __offline_isolated_pages(unsigned long = start_pfn, del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE); pfn +=3D (1 << order); } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 return end_pfn - start_pfn - already_offline; } @@ -7531,7 +7532,7 @@ bool take_page_off_buddy(struct page *page) unsigned int order; bool ret =3D false; =20 - spin_lock_irqsave(&zone->lock, flags); + 
zone_lock_irqsave(zone, flags); for (order =3D 0; order < NR_PAGE_ORDERS; order++) { struct page *page_head =3D page - (pfn & ((1 << order) - 1)); int page_order =3D buddy_order(page_head); @@ -7552,7 +7553,7 @@ bool take_page_off_buddy(struct page *page) if (page_count(page_head) > 0) break; } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return ret; } =20 @@ -7565,7 +7566,7 @@ bool put_page_back_buddy(struct page *page) unsigned long flags; bool ret =3D false; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); if (put_page_testzero(page)) { unsigned long pfn =3D page_to_pfn(page); int migratetype =3D get_pfnblock_migratetype(page, pfn); @@ -7576,7 +7577,7 @@ bool put_page_back_buddy(struct page *page) ret =3D true; } } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 return ret; } @@ -7625,7 +7626,7 @@ static void __accept_page(struct zone *zone, unsigned= long *flags, account_freepages(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE); __mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES); __ClearPageUnaccepted(page); - spin_unlock_irqrestore(&zone->lock, *flags); + zone_unlock_irqrestore(zone, *flags); =20 accept_memory(page_to_phys(page), PAGE_SIZE << MAX_PAGE_ORDER); =20 @@ -7637,9 +7638,9 @@ void accept_page(struct page *page) struct zone *zone =3D page_zone(page); unsigned long flags; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); if (!PageUnaccepted(page)) { - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return; } =20 @@ -7652,11 +7653,11 @@ static bool try_to_accept_memory_one(struct zone *z= one) unsigned long flags; struct page *page; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); page =3D list_first_entry_or_null(&zone->unaccepted_pages, struct page, lru); if (!page) { - spin_unlock_irqrestore(&zone->lock, flags); + 
zone_unlock_irqrestore(zone, flags); return false; } =20 @@ -7713,12 +7714,12 @@ static bool __free_unaccepted(struct page *page) if (!lazy_accept) return false; =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); list_add_tail(&page->lru, &zone->unaccepted_pages); account_freepages(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE); __mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES); __SetPageUnaccepted(page); - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 return true; } diff --git a/mm/page_isolation.c b/mm/page_isolation.c index c48ff5c00244..56a272f38b66 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -10,6 +10,7 @@ #include #include #include +#include #include "internal.h" =20 #define CREATE_TRACE_POINTS @@ -173,7 +174,7 @@ static int set_migratetype_isolate(struct page *page, e= num pb_isolate_mode mode, if (PageUnaccepted(page)) accept_page(page); =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); =20 /* * We assume the caller intended to SET migrate type to isolate. @@ -181,7 +182,7 @@ static int set_migratetype_isolate(struct page *page, e= num pb_isolate_mode mode, * set it before us. 
*/ if (is_migrate_isolate_page(page)) { - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return -EBUSY; } =20 @@ -200,15 +201,15 @@ static int set_migratetype_isolate(struct page *page,= enum pb_isolate_mode mode, mode); if (!unmovable) { if (!pageblock_isolate_and_move_free_pages(zone, page)) { - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return -EBUSY; } zone->nr_isolate_pageblock++; - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); return 0; } =20 - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); if (mode =3D=3D PB_ISOLATE_MODE_MEM_OFFLINE) { /* * printk() with zone->lock held will likely trigger a @@ -229,7 +230,7 @@ static void unset_migratetype_isolate(struct page *page) struct page *buddy; =20 zone =3D page_zone(page); - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); if (!is_migrate_isolate_page(page)) goto out; =20 @@ -280,7 +281,7 @@ static void unset_migratetype_isolate(struct page *page) } zone->nr_isolate_pageblock--; out: - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 static inline struct page * @@ -641,9 +642,9 @@ int test_pages_isolated(unsigned long start_pfn, unsign= ed long end_pfn, =20 /* Check all pages are free or marked as ISOLATED */ zone =3D page_zone(page); - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); pfn =3D __test_page_isolated_in_pageblock(start_pfn, end_pfn, mode); - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); =20 ret =3D pfn < end_pfn ? 
-EBUSY : 0; =20 diff --git a/mm/page_reporting.c b/mm/page_reporting.c index 8a03effda749..ac2ac8fd0487 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -7,6 +7,7 @@ #include #include #include +#include =20 #include "page_reporting.h" #include "internal.h" @@ -161,7 +162,7 @@ page_reporting_cycle(struct page_reporting_dev_info *pr= dev, struct zone *zone, if (list_empty(list)) return err; =20 - spin_lock_irq(&zone->lock); + zone_lock_irq(zone); =20 /* * Limit how many calls we will be making to the page reporting @@ -219,7 +220,7 @@ page_reporting_cycle(struct page_reporting_dev_info *pr= dev, struct zone *zone, list_rotate_to_front(&page->lru, list); =20 /* release lock before waiting on report processing */ - spin_unlock_irq(&zone->lock); + zone_unlock_irq(zone); =20 /* begin processing pages in local list */ err =3D prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY); @@ -231,7 +232,7 @@ page_reporting_cycle(struct page_reporting_dev_info *pr= dev, struct zone *zone, budget--; =20 /* reacquire zone lock and resume processing */ - spin_lock_irq(&zone->lock); + zone_lock_irq(zone); =20 /* flush reported pages from the sg list */ page_reporting_drain(prdev, sgl, PAGE_REPORTING_CAPACITY, !err); @@ -251,7 +252,7 @@ page_reporting_cycle(struct page_reporting_dev_info *pr= dev, struct zone *zone, if (!list_entry_is_head(next, list, lru) && !list_is_first(&next->lru, li= st)) list_rotate_to_front(&next->lru, list); =20 - spin_unlock_irq(&zone->lock); + zone_unlock_irq(zone); =20 return err; } @@ -296,9 +297,9 @@ page_reporting_process_zone(struct page_reporting_dev_i= nfo *prdev, err =3D prdev->report(prdev, sgl, leftover); =20 /* flush any remaining pages out from the last report */ - spin_lock_irq(&zone->lock); + zone_lock_irq(zone); page_reporting_drain(prdev, sgl, leftover, !err); - spin_unlock_irq(&zone->lock); + zone_unlock_irq(zone); } =20 return err; diff --git a/mm/show_mem.c b/mm/show_mem.c index 24078ac3e6bc..245beca127af 100644 --- 
a/mm/show_mem.c +++ b/mm/show_mem.c @@ -14,6 +14,7 @@ #include #include #include +#include =20 #include "internal.h" #include "swap.h" @@ -363,7 +364,7 @@ static void show_free_areas(unsigned int filter, nodema= sk_t *nodemask, int max_z show_node(zone); printk(KERN_CONT "%s: ", zone->name); =20 - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); for (order =3D 0; order < NR_PAGE_ORDERS; order++) { struct free_area *area =3D &zone->free_area[order]; int type; @@ -377,7 +378,7 @@ static void show_free_areas(unsigned int filter, nodema= sk_t *nodemask, int max_z types[order] |=3D 1 << type; } } - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); for (order =3D 0; order < NR_PAGE_ORDERS; order++) { printk(KERN_CONT "%lu*%lukB ", nr[order], K(1UL) << order); diff --git a/mm/vmscan.c b/mm/vmscan.c index 973ffb9813ea..9fe5c41e0e0a 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -58,6 +58,7 @@ #include #include #include +#include =20 #include #include @@ -7129,9 +7130,9 @@ static int balance_pgdat(pg_data_t *pgdat, int order,= int highest_zoneidx) =20 /* Increments are under the zone lock */ zone =3D pgdat->node_zones + i; - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); zone->watermark_boost -=3D min(zone->watermark_boost, zone_boosts[i]); - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } =20 /* diff --git a/mm/vmstat.c b/mm/vmstat.c index 99270713e0c1..06b27255a626 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -28,6 +28,7 @@ #include #include #include +#include =20 #include "internal.h" =20 @@ -1535,10 +1536,10 @@ static void walk_zones_in_node(struct seq_file *m, = pg_data_t *pgdat, continue; =20 if (!nolock) - spin_lock_irqsave(&zone->lock, flags); + zone_lock_irqsave(zone, flags); print(m, pgdat, zone); if (!nolock) - spin_unlock_irqrestore(&zone->lock, flags); + zone_unlock_irqrestore(zone, flags); } } #endif @@ -1603,9 +1604,9 @@ static void 
pagetypeinfo_showfree_print(struct seq_file *m,
 			}
 		}
 		seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
-		spin_unlock_irq(&zone->lock);
+		zone_unlock_irq(zone);
 		cond_resched();
-		spin_lock_irq(&zone->lock);
+		zone_lock_irq(zone);
 	}
 	seq_putc(m, '\n');
 }
-- 
2.47.3

From nobody Thu Apr 2 17:43:09 2026
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
 Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Brendan Jackman,
 Johannes Weiner, Zi Yan, Oscar Salvador, Qi Zheng, Shakeel Butt,
 Axel Rasmussen, Yuanchu Xie, Wei Xu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-cxl@vger.kernel.org,
 kernel-team@meta.com, Dmitry Ilvokhin <d@ilvokhin.com>
Subject: [PATCH 3/4] mm: convert compaction to zone lock wrappers
Date: Wed, 11 Feb 2026 15:22:15 +0000
Message-ID: <3462b7fd26123c69ccdd121a894da14bbfafdd9d.1770821420.git.d@ilvokhin.com>

Compaction uses compact_lock_irqsave(), which currently operates on a
raw spinlock_t pointer so that it can serve both zone->lock and the
lru_lock. Since zone lock operations are now wrapped,
compact_lock_irqsave() can no longer operate directly on a spinlock_t
when the lock belongs to a zone. Introduce struct compact_lock to
abstract the underlying lock type.
The structure carries a lock type enum and a union holding either a
zone pointer or a raw spinlock_t pointer, and dispatches to the
appropriate lock/unlock helper.

No functional change intended.

Signed-off-by: Dmitry Ilvokhin
---
 mm/compaction.c | 108 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 89 insertions(+), 19 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 1e8f8eca318c..1b000d2b95b2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/zone_lock.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -493,6 +494,65 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
 }
 #endif /* CONFIG_COMPACTION */
 
+enum compact_lock_type {
+	COMPACT_LOCK_ZONE,
+	COMPACT_LOCK_RAW_SPINLOCK,
+};
+
+struct compact_lock {
+	enum compact_lock_type type;
+	union {
+		struct zone *zone;
+		spinlock_t *lock;	/* Reference to lru lock */
+	};
+};
+
+static bool compact_do_zone_trylock_irqsave(struct zone *zone,
+					    unsigned long *flags)
+{
+	return zone_trylock_irqsave(zone, *flags);
+}
+
+static bool compact_do_raw_trylock_irqsave(spinlock_t *lock,
+					   unsigned long *flags)
+{
+	return spin_trylock_irqsave(lock, *flags);
+}
+
+static bool compact_do_trylock_irqsave(struct compact_lock lock,
+				       unsigned long *flags)
+{
+	if (lock.type == COMPACT_LOCK_ZONE)
+		return compact_do_zone_trylock_irqsave(lock.zone, flags);
+
+	return compact_do_raw_trylock_irqsave(lock.lock, flags);
+}
+
+static void compact_do_zone_lock_irqsave(struct zone *zone,
+					 unsigned long *flags)
+__acquires(zone->lock)
+{
+	zone_lock_irqsave(zone, *flags);
+}
+
+static void compact_do_raw_lock_irqsave(spinlock_t *lock,
+					unsigned long *flags)
+__acquires(lock)
+{
+	spin_lock_irqsave(lock, *flags);
+}
+
+static void compact_do_lock_irqsave(struct compact_lock lock,
+				    unsigned long *flags)
+{
+	if (lock.type == COMPACT_LOCK_ZONE) {
+		compact_do_zone_lock_irqsave(lock.zone, flags);
+		return;
+	}
+
+	return compact_do_raw_lock_irqsave(lock.lock, flags);
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. For async compaction, trylock and record if the
@@ -502,19 +562,19 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
  *
  * Always returns true which makes it easier to track lock state in callers.
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
-					struct compact_control *cc)
-	__acquires(lock)
+static bool compact_lock_irqsave(struct compact_lock lock,
+				 unsigned long *flags,
+				 struct compact_control *cc)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
-		if (spin_trylock_irqsave(lock, *flags))
+		if (compact_do_trylock_irqsave(lock, flags))
 			return true;
 
 		cc->contended = true;
 	}
 
-	spin_lock_irqsave(lock, *flags);
+	compact_do_lock_irqsave(lock, flags);
 	return true;
 }
 
@@ -530,11 +590,13 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-static bool compact_unlock_should_abort(spinlock_t *lock,
-		unsigned long flags, bool *locked, struct compact_control *cc)
+static bool compact_unlock_should_abort(struct zone *zone,
+					unsigned long flags,
+					bool *locked,
+					struct compact_control *cc)
 {
 	if (*locked) {
-		spin_unlock_irqrestore(lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		*locked = false;
 	}
 
@@ -582,9 +644,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 * contention, to give chance to IRQs. Abort if fatal signal
 		 * pending.
 		 */
-		if (!(blockpfn % COMPACT_CLUSTER_MAX)
-		    && compact_unlock_should_abort(&cc->zone->lock, flags,
-								&locked, cc))
+		if (!(blockpfn % COMPACT_CLUSTER_MAX) &&
+		    compact_unlock_should_abort(cc->zone, flags, &locked, cc))
 			break;
 
 		nr_scanned++;
@@ -613,8 +674,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 		/* If we already hold the lock, we can skip some rechecking. */
 		if (!locked) {
-			locked = compact_lock_irqsave(&cc->zone->lock,
-								&flags, cc);
+			struct compact_lock zol = {
+				.type = COMPACT_LOCK_ZONE,
+				.zone = cc->zone,
+			};
+
+			locked = compact_lock_irqsave(zol, &flags, cc);
 
 			/* Recheck this is a buddy page under lock */
 			if (!PageBuddy(page))
@@ -649,7 +714,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	}
 
 	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 
 	/*
 	 * Be careful to not go outside of the pageblock.
@@ -1157,10 +1222,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
+			struct compact_lock zol = {
+				.type = COMPACT_LOCK_RAW_SPINLOCK,
+				.lock = &lruvec->lru_lock,
+			};
+
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			compact_lock_irqsave(zol, &flags, cc);
 			locked = lruvec;
 
 			lruvec_memcg_debug(lruvec, folio);
@@ -1555,7 +1625,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		spin_lock_irqsave(&cc->zone->lock, flags);
+		zone_lock_irqsave(cc->zone, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry_reverse(freepage, freelist, buddy_list) {
 			unsigned long pfn;
@@ -1614,7 +1684,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
 			}
 		}
 
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 
 		/* Skip fast search if enough freepages isolated */
 		if (cc->nr_freepages >= cc->nr_migratepages)
@@ -1988,7 +2058,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		spin_lock_irqsave(&cc->zone->lock, flags);
+		zone_lock_irqsave(cc->zone, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry(freepage, freelist, buddy_list) {
 			unsigned long free_pfn;
@@ -2021,7 +2091,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 				break;
 			}
 		}
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 	}
 
 	cc->total_migrate_scanned += nr_scanned;
-- 
2.47.3
From: Dmitry Ilvokhin
Subject: [PATCH 4/4] mm: add tracepoints for zone lock
Date: Wed, 11 Feb 2026 15:22:16 +0000
Message-ID: <1d2a7778aeee03abf8a11528ce8d4926ca78e9b4.1770821420.git.d@ilvokhin.com>

Add tracepoint instrumentation to zone lock acquire/release operations
via the previously introduced wrappers.

The implementation follows the mmap_lock tracepoint pattern: a
lightweight inline helper checks whether the tracepoint is enabled and
calls into an out-of-line helper only when tracing is active. When
CONFIG_TRACING is disabled, the helpers compile to empty inline stubs.
The fast path is unaffected when tracing is disabled.
Signed-off-by: Dmitry Ilvokhin
---
 MAINTAINERS                      |  2 +
 include/linux/zone_lock.h        | 64 +++++++++++++++++++++++++++++++-
 include/trace/events/zone_lock.h | 64 ++++++++++++++++++++++++++++++++
 mm/Makefile                      |  2 +-
 mm/zone_lock.c                   | 31 ++++++++++++++++
 5 files changed, 161 insertions(+), 2 deletions(-)
 create mode 100644 include/trace/events/zone_lock.h
 create mode 100644 mm/zone_lock.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 680c9ae02d7e..711ffa15f4c3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16499,6 +16499,7 @@ F:	include/linux/ptdump.h
 F:	include/linux/vmpressure.h
 F:	include/linux/vmstat.h
 F:	include/linux/zone_lock.h
+F:	include/trace/events/zone_lock.h
 F:	kernel/fork.c
 F:	mm/Kconfig
 F:	mm/debug.c
@@ -16518,6 +16519,7 @@ F:	mm/sparse.c
 F:	mm/util.c
 F:	mm/vmpressure.c
 F:	mm/vmstat.c
+F:	mm/zone_lock.c
 N:	include/linux/page[-_]*
 
 MEMORY MANAGEMENT - EXECMEM
diff --git a/include/linux/zone_lock.h b/include/linux/zone_lock.h
index c531e26280e6..cea41dd56324 100644
--- a/include/linux/zone_lock.h
+++ b/include/linux/zone_lock.h
@@ -4,6 +4,53 @@
 
 #include
 #include
+#include <linux/tracepoint-defs.h>
+
+DECLARE_TRACEPOINT(zone_lock_start_locking);
+DECLARE_TRACEPOINT(zone_lock_acquire_returned);
+DECLARE_TRACEPOINT(zone_lock_released);
+
+#ifdef CONFIG_TRACING
+
+void __zone_lock_do_trace_start_locking(struct zone *zone);
+void __zone_lock_do_trace_acquire_returned(struct zone *zone, bool success);
+void __zone_lock_do_trace_released(struct zone *zone);
+
+static inline void __zone_lock_trace_start_locking(struct zone *zone)
+{
+	if (tracepoint_enabled(zone_lock_start_locking))
+		__zone_lock_do_trace_start_locking(zone);
+}
+
+static inline void __zone_lock_trace_acquire_returned(struct zone *zone,
+						      bool success)
+{
+	if (tracepoint_enabled(zone_lock_acquire_returned))
+		__zone_lock_do_trace_acquire_returned(zone, success);
+}
+
+static inline void __zone_lock_trace_released(struct zone *zone)
+{
+	if (tracepoint_enabled(zone_lock_released))
+		__zone_lock_do_trace_released(zone);
+}
+
+#else /* !CONFIG_TRACING */
+
+static inline void __zone_lock_trace_start_locking(struct zone *zone)
+{
+}
+
+static inline void __zone_lock_trace_acquire_returned(struct zone *zone,
+						      bool success)
+{
+}
+
+static inline void __zone_lock_trace_released(struct zone *zone)
+{
+}
+
+#endif /* CONFIG_TRACING */
 
 static inline void zone_lock_init(struct zone *zone)
 {
@@ -12,26 +59,41 @@ static inline void zone_lock_init(struct zone *zone)
 
 #define zone_lock_irqsave(zone, flags)				\
 do {								\
+	bool success = true;					\
+								\
+	__zone_lock_trace_start_locking(zone);			\
 	spin_lock_irqsave(&(zone)->lock, flags);		\
+	__zone_lock_trace_acquire_returned(zone, success);	\
 } while (0)
 
 #define zone_trylock_irqsave(zone, flags)			\
 ({								\
-	spin_trylock_irqsave(&(zone)->lock, flags);		\
+	bool success;						\
+								\
+	__zone_lock_trace_start_locking(zone);			\
+	success = spin_trylock_irqsave(&(zone)->lock, flags);	\
+	__zone_lock_trace_acquire_returned(zone, success);	\
+	success;						\
 })
 
 static inline void zone_unlock_irqrestore(struct zone *zone, unsigned long flags)
 {
+	__zone_lock_trace_released(zone);
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 static inline void zone_lock_irq(struct zone *zone)
 {
+	bool success = true;
+
+	__zone_lock_trace_start_locking(zone);
 	spin_lock_irq(&zone->lock);
+	__zone_lock_trace_acquire_returned(zone, success);
 }
 
 static inline void zone_unlock_irq(struct zone *zone)
 {
+	__zone_lock_trace_released(zone);
 	spin_unlock_irq(&zone->lock);
 }
 
diff --git a/include/trace/events/zone_lock.h b/include/trace/events/zone_lock.h
new file mode 100644
index 000000000000..3df82a8c0160
--- /dev/null
+++ b/include/trace/events/zone_lock.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM zone_lock
+
+#if !defined(_TRACE_ZONE_LOCK_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_ZONE_LOCK_H
+
+#include
+#include
+
+struct zone;
+
+DECLARE_EVENT_CLASS(zone_lock,
+
+	TP_PROTO(struct zone *zone),
+
+	TP_ARGS(zone),
+
+	TP_STRUCT__entry(
+		__field(struct zone *, zone)
+	),
+
+	TP_fast_assign(
+		__entry->zone = zone;
+	),
+
+	TP_printk("zone=%p", __entry->zone)
+);
+
+#define DEFINE_ZONE_LOCK_EVENT(name)			\
+	DEFINE_EVENT(zone_lock, name,			\
+		TP_PROTO(struct zone *zone),		\
+		TP_ARGS(zone))
+
+DEFINE_ZONE_LOCK_EVENT(zone_lock_start_locking);
+DEFINE_ZONE_LOCK_EVENT(zone_lock_released);
+
+TRACE_EVENT(zone_lock_acquire_returned,
+
+	TP_PROTO(struct zone *zone, bool success),
+
+	TP_ARGS(zone, success),
+
+	TP_STRUCT__entry(
+		__field(struct zone *, zone)
+		__field(bool, success)
+	),
+
+	TP_fast_assign(
+		__entry->zone = zone;
+		__entry->success = success;
+	),
+
+	TP_printk(
+		"zone=%p success=%s",
+		__entry->zone,
+		__entry->success ? "true" : "false"
+	)
+);
+
+#endif /* _TRACE_ZONE_LOCK_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/Makefile b/mm/Makefile
index 0d85b10dbdde..fd891710c696 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -55,7 +55,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o percpu.o slab_common.o \
 			   compaction.o show_mem.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o gup.o mmap_lock.o vma_init.o $(mmu-y)
+			   debug.o gup.o mmap_lock.o zone_lock.o vma_init.o $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
diff --git a/mm/zone_lock.c b/mm/zone_lock.c
new file mode 100644
index 000000000000..f647fd2aca48
--- /dev/null
+++ b/mm/zone_lock.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+#define CREATE_TRACE_POINTS
+#include <trace/events/zone_lock.h>
+
+#include <linux/zone_lock.h>
+
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_start_locking);
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_acquire_returned);
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_released);
+
+#ifdef CONFIG_TRACING
+
+void __zone_lock_do_trace_start_locking(struct zone *zone)
+{
+	trace_zone_lock_start_locking(zone);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_start_locking);
+
+void __zone_lock_do_trace_acquire_returned(struct zone *zone, bool success)
+{
+	trace_zone_lock_acquire_returned(zone, success);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_acquire_returned);
+
+void __zone_lock_do_trace_released(struct zone *zone)
+{
+	trace_zone_lock_released(zone);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_released);
+
+#endif /* CONFIG_TRACING */
-- 
2.47.3
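[Editor's note: once a kernel with this series is running, the new events would be toggled through tracefs like any other event group. This is a possible session sketch, not output from the patches themselves; it assumes root, CONFIG_TRACING, and tracefs mounted at the usual path.]

```shell
# Enable all three zone_lock events and watch them fire.
cd /sys/kernel/tracing
echo 1 > events/zone_lock/enable
head -n 5 trace_pipe        # e.g. zone_lock_acquire_returned: zone=... success=true
echo 0 > events/zone_lock/enable
```

The per-event directories (events/zone_lock/zone_lock_start_locking/enable, etc.) allow enabling a single event instead of the whole group.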