From nobody Tue Dec 16 15:06:41 2025
From: Kiryl Shutsemau
To: Andrew Morton, Muchun Song
Cc: David Hildenbrand, Oscar Salvador, Mike Rapoport, Vlastimil Babka,
	Lorenzo Stoakes, Matthew Wilcox, Zi Yan, Baoquan He, Michal Hocko,
	Johannes Weiner, Jonathan Corbet, Usama Arif, kernel-team@meta.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, Kiryl Shutsemau
Subject: [PATCH 02/11] mm: Rename the 'compound_head' field in the 'struct page' to 'compound_info'
Date: Fri, 5 Dec 2025 19:43:38 +0000
Message-ID: <20251205194351.1646318-3-kas@kernel.org>
In-Reply-To: <20251205194351.1646318-1-kas@kernel.org>
References: <20251205194351.1646318-1-kas@kernel.org>

The 'compound_head' field in 'struct page' encodes whether the page is
a tail page and where to locate the head page: bit 0 is set if the page
is a tail, and the remaining bits in the field point to the head page.
(A standalone sketch of this encoding follows the patch.)

As preparation for changing how the field encodes information about the
head page, rename the field to 'compound_info'.

Signed-off-by: Kiryl Shutsemau
---
 .../admin-guide/kdump/vmcoreinfo.rst |  2 +-
 Documentation/mm/vmemmap_dedup.rst   |  6 +++---
 include/linux/mm_types.h             | 20 +++++++++----------
 include/linux/page-flags.h           | 18 ++++++++---------
 include/linux/types.h                |  2 +-
 kernel/vmcore_info.c                 |  2 +-
 mm/page_alloc.c                      |  2 +-
 mm/slab.h                            |  2 +-
 mm/util.c                            |  2 +-
 9 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 404a15f6782c..7663c610fe90 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -141,7 +141,7 @@ nodemask_t
 The size of a nodemask_t type. Used to compute the number of online
 nodes.
 
-(page, flags|_refcount|mapping|lru|_mapcount|private|compound_order|compound_head)
+(page, flags|_refcount|mapping|lru|_mapcount|private|compound_order|compound_info)
 ----------------------------------------------------------------------------------
 
 User-space tools compute their values based on the offset of these
diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index b4a55b6569fa..1863d88d2dcb 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -24,7 +24,7 @@ For each base page, there is a corresponding ``struct page``.
 Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
 contain unique information about a HugeTLB page. ``__NR_USED_SUBPAGE`` provides
 this upper limit. The only 'useful' information in the remaining ``struct page``
-is the compound_head field, and this field is the same for all tail pages.
+is the compound_info field, and this field is the same for all tail pages.
 
 By removing redundant ``struct page`` for HugeTLB pages, memory can be returned
 to the buddy allocator for other uses.
@@ -124,10 +124,10 @@ Here is how things look before optimization::
  |           |
  +-----------+
 
-The value of page->compound_head is the same for all tail pages. The first
+The value of page->compound_info is the same for all tail pages. The first
 page of ``struct page`` (page 0) associated with the HugeTLB page contains the
 4 ``struct page`` necessary to describe the HugeTLB. The only use of the remaining
-pages of ``struct page`` (page 1 to page 7) is to point to page->compound_head.
+pages of ``struct page`` (page 1 to page 7) is to point to page->compound_info.
 Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of ``struct page``
 will be used for each HugeTLB page. This will allow us to free the remaining 7
 pages to the buddy allocator.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 90e5790c318f..a94683272869 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,14 +125,14 @@ struct page {
 			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
-			unsigned long compound_head;	/* Bit zero is set */
+			unsigned long compound_info;	/* Bit zero is set */
 		};
 		struct {	/* ZONE_DEVICE pages */
 			/*
-			 * The first word is used for compound_head or folio
+			 * The first word is used for compound_info or folio
 			 * pgmap
 			 */
-			void *_unused_pgmap_compound_head;
+			void *_unused_pgmap_compound_info;
 			void *zone_device_data;
 			/*
 			 * ZONE_DEVICE private pages are counted as being
@@ -383,7 +383,7 @@ struct folio {
 	/* private: avoid cluttering the output */
 			/* For the Unevictable "LRU list" slot */
 			struct {
-				/* Avoid compound_head */
+				/* Avoid compound_info */
 				void *__filler;
 	/* public: */
 				unsigned int mlock_count;
@@ -484,7 +484,7 @@ struct folio {
 FOLIO_MATCH(flags, flags);
 FOLIO_MATCH(lru, lru);
 FOLIO_MATCH(mapping, mapping);
-FOLIO_MATCH(compound_head, lru);
+FOLIO_MATCH(compound_info, lru);
 FOLIO_MATCH(__folio_index, index);
 FOLIO_MATCH(private, private);
 FOLIO_MATCH(_mapcount, _mapcount);
@@ -503,7 +503,7 @@ FOLIO_MATCH(_last_cpupid, _last_cpupid);
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
-FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(compound_info, _head_1);
 FOLIO_MATCH(_mapcount, _mapcount_1);
 FOLIO_MATCH(_refcount, _refcount_1);
 #undef FOLIO_MATCH
@@ -511,13 +511,13 @@ FOLIO_MATCH(_refcount, _refcount_1);
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + 2 * sizeof(struct page))
 FOLIO_MATCH(flags, _flags_2);
-FOLIO_MATCH(compound_head, _head_2);
+FOLIO_MATCH(compound_info, _head_2);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + 3 * sizeof(struct page))
 FOLIO_MATCH(flags, _flags_3);
-FOLIO_MATCH(compound_head, _head_3);
+FOLIO_MATCH(compound_info, _head_3);
 #undef FOLIO_MATCH
 
@@ -583,8 +583,8 @@ struct ptdesc {
 #define TABLE_MATCH(pg, pt)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
 TABLE_MATCH(flags, pt_flags);
-TABLE_MATCH(compound_head, pt_list);
-TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(compound_info, pt_list);
+TABLE_MATCH(compound_info, _pt_pad_1);
 TABLE_MATCH(mapping, __page_mapping);
 TABLE_MATCH(__folio_index, pt_index);
 TABLE_MATCH(rcu_head, pt_rcu_head);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 2c1153dd7e0e..446f89c01a4c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -213,7 +213,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	/*
 	 * Only addresses aligned with PAGE_SIZE of struct page may be fake head
 	 * struct page. The alignment check aims to avoid access the fields (
-	 * e.g. compound_head) of the @page[1]. It can avoid touch a (possibly)
+	 * e.g. compound_info) of the @page[1]. It can avoid touch a (possibly)
 	 * cold cacheline in some cases.
 	 */
 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
@@ -223,7 +223,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 		 * because the @page is a compound page composed with at least
 		 * two contiguous pages.
 		 */
-		unsigned long head = READ_ONCE(page[1].compound_head);
+		unsigned long head = READ_ONCE(page[1].compound_info);
 
 		if (likely(head & 1))
 			return (const struct page *)(head - 1);
@@ -281,7 +281,7 @@ static __always_inline int page_is_fake_head(const struct page *page)
 
 static __always_inline unsigned long _compound_head(const struct page *page)
 {
-	unsigned long head = READ_ONCE(page->compound_head);
+	unsigned long head = READ_ONCE(page->compound_info);
 
 	if (unlikely(head & 1))
 		return head - 1;
@@ -320,13 +320,13 @@ static __always_inline unsigned long _compound_head(const struct page *page)
 
 static __always_inline int PageTail(const struct page *page)
 {
-	return READ_ONCE(page->compound_head) & 1 || page_is_fake_head(page);
+	return READ_ONCE(page->compound_info) & 1 || page_is_fake_head(page);
 }
 
 static __always_inline int PageCompound(const struct page *page)
 {
 	return test_bit(PG_head, &page->flags.f) ||
-	       READ_ONCE(page->compound_head) & 1;
+	       READ_ONCE(page->compound_info) & 1;
 }
 
 #define PAGE_POISON_PATTERN	-1l
@@ -348,7 +348,7 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
 {
 	const struct page *page = &folio->page;
 
-	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+	VM_BUG_ON_PGFLAGS(page->compound_info & 1, page);
 	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
 	return &page[n].flags.f;
 }
@@ -357,7 +357,7 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
 {
 	struct page *page = &folio->page;
 
-	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+	VM_BUG_ON_PGFLAGS(page->compound_info & 1, page);
 	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
 	return &page[n].flags.f;
 }
@@ -869,12 +869,12 @@ static __always_inline void set_compound_head(struct page *page,
 		struct page *head, unsigned int order)
 {
-	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
+	WRITE_ONCE(page->compound_info, (unsigned long)head + 1);
 }
 
 static __always_inline void clear_compound_head(struct page *page)
 {
-	WRITE_ONCE(page->compound_head, 0);
+	WRITE_ONCE(page->compound_info, 0);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/include/linux/types.h b/include/linux/types.h
index 6dfdb8e8e4c3..3a65f0ef4a73 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -234,7 +234,7 @@ struct ustat {
  *
  * This guarantee is important for few reasons:
  *  - future call_rcu_lazy() will make use of lower bits in the pointer;
- *  - the structure shares storage space in struct page with @compound_head,
+ *  - the structure shares storage space in struct page with @compound_info,
  *    which encode PageTail() in bit 0. The guarantee is needed to avoid
  *    false-positive PageTail().
  */
diff --git a/kernel/vmcore_info.c b/kernel/vmcore_info.c
index e066d31d08f8..782bc2050a40 100644
--- a/kernel/vmcore_info.c
+++ b/kernel/vmcore_info.c
@@ -175,7 +175,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_OFFSET(page, lru);
 	VMCOREINFO_OFFSET(page, _mapcount);
 	VMCOREINFO_OFFSET(page, private);
-	VMCOREINFO_OFFSET(page, compound_head);
+	VMCOREINFO_OFFSET(page, compound_info);
 	VMCOREINFO_OFFSET(pglist_data, node_zones);
 	VMCOREINFO_OFFSET(pglist_data, nr_zones);
 #ifdef CONFIG_FLATMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fe77c00c99df..cecd6d89ff60 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -704,7 +704,7 @@ static inline bool pcp_allowed_order(unsigned int order)
  * The first PAGE_SIZE page is called the "head page" and have PG_head set.
  *
  * The remaining PAGE_SIZE pages are called "tail pages". PageTail() is encoded
- * in bit 0 of page->compound_head. The rest of bits is pointer to head page.
+ * in bit 0 of page->compound_info. The rest of bits is pointer to head page.
  *
  * The first tail page's ->compound_order holds the order of allocation.
  * This usage means that zero-order pages may not be compound.
diff --git a/mm/slab.h b/mm/slab.h
index 078daecc7cf5..b471877af296 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -104,7 +104,7 @@ struct slab {
 #define SLAB_MATCH(pg, sl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
 SLAB_MATCH(flags, flags);
-SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
+SLAB_MATCH(compound_info, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, obj_exts);
diff --git a/mm/util.c b/mm/util.c
index 8989d5767528..cbf93cf3223a 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1244,7 +1244,7 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
 again:
 	memset(&ps->folio_snapshot, 0, sizeof(struct folio));
 	memcpy(&ps->page_snapshot, page, sizeof(*page));
-	head = ps->page_snapshot.compound_head;
+	head = ps->page_snapshot.compound_info;
 	if ((head & 1) == 0) {
 		ps->idx = 0;
 		foliop = (struct folio *)&ps->page_snapshot;
-- 
2.51.2
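
For reference, below is a minimal standalone sketch of the bit-0 tagging
scheme described in the commit message. It is illustrative userspace C,
not kernel code: 'struct fake_page' and the helper names are hypothetical
stand-ins for 'struct page' and the real accessors (set_compound_head(),
clear_compound_head(), _compound_head() and PageTail() in
include/linux/page-flags.h, visible in the diff above).

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical stand-in for 'struct page'; only the field under discussion. */
	struct fake_page {
		unsigned long compound_info;	/* bit 0 set => this is a tail page */
	};

	/* Like set_compound_head(): store the head pointer with bit 0 set. */
	static void set_tail(struct fake_page *tail, struct fake_page *head)
	{
		assert(((uintptr_t)head & 1) == 0);	/* alignment keeps bit 0 free */
		tail->compound_info = (unsigned long)head + 1;
	}

	/* Like the tag check in PageTail() (ignoring the vmemmap fake-head case). */
	static int page_is_tail(const struct fake_page *page)
	{
		return page->compound_info & 1;
	}

	/* Like _compound_head(): strip the tag; non-tail pages map to themselves. */
	static struct fake_page *page_head(struct fake_page *page)
	{
		unsigned long info = page->compound_info;

		return (info & 1) ? (struct fake_page *)(info - 1) : page;
	}

	int main(void)
	{
		struct fake_page pages[4] = { { 0 } };
		int i;

		/* pages[0] acts as the head; pages[1..3] are tails pointing back at it. */
		for (i = 1; i < 4; i++)
			set_tail(&pages[i], &pages[0]);

		for (i = 0; i < 4; i++)
			printf("page %d: tail=%d head=%td\n", i,
			       page_is_tail(&pages[i]), page_head(&pages[i]) - pages);
		return 0;
	}

The tagging is safe because 'struct page' is at least word-aligned, so
bit 0 of any pointer to it is guaranteed clear and can carry the tail
marker; the SLAB_MATCH(compound_info, slab_cache) assertion above relies
on the same invariant ("Ensure bit 0 is clear").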