From nobody Sat Feb 7 09:48:00 2026
From: Kiryl Shutsemau
To: Andrew Morton, Muchun Song, David Hildenbrand, Matthew Wilcox,
 Usama Arif, Frank van der Linden
Cc: Oscar Salvador, Mike Rapoport, Vlastimil Babka,
 Lorenzo Stoakes, Zi Yan, Baoquan He, Michal Hocko, Johannes Weiner,
 Jonathan Corbet, Huacai Chen, WANG Xuerui, Palmer Dabbelt, Paul Walmsley,
 Albert Ou, Alexandre Ghiti, kernel-team@meta.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
 Kiryl Shutsemau
Subject: [PATCHv5 16/17] hugetlb: Update vmemmap_dedup.rst
Date: Wed, 28 Jan 2026 13:54:57 +0000
Message-ID: <20260128135500.22121-17-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20260128135500.22121-1-kas@kernel.org>
References: <20260128135500.22121-1-kas@kernel.org>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Update the documentation regarding vmemmap optimization for hugetlb to
reflect the changes in how the kernel maps the tail pages.

Fake heads no longer exist. Remove their description.

Signed-off-by: Kiryl Shutsemau
Reviewed-by: Muchun Song
---
 Documentation/mm/vmemmap_dedup.rst | 60 +++++++++++++-----------------
 1 file changed, 26 insertions(+), 34 deletions(-)

diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index 1863d88d2dcb..fca9d0ce282a 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -124,33 +124,35 @@ Here is how things look before optimization::
  |           |
  +-----------+
 
-The value of page->compound_info is the same for all tail pages. The first
-page of ``struct page`` (page 0) associated with the HugeTLB page contains the 4
-``struct page`` necessary to describe the HugeTLB. The only use of the remaining
-pages of ``struct page`` (page 1 to page 7) is to point to page->compound_info.
-Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of ``struct page``
-will be used for each HugeTLB page. This will allow us to free the remaining
-7 pages to the buddy allocator.
+The first page of ``struct page`` (page 0) associated with the HugeTLB page
+contains the 4 ``struct page`` necessary to describe the HugeTLB. The remaining
+pages of ``struct page`` (page 1 to page 7) are tail pages.
+
+The optimization is only applied when the size of the struct page is a power-of-2.
+In this case, all tail pages of the same order are identical. See
+compound_head(). This allows us to remap the tail pages of the vmemmap to a
+shared, read-only page. The head page is also remapped to a new page. This
+allows the original vmemmap pages to be freed.
 
 Here is how things look after remapping::
 
- HugeTLB                  struct pages(8 pages)         page frame(8 pages)
- +-----------+ ---virt_to_page---> +-----------+   mapping to  +-----------+
- |           |                     |     0     | -------------> |     0     |
- |           |                     +-----------+                +-----------+
- |           |                     |     1     | ---------------^ ^ ^ ^ ^ ^ ^
- |           |                     +-----------+                  | | | | | |
- |           |                     |     2     | -----------------+ | | | | |
- |           |                     +-----------+                    | | | | |
- |           |                     |     3     | -------------------+ | | | |
- |           |                     +-----------+                      | | | |
- |           |                     |     4     | ---------------------+ | | |
- |    PMD    |                     +-----------+                        | | |
- |   level   |                     |     5     | -----------------------+ | |
- |  mapping  |                     +-----------+                          | |
- |           |                     |     6     | -------------------------+ |
- |           |                     +-----------+                            |
- |           |                     |     7     | ---------------------------+
+ HugeTLB                  struct pages(8 pages)               page frame
+ +-----------+ ---virt_to_page---> +-----------+   mapping to  +----------------+
+ |           |                     |     0     | -------------> |       0        |
+ |           |                     +-----------+                +----------------+
+ |           |                     |     1     | ------┐
+ |           |                     +-----------+       |
+ |           |                     |     2     | ------┼        +----------------------------+
+ |           |                     +-----------+       |        | A single, per-node page    |
+ |           |                     |     3     | ------┼------> | frame shared among all     |
+ |           |                     +-----------+       |        | hugepages of the same size |
+ |           |                     |     4     | ------┼        +----------------------------+
+ |           |                     +-----------+       |
+ |           |                     |     5     | ------┼
+ |    PMD    |                     +-----------+       |
+ |   level   |                     |     6     | ------┼
+ |  mapping  |                     +-----------+       |
+ |           |                     |     7     | ------┘
  |           |                     +-----------+
  |           |
  |           |
@@ -172,16 +174,6 @@ The contiguous bit is used to increase the mapping size at the pmd and pte
 (last) level. So this type of HugeTLB page can be optimized only when its
 size of the ``struct page`` structs is greater than **1** page.
 
-Notice: The head vmemmap page is not freed to the buddy allocator and all
-tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
-more than one ``struct page`` struct with ``PG_head`` (e.g. 8 per 2 MB HugeTLB
-page) associated with each HugeTLB page. The ``compound_head()`` can handle
-this correctly. There is only **one** head ``struct page``, the tail
-``struct page`` with ``PG_head`` are fake head ``struct page``. We need an
-approach to distinguish between those two different types of ``struct page`` so
-that ``compound_head()`` can return the real head ``struct page`` when the
-parameter is the tail ``struct page`` but with ``PG_head``.
-
 Device DAX
 ==========
 
-- 
2.51.2