From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5A6243BFE59; Tue, 17 Mar 2026 16:57:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766625; cv=none; b=S2zvBMlSBlVnh31JI6z6UZa9yIx3RJDQTzHhJRycTne07ZSp7MK3ZvcPeU+SPJitbp2dPvXX+rnVC+LCFRnKdRRAwTjgX1Y4cHD1oMCLFqCxYiyLqXQdlUkd9nMVhjNf4Ry7ZlVBujoi+9uL8hxNDeil7TTR186k9tDJ2QJkY+Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766625; c=relaxed/simple; bh=enaf/WHlxY0+rUz/sR0jg/TQSIcygwkranNM2RjUkME=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=We7IN/m8lyhTeeUa3ItTdssl/Z+HEFLhS6suNEVhKqEyUuc320VMJ9EpajmG9GieHWSboyz8SnEyOdHaj+1whmM6HV/hXwfbBWHGKDcvDQ+1vI9CWbJtYcJ5fLDZAEWlnk8ZaEBoZgEb77aKhqh+0trj3VUtIGB31vWCz/x1oxY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=LO80KQZL; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="LO80KQZL" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7CB83C4CEF7; Tue, 17 Mar 2026 16:57:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773766625; bh=enaf/WHlxY0+rUz/sR0jg/TQSIcygwkranNM2RjUkME=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LO80KQZLO7/y5oXQFwtwUJ0/a9RhLfjeBQuSn8qocvIykFu/p0BLR7KGJKb2A5JqP PPoruzqnYUL8+CG8tw0kc6SqV14FbtFCEBTdKN1cuyY2OkyZfa8sXIAtPRxszTOJdX cFa/uDAO+FzUH/CSM3+jM2HdbYBNvbFkmuw6iozaq8GeqCFVitPnVz18N35lN7/188 6MlhgdwanmY0jnhsl+jgHYJNmaVnUqb1Oggk6TLSnA6tgghgwlH/1bRQcUKaJJdkU5 
IjedjUKfm3/olQK+OP4IaqHxQetBGgX59nK99evDpDoCe5igVI6ojJkVhNSNB+0mjh JvE7X95S26utA== From: "David Hildenbrand (Arm)" To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko Subject: [PATCH 01/14] mm/memory_hotplug: remove for_each_valid_pfn() usage Date: Tue, 17 Mar 2026 17:56:39 +0100 Message-ID: <20260317165652.99114-2-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260317165652.99114-1-david@kernel.org> References: <20260317165652.99114-1-david@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When offlining memory, we know that the memory range has no holes. Checking for valid pfns is not required. 
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/memory_hotplug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 86d3faf50453..3495d94587e7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1746,7 +1746,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 {
 	unsigned long pfn;
 
-	for_each_valid_pfn(pfn, start, end) {
+	for (pfn = start; pfn < end; pfn++) {
 		struct page *page;
 		struct folio *folio;
 
@@ -1791,7 +1791,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
-	for_each_valid_pfn(pfn, start_pfn, end_pfn) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		struct page *page;
 
 		page = pfn_to_page(pfn);
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R.
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 02/14] mm/sparse: remove WARN_ONs from (online|offline)_mem_sections()
Date: Tue, 17 Mar 2026 17:56:40 +0100
Message-ID: <20260317165652.99114-3-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

We do not allow offlining of memory with memory holes, and always hotplug memory without holes. Consequently, we cannot end up onlining or offlining memory sections that have holes (including invalid sections). That's also why these WARN_ONs never fired.

Let's remove the WARN_ONs along with the TODO regarding double-checking.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/sparse.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index dfabe554adf8..93252112860e 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -638,13 +638,8 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
-		struct mem_section *ms;
-
-		/* onlining code should never touch invalid ranges */
-		if (WARN_ON(!valid_section_nr(section_nr)))
-			continue;
+		struct mem_section *ms = __nr_to_section(section_nr);
 
-		ms = __nr_to_section(section_nr);
 		ms->section_mem_map |= SECTION_IS_ONLINE;
 	}
 }
@@ -656,16 +651,8 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
-		struct mem_section *ms;
+		struct mem_section *ms = __nr_to_section(section_nr);
 
-		/*
-		 * TODO this needs some double checking. Offlining code makes
-		 * sure to check pfn_valid but those checks might be just bogus
-		 */
-		if (WARN_ON(!valid_section_nr(section_nr)))
-			continue;
-
-		ms = __nr_to_section(section_nr);
 		ms->section_mem_map &= ~SECTION_IS_ONLINE;
 	}
 }
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 03/14] mm/Kconfig: make CONFIG_MEMORY_HOTPLUG depend on CONFIG_SPARSEMEM_VMEMMAP
Date: Tue, 17 Mar 2026 17:56:41 +0100
Message-ID: <20260317165652.99114-4-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

Ever since commit f8f03eb5f0f9 ("mm: stop making SPARSEMEM_VMEMMAP user-selectable"), an architecture that supports CONFIG_SPARSEMEM_VMEMMAP (by selecting SPARSEMEM_VMEMMAP_ENABLE) can no longer enable CONFIG_SPARSEMEM without CONFIG_SPARSEMEM_VMEMMAP.

Right now, CONFIG_MEMORY_HOTPLUG is guarded by CONFIG_SPARSEMEM.
However, CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG is only enabled by
* arm64: which selects SPARSEMEM_VMEMMAP_ENABLE
* loongarch: which selects SPARSEMEM_VMEMMAP_ENABLE
* powerpc (64bit): which selects SPARSEMEM_VMEMMAP_ENABLE
* riscv (64bit): which selects SPARSEMEM_VMEMMAP_ENABLE
* s390 with SPARSEMEM: which selects SPARSEMEM_VMEMMAP_ENABLE
* x86 (64bit): which selects SPARSEMEM_VMEMMAP_ENABLE

So, we can make CONFIG_MEMORY_HOTPLUG depend on CONFIG_SPARSEMEM_VMEMMAP without affecting any setups.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index ebd8ea353687..c012944938a7 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -472,7 +472,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 menuconfig MEMORY_HOTPLUG
 	bool "Memory hotplug"
 	select MEMORY_ISOLATION
-	depends on SPARSEMEM
+	depends on SPARSEMEM_VMEMMAP
 	depends on ARCH_ENABLE_MEMORY_HOTPLUG
 	depends on 64BIT
 	select NUMA_KEEP_MEMINFO if NUMA
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R.
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 04/14] mm/memory_hotplug: simplify check_pfn_span()
Date: Tue, 17 Mar 2026 17:56:42 +0100
Message-ID: <20260317165652.99114-5-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

We now always have CONFIG_SPARSEMEM_VMEMMAP, so remove the dead code.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/memory_hotplug.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3495d94587e7..70e620496cec 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -320,21 +320,13 @@ static void release_memory_resource(struct resource *res)
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages)
 {
 	/*
-	 * Disallow all operations smaller than a sub-section and only
-	 * allow operations smaller than a section for
-	 * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
-	 * enforces a larger memory_block_size_bytes() granularity for
-	 * memory that will be marked online, so this check should only
-	 * fire for direct arch_{add,remove}_memory() users outside of
-	 * add_memory_resource().
+	 * Disallow all operations smaller than a sub-section.
+	 * Note that check_hotplug_memory_range() enforces a larger
+	 * memory_block_size_bytes() granularity for memory that will be marked
+	 * online, so this check should only fire for direct
+	 * arch_{add,remove}_memory() users outside of add_memory_resource().
 	 */
-	unsigned long min_align;
-
-	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
-		min_align = PAGES_PER_SUBSECTION;
-	else
-		min_align = PAGES_PER_SECTION;
-	if (!IS_ALIGNED(pfn | nr_pages, min_align))
+	if (!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION))
 		return -EINVAL;
 	return 0;
 }
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 05/14] mm/sparse: remove !CONFIG_SPARSEMEM_VMEMMAP leftovers for CONFIG_MEMORY_HOTPLUG
Date: Tue, 17 Mar 2026 17:56:43 +0100
Message-ID: <20260317165652.99114-6-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

CONFIG_MEMORY_HOTPLUG now depends on CONFIG_SPARSEMEM_VMEMMAP. So let's remove the !CONFIG_SPARSEMEM_VMEMMAP leftovers.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/sparse.c | 61 -----------------------------------------------------
 1 file changed, 61 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 93252112860e..636a4a0f1199 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -657,7 +657,6 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 	}
 }
 
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static struct page * __meminit populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
 		struct dev_pagemap *pgmap)
@@ -729,66 +728,6 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 
 	return rc;
 }
-#else
-static struct page * __meminit populate_section_memmap(unsigned long pfn,
-		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
-		struct dev_pagemap *pgmap)
-{
-	return kvmalloc_node(array_size(sizeof(struct page),
-			PAGES_PER_SECTION), GFP_KERNEL, nid);
-}
-
-static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap)
-{
-	kvfree(pfn_to_page(pfn));
-}
-
-static void free_map_bootmem(struct page *memmap)
-{
-	unsigned long maps_section_nr, removing_section_nr, i;
-	unsigned long type, nr_pages;
-	struct page *page = virt_to_page(memmap);
-
-	nr_pages = PAGE_ALIGN(PAGES_PER_SECTION * sizeof(struct page))
-		>> PAGE_SHIFT;
-
-	for (i = 0; i < nr_pages; i++, page++) {
-		type = bootmem_type(page);
-
-		BUG_ON(type == NODE_INFO);
-
-		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = bootmem_info(page);
-
-		/*
-		 * When this function is called, the removing section is
-		 * logical offlined state. This means all pages are isolated
-		 * from page allocator. If removing section's memmap is placed
-		 * on the same section, it must not be freed.
-		 * If it is freed, page allocator may allocate it which will
-		 * be removed physically soon.
-		 */
-		if (maps_section_nr != removing_section_nr)
-			put_page_bootmem(page);
-	}
-}
-
-static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	return 0;
-}
-
-static bool is_subsection_map_empty(struct mem_section *ms)
-{
-	return true;
-}
-
-static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	return 0;
-}
-#endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
 /*
  * To deactivate a memory region, there are 3 cases to handle across
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 06/14] mm/bootmem_info: remove handling for !CONFIG_SPARSEMEM_VMEMMAP
Date: Tue, 17 Mar 2026 17:56:44 +0100
Message-ID: <20260317165652.99114-7-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

It is not immediately obvious that CONFIG_HAVE_BOOTMEM_INFO_NODE is only selected from CONFIG_MEMORY_HOTREMOVE, which itself depends on CONFIG_MEMORY_HOTPLUG that ... depends on CONFIG_SPARSEMEM_VMEMMAP.

Let's remove the !CONFIG_SPARSEMEM_VMEMMAP leftovers.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/bootmem_info.c | 37 -------------------------------------
 1 file changed, 37 deletions(-)

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index b0e2a9fa641f..e61e08e24924 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -40,42 +40,6 @@ void put_page_bootmem(struct page *page)
 	}
 }
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void __init register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
 static void __init register_page_bootmem_info_section(unsigned long start_pfn)
 {
 	unsigned long mapsize, section_nr, i;
@@ -100,7 +64,6 @@ static void __init register_page_bootmem_info_section(unsigned long start_pfn)
 	for (i = 0; i < mapsize; i++, page++)
 		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
 }
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
 
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 07/14] mm/bootmem_info: avoid using sparse_decode_mem_map()
Date: Tue, 17 Mar 2026 17:56:45 +0100
Message-ID: <20260317165652.99114-8-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

With SPARSEMEM_VMEMMAP, we can just do a pfn_to_page(). It is not super clear whether the start_pfn is properly aligned ... so let's just make sure it is.

We might soon try to remove the bootmem info completely; for now, just keep it working as is.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/bootmem_info.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index e61e08e24924..3d7675a3ae04 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -44,17 +44,16 @@ static void __init register_page_bootmem_info_section(unsigned long start_pfn)
 {
 	unsigned long mapsize, section_nr, i;
 	struct mem_section *ms;
-	struct page *page, *memmap;
 	struct mem_section_usage *usage;
+	struct page *page;
 
+	start_pfn = SECTION_ALIGN_DOWN(start_pfn);
 	section_nr = pfn_to_section_nr(start_pfn);
 	ms = __nr_to_section(section_nr);
 
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-	if (!preinited_vmemmap_section(ms))
-		register_page_bootmem_memmap(section_nr, memmap,
-					     PAGES_PER_SECTION);
+	register_page_bootmem_memmap(section_nr, pfn_to_page(start_pfn),
+				     PAGES_PER_SECTION);
 
 	usage = ms->usage;
 	page = virt_to_page(usage);
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko
Subject: [PATCH 08/14] mm/sparse: remove sparse_decode_mem_map()
Date: Tue, 17 Mar 2026 17:56:46 +0100
Message-ID: <20260317165652.99114-9-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>

section_deactivate() applies to CONFIG_SPARSEMEM_VMEMMAP only. So we can just use pfn_to_page() (after making sure we have the start PFN of the section), and remove sparse_decode_mem_map().

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 include/linux/memory_hotplug.h | 2 --
 mm/sparse.c | 16 +---------------
 2 files changed, 1 insertion(+), 17 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index e77ef3d7ff73..815e908c4135 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -308,8 +308,6 @@ extern int sparse_add_section(int nid, unsigned long pfn,
 		struct dev_pagemap *pgmap);
 extern void sparse_remove_section(unsigned long pfn, unsigned long nr_pages,
 		struct vmem_altmap *altmap);
-extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
-		unsigned long pnum);
 extern struct zone *zone_for_pfn_range(enum mmop online_type, int nid,
 		struct memory_group *group, unsigned long start_pfn,
 		unsigned long nr_pages);
diff --git a/mm/sparse.c b/mm/sparse.c
index 636a4a0f1199..2a1f662245bc 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -274,18 +274,6 @@ static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned long p
 	return coded_mem_map;
 }
 
-#ifdef CONFIG_MEMORY_HOTPLUG
-/*
- * Decode mem_map from the coded memmap
- */
-struct page *sparse_decode_mem_map(unsigned long coded_mem_map, unsigned l= ong pnum) -{ - /* mask off the extra low bits of information */ - coded_mem_map &=3D SECTION_MAP_MASK; - return ((struct page *)coded_mem_map) + section_nr_to_pfn(pnum); -} -#endif /* CONFIG_MEMORY_HOTPLUG */ - static void __meminit sparse_init_one_section(struct mem_section *ms, unsigned long pnum, struct page *mem_map, struct mem_section_usage *usage, unsigned long flags) @@ -758,8 +746,6 @@ static void section_deactivate(unsigned long pfn, unsig= ned long nr_pages, =20 empty =3D is_subsection_map_empty(ms); if (empty) { - unsigned long section_nr =3D pfn_to_section_nr(pfn); - /* * Mark the section invalid so that valid_section() * return false. This prevents code from dereferencing @@ -778,7 +764,7 @@ static void section_deactivate(unsigned long pfn, unsig= ned long nr_pages, kfree_rcu(ms->usage, rcu); WRITE_ONCE(ms->usage, NULL); } - memmap =3D sparse_decode_mem_map(ms->section_mem_map, section_nr); + memmap =3D pfn_to_page(SECTION_ALIGN_DOWN(pfn)); } =20 /* --=20 2.43.0 From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B380E3F65F2; Tue, 17 Mar 2026 16:57:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766658; cv=none; b=QNq85HGozkVedmEZaXsXmjbFM8K+x0kp88IXUS7PvHatIqSo13O2QdKyyFI2KQNWp94M98XqVv0NfiVCfs/+9pkrTtljjHlILRC7vNhqPhdFU17H9DCJ62kfi/JdW7XFdDTwOFHNLxUw4aTjljwbqNXicYikMTVamrI+3tCygIE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766658; c=relaxed/simple; bh=dWpP9RC2AOJFLEHW4kUCv0xfNWwbmf3vYnbU/fsvVFs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=PqyAYxDsujJ1nOggsp4lJ6ZpjyvMvuJ3wCEep9/VGh/Q6dF5R6N/BSo+wRkNHejdET3dXsFyEtBnXSm7UnBT5FIj1IIO8WQrP6fAInKf5ICSsZClKpZ8lmVdz9XrFji6pUEymcpuz7B0GfsQyPtCRP0IOY9oJEoD7sLBJmt9y7Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=s9R4skG5; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="s9R4skG5" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C1887C4CEF7; Tue, 17 Mar 2026 16:57:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773766658; bh=dWpP9RC2AOJFLEHW4kUCv0xfNWwbmf3vYnbU/fsvVFs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s9R4skG5T6KH5NvwzA4vYuRme6D0OB8ITfYauWXHRHyHHcMdZQZevLyzvso6LsSEk wFjc0hW7YrUN2nIIhHWAZ2IBtJOxAiJNGgwICG1VZSPVF7UCBzEDMNYhcFROZo6jvQ 7xrRl+V3jFMzZ5RFtPYpI1LfYYBfoAZcbaAk6qIeyhCe7E6iS9ZuAaO8JPZDVQPgv1 rc6xVKrMNY8mvmIcxj6bypwbXi2S/ErknSi4akEiNdwrg3cPX1jHtTAYv0sTqy3Mqv V6Lo20crQ5OTijobm3om86U5qN1Rn1t1ewH2VXK150Cf+VPPOvuXTM3FH8voCugRav dq58rnDb12TDw== From: "David Hildenbrand (Arm)" To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. 
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko Subject: [PATCH 09/14] mm/sparse: remove CONFIG_MEMORY_HOTPLUG-specific usemap allocation handling Date: Tue, 17 Mar 2026 17:56:47 +0100 Message-ID: <20260317165652.99114-10-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260317165652.99114-1-david@kernel.org> References: <20260317165652.99114-1-david@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In 2008, we added through commit 48c906823f39 ("memory hotplug: allocate usemap on the section with pgdat") quite some complexity to try allocating memory for the "usemap" (storing pageblock information per memory section) for a memory section close to the memory of the "pgdat" of the node. The goal was to make memory hotunplug of boot memory more likely to succeed. That commit also added some checks for circular dependencies between two memory sections, whereby two memory sections would contain each other's usemap, turning both memory sections un-removable. However, in 2010, commit a4322e1bad91 ("sparsemem: Put usemap for one node together") started allocating the usemap for multiple memory sections on the same node in one chunk, effectively grouping all usemap allocations of the same node in a single memblock allocation. We don't really give guarantees about memory hotunplug of boot memory, and with the change in 2010, it is pretty much impossible in practice to get any circular dependencies. commit 48c906823f39 ("memory hotplug: allocate usemap on the section with pgdat") also added the comment: "Similarly, a pgdat can prevent a section being removed. If section A contains a pgdat and section B contains the usemap, both sections become inter-dependent." Given that we don't free the pgdat anymore, that comment (and handling) does not apply. 
So let's simply remove this complexity. Signed-off-by: David Hildenbrand (Arm) Reviewed-by: Lorenzo Stoakes (Oracle) Reviewed-by: Mike Rapoport (Microsoft) --- mm/sparse.c | 100 +--------------------------------------------------- 1 file changed, 1 insertion(+), 99 deletions(-) diff --git a/mm/sparse.c b/mm/sparse.c index 2a1f662245bc..b57c81e99340 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -294,102 +294,6 @@ size_t mem_section_usage_size(void) return sizeof(struct mem_section_usage) + usemap_size(); } =20 -#ifdef CONFIG_MEMORY_HOTREMOVE -static inline phys_addr_t pgdat_to_phys(struct pglist_data *pgdat) -{ -#ifndef CONFIG_NUMA - VM_BUG_ON(pgdat !=3D &contig_page_data); - return __pa_symbol(&contig_page_data); -#else - return __pa(pgdat); -#endif -} - -static struct mem_section_usage * __init -sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat, - unsigned long size) -{ - struct mem_section_usage *usage; - unsigned long goal, limit; - int nid; - /* - * A page may contain usemaps for other sections preventing the - * page being freed and making a section unremovable while - * other sections referencing the usemap remain active. Similarly, - * a pgdat can prevent a section being removed. If section A - * contains a pgdat and section B contains the usemap, both - * sections become inter-dependent. This allocates usemaps - * from the same section as the pgdat where possible to avoid - * this problem. 
- */ - goal =3D pgdat_to_phys(pgdat) & (PAGE_SECTION_MASK << PAGE_SHIFT); - limit =3D goal + (1UL << PA_SECTION_SHIFT); - nid =3D early_pfn_to_nid(goal >> PAGE_SHIFT); -again: - usage =3D memblock_alloc_try_nid(size, SMP_CACHE_BYTES, goal, limit, nid); - if (!usage && limit) { - limit =3D MEMBLOCK_ALLOC_ACCESSIBLE; - goto again; - } - return usage; -} - -static void __init check_usemap_section_nr(int nid, - struct mem_section_usage *usage) -{ - unsigned long usemap_snr, pgdat_snr; - static unsigned long old_usemap_snr; - static unsigned long old_pgdat_snr; - struct pglist_data *pgdat =3D NODE_DATA(nid); - int usemap_nid; - - /* First call */ - if (!old_usemap_snr) { - old_usemap_snr =3D NR_MEM_SECTIONS; - old_pgdat_snr =3D NR_MEM_SECTIONS; - } - - usemap_snr =3D pfn_to_section_nr(__pa(usage) >> PAGE_SHIFT); - pgdat_snr =3D pfn_to_section_nr(pgdat_to_phys(pgdat) >> PAGE_SHIFT); - if (usemap_snr =3D=3D pgdat_snr) - return; - - if (old_usemap_snr =3D=3D usemap_snr && old_pgdat_snr =3D=3D pgdat_snr) - /* skip redundant message */ - return; - - old_usemap_snr =3D usemap_snr; - old_pgdat_snr =3D pgdat_snr; - - usemap_nid =3D sparse_early_nid(__nr_to_section(usemap_snr)); - if (usemap_nid !=3D nid) { - pr_info("node %d must be removed before remove section %ld\n", - nid, usemap_snr); - return; - } - /* - * There is a circular dependency. - * Some platforms allow un-removable section because they will just - * gather other removable sections for dynamic partitioning. - * Just notify un-removable section's number here. 
- */ - pr_info("Section %ld and %ld (node %d) have a circular dependency on usem= ap and pgdat allocations\n", - usemap_snr, pgdat_snr, nid); -} -#else -static struct mem_section_usage * __init -sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat, - unsigned long size) -{ - return memblock_alloc_node(size, SMP_CACHE_BYTES, pgdat->node_id); -} - -static void __init check_usemap_section_nr(int nid, - struct mem_section_usage *usage) -{ -} -#endif /* CONFIG_MEMORY_HOTREMOVE */ - #ifdef CONFIG_SPARSEMEM_VMEMMAP unsigned long __init section_map_size(void) { @@ -486,7 +390,6 @@ void __init sparse_init_early_section(int nid, struct p= age *map, unsigned long pnum, unsigned long flags) { BUG_ON(!sparse_usagebuf || sparse_usagebuf >=3D sparse_usagebuf_end); - check_usemap_section_nr(nid, sparse_usagebuf); sparse_init_one_section(__nr_to_section(pnum), pnum, map, sparse_usagebuf, SECTION_IS_EARLY | flags); sparse_usagebuf =3D (void *)sparse_usagebuf + mem_section_usage_size(); @@ -497,8 +400,7 @@ static int __init sparse_usage_init(int nid, unsigned l= ong map_count) unsigned long size; =20 size =3D mem_section_usage_size() * map_count; - sparse_usagebuf =3D sparse_early_usemaps_alloc_pgdat_section( - NODE_DATA(nid), size); + sparse_usagebuf =3D memblock_alloc_node(size, SMP_CACHE_BYTES, nid); if (!sparse_usagebuf) { sparse_usagebuf_end =3D NULL; return -ENOMEM; --=20 2.43.0 From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D480F3F54B8; Tue, 17 Mar 2026 16:57:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766662; cv=none; 
b=YNH5b+ct16PsREjibuxKLSBm58y3CQ02o/gk/+F9talBv+VrRvDP1tNthzmTYJ1V/KbW3beC+sHbvb0REngUjq+FY3MpFxJtpSrmpuxljLx6fmETYaEudLRH7Ugf2l5GZ+gYojSS5Ujvh+/2HqnFlnzjIbZi+8gCtTnknkikSk8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766662; c=relaxed/simple; bh=t9H4ODpg5R9njK15JZW6Bi4180ghG2kDbMh/t4uFB5I=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rgeNfpBKA3OOdPUhuGk0n5LPsBzqkGXEV0loBz8JuqC6sHItekeT5YMYie1tkxpUTtJaK13JkBnnm3FFCmcBHcwZht9kwdEz2r+6dlKW9B3jnTE/JWz4VmcLRKOqVF5DSvTSprHO7yn//yDup47s3Sj1q2ASxgfEZD0dp5N/6ZI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KmWEQkLl; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KmWEQkLl" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 16983C19424; Tue, 17 Mar 2026 16:57:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773766662; bh=t9H4ODpg5R9njK15JZW6Bi4180ghG2kDbMh/t4uFB5I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KmWEQkLlFIjSPQY6yvikDH1PeDIr1/5cT7bLQciS8NbHXDZqmdvmajSV63q3bc2Lp cekuWl/0KKVBqGfTb5TDH8N9Xia066PNeWOJTKNTmKRDkJ6khtp8r/hI2pWZil1oPo IbYkOH07uW+2PiEzjJDhD0KdJqT2UQqh/yBiVv4IJw93+K8vtYgDq1IDglC+kbq89V 63IwLnQkYDS8Z+CK5PuzThH1cQyUTE5EHxIKzhyV6Us7IKlCzkwmtTNMHz9kf3D3zk PN3WKrd11bXAX1UkLT4AYWkpKiobdnePmBdxkfIc3FqzO94Eu1hI4Ftzll0msAKtJ2 pQAtDXr1ruXCw== From: "David Hildenbrand (Arm)" To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. 
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko Subject: [PATCH 10/14] mm: prepare to move subsection_map_init() to mm/sparse-vmemmap.c Date: Tue, 17 Mar 2026 17:56:48 +0100 Message-ID: <20260317165652.99114-11-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260317165652.99114-1-david@kernel.org> References: <20260317165652.99114-1-david@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" We want to move subsection_map_init() to mm/sparse-vmemmap.c. To prepare for getting rid of subsection_map_init() in mm/sparse.c completely, use a static inline function for !CONFIG_SPARSEMEM_VMEMMAP. While at it, move the declaration to internal.h and rename it to "sparse_init_subsection_map()". Signed-off-by: David Hildenbrand (Arm) Reviewed-by: Lorenzo Stoakes (Oracle) Reviewed-by: Mike Rapoport (Microsoft) --- include/linux/mmzone.h | 3 --- mm/internal.h | 12 ++++++++++++ mm/mm_init.c | 2 +- mm/sparse.c | 6 +----- 4 files changed, 14 insertions(+), 9 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 7bd0134c241c..b694c69dee04 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -2002,8 +2002,6 @@ struct mem_section_usage { unsigned long pageblock_flags[0]; }; =20 -void subsection_map_init(unsigned long pfn, unsigned long nr_pages); - struct page; struct page_ext; struct mem_section { @@ -2396,7 +2394,6 @@ static inline unsigned long next_present_section_nr(u= nsigned long section_nr) #define sparse_vmemmap_init_nid_early(_nid) do {} while (0) #define sparse_vmemmap_init_nid_late(_nid) do {} while (0) #define pfn_in_present_section pfn_valid -#define subsection_map_init(_pfn, _nr_pages) do {} while (0) #endif /* CONFIG_SPARSEMEM */ =20 /* diff --git a/mm/internal.h b/mm/internal.h index f98f4746ac41..5f5c45d80aca 100644 
--- a/mm/internal.h +++ b/mm/internal.h @@ -960,12 +960,24 @@ void memmap_init_range(unsigned long, int, unsigned l= ong, unsigned long, unsigned long, enum meminit_context, struct vmem_altmap *, int, bool); =20 +/* + * mm/sparse.c + */ #ifdef CONFIG_SPARSEMEM void sparse_init(void); #else static inline void sparse_init(void) {} #endif /* CONFIG_SPARSEMEM */ =20 +#ifdef CONFIG_SPARSEMEM_VMEMMAP +void sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages); +#else +static inline void sparse_init_subsection_map(unsigned long pfn, + unsigned long nr_pages) +{ +} +#endif /* CONFIG_SPARSEMEM_VMEMMAP */ + #if defined CONFIG_COMPACTION || defined CONFIG_CMA =20 /* diff --git a/mm/mm_init.c b/mm/mm_init.c index 969048f9b320..3c5f18537cd1 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -1898,7 +1898,7 @@ static void __init free_area_init(void) pr_info(" node %3d: [mem %#018Lx-%#018Lx]\n", nid, (u64)start_pfn << PAGE_SHIFT, ((u64)end_pfn << PAGE_SHIFT) - 1); - subsection_map_init(start_pfn, end_pfn - start_pfn); + sparse_init_subsection_map(start_pfn, end_pfn - start_pfn); } =20 /* Initialise every node */ diff --git a/mm/sparse.c b/mm/sparse.c index b57c81e99340..7b0bfea73a9b 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -185,7 +185,7 @@ static void subsection_mask_set(unsigned long *map, uns= igned long pfn, bitmap_set(map, idx, end - idx + 1); } =20 -void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages) +void __init sparse_init_subsection_map(unsigned long pfn, unsigned long nr= _pages) { int end_sec_nr =3D pfn_to_section_nr(pfn + nr_pages - 1); unsigned long nr, start_sec_nr =3D pfn_to_section_nr(pfn); @@ -207,10 +207,6 @@ void __init subsection_map_init(unsigned long pfn, uns= igned long nr_pages) nr_pages -=3D pfns; } } -#else -void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages) -{ -} #endif =20 /* Record a memory area against a node. 
*/ --=20 2.43.0 From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0C38A3F54CE; Tue, 17 Mar 2026 16:57:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766667; cv=none; b=gNBpbO4b3nwQFXCCeZoqw3eaYzdFRSUkPHnjhBiSmV/5/A8MF+t7e+He/CYREp7LMPqUv5wRRq66EBYi3unqV7iKLMFWpQ0IhOu5xLxCowtVki8m4i11qpiVpGWdYjCRq8zLjAtkWuVjEM2yP6IbgtlNsX1sthvoUUVL92VYlmM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766667; c=relaxed/simple; bh=GmzhoQ0PJsvg3r41ZGNVjXu6dhy2NO2MgeEoTaVUJyc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=WkfI/Re8iavQ1zRA4yfNW213wQJUclgZpYGS+5LtMOZADm+7K0LcS0OwMsN57/NK1ln7gWzc0GTBlGjTbSn7yL1OVkXLgruYWmzAwnpd22I6BmU1mLDPBIpq279RHCBGIK4CnezZZFOa4PdoqDGvLI8lpJOUbsm3lXm0IwZp3ns= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KvOITOgO; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KvOITOgO" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 46E24C2BCAF; Tue, 17 Mar 2026 16:57:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773766666; bh=GmzhoQ0PJsvg3r41ZGNVjXu6dhy2NO2MgeEoTaVUJyc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KvOITOgO+ibK/lPOLHPXPeB+/05jkCMxW9PlT0bA7fNiSqObIrPhWQfImj7g4NsUq Imap3UUZb/Tp92VIG/zhLpyI/ggtGmC52Tg17EJDKgxFAAbSbIyPBQcNonDJOD1ocO L8whLAIiEhbgmmYDfK3CdjQ+aUo7ddrVdb+qeRT4lLLZqEq7USP2ESW+zqcvTd0sYT 
YfZHO9mKUGFpb6acRxS0xvf8lbNt07LJo1J95hp1nZA9qI9DHCQqVIi/4ZYj9rmbQA +NURaueDz8fkDy4jRiEVRYtzvnnIoPxRfsfcFjvpDQcQRwlcUlyEVMvJ3rwghpVy7e 99r7NDF8B9EGg== From: "David Hildenbrand (Arm)" To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko Subject: [PATCH 11/14] mm/sparse: drop set_section_nid() from sparse_add_section() Date: Tue, 17 Mar 2026 17:56:49 +0100 Message-ID: <20260317165652.99114-12-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260317165652.99114-1-david@kernel.org> References: <20260317165652.99114-1-david@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" CONFIG_MEMORY_HOTPLUG is CONFIG_SPARSEMEM_VMEMMAP-only. And CONFIG_SPARSEMEM_VMEMMAP implies that NODE_NOT_IN_PAGE_FLAGS cannot be set: see include/linux/page-flags-layout.h ... #elif defined(CONFIG_SPARSEMEM_VMEMMAP) #error "Vmemmap: No space for nodes field in page flags" ... 
So let's remove the set_section_nid() call to prepare for moving CONFIG_MEMORY_HOTPLUG to mm/sparse-vmemmap.c Signed-off-by: David Hildenbrand (Arm) Reviewed-by: Lorenzo Stoakes (Oracle) Reviewed-by: Mike Rapoport (Microsoft) --- mm/sparse.c | 1 - 1 file changed, 1 deletion(-) diff --git a/mm/sparse.c b/mm/sparse.c index 7b0bfea73a9b..b5a2de43ac40 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -769,7 +769,6 @@ int __meminit sparse_add_section(int nid, unsigned long= start_pfn, page_init_poison(memmap, sizeof(struct page) * nr_pages); =20 ms =3D __nr_to_section(section_nr); - set_section_nid(section_nr, nid); __section_mark_present(ms, section_nr); =20 /* Align memmap to section boundary in the subsection case */ --=20 2.43.0 From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 21B603F54D1; Tue, 17 Mar 2026 16:57:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766671; cv=none; b=UHFHTwwMDijbEVwtYTIzaEDpEeT2WcXr2eRzrcodM3mv7Kmh664eDtJPm9Uoi5cZU2eDu0tKLV8VV4fWz7xmf8VMBS20WPED/qapJoC7mtJ01lH7bvTnxuuJm0jLlyzlDcn3zfNmOqGSHbSlJMLaIQZnNqOdyON9YTsAbSxfgWc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773766671; c=relaxed/simple; bh=Bip5aS2Q/tCY3dJyBk4Fs5dKG63YEPL2XgTfzOnLk/o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=fKWtgp+KSaWPjxc7QIvjhZ9Ze2a1Jt1XbJDUY+h6DCJf4OxxQeBfsJTi+b8DGRu45JCTSxLLRJVbgHFuotD++m01neKy5Gw+foGUHNyuQleyYuh3Z1A+HK17YAhIM3/ahAhK+Marq0VTXmo2cYrDDPDHUMD9qEBoUXwyH+SfDqU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qFaMp8J3; arc=none 
smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qFaMp8J3" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7784CC2BCB3; Tue, 17 Mar 2026 16:57:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773766671; bh=Bip5aS2Q/tCY3dJyBk4Fs5dKG63YEPL2XgTfzOnLk/o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qFaMp8J3h6JOzr7rbOeTOYOK/GYkvLK8VsApl75PUwJtmsISNYD4yXcM1jTlZtOyS 6QUsA6N4pt6A0vyltmwEuqpd0PVsjYB8DN+FfJE/RI1AEOUn746O2LL48LlOxm2jHi LkEqSb2ywOlAiDfBzi9Px/MxwS6Lk8ofbJkl0CibC48PbNuFIeBjGnbDBjhxhYFJdL dS7+rCrenGPGw1k1gzHlX9qyAutHDQfjbciY3bY9j8Yt/9XqHc456BsUvXjs7NUvSU 3Kke/jH0KX9ggRX+MNKpMGzCEbVEp/Us+6guRKZrJIs5QIkEXp+3oD39j8g5uPrQHn L96mGSXBg5mBA== From: "David Hildenbrand (Arm)" To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)" , Andrew Morton , Oscar Salvador , Axel Rasmussen , Yuanchu Xie , Wei Xu , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko Subject: [PATCH 12/14] mm/sparse: move sparse_init_one_section() to internal.h Date: Tue, 17 Mar 2026 17:56:50 +0100 Message-ID: <20260317165652.99114-13-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260317165652.99114-1-david@kernel.org> References: <20260317165652.99114-1-david@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" While at it, convert the BUG_ON to a WARN_ON, avoid long lines, and merge sparse_encode_mem_map() into sparse_init_one_section(). Clarify the comment a bit, pointing at page_to_pfn(). 
Signed-off-by: David Hildenbrand (Arm) Reviewed-by: Lorenzo Stoakes (Oracle) Reviewed-by: Mike Rapoport (Microsoft) --- include/linux/mmzone.h | 2 +- mm/internal.h | 22 ++++++++++++++++++++++ mm/sparse.c | 24 ------------------------ 3 files changed, 23 insertions(+), 25 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index b694c69dee04..dcbbf36ed88c 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -2008,7 +2008,7 @@ struct mem_section { /* * This is, logically, a pointer to an array of struct * pages. However, it is stored with some other magic. - * (see sparse.c::sparse_init_one_section()) + * (see sparse_init_one_section()) * * Additionally during early boot we encode node id of * the location of the section here to guide allocation. diff --git a/mm/internal.h b/mm/internal.h index 5f5c45d80aca..bcf4df97b185 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -965,6 +965,28 @@ void memmap_init_range(unsigned long, int, unsigned lo= ng, unsigned long, */ #ifdef CONFIG_SPARSEMEM void sparse_init(void); + +static inline void sparse_init_one_section(struct mem_section *ms, + unsigned long pnum, struct page *mem_map, + struct mem_section_usage *usage, unsigned long flags) +{ + unsigned long coded_mem_map; + + BUILD_BUG_ON(SECTION_MAP_LAST_BIT > PFN_SECTION_SHIFT); + + /* + * We encode the start PFN of the section into the mem_map such that + * page_to_pfn() on !CONFIG_SPARSEMEM_VMEMMAP can simply subtract it + * from the page pointer to obtain the PFN. 
+ */ + coded_mem_map =3D (unsigned long)(mem_map - section_nr_to_pfn(pnum)); + VM_WARN_ON(coded_mem_map & ~SECTION_MAP_MASK); + + ms->section_mem_map &=3D ~SECTION_MAP_MASK; + ms->section_mem_map |=3D coded_mem_map; + ms->section_mem_map |=3D SECTION_HAS_MEM_MAP | flags; + ms->usage =3D usage; +} #else static inline void sparse_init(void) {} #endif /* CONFIG_SPARSEMEM */ diff --git a/mm/sparse.c b/mm/sparse.c index b5a2de43ac40..6f5f340301a3 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -256,30 +256,6 @@ static void __init memblocks_present(void) memory_present(nid, start, end); } =20 -/* - * Subtle, we encode the real pfn into the mem_map such that - * the identity pfn - section_mem_map will return the actual - * physical page frame number. - */ -static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned = long pnum) -{ - unsigned long coded_mem_map =3D - (unsigned long)(mem_map - (section_nr_to_pfn(pnum))); - BUILD_BUG_ON(SECTION_MAP_LAST_BIT > PFN_SECTION_SHIFT); - BUG_ON(coded_mem_map & ~SECTION_MAP_MASK); - return coded_mem_map; -} - -static void __meminit sparse_init_one_section(struct mem_section *ms, - unsigned long pnum, struct page *mem_map, - struct mem_section_usage *usage, unsigned long flags) -{ - ms->section_mem_map &=3D ~SECTION_MAP_MASK; - ms->section_mem_map |=3D sparse_encode_mem_map(mem_map, pnum) - | SECTION_HAS_MEM_MAP | flags; - ms->usage =3D usage; -} - static unsigned long usemap_size(void) { return BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS) * sizeof(unsigned long); --=20 2.43.0 From nobody Mon Apr 6 22:15:25 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 85BE13F7A83; Tue, 17 Mar 2026 16:57:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; 
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)", Andrew Morton, Oscar Salvador, Axel Rasmussen, Yuanchu Xie, Wei Xu, Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Subject: [PATCH 13/14] mm/sparse: move __section_mark_present() to internal.h
Date: Tue, 17 Mar 2026 17:56:51 +0100
Message-ID: <20260317165652.99114-14-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Let's prepare for moving memory hotplug handling from sparse.c to
sparse-vmemmap.c by moving __section_mark_present() to internal.h.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 mm/internal.h | 9 +++++++++
 mm/sparse.c   | 8 --------
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index bcf4df97b185..835a6f00134e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -987,6 +987,15 @@ static inline void sparse_init_one_section(struct mem_section *ms,
 	ms->section_mem_map |= SECTION_HAS_MEM_MAP | flags;
 	ms->usage = usage;
 }
+
+static inline void __section_mark_present(struct mem_section *ms,
+		unsigned long section_nr)
+{
+	if (section_nr > __highest_present_section_nr)
+		__highest_present_section_nr = section_nr;
+
+	ms->section_mem_map |= SECTION_MARKED_PRESENT;
+}
 #else
 static inline void sparse_init(void) {}
 #endif /* CONFIG_SPARSEMEM */
diff --git a/mm/sparse.c b/mm/sparse.c
index 6f5f340301a3..bf620f3fe05d 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -161,14 +161,6 @@ static void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
  * those loops early.
  */
 unsigned long __highest_present_section_nr;
-static void __section_mark_present(struct mem_section *ms,
-		unsigned long section_nr)
-{
-	if (section_nr > __highest_present_section_nr)
-		__highest_present_section_nr = section_nr;
-
-	ms->section_mem_map |= SECTION_MARKED_PRESENT;
-}
 
 static inline unsigned long first_present_section_nr(void)
 {
-- 
2.43.0

From nobody Mon Apr 6 22:15:25 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-cxl@vger.kernel.org, "David Hildenbrand (Arm)", Andrew Morton, Oscar Salvador, Axel Rasmussen, Yuanchu Xie, Wei Xu, Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Subject: [PATCH 14/14] mm/sparse: move memory hotplug bits to sparse-vmemmap.c
Date: Tue, 17 Mar 2026 17:56:52 +0100
Message-ID: <20260317165652.99114-15-david@kernel.org>
In-Reply-To: <20260317165652.99114-1-david@kernel.org>
References: <20260317165652.99114-1-david@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Let's move all memory hotplug related code to sparse-vmemmap.c. We only
have to expose sparse_index_init().

While at it, drop the definition of sparse_index_init() for
!CONFIG_SPARSEMEM, which is unused, and place the declaration in
internal.h.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Mike Rapoport (Microsoft)
---
 include/linux/mmzone.h |   1 -
 mm/internal.h          |   4 +
 mm/sparse-vmemmap.c    | 308 ++++++++++++++++++++++++++++++++++++++++
 mm/sparse.c            | 314 +---------------------------------------
 4 files changed, 314 insertions(+), 313 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index dcbbf36ed88c..e11513f581eb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2390,7 +2390,6 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
 #endif
 
 #else
-#define sparse_index_init(_sec, _nid)  do {} while (0)
 #define sparse_vmemmap_init_nid_early(_nid) do {} while (0)
 #define sparse_vmemmap_init_nid_late(_nid) do {} while (0)
 #define pfn_in_present_section pfn_valid
diff --git a/mm/internal.h b/mm/internal.h
index 835a6f00134e..b1a9e9312ffe 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -965,6 +965,7 @@ void memmap_init_range(unsigned long, int, unsigned long, unsigned long,
  */
 #ifdef CONFIG_SPARSEMEM
 void sparse_init(void);
+int sparse_index_init(unsigned long section_nr, int nid);
 
 static inline void sparse_init_one_section(struct mem_section *ms,
 		unsigned long pnum, struct page *mem_map,
@@ -1000,6 +1001,9 @@ static inline void __section_mark_present(struct mem_section *ms,
 static inline void sparse_init(void) {}
 #endif /* CONFIG_SPARSEMEM */
 
+/*
+ * mm/sparse-vmemmap.c
+ */
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 void sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages);
 #else
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index f0690797667f..330579365a0f 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -591,3 +591,311 @@ void __init sparse_vmemmap_init_nid_late(int nid)
 	hugetlb_vmemmap_init_late(nid);
 }
 #endif
+
+static void subsection_mask_set(unsigned long *map, unsigned long pfn,
+		unsigned long nr_pages)
+{
+	int idx = subsection_map_index(pfn);
+	int end = subsection_map_index(pfn + nr_pages - 1);
+
+	bitmap_set(map, idx, end - idx + 1);
+}
+
+void __init sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	int end_sec_nr = pfn_to_section_nr(pfn + nr_pages - 1);
+	unsigned long nr, start_sec_nr = pfn_to_section_nr(pfn);
+
+	for (nr = start_sec_nr; nr <= end_sec_nr; nr++) {
+		struct mem_section *ms;
+		unsigned long pfns;
+
+		pfns = min(nr_pages, PAGES_PER_SECTION
+				- (pfn & ~PAGE_SECTION_MASK));
+		ms = __nr_to_section(nr);
+		subsection_mask_set(ms->usage->subsection_map, pfn, pfns);
+
+		pr_debug("%s: sec: %lu pfns: %lu set(%d, %d)\n", __func__, nr,
+				pfns, subsection_map_index(pfn),
+				subsection_map_index(pfn + pfns - 1));
+
+		pfn += pfns;
+		nr_pages -= pfns;
+	}
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+
+/* Mark all memory sections within the pfn range as online */
+void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		unsigned long section_nr = pfn_to_section_nr(pfn);
+		struct mem_section *ms = __nr_to_section(section_nr);
+
+		ms->section_mem_map |= SECTION_IS_ONLINE;
+	}
+}
+
+/* Mark all memory sections within the pfn range as offline */
+void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		unsigned long section_nr = pfn_to_section_nr(pfn);
+		struct mem_section *ms = __nr_to_section(section_nr);
+
+		ms->section_mem_map &= ~SECTION_IS_ONLINE;
+	}
+}
+
+static struct page * __meminit populate_section_memmap(unsigned long pfn,
+		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
+		struct dev_pagemap *pgmap)
+{
+	return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
+}
+
+static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
+		struct vmem_altmap *altmap)
+{
+	unsigned long start = (unsigned long) pfn_to_page(pfn);
+	unsigned long end = start + nr_pages * sizeof(struct page);
+
+	vmemmap_free(start, end, altmap);
+}
+static void free_map_bootmem(struct page *memmap)
+{
+	unsigned long start = (unsigned long)memmap;
+	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
+
+	vmemmap_free(start, end, NULL);
+}
+
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
+	struct mem_section *ms = __pfn_to_section(pfn);
+	unsigned long *subsection_map = ms->usage
+		? &ms->usage->subsection_map[0] : NULL;
+
+	subsection_mask_set(map, pfn, nr_pages);
+	if (subsection_map)
+		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
+
+	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
+			"section already deactivated (%#lx + %ld)\n",
+			pfn, nr_pages))
+		return -EINVAL;
+
+	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return bitmap_empty(&ms->usage->subsection_map[0],
+			    SUBSECTIONS_PER_SECTION);
+}
+
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+	unsigned long *subsection_map;
+	int rc = 0;
+
+	subsection_mask_set(map, pfn, nr_pages);
+
+	subsection_map = &ms->usage->subsection_map[0];
+
+	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
+		rc = -EINVAL;
+	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
+		rc = -EEXIST;
+	else
+		bitmap_or(subsection_map, map, subsection_map,
+				SUBSECTIONS_PER_SECTION);
+
+	return rc;
+}
+
+/*
+ * To deactivate a memory region, there are 3 cases to handle across
+ * two configurations (SPARSEMEM_VMEMMAP={y,n}):
+ *
+ * 1. deactivation of a partial hot-added section (only possible in
+ *    the SPARSEMEM_VMEMMAP=y case).
+ *    a) section was present at memory init.
+ *    b) section was hot-added post memory init.
+ * 2. deactivation of a complete hot-added section.
+ * 3. deactivation of a complete section from memory init.
+ *
+ * For 1, when subsection_map does not empty we will not be freeing the
+ * usage map, but still need to free the vmemmap range.
+ *
+ * For 2 and 3, the SPARSEMEM_VMEMMAP={y,n} cases are unified.
+ */
+static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+		struct vmem_altmap *altmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	bool section_is_early = early_section(ms);
+	struct page *memmap = NULL;
+	bool empty;
+
+	if (clear_subsection_map(pfn, nr_pages))
+		return;
+
+	empty = is_subsection_map_empty(ms);
+	if (empty) {
+		/*
+		 * Mark the section invalid so that valid_section()
+		 * return false. This prevents code from dereferencing
+		 * ms->usage array.
+		 */
+		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
+
+		/*
+		 * When removing an early section, the usage map is kept (as the
+		 * usage maps of other sections fall into the same page). It
+		 * will be re-used when re-adding the section - which is then no
+		 * longer an early section. If the usage map is PageReserved, it
+		 * was allocated during boot.
+		 */
+		if (!PageReserved(virt_to_page(ms->usage))) {
+			kfree_rcu(ms->usage, rcu);
+			WRITE_ONCE(ms->usage, NULL);
+		}
+		memmap = pfn_to_page(SECTION_ALIGN_DOWN(pfn));
+	}
+
+	/*
+	 * The memmap of early sections is always fully populated. See
+	 * section_activate() and pfn_valid().
+	 */
+	if (!section_is_early) {
+		memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+		depopulate_section_memmap(pfn, nr_pages, altmap);
+	} else if (memmap) {
+		memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
+						   PAGE_SIZE)));
+		free_map_bootmem(memmap);
+	}
+
+	if (empty)
+		ms->section_mem_map = (unsigned long)NULL;
+}
+
+static struct page * __meminit section_activate(int nid, unsigned long pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap,
+		struct dev_pagemap *pgmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	struct mem_section_usage *usage = NULL;
+	struct page *memmap;
+	int rc;
+
+	if (!ms->usage) {
+		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
+		if (!usage)
+			return ERR_PTR(-ENOMEM);
+		ms->usage = usage;
+	}
+
+	rc = fill_subsection_map(pfn, nr_pages);
+	if (rc) {
+		if (usage)
+			ms->usage = NULL;
+		kfree(usage);
+		return ERR_PTR(rc);
+	}
+
+	/*
+	 * The early init code does not consider partially populated
+	 * initial sections, it simply assumes that memory will never be
+	 * referenced.  If we hot-add memory into such a section then we
+	 * do not need to populate the memmap and can simply reuse what
+	 * is already there.
+	 */
+	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
+		return pfn_to_page(pfn);
+
+	memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
+	if (!memmap) {
+		section_deactivate(pfn, nr_pages, altmap);
+		return ERR_PTR(-ENOMEM);
+	}
+	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+
+	return memmap;
+}
+
+/**
+ * sparse_add_section - add a memory section, or populate an existing one
+ * @nid: The node to add section on
+ * @start_pfn: start pfn of the memory range
+ * @nr_pages: number of pfns to add in the section
+ * @altmap: alternate pfns to allocate the memmap backing store
+ * @pgmap: alternate compound page geometry for devmap mappings
+ *
+ * This is only intended for hotplug.
+ *
+ * Note that only VMEMMAP supports sub-section aligned hotplug,
+ * the proper alignment and size are gated by check_pfn_span().
+ *
+ * Return:
+ * * 0 - On success.
+ * * -EEXIST - Section has been present.
+ * * -ENOMEM - Out of memory.
+ */
+int __meminit sparse_add_section(int nid, unsigned long start_pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap,
+		struct dev_pagemap *pgmap)
+{
+	unsigned long section_nr = pfn_to_section_nr(start_pfn);
+	struct mem_section *ms;
+	struct page *memmap;
+	int ret;
+
+	ret = sparse_index_init(section_nr, nid);
+	if (ret < 0)
+		return ret;
+
+	memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
+	if (IS_ERR(memmap))
+		return PTR_ERR(memmap);
+
+	/*
+	 * Poison uninitialized struct pages in order to catch invalid flags
+	 * combinations.
+	 */
+	page_init_poison(memmap, sizeof(struct page) * nr_pages);
+
+	ms = __nr_to_section(section_nr);
+	__section_mark_present(ms, section_nr);
+
+	/* Align memmap to section boundary in the subsection case */
+	if (section_nr_to_pfn(section_nr) != start_pfn)
+		memmap = pfn_to_page(section_nr_to_pfn(section_nr));
+	sparse_init_one_section(ms, section_nr, memmap, ms->usage, 0);
+
+	return 0;
+}
+
+void sparse_remove_section(unsigned long pfn, unsigned long nr_pages,
+		struct vmem_altmap *altmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+
+	if (WARN_ON_ONCE(!valid_section(ms)))
+		return;
+
+	section_deactivate(pfn, nr_pages, altmap);
+}
+#endif /* CONFIG_MEMORY_HOTPLUG */
diff --git a/mm/sparse.c b/mm/sparse.c
index bf620f3fe05d..007fd52c621e 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -79,7 +79,7 @@ static noinline struct mem_section __ref *sparse_index_alloc(int nid)
 	return section;
 }
 
-static int __meminit sparse_index_init(unsigned long section_nr, int nid)
+int __meminit sparse_index_init(unsigned long section_nr, int nid)
 {
 	unsigned long root = SECTION_NR_TO_ROOT(section_nr);
 	struct mem_section *section;
@@ -103,7 +103,7 @@ static int __meminit sparse_index_init(unsigned long section_nr, int nid)
 	return 0;
 }
 #else /* !SPARSEMEM_EXTREME */
-static inline int sparse_index_init(unsigned long section_nr, int nid)
+int sparse_index_init(unsigned long section_nr, int nid)
 {
 	return 0;
 }
@@ -167,40 +167,6 @@ static inline unsigned long first_present_section_nr(void)
 	return next_present_section_nr(-1);
 }
 
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-static void subsection_mask_set(unsigned long *map, unsigned long pfn,
-		unsigned long nr_pages)
-{
-	int idx = subsection_map_index(pfn);
-	int end = subsection_map_index(pfn + nr_pages - 1);
-
-	bitmap_set(map, idx, end - idx + 1);
-}
-
-void __init sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	int end_sec_nr = pfn_to_section_nr(pfn + nr_pages - 1);
-	unsigned long nr, start_sec_nr = pfn_to_section_nr(pfn);
-
-	for (nr = start_sec_nr; nr <= end_sec_nr; nr++) {
-		struct mem_section *ms;
-		unsigned long pfns;
-
-		pfns = min(nr_pages, PAGES_PER_SECTION
-				- (pfn & ~PAGE_SECTION_MASK));
-		ms = __nr_to_section(nr);
-		subsection_mask_set(ms->usage->subsection_map, pfn, pfns);
-
-		pr_debug("%s: sec: %lu pfns: %lu set(%d, %d)\n", __func__, nr,
-				pfns, subsection_map_index(pfn),
-				subsection_map_index(pfn + pfns - 1));
-
-		pfn += pfns;
-		nr_pages -= pfns;
-	}
-}
-#endif
-
 /* Record a memory area against a node. */
 static void __init memory_present(int nid, unsigned long start, unsigned long end)
 {
@@ -482,279 +448,3 @@ void __init sparse_init(void)
 	sparse_init_nid(nid_begin, pnum_begin, pnum_end, map_count);
 	vmemmap_populate_print_last();
 }
-
-#ifdef CONFIG_MEMORY_HOTPLUG
-
-/* Mark all memory sections within the pfn range as online */
-void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long pfn;
-
-	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		unsigned long section_nr = pfn_to_section_nr(pfn);
-		struct mem_section *ms = __nr_to_section(section_nr);
-
-		ms->section_mem_map |= SECTION_IS_ONLINE;
-	}
-}
-
-/* Mark all memory sections within the pfn range as offline */
-void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long pfn;
-
-	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		unsigned long section_nr = pfn_to_section_nr(pfn);
-		struct mem_section *ms = __nr_to_section(section_nr);
-
-		ms->section_mem_map &= ~SECTION_IS_ONLINE;
-	}
-}
-
-static struct page * __meminit populate_section_memmap(unsigned long pfn,
-		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
-		struct dev_pagemap *pgmap)
-{
-	return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
-}
-
-static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap)
-{
-	unsigned long start = (unsigned long) pfn_to_page(pfn);
-	unsigned long end = start + nr_pages * sizeof(struct page);
-
-	vmemmap_free(start, end, altmap);
-}
-static void free_map_bootmem(struct page *memmap)
-{
-	unsigned long start = (unsigned long)memmap;
-	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
-
-	vmemmap_free(start, end, NULL);
-}
-
-static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
-	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
-	struct mem_section *ms = __pfn_to_section(pfn);
-	unsigned long *subsection_map = ms->usage
-		? &ms->usage->subsection_map[0] : NULL;
-
-	subsection_mask_set(map, pfn, nr_pages);
-	if (subsection_map)
-		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
-
-	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
-			"section already deactivated (%#lx + %ld)\n",
-			pfn, nr_pages))
-		return -EINVAL;
-
-	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
-	return 0;
-}
-
-static bool is_subsection_map_empty(struct mem_section *ms)
-{
-	return bitmap_empty(&ms->usage->subsection_map[0],
-			    SUBSECTIONS_PER_SECTION);
-}
-
-static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	struct mem_section *ms = __pfn_to_section(pfn);
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
-	unsigned long *subsection_map;
-	int rc = 0;
-
-	subsection_mask_set(map, pfn, nr_pages);
-
-	subsection_map = &ms->usage->subsection_map[0];
-
-	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
-		rc = -EINVAL;
-	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
-		rc = -EEXIST;
-	else
-		bitmap_or(subsection_map, map, subsection_map,
-				SUBSECTIONS_PER_SECTION);
-
-	return rc;
-}
-
-/*
- * To deactivate a memory region, there are 3 cases to handle across
- * two configurations (SPARSEMEM_VMEMMAP={y,n}):
- *
- * 1. deactivation of a partial hot-added section (only possible in
- *    the SPARSEMEM_VMEMMAP=y case).
- *    a) section was present at memory init.
- *    b) section was hot-added post memory init.
- * 2. deactivation of a complete hot-added section.
- * 3. deactivation of a complete section from memory init.
- *
- * For 1, when subsection_map does not empty we will not be freeing the
- * usage map, but still need to free the vmemmap range.
- *
- * For 2 and 3, the SPARSEMEM_VMEMMAP={y,n} cases are unified.
- */
-static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap)
-{
-	struct mem_section *ms = __pfn_to_section(pfn);
-	bool section_is_early = early_section(ms);
-	struct page *memmap = NULL;
-	bool empty;
-
-	if (clear_subsection_map(pfn, nr_pages))
-		return;
-
-	empty = is_subsection_map_empty(ms);
-	if (empty) {
-		/*
-		 * Mark the section invalid so that valid_section()
-		 * return false. This prevents code from dereferencing
-		 * ms->usage array.
-		 */
-		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
-
-		/*
-		 * When removing an early section, the usage map is kept (as the
-		 * usage maps of other sections fall into the same page). It
-		 * will be re-used when re-adding the section - which is then no
-		 * longer an early section. If the usage map is PageReserved, it
-		 * was allocated during boot.
-		 */
-		if (!PageReserved(virt_to_page(ms->usage))) {
-			kfree_rcu(ms->usage, rcu);
-			WRITE_ONCE(ms->usage, NULL);
-		}
-		memmap = pfn_to_page(SECTION_ALIGN_DOWN(pfn));
-	}
-
-	/*
-	 * The memmap of early sections is always fully populated. See
-	 * section_activate() and pfn_valid().
-	 */
-	if (!section_is_early) {
-		memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
-		depopulate_section_memmap(pfn, nr_pages, altmap);
-	} else if (memmap) {
-		memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
-						   PAGE_SIZE)));
-		free_map_bootmem(memmap);
-	}
-
-	if (empty)
-		ms->section_mem_map = (unsigned long)NULL;
-}
-
-static struct page * __meminit section_activate(int nid, unsigned long pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap,
-		struct dev_pagemap *pgmap)
-{
-	struct mem_section *ms = __pfn_to_section(pfn);
-	struct mem_section_usage *usage = NULL;
-	struct page *memmap;
-	int rc;
-
-	if (!ms->usage) {
-		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
-		if (!usage)
-			return ERR_PTR(-ENOMEM);
-		ms->usage = usage;
-	}
-
-	rc = fill_subsection_map(pfn, nr_pages);
-	if (rc) {
-		if (usage)
-			ms->usage = NULL;
-		kfree(usage);
-		return ERR_PTR(rc);
-	}
-
-	/*
-	 * The early init code does not consider partially populated
-	 * initial sections, it simply assumes that memory will never be
-	 * referenced.  If we hot-add memory into such a section then we
-	 * do not need to populate the memmap and can simply reuse what
-	 * is already there.
-	 */
-	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
-		return pfn_to_page(pfn);
-
-	memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
-	if (!memmap) {
-		section_deactivate(pfn, nr_pages, altmap);
-		return ERR_PTR(-ENOMEM);
-	}
-	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
-
-	return memmap;
-}
-
-/**
- * sparse_add_section - add a memory section, or populate an existing one
- * @nid: The node to add section on
- * @start_pfn: start pfn of the memory range
- * @nr_pages: number of pfns to add in the section
- * @altmap: alternate pfns to allocate the memmap backing store
- * @pgmap: alternate compound page geometry for devmap mappings
- *
- * This is only intended for hotplug.
- *
- * Note that only VMEMMAP supports sub-section aligned hotplug,
- * the proper alignment and size are gated by check_pfn_span().
- *
- * Return:
- * * 0 - On success.
- * * -EEXIST - Section has been present.
- * * -ENOMEM - Out of memory.
- */
-int __meminit sparse_add_section(int nid, unsigned long start_pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap,
-		struct dev_pagemap *pgmap)
-{
-	unsigned long section_nr = pfn_to_section_nr(start_pfn);
-	struct mem_section *ms;
-	struct page *memmap;
-	int ret;
-
-	ret = sparse_index_init(section_nr, nid);
-	if (ret < 0)
-		return ret;
-
-	memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
-	if (IS_ERR(memmap))
-		return PTR_ERR(memmap);
-
-	/*
-	 * Poison uninitialized struct pages in order to catch invalid flags
-	 * combinations.
-	 */
-	page_init_poison(memmap, sizeof(struct page) * nr_pages);
-
-	ms = __nr_to_section(section_nr);
-	__section_mark_present(ms, section_nr);
-
-	/* Align memmap to section boundary in the subsection case */
-	if (section_nr_to_pfn(section_nr) != start_pfn)
-		memmap = pfn_to_page(section_nr_to_pfn(section_nr));
-	sparse_init_one_section(ms, section_nr, memmap, ms->usage, 0);
-
-	return 0;
-}
-
-void sparse_remove_section(unsigned long pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap)
-{
-	struct mem_section *ms = __pfn_to_section(pfn);
-
-	if (WARN_ON_ONCE(!valid_section(ms)))
-		return;
-
-	section_deactivate(pfn, nr_pages, altmap);
-}
-#endif /* CONFIG_MEMORY_HOTPLUG */
-- 
2.43.0