From: Mike Rapoport
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
    Brian Cain, "Christophe Leroy (CS GROUP)", Catalin Marinas,
    "David S. Miller", Dave Hansen, David Hildenbrand, Dinh Nguyen,
    Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
    Huacai Chen, Ingo Molnar, Johannes Berg, John Paul Adrian Glaubitz,
    Jonathan Corbet,
    "Liam R. Howlett", Lorenzo Stoakes, Magnus Lindholm, Matt Turner,
    Max Filippov, Michael Ellerman, Michal Hocko, Michal Simek,
    Mike Rapoport, Muchun Song, Oscar Salvador, Palmer Dabbelt,
    Pratyush Yadav, Richard Weinberger, Russell King, Stafford Horne,
    Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
    Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon,
    x86@kernel.org, linux-alpha@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
    linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-hexagon@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org, linux-openrisc@vger.kernel.org,
    linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
    sparclinux@vger.kernel.org
Subject: [PATCH v2 23/28] arch, mm: consolidate initialization of SPARSE memory model
Date: Fri, 2 Jan 2026 08:59:59 +0200
Message-ID: <20260102070005.65328-24-rppt@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260102070005.65328-1-rppt@kernel.org>
References: <20260102070005.65328-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Every architecture calls sparse_init() during setup_arch(), although the
data structures created by sparse_init() are not used until the core MM is
initialized.

Besides the code duplication, calling sparse_init() from
architecture-specific code means that the relative order of vmemmap and HVO
initialization differs between architectures.

Move the call to sparse_init() from architecture-specific code to
free_area_init() to ensure that the vmemmap and HVO initialization order is
always the same.

Signed-off-by: Mike Rapoport (Microsoft)
---
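For reviewers: with this patch the top of free_area_init() ends up roughly as
below (a sketch reconstructed from the mm/mm_init.c hunk at the end of this
patch; local variable declarations and the rest of the function are elided):

	static void __init free_area_init(void)
	{
		/* declarations elided; see mm/mm_init.c */
		bool descending;

		arch_zone_limits_init(max_zone_pfn);
		/* memory sections and memmap are now set up here for every arch */
		sparse_init();

		start_pfn = PHYS_PFN(memblock_start_of_DRAM());
		descending = arch_has_descending_max_zone_pfns();
		/* ... zone and node initialization continues as before ... */
	}
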
 Documentation/mm/memory-model.rst                    |  3 ---
 Documentation/translations/zh_CN/mm/memory-model.rst |  2 --
 arch/alpha/kernel/setup.c                            |  1 -
 arch/arm/mm/init.c                                   |  6 ------
 arch/arm64/mm/init.c                                 |  6 ------
 arch/csky/kernel/setup.c                             |  2 --
 arch/loongarch/kernel/setup.c                        |  8 --------
 arch/mips/kernel/setup.c                             | 11 -----------
 arch/parisc/mm/init.c                                |  2 --
 arch/powerpc/include/asm/setup.h                     |  4 ++++
 arch/powerpc/mm/mem.c                                |  5 -----
 arch/powerpc/mm/numa.c                               |  2 --
 arch/riscv/mm/init.c                                 |  1 -
 arch/s390/mm/init.c                                  |  1 -
 arch/sh/mm/init.c                                    |  2 --
 arch/sparc/mm/init_64.c                              |  2 --
 arch/x86/mm/init_32.c                                |  1 -
 arch/x86/mm/init_64.c                                |  2 --
 include/linux/mmzone.h                               |  2 --
 mm/internal.h                                        |  6 ++++++
 mm/mm_init.c                                         |  1 +
 21 files changed, 11 insertions(+), 59 deletions(-)

diff --git a/Documentation/mm/memory-model.rst b/Documentation/mm/memory-model.rst
index 7957122039e8..199b11328f4f 100644
--- a/Documentation/mm/memory-model.rst
+++ b/Documentation/mm/memory-model.rst
@@ -97,9 +97,6 @@ sections:
   `mem_section` objects and the number of rows is calculated to fit
   all the memory sections.
 
-The architecture setup code should call sparse_init() to
-initialize the memory sections and the memory maps.
-
 With SPARSEMEM there are two possible ways to convert a PFN to the
 corresponding `struct page` - a "classic sparse" and "sparse
 vmemmap". The selection is made at build time and it is determined by
diff --git a/Documentation/translations/zh_CN/mm/memory-model.rst b/Documentation/translations/zh_CN/mm/memory-model.rst
index 77ec149a970c..c0c5d8ecd880 100644
--- a/Documentation/translations/zh_CN/mm/memory-model.rst
+++ b/Documentation/translations/zh_CN/mm/memory-model.rst
@@ -83,8 +83,6 @@ SPARSEMEM模型将物理内存显示为一个部分的集合。一个区段用me
 每一行包含价值 `PAGE_SIZE` 的 `mem_section` 对象，行数的计算是为了适应所有的
 内存区。
 
-架构设置代码应该调用sparse_init()来初始化内存区和内存映射。
-
 通过SPARSEMEM，有两种可能的方式将PFN转换为相应的 `struct page` --"classic sparse"和
 "sparse vmemmap"。选择是在构建时进行的，它由 `CONFIG_SPARSEMEM_VMEMMAP` 的
 值决定。
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index bebdffafaee8..f0af444a69a4 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -607,7 +607,6 @@ setup_arch(char **cmdline_p)
 	/* Find our memory. */
 	setup_memory(kernel_end);
 	memblock_set_bottom_up(true);
-	sparse_init();
 
 	/* First guess at cpu cache sizes. Do this before init_arch. */
 	determine_cpu_caches(cpu->type);
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a8f7b4084715..0cc1bf04686d 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -207,12 +207,6 @@ void __init bootmem_init(void)
 
 	early_memtest((phys_addr_t)min_low_pfn << PAGE_SHIFT,
 		      (phys_addr_t)max_low_pfn << PAGE_SHIFT);
-
-	/*
-	 * sparse_init() tries to allocate memory from memblock, so must be
-	 * done after the fixed reservations
-	 */
-	sparse_init();
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3641e88ea871..9d271aff7652 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -321,12 +321,6 @@ void __init bootmem_init(void)
 #endif
 
 	kvm_hyp_reserve();
-
-	/*
-	 * sparse_init() tries to allocate memory from memblock, so must be
-	 * done after the fixed reservations
-	 */
-	sparse_init();
 	dma_limits_init();
 
 	/*
diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index 4bf3c01ead3a..45c98dcf7f50 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -123,8 +123,6 @@ void __init setup_arch(char **cmdline_p)
 	setup_smp();
 #endif
 
-	sparse_init();
-
 	fixaddr_init();
 
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index 708ac025db71..d6a1ff0e16f1 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -402,14 +402,6 @@ static void __init arch_mem_init(char **cmdline_p)
 
 	check_kernel_sections_mem();
 
-	/*
-	 * In order to reduce the possibility of kernel panic when failed to
-	 * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
-	 * low memory as small as possible before swiotlb_init(), so make
-	 * sparse_init() using top-down allocation.
-	 */
-	memblock_set_bottom_up(false);
-	sparse_init();
 	memblock_set_bottom_up(true);
 
 	swiotlb_init(true, SWIOTLB_VERBOSE);
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 11b9b6b63e19..d36d89d01fa4 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -614,7 +614,6 @@ static void __init bootcmdline_init(void)
  * kernel but generic memory management system is still entirely uninitialized.
  *
  *  o bootmem_init()
- *  o sparse_init()
  *  o paging_init()
  *  o dma_contiguous_reserve()
  *
@@ -665,16 +664,6 @@ static void __init arch_mem_init(char **cmdline_p)
 	mips_parse_crashkernel();
 	device_tree_init();
 
-	/*
-	 * In order to reduce the possibility of kernel panic when failed to
-	 * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
-	 * low memory as small as possible before plat_swiotlb_setup(), so
-	 * make sparse_init() using top-down allocation.
-	 */
-	memblock_set_bottom_up(false);
-	sparse_init();
-	memblock_set_bottom_up(true);
-
 	plat_swiotlb_setup();
 
 	dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index ce6f09ab7a90..6a39e031e5ff 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -706,8 +706,6 @@ void __init paging_init(void)
 	fixmap_init();
 	flush_cache_all_local(); /* start with known state */
 	flush_tlb_all_local(NULL);
-
-	sparse_init();
 }
 
 static void alloc_btlb(unsigned long start, unsigned long end, int *slot,
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 50a92b24628d..6d60ea4868ab 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -20,7 +20,11 @@ extern void reloc_got2(unsigned long);
 
 void check_for_initrd(void);
 void mem_topology_setup(void);
+#ifdef CONFIG_NUMA
 void initmem_init(void);
+#else
+static inline void initmem_init(void) {}
+#endif
 void setup_panic(void);
 #define ARCH_PANIC_TIMEOUT 180
 
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 72d4993192a6..30f56d601e56 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -182,11 +182,6 @@ void __init mem_topology_setup(void)
 	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
 }
 
-void __init initmem_init(void)
-{
-	sparse_init();
-}
-
 /* mark pages that don't exist as nosave */
 static int __init mark_nonram_nosave(void)
 {
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 603a0f652ba6..f4cf3ae036de 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1213,8 +1213,6 @@ void __init initmem_init(void)
 		setup_node_data(nid, start_pfn, end_pfn);
 	}
 
-	sparse_init();
-
 	/*
 	 * We need the numa_cpu_lookup_table to be accurate for all CPUs,
 	 * even before we online them, so that we can use cpu_to_{node,mem}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 79b4792578c4..11ac4041afc0 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1430,7 +1430,6 @@ void __init misc_mem_init(void)
 {
 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 	arch_numa_init();
-	sparse_init();
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 	/* The entire VMEMMAP region has been populated. Flush TLB for this region */
 	local_flush_tlb_kernel_range(VMEMMAP_START, VMEMMAP_END);
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 9ec608b5cbb1..3c20475cbee2 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -98,7 +98,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
 void __init paging_init(void)
 {
 	vmem_map_init();
-	sparse_init();
 	zone_dma_limit = DMA_BIT_MASK(31);
 }
 
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 3edee854b755..464a3a63e2fa 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -227,8 +227,6 @@ static void __init do_init_bootmem(void)
 	node_set_online(0);
 
 	plat_mem_setup();
-
-	sparse_init();
 }
 
 static void __init early_reserve_mem(void)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 931f872ce84a..4f7bdb18774b 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1615,8 +1615,6 @@ static unsigned long __init bootmem_init(unsigned long phys_base)
 
 	/* XXX cpu notifier XXX */
 
-	sparse_init();
-
 	return end_pfn;
 }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index b55172118c91..0908c44d51e6 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -654,7 +654,6 @@ void __init paging_init(void)
 	 * NOTE: at this point the bootmem allocator is fully available.
 	 */
 	olpc_dt_build_devicetree();
-	sparse_init();
 }
 
 /*
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4daa40071c9f..df2261fa4f98 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -833,8 +833,6 @@ void __init initmem_init(void)
 
 void __init paging_init(void)
 {
-	sparse_init();
-
 	/*
 	 * clear the default setting with node 0
 	 * note: don't use nodes_clear here, that is really clearing when
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 75ef7c9f9307..6a7db0fee54a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2285,9 +2285,7 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
 #define pfn_to_nid(pfn) (0)
 #endif
 
-void sparse_init(void);
 #else
-#define sparse_init() do {} while (0)
 #define sparse_index_init(_sec, _nid) do {} while (0)
 #define sparse_vmemmap_init_nid_early(_nid) do {} while (0)
 #define sparse_vmemmap_init_nid_late(_nid) do {} while (0)
diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..dc5316c68664 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -860,6 +860,12 @@ void memmap_init_range(unsigned long, int, unsigned long, unsigned long,
 		unsigned long, enum meminit_context, struct vmem_altmap *, int,
 		bool);
 
+#ifdef CONFIG_SPARSEMEM
+void sparse_init(void);
+#else
+static inline void sparse_init(void) {}
+#endif /* CONFIG_SPARSEMEM */
+
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
 
 /*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index ffc4a0f1fee9..4cfe722da062 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1828,6 +1828,7 @@ static void __init free_area_init(void)
 	bool descending;
 
 	arch_zone_limits_init(max_zone_pfn);
+	sparse_init();
 
 	start_pfn = PHYS_PFN(memblock_start_of_DRAM());
 	descending = arch_has_descending_max_zone_pfns();
-- 
2.51.0