From nobody Fri Apr  3 22:38:23 2026
From: Mike Rapoport
To: Andrew Morton, David Hildenbrand
Cc: Kees Cook, "Liam R. Howlett", Lorenzo Stoakes, Michal Hocko, Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2] memblock: move reserve_bootmem_region() to memblock.c and make it static
Date: Mon, 23 Mar 2026 09:20:42 +0200
Message-ID: <20260323072042.3651061-1-rppt@kernel.org>
X-Mailer: git-send-email 2.53.0
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Mike Rapoport (Microsoft)"

reserve_bootmem_region() is only called from memmap_init_reserved_pages(),
and it lived in mm/mm_init.c because of its dependency on the static
init_deferred_page(). Since init_deferred_page() is no longer static, move
reserve_bootmem_region() to mm/memblock.c, rename it to
memmap_init_reserved_range() and make it static.

Update the comment describing it to better reflect what the function does,
and drop the bogus comment about reserved pages in free_bootmem_page().

Update the memblock test stubs to reflect the core changes.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 include/linux/bootmem_info.h      |  4 ----
 include/linux/mm.h                |  3 ---
 mm/memblock.c                     | 31 ++++++++++++++++++++++++++++---
 mm/mm_init.c                      | 25 -------------------------
 tools/include/linux/mm.h          |  2 --
 tools/testing/memblock/internal.h |  9 +++++++++
 tools/testing/memblock/mmzone.c   |  4 ----
 7 files changed, 37 insertions(+), 41 deletions(-)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4c506e76a808..492ceeb1cdf8 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -44,10 +44,6 @@ static inline void free_bootmem_page(struct page *page)
 {
 	enum bootmem_type type = bootmem_type(page);
 
-	/*
-	 * The reserve_bootmem_region sets the reserved flag on bootmem
-	 * pages.
-	 */
 	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
 
 	if (type == SECTION_INFO || type == MIX_SECTION_INFO)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index abb4963c1f06..764d10fdfb5d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3686,9 +3686,6 @@ extern unsigned long free_reserved_area(void *start, void *end,
 
 extern void adjust_managed_page_count(struct page *page, long count);
 
-extern void reserve_bootmem_region(phys_addr_t start,
-				   phys_addr_t end, int nid);
-
 /* Free the reserved page into the buddy system, so it gets managed. */
 void free_reserved_page(struct page *page);
 
diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d504205cdbf5 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -973,7 +973,7 @@ __init void memmap_init_kho_scratch_pages(void)
 	/*
 	 * Initialize struct pages for free scratch memory.
 	 * The struct pages for reserved scratch memory will be set up in
-	 * reserve_bootmem_region()
+	 * memmap_init_reserved_pages()
 	 */
 	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
 			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
@@ -2240,6 +2240,31 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
 	return end_pfn - start_pfn;
 }
 
+/*
+ * Initialised pages do not have PageReserved set. This function is called
+ * for each reserved range and marks the pages PageReserved.
+ * When deferred initialization of struct pages is enabled it also ensures
+ * that struct pages are properly initialised.
+ */
+static void __init memmap_init_reserved_range(phys_addr_t start,
+					      phys_addr_t end, int nid)
+{
+	unsigned long pfn;
+
+	for_each_valid_pfn(pfn, PFN_DOWN(start), PFN_UP(end)) {
+		struct page *page = pfn_to_page(pfn);
+
+		init_deferred_page(pfn, nid);
+
+		/*
+		 * no need for atomic set_bit because the struct
+		 * page is not visible yet so nobody should
+		 * access it yet.
+		 */
+		__SetPageReserved(page);
+	}
+}
+
 static void __init memmap_init_reserved_pages(void)
 {
 	struct memblock_region *region;
@@ -2259,7 +2284,7 @@ static void __init memmap_init_reserved_pages(void)
 		end = start + region->size;
 
 		if (memblock_is_nomap(region))
-			reserve_bootmem_region(start, end, nid);
+			memmap_init_reserved_range(start, end, nid);
 
 		memblock_set_node(start, region->size, &memblock.reserved, nid);
 	}
@@ -2284,7 +2309,7 @@ static void __init memmap_init_reserved_pages(void)
 			if (!numa_valid_node(nid))
 				nid = early_pfn_to_nid(PFN_DOWN(start));
 
-			reserve_bootmem_region(start, end, nid);
+			memmap_init_reserved_range(start, end, nid);
 		}
 	}
 }
diff --git a/mm/mm_init.c b/mm/mm_init.c
index df34797691bd..ea8d3de43470 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -772,31 +772,6 @@ void __meminit init_deferred_page(unsigned long pfn, int nid)
 	__init_deferred_page(pfn, nid);
 }
 
-/*
- * Initialised pages do not have PageReserved set. This function is
- * called for each range allocated by the bootmem allocator and
- * marks the pages PageReserved. The remaining valid pages are later
- * sent to the buddy page allocator.
- */
-void __meminit reserve_bootmem_region(phys_addr_t start,
-				      phys_addr_t end, int nid)
-{
-	unsigned long pfn;
-
-	for_each_valid_pfn(pfn, PFN_DOWN(start), PFN_UP(end)) {
-		struct page *page = pfn_to_page(pfn);
-
-		__init_deferred_page(pfn, nid);
-
-		/*
-		 * no need for atomic set_bit because the struct
-		 * page is not visible yet so nobody should
-		 * access it yet.
-		 */
-		__SetPageReserved(page);
-	}
-}
-
 /* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
 static bool __meminit overlap_memmap_init(unsigned long zone,
 					  unsigned long *pfn)
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 028f3faf46e7..74cbd51dbea2 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -32,8 +32,6 @@ static inline phys_addr_t virt_to_phys(volatile void *address)
 	return (phys_addr_t)address;
 }
 
-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid);
-
 static inline void totalram_pages_inc(void)
 {
 }
diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index 009b97bbdd22..eb02d5771f4c 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -29,4 +29,13 @@ static inline unsigned long free_reserved_area(void *start, void *end,
 	return 0;
 }
 
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+	for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)
+
+static inline void init_deferred_page(unsigned long pfn, int nid)
+{
+}
+
+#define __SetPageReserved(p) ((void)(p))
+
 #endif
diff --git a/tools/testing/memblock/mmzone.c b/tools/testing/memblock/mmzone.c
index d3d58851864e..e719450f81cb 100644
--- a/tools/testing/memblock/mmzone.c
+++ b/tools/testing/memblock/mmzone.c
@@ -11,10 +11,6 @@ struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
 	return NULL;
 }
 
-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid)
-{
-}
-
 void atomic_long_set(atomic_long_t *v, long i)
 {
 }
-- 
2.53.0