From: Mike Rapoport
To: Andrew Morton, David Hildenbrand
Cc: Kees Cook, "Liam R. Howlett", Lorenzo Stoakes, Michal Hocko, Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH] memblock: move reserve_bootmem_region() to memblock.c and make it static
Date: Sun, 22 Mar 2026 16:31:44 +0200
Message-ID: <20260322143144.3540679-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

reserve_bootmem_region() is only called from memmap_init_reserved_pages(),
and it lived in mm/mm_init.c because of its dependency on the static
init_deferred_page().

Since init_deferred_page() is not static anymore, move
reserve_bootmem_region() to memblock.c, rename it to
memmap_init_reserved_range() and make it static.

Update the comment describing it to better reflect what the function
does, and drop the now-stale comment about reserved pages in
free_bootmem_page().

Update the memblock test stubs to reflect the core changes.
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/bootmem_info.h      |  4 ----
 include/linux/mm.h                |  3 ---
 mm/memblock.c                     | 29 +++++++++++++++++++++++++++--
 mm/mm_init.c                      | 25 -------------------------
 tools/include/linux/mm.h          |  2 --
 tools/testing/memblock/internal.h |  9 +++++++++
 tools/testing/memblock/mmzone.c   |  4 ----
 7 files changed, 36 insertions(+), 40 deletions(-)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4c506e76a808..492ceeb1cdf8 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -44,10 +44,6 @@ static inline void free_bootmem_page(struct page *page)
 {
 	enum bootmem_type type = bootmem_type(page);
 
-	/*
-	 * The reserve_bootmem_region sets the reserved flag on bootmem
-	 * pages.
-	 */
 	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
 
 	if (type == SECTION_INFO || type == MIX_SECTION_INFO)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70747b53c7da..51af53dfe884 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3737,9 +3737,6 @@ extern unsigned long free_reserved_area(void *start, void *end,
 
 extern void adjust_managed_page_count(struct page *page, long count);
 
-extern void reserve_bootmem_region(phys_addr_t start,
-				   phys_addr_t end, int nid);
-
 /* Free the reserved page into the buddy system, so it gets managed. */
 void free_reserved_page(struct page *page);
 
diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..17aa8661b84d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2240,6 +2240,31 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
 	return end_pfn - start_pfn;
 }
 
+/*
+ * Initialised pages do not have PageReserved set. This function is called
+ * for each reserved range and marks the pages PageReserved.
+ * When deferred initialization of struct pages is enabled it also ensures
+ * that struct pages are properly initialised.
+ */
+static void __init memmap_init_reserved_range(phys_addr_t start,
+					      phys_addr_t end, int nid)
+{
+	unsigned long pfn;
+
+	for_each_valid_pfn(pfn, PFN_DOWN(start), PFN_UP(end)) {
+		struct page *page = pfn_to_page(pfn);
+
+		init_deferred_page(pfn, nid);
+
+		/*
+		 * no need for atomic set_bit because the struct
+		 * page is not visible yet so nobody should
+		 * access it yet.
+		 */
+		__SetPageReserved(page);
+	}
+}
+
 static void __init memmap_init_reserved_pages(void)
 {
 	struct memblock_region *region;
@@ -2259,7 +2284,7 @@ static void __init memmap_init_reserved_pages(void)
 		end = start + region->size;
 
 		if (memblock_is_nomap(region))
-			reserve_bootmem_region(start, end, nid);
+			memmap_init_reserved_range(start, end, nid);
 
 		memblock_set_node(start, region->size, &memblock.reserved, nid);
 	}
@@ -2284,7 +2309,7 @@ static void __init memmap_init_reserved_pages(void)
 			if (!numa_valid_node(nid))
 				nid = early_pfn_to_nid(PFN_DOWN(start));
 
-			reserve_bootmem_region(start, end, nid);
+			memmap_init_reserved_range(start, end, nid);
 		}
 	}
 }
diff --git a/mm/mm_init.c b/mm/mm_init.c
index cec7bb758bdd..96ae6024a75f 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -784,31 +784,6 @@ void __meminit init_deferred_page(unsigned long pfn, int nid)
 	__init_deferred_page(pfn, nid);
 }
 
-/*
- * Initialised pages do not have PageReserved set. This function is
- * called for each range allocated by the bootmem allocator and
- * marks the pages PageReserved. The remaining valid pages are later
- * sent to the buddy page allocator.
- */
-void __meminit reserve_bootmem_region(phys_addr_t start,
-				      phys_addr_t end, int nid)
-{
-	unsigned long pfn;
-
-	for_each_valid_pfn(pfn, PFN_DOWN(start), PFN_UP(end)) {
-		struct page *page = pfn_to_page(pfn);
-
-		__init_deferred_page(pfn, nid);
-
-		/*
-		 * no need for atomic set_bit because the struct
-		 * page is not visible yet so nobody should
-		 * access it yet.
-		 */
-		__SetPageReserved(page);
-	}
-}
-
 /* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
 static bool __meminit overlap_memmap_init(unsigned long zone,
 					  unsigned long *pfn)
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 028f3faf46e7..74cbd51dbea2 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -32,8 +32,6 @@ static inline phys_addr_t virt_to_phys(volatile void *address)
 	return (phys_addr_t)address;
 }
 
-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid);
-
 static inline void totalram_pages_inc(void)
 {
 }
diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index 009b97bbdd22..eb02d5771f4c 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -29,4 +29,13 @@ static inline unsigned long free_reserved_area(void *start, void *end,
 	return 0;
 }
 
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+	for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)
+
+static inline void init_deferred_page(unsigned long pfn, int nid)
+{
+}
+
+#define __SetPageReserved(p) ((void)(p))
+
 #endif
diff --git a/tools/testing/memblock/mmzone.c b/tools/testing/memblock/mmzone.c
index d3d58851864e..e719450f81cb 100644
--- a/tools/testing/memblock/mmzone.c
+++ b/tools/testing/memblock/mmzone.c
@@ -11,10 +11,6 @@ struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
 	return NULL;
 }
 
-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid)
-{
-}
-
 void atomic_long_set(atomic_long_t *v, long i)
 {
 }
-- 
2.53.0