memmap pages can be allocated either from the memblock (boot) allocator
during early boot or from the buddy allocator.

When these memmap pages are removed via arch_remove_memory(), the
deallocation path depends on their source:

* For pages from the buddy allocator, depopulate_section_memmap() is
  called, which also decrements nr_memmap_pages.

* For pages from the boot allocator, free_map_bootmem() is called, but
  it currently does not adjust nr_memmap_boot_pages.

To fix this inconsistency, update free_map_bootmem() to also decrement
nr_memmap_boot_pages by calling memmap_boot_pages_add(), mirroring how
free_vmemmap_page() handles this for boot-allocated pages.

This ensures correct tracking of memmap pages regardless of allocation
source.
Cc: stable@vger.kernel.org
Fixes: 15995a352474 ("mm: report per-page metadata information")
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
---
mm/sparse.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/sparse.c b/mm/sparse.c
index 3c012cf83cc2..d7c128015397 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -688,6 +688,7 @@ static void free_map_bootmem(struct page *memmap)
 	unsigned long start = (unsigned long)memmap;
 	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
 
+	memmap_boot_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
 	vmemmap_free(start, end, NULL);
 }
--
2.48.1
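
For context on the delta being subtracted above: in free_map_bootmem(),
end - start spans one section's memmap (PAGES_PER_SECTION struct page
entries), so DIV_ROUND_UP(end - start, PAGE_SIZE) is the number of
backing pages that were accounted to nr_memmap_boot_pages when the
section's memmap was allocated at boot. A standalone sketch of that
arithmetic follows; the section size and sizeof(struct page) values are
illustrative assumptions, the real ones are arch- and config-dependent.

#include <stdio.h>

/* Illustrative values only; the real ones depend on arch and config. */
#define PAGE_SIZE		4096UL
#define SECTION_SIZE		(128UL << 20)	/* e.g. 128 MiB per section */
#define STRUCT_PAGE_SIZE	64UL		/* typical sizeof(struct page) */
#define PAGES_PER_SECTION	(SECTION_SIZE / PAGE_SIZE)
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long memmap_bytes = PAGES_PER_SECTION * STRUCT_PAGE_SIZE;
	unsigned long memmap_pages = DIV_ROUND_UP(memmap_bytes, PAGE_SIZE);

	/* The count free_map_bootmem() must subtract from nr_memmap_boot_pages. */
	printf("memmap pages per section: %lu\n", memmap_pages);
	return 0;
}

With these example values, one 128 MiB section uses 2 MiB of memmap,
i.e. 512 pages, which is the magnitude the new memmap_boot_pages_add()
call subtracts.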
On 04.08.25 11:08, Sumanth Korikkar wrote:
> [...]

Looks good to me. But now I wonder about !CONFIG_SPARSEMEM_VMEMMAP, where
neither depopulate_section_memmap() nor free_map_bootmem() adjust anything?

Which makes me wonder whether we should be moving that to
section_deactivate().

-- 
Cheers,

David / dhildenb
On Mon, Aug 04, 2025 at 02:27:20PM +0200, David Hildenbrand wrote:
> [...]
>
> Looks good to me. But now I wonder about !CONFIG_SPARSEMEM_VMEMMAP, where
> neither depopulate_section_memmap() nor free_map_bootmem() adjust anything?
>
> Which makes me wonder whether we should be moving that to
> section_deactivate().

Agree. I will move accounting to section_deactivate() then. Thanks
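
To make the agreed follow-up a bit more concrete, here is a rough sketch
of centralizing the accounting at the common call site where
section_deactivate() picks between free_map_bootmem() and
depopulate_section_memmap(). This is only an illustration, not the
actual follow-up patch: the helper name and its exact placement are
assumptions, and it relies on the memmap_pages_add() and
memmap_boot_pages_add() helpers from mm/internal.h.

/*
 * Sketch only: subtract one section's memmap pages in the common
 * deactivate path, so both the CONFIG_SPARSEMEM_VMEMMAP and the classic
 * sparsemem free paths are covered. The existing decrement on the
 * buddy-allocated path would have to move here as well.
 */
static void section_memmap_acct_sub(bool section_is_early)
{
	long nr = DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
			       PAGE_SIZE);

	if (section_is_early)
		memmap_boot_pages_add(-nr);
	else
		memmap_pages_add(-nr);
}

How altmap-backed memmaps should be treated in that common path is left
to the actual follow-up patch.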