[PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Posted by Kiryl Shutsemau (Meta) 1 month ago
Currently, the vmemmap for bootmem-allocated gigantic pages is populated
early in hugetlb_vmemmap_init_early(). However, the zone information is
only available after zones are initialized. If it is later discovered
that a page spans multiple zones, the HVO mapping must be undone and
replaced with a normal mapping using vmemmap_undo_hvo().

Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
this stage, zones are already initialized, so we can check whether the
page is valid for HVO before deciding how to populate the vmemmap.

This allows us to remove vmemmap_undo_hvo() and the complex logic
required to roll back HVO mappings.

In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
are invalid, fall back to a normal vmemmap population.

Postponing population until hugetlb_vmemmap_init_late() also makes zone
information available from within vmemmap_populate_hvo().

Signed-off-by: Kiryl Shutsemau (Meta) <kas@kernel.org>
---
 include/linux/mm.h   |  2 --
 mm/hugetlb_vmemmap.c | 37 +++++++++++++++----------------
 mm/sparse-vmemmap.c  | 53 --------------------------------------------
 3 files changed, 18 insertions(+), 74 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7f4dbbb9d783..0e2d45008ff4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4484,8 +4484,6 @@ int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap);
 int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
 			 unsigned long headsize);
-int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
-		     unsigned long headsize);
 void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
 			  unsigned long headsize);
 void vmemmap_populate_print_last(void);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a9280259e12a..935ec5829be9 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -790,7 +790,6 @@ void __init hugetlb_vmemmap_init_early(int nid)
 {
 	unsigned long psize, paddr, section_size;
 	unsigned long ns, i, pnum, pfn, nr_pages;
-	unsigned long start, end;
 	struct huge_bootmem_page *m = NULL;
 	void *map;
 
@@ -808,14 +807,6 @@ void __init hugetlb_vmemmap_init_early(int nid)
 		paddr = virt_to_phys(m);
 		pfn = PHYS_PFN(paddr);
 		map = pfn_to_page(pfn);
-		start = (unsigned long)map;
-		end = start + nr_pages * sizeof(struct page);
-
-		if (vmemmap_populate_hvo(start, end, nid,
-					HUGETLB_VMEMMAP_RESERVE_SIZE) < 0)
-			continue;
-
-		memmap_boot_pages_add(HUGETLB_VMEMMAP_RESERVE_SIZE / PAGE_SIZE);
 
 		pnum = pfn_to_section_nr(pfn);
 		ns = psize / section_size;
@@ -850,28 +841,36 @@ void __init hugetlb_vmemmap_init_late(int nid)
 		h = m->hstate;
 		pfn = PHYS_PFN(phys);
 		nr_pages = pages_per_huge_page(h);
+		map = pfn_to_page(pfn);
+		start = (unsigned long)map;
+		end = start + nr_pages * sizeof(struct page);
 
 		if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
 			/*
 			 * Oops, the hugetlb page spans multiple zones.
-			 * Remove it from the list, and undo HVO.
+			 * Remove it from the list, and populate it normally.
 			 */
 			list_del(&m->list);
 
-			map = pfn_to_page(pfn);
-
-			start = (unsigned long)map;
-			end = start + nr_pages * sizeof(struct page);
-
-			vmemmap_undo_hvo(start, end, nid,
-					 HUGETLB_VMEMMAP_RESERVE_SIZE);
-			nr_mmap = end - start - HUGETLB_VMEMMAP_RESERVE_SIZE;
+			vmemmap_populate(start, end, nid, NULL);
+			nr_mmap = end - start;
 			memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
 
 			memblock_phys_free(phys, huge_page_size(h));
 			continue;
-		} else
+		}
+
+		if (vmemmap_populate_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE) < 0) {
+			/* Fallback if HVO population fails */
+			vmemmap_populate(start, end, nid, NULL);
+			nr_mmap = end - start;
+		} else {
 			m->flags |= HUGE_BOOTMEM_ZONES_VALID;
+			nr_mmap = HUGETLB_VMEMMAP_RESERVE_SIZE;
+		}
+
+		memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
 	}
 }
 #endif
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 37522d6cb398..032a81450838 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -302,59 +302,6 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
 }
 
-/*
- * Undo populate_hvo, and replace it with a normal base page mapping.
- * Used in memory init in case a HVO mapping needs to be undone.
- *
- * This can happen when it is discovered that a memblock allocated
- * hugetlb page spans multiple zones, which can only be verified
- * after zones have been initialized.
- *
- * We know that:
- * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
- *    allocated through memblock, and mapped.
- *
- * 2) The rest of the vmemmap pages are mirrors of the last head page.
- */
-int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
-				      int node, unsigned long headsize)
-{
-	unsigned long maddr, pfn;
-	pte_t *pte;
-	int headpages;
-
-	/*
-	 * Should only be called early in boot, so nothing will
-	 * be accessing these page structures.
-	 */
-	WARN_ON(!early_boot_irqs_disabled);
-
-	headpages = headsize >> PAGE_SHIFT;
-
-	/*
-	 * Clear mirrored mappings for tail page structs.
-	 */
-	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
-		pte = virt_to_kpte(maddr);
-		pte_clear(&init_mm, maddr, pte);
-	}
-
-	/*
-	 * Clear and free mappings for head page and first tail page
-	 * structs.
-	 */
-	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
-		pte = virt_to_kpte(maddr);
-		pfn = pte_pfn(ptep_get(pte));
-		pte_clear(&init_mm, maddr, pte);
-		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
-	}
-
-	flush_tlb_kernel_range(addr, end);
-
-	return vmemmap_populate(addr, end, node, NULL);
-}
-
 /*
  * Write protect the mirrored tail page structs for HVO. This will be
  * called from the hugetlb code when gathering and initializing the
-- 
2.51.2
Re: [PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Posted by David Hildenbrand (Arm) 2 weeks, 2 days ago
On 2/27/26 20:42, Kiryl Shutsemau (Meta) wrote:
> Currently, the vmemmap for bootmem-allocated gigantic pages is populated
> early in hugetlb_vmemmap_init_early(). However, the zone information is
> only available after zones are initialized. If it is later discovered
> that a page spans multiple zones, the HVO mapping must be undone and
> replaced with a normal mapping using vmemmap_undo_hvo().
> 
> Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
> this stage, zones are already initialized, so we can check whether the
> page is valid for HVO before deciding how to populate the vmemmap.
> 
> This allows us to remove vmemmap_undo_hvo() and the complex logic
> required to roll back HVO mappings.
> 
> In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
> are invalid, fall back to a normal vmemmap population.
> 
> Postponing population until hugetlb_vmemmap_init_late() also makes zone
> information available from within vmemmap_populate_hvo().

So we'll keep marking the sections as SECTION_IS_VMEMMAP_PREINIT such
that sparse_init_nid() will still properly skip it and leave population
to hugetlb_vmemmap_init_late().

Should we clear SECTION_IS_VMEMMAP_PREINIT in case we run into the
hugetlb_bootmem_page_zones_valid() scenario?

I suspect we don't care about SECTION_IS_VMEMMAP_PREINIT after boot and
can just leave the flag set. (maybe we want to add a comment in the code,
above the vmemmap_populate()?)

Nothing else jumped out at me.

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David
Re: [PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Posted by Kiryl Shutsemau 2 weeks, 1 day ago
On Mon, Mar 16, 2026 at 05:48:24PM +0100, David Hildenbrand (Arm) wrote:
> On 2/27/26 20:42, Kiryl Shutsemau (Meta) wrote:
> > Currently, the vmemmap for bootmem-allocated gigantic pages is populated
> > early in hugetlb_vmemmap_init_early(). However, the zone information is
> > only available after zones are initialized. If it is later discovered
> > that a page spans multiple zones, the HVO mapping must be undone and
> > replaced with a normal mapping using vmemmap_undo_hvo().
> > 
> > Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
> > this stage, zones are already initialized, so we can check whether the
> > page is valid for HVO before deciding how to populate the vmemmap.
> > 
> > This allows us to remove vmemmap_undo_hvo() and the complex logic
> > required to roll back HVO mappings.
> > 
> > In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
> > are invalid, fall back to a normal vmemmap population.
> > 
> > Postponing population until hugetlb_vmemmap_init_late() also makes zone
> > information available from within vmemmap_populate_hvo().
> 
> So we'll keep marking the sections as SECTION_IS_VMEMMAP_PREINIT such
> that sparse_init_nid() will still properly skip it and leave population
> to hugetlb_vmemmap_init_late().
> 
> Should we clear SECTION_IS_VMEMMAP_PREINIT in case we run into the
> hugetlb_bootmem_page_zones_valid() scenario?
> 
> I suspect we don't care about SECTION_IS_VMEMMAP_PREINIT after boot and
> can just leave the flag set. (maybe we want to add a comment in the code,
> above the vmemmap_populate()?)

I think keeping the flag is the right thing to do.

SECTION_IS_VMEMMAP_PREINIT indicates to the core sparse code that the
section should not be populated there, as it will be initialized
elsewhere. Even in the !hugetlb_bootmem_page_zones_valid() case we take
care of it in hugetlb_vmemmap_init_late().

And, as you mentioned, nobody looks at the flag after boot.

> Nothing else jumped out at me.
> 
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> 
> -- 
> Cheers,
> 
> David

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Posted by David Hildenbrand (Arm) 2 weeks, 1 day ago
On 3/17/26 12:28, Kiryl Shutsemau wrote:
> On Mon, Mar 16, 2026 at 05:48:24PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/27/26 20:42, Kiryl Shutsemau (Meta) wrote:
>>> Currently, the vmemmap for bootmem-allocated gigantic pages is populated
>>> early in hugetlb_vmemmap_init_early(). However, the zone information is
>>> only available after zones are initialized. If it is later discovered
>>> that a page spans multiple zones, the HVO mapping must be undone and
>>> replaced with a normal mapping using vmemmap_undo_hvo().
>>>
>>> Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
>>> this stage, zones are already initialized, so we can check whether the
>>> page is valid for HVO before deciding how to populate the vmemmap.
>>>
>>> This allows us to remove vmemmap_undo_hvo() and the complex logic
>>> required to roll back HVO mappings.
>>>
>>> In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
>>> are invalid, fall back to a normal vmemmap population.
>>>
>>> Postponing population until hugetlb_vmemmap_init_late() also makes zone
>>> information available from within vmemmap_populate_hvo().
>>
>> So we'll keep marking the sections as SECTION_IS_VMEMMAP_PREINIT such
>> that sparse_init_nid() will still properly skip it and leave population
>> to hugetlb_vmemmap_init_late().
>>
>> Should we clear SECTION_IS_VMEMMAP_PREINIT in case we run into the
>> hugetlb_bootmem_page_zones_valid() scenario?
>>
>> I suspect we don't care about SECTION_IS_VMEMMAP_PREINIT after boot and
>> can just leave the flag set. (maybe we want to add a comment in the code,
>> above the vmemmap_populate()?)
> 
> I think keeping the flag is the right thing to do.
> 
> SECTION_IS_VMEMMAP_PREINIT indicates to the core sparse code that the
> section should not be populated there, as it will be initialized
> elsewhere. Even in the !hugetlb_bootmem_page_zones_valid() case we take
> care of it in hugetlb_vmemmap_init_late().

Makes sense.

-- 
Cheers,

David