Date: Fri, 28 Feb 2025 18:29:15 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
Mime-Version: 1.0
References: <20250228182928.2645936-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Message-ID: <20250228182928.2645936-15-fvdl@google.com>
Subject: [PATCH v5 14/27] mm/sparse: add vmemmap_*_hvo functions
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
	Frank van der Linden <fvdl@google.com>
Content-Type: text/plain; charset="utf-8"

Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early init,
from the sparse_init_nid_early() function. The wrprotect function is to
be used, potentially, later.

To implement these functions, mostly re-use the existing compound pages
vmemmap logic used by DAX. vmemmap_populate_address has its arguments
changed a bit in this commit: the page structure passed in to be reused
in the mapping is replaced by a PFN and a flag. The flag indicates whether
an extra ref should be taken on the vmemmap page containing the head page
structure. Taking the ref is appropriate for DAX / ZONE_DEVICE, but not
for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages read-only.
The vmemmap_wrprotect_hvo function that does this is implemented
separately, because it cannot be guaranteed that reserved page structures
will not be written to during memory initialization. Even with
CONFIG_DEFERRED_STRUCT_PAGE_INIT, they might still be written to (if they
are at the bottom of a zone). So, vmemmap_populate_hvo leaves the tail
page structure pages RW initially, and then, later during initialization,
after memmap init is fully done, vmemmap_wrprotect_hvo must be called to
finish the job.

Subsequent commits will use these functions for early HugeTLB HVO.
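For illustration, a rough sketch (not part of this patch) of how a caller
is expected to sequence the new functions; the wrapper, its parameters and
the zone-span check are assumptions based on the description above, with
the real callers added in subsequent commits:

#include <linux/init.h>
#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Hypothetical caller, for illustration only. */
static int __init example_early_hvo(unsigned long vstart, unsigned long vend,
				    int node, unsigned long headsize,
				    bool spans_multiple_zones)
{
	int ret;

	/* Early boot: map the head pages, mirror the tails (still RW). */
	ret = vmemmap_populate_hvo(vstart, vend, node, headsize);
	if (ret)
		return ret;

	/*
	 * ... memmap initialization runs here; reserved page structures
	 * may still be written to ...
	 */

	if (spans_multiple_zones) {
		/* Known only after zone init: fall back to a normal mapping. */
		return vmemmap_undo_hvo(vstart, vend, node, headsize);
	}

	/* After memmap init is fully done: make the mirrored tails RO. */
	vmemmap_wrprotect_hvo(vstart, vend, node, headsize);
	flush_tlb_kernel_range(vstart, vend);	/* caller handles TLB flushing */

	return 0;
}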
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..8cc848c4b17c 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF	0x0001
 
 #include "internal.h"
 
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (ptpfn == (unsigned long)-1) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
 			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						  struct vmem_altmap *altmap,
-						  struct page *reuse)
+						  unsigned long ptpfn,
+						  unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *    allocated through memblock, and mapped.
+ *
+ * 2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, -1, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)));
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}
-- 
2.48.1.711.g2feabab25a-goog