[PATCH RFC v3 1/4] x86/mm: Use proper page table helpers for huge page generation

Posted by Yin Tirui 1 month ago
Historically, several core x86 mm subsystems (vmemmap, vmalloc, and CPA)
have abused `pfn_pte()` to generate PMD and PUD entries by passing
pgprot values containing the _PAGE_PSE flag, and then casting the
resulting pte_t to a pmd_t or pud_t.

This violates strict type safety and prevents us from enforcing the rule
that `pfn_pte()` should strictly generate PTEs without huge page attributes.
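
For example, pmd_set_huge() currently builds a PMD-sized leaf entry by
going through the PTE helper and casting the destination pointer (see
the pgtable.c hunk below):

	set_pte((pte_t *)pmd, pfn_pte(
		(u64)addr >> PAGE_SHIFT,
		__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));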

Fix these abuses by explicitly using the correct level-specific helpers
(`pfn_pmd()` and `pfn_pud()`) and their corresponding setters
(`set_pmd()`, `set_pud()`).

For the CPA (Change Page Attribute) code, which uses `pte_t` as a generic
container for page table entries of all levels in
__should_split_large_page(), pack the correctly generated PMD/PUD values
back into that container.
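
Concretely, for a 2M mapping the update becomes:

	new_pte = __pte(pmd_val(pfn_pmd(old_pfn, new_prot)));

so the PMD helper owns the _PAGE_PSE semantics and pte_t stays a plain
transport type.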

This cleanup prepares the ground for making `pfn_pte()` strictly filter
out huge page attributes.
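
As a rough sketch of where this is heading (illustrative only; the
actual enforcement is a later patch in this series, and the real
pfn_pte() implementation differs), the filter could look like:

	/* Illustrative sketch, not the actual later patch. */
	static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
	{
		/* Leaf-size bits must never appear in a 4k PTE. */
		VM_WARN_ON_ONCE(pgprot_val(pgprot) & _PAGE_PSE);
		return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
			     check_pgprot(pgprot));
	}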

Signed-off-by: Yin Tirui <yintirui@huawei.com>
---
 arch/x86/mm/init_64.c        | 6 +++---
 arch/x86/mm/pat/set_memory.c | 6 +++++-
 arch/x86/mm/pgtable.c        | 4 ++--
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index df2261fa4f98..d65f3d05c66f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1518,11 +1518,11 @@ static int __meminitdata node_start;
 void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 			       unsigned long addr, unsigned long next)
 {
-	pte_t entry;
+	pmd_t entry;
 
-	entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
+	entry = pfn_pmd(__pa(p) >> PAGE_SHIFT,
 			PAGE_KERNEL_LARGE);
-	set_pmd(pmd, __pmd(pte_val(entry)));
+	set_pmd(pmd, entry);
 
 	/* check to see if we have contiguous blocks */
 	if (p_end != p || node_start != node) {
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40581a720fe8..87aa0e9a8f82 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1059,7 +1059,11 @@ static int __should_split_large_page(pte_t *kpte, unsigned long address,
 		return 1;
 
 	/* All checks passed. Update the large page mapping. */
-	new_pte = pfn_pte(old_pfn, new_prot);
+	if (level == PG_LEVEL_2M)
+		new_pte = __pte(pmd_val(pfn_pmd(old_pfn, new_prot)));
+	else
+		new_pte = __pte(pud_val(pfn_pud(old_pfn, new_prot)));
+
 	__set_pmd_pte(kpte, address, new_pte);
 	cpa->flags |= CPA_FLUSHTLB;
 	cpa_inc_lp_preserved(level);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 2e5ecfdce73c..61320fd44e16 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -644,7 +644,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 	if (pud_present(*pud) && !pud_leaf(*pud))
 		return 0;
 
-	set_pte((pte_t *)pud, pfn_pte(
+	set_pud(pud, pfn_pud(
 		(u64)addr >> PAGE_SHIFT,
 		__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
 
@@ -676,7 +676,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 	if (pmd_present(*pmd) && !pmd_leaf(*pmd))
 		return 0;
 
-	set_pte((pte_t *)pmd, pfn_pte(
+	set_pmd(pmd, pfn_pmd(
 		(u64)addr >> PAGE_SHIFT,
 		__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
 
-- 
2.22.0
Re: [PATCH RFC v3 1/4] x86/mm: Use proper page table helpers for huge page generation
Posted by Jonathan Cameron 3 weeks, 6 days ago
On Sat, 28 Feb 2026 15:09:03 +0800
Yin Tirui <yintirui@huawei.com> wrote:

> Historically, several core x86 mm subsystems (vmemmap, vmalloc, and CPA)
> have abused `pfn_pte()` to generate PMD and PUD entries by passing
> pgprot values containing the _PAGE_PSE flag, and then casting the
> resulting pte_t to a pmd_t or pud_t.
> 
> This violates strict type safety and prevents us from enforcing the rule
> that `pfn_pte()` should strictly generate PTEs without huge page attributes.
> 
> Fix these abuses by explicitly using the correct level-specific helpers
> (`pfn_pmd()` and `pfn_pud()`) and their corresponding setters
> (`set_pmd()`, `set_pud()`).
> 
> For the CPA (Change Page Attribute) code, which uses `pte_t` as a generic
> container for page table entries across all levels in
> __should_split_large_page(), pack the correctly generated PMD/PUD values
> into the pte_t container.
> 
> This cleanup prepares the ground for making `pfn_pte()` strictly filter
> out huge page attributes.
> 
> Signed-off-by: Yin Tirui <yintirui@huawei.com>
Hi. A tiny drive-by review comment below.

> ---
>  arch/x86/mm/init_64.c        | 6 +++---
>  arch/x86/mm/pat/set_memory.c | 6 +++++-
>  arch/x86/mm/pgtable.c        | 4 ++--
>  3 files changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index df2261fa4f98..d65f3d05c66f 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1518,11 +1518,11 @@ static int __meminitdata node_start;
>  void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
>  			       unsigned long addr, unsigned long next)
>  {
> -	pte_t entry;
> +	pmd_t entry;
>  
> -	entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
> +	entry = pfn_pmd(__pa(p) >> PAGE_SHIFT,
>  			PAGE_KERNEL_LARGE);

Whilst you are here, can we make that a one-liner?
	entry = pfn_pmd(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL_LARGE);

Could even do

	pmd_t entry = pfn_pmd(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL_LARGE);
but that's more of a question of taste.


> -	set_pmd(pmd, __pmd(pte_val(entry)));
> +	set_pmd(pmd, entry);
>  
>  	/* check to see if we have contiguous blocks */
Re: [PATCH RFC v3 1/4] x86/mm: Use proper page table helpers for huge page generation
Posted by Yin Tirui 3 weeks, 2 days ago

On 3/6/2026 5:29 PM, Jonathan Cameron wrote:
> Whilst you are here, can we make that a one-liner?
> 	entry = pfn_pmd(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL_LARGE);
> 
> Could even do
> 
> 	pmd_t entry = pfn_pmd(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL_LARGE);
> but that's more of a question of taste.
Hi Jonathan,

Thanks for the suggestion; I will fold this one-liner cleanup into the
next respin.

-- 
Yin Tirui