From: Yin Tirui
Subject: [PATCH RFC v3 1/4] x86/mm: Use proper page table helpers for huge page generation
Date: Sat, 28 Feb 2026 15:09:03 +0800
Message-ID: <20260228070906.1418911-2-yintirui@huawei.com>
In-Reply-To: <20260228070906.1418911-1-yintirui@huawei.com>
References: <20260228070906.1418911-1-yintirui@huawei.com>
X-Mailer: git-send-email 2.43.0
X-Mailing-List: linux-kernel@vger.kernel.org

Historically, several core x86 mm subsystems (vmemmap, vmalloc, and CPA)
have abused pfn_pte() to generate PMD and PUD entries by passing pgprot
values containing the _PAGE_PSE flag and then casting the resulting
pte_t to a pmd_t or pud_t. This violates strict type safety and prevents
enforcing the rule that pfn_pte() should only generate PTEs without
huge page attributes.
Fix these abuses by explicitly using the correct level-specific helpers
(pfn_pmd() and pfn_pud()) and their corresponding setters (set_pmd(),
set_pud()). For the CPA (Change Page Attribute) code, which uses pte_t
as a generic container for page table entries across all levels in
__should_split_large_page(), pack the correctly generated PMD/PUD values
into the pte_t container.

This cleanup prepares the ground for making pfn_pte() strictly filter
out huge page attributes.

Signed-off-by: Yin Tirui
---
 arch/x86/mm/init_64.c        | 6 +++---
 arch/x86/mm/pat/set_memory.c | 6 +++++-
 arch/x86/mm/pgtable.c        | 4 ++--
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index df2261fa4f98..d65f3d05c66f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1518,11 +1518,11 @@ static int __meminitdata node_start;
 void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 			       unsigned long addr, unsigned long next)
 {
-	pte_t entry;
+	pmd_t entry;
 
-	entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
+	entry = pfn_pmd(__pa(p) >> PAGE_SHIFT,
 			PAGE_KERNEL_LARGE);
-	set_pmd(pmd, __pmd(pte_val(entry)));
+	set_pmd(pmd, entry);
 
 	/* check to see if we have contiguous blocks */
 	if (p_end != p || node_start != node) {
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40581a720fe8..87aa0e9a8f82 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1059,7 +1059,11 @@ static int __should_split_large_page(pte_t *kpte, unsigned long address,
 		return 1;
 
 	/* All checks passed. Update the large page mapping. */
-	new_pte = pfn_pte(old_pfn, new_prot);
+	if (level == PG_LEVEL_2M)
+		new_pte = __pte(pmd_val(pfn_pmd(old_pfn, new_prot)));
+	else
+		new_pte = __pte(pud_val(pfn_pud(old_pfn, new_prot)));
+
 	__set_pmd_pte(kpte, address, new_pte);
 	cpa->flags |= CPA_FLUSHTLB;
 	cpa_inc_lp_preserved(level);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 2e5ecfdce73c..61320fd44e16 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -644,7 +644,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 	if (pud_present(*pud) && !pud_leaf(*pud))
 		return 0;
 
-	set_pte((pte_t *)pud, pfn_pte(
+	set_pud(pud, pfn_pud(
 		(u64)addr >> PAGE_SHIFT,
 		__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
 
@@ -676,7 +676,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 	if (pmd_present(*pmd) && !pmd_leaf(*pmd))
 		return 0;
 
-	set_pte((pte_t *)pmd, pfn_pte(
+	set_pmd(pmd, pfn_pmd(
 		(u64)addr >> PAGE_SHIFT,
 		__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
 
-- 
2.22.0