From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki, Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/14] mm/page_table_check: Batch-check pmds/puds just like ptes
Date: Mon, 17 Feb 2025 14:07:55 +0000
Message-ID: <20250217140809.1702789-4-ryan.roberts@arm.com>
In-Reply-To: <20250217140809.1702789-1-ryan.roberts@arm.com>
References: <20250217140809.1702789-1-ryan.roberts@arm.com>

Convert page_table_check_p[mu]d_set(...) to page_table_check_p[mu]ds_set(..., nr)
to allow checking a contiguous set of pmds/puds in a single batch. We retain
page_table_check_p[mu]d_set(...) as macros that call the new batch functions with
nr=1 for compatibility.

arm64 is about to reorganise its pte/pmd/pud helpers to reuse more code and to
allow the implementation for huge_pte to more efficiently set ptes/pmds/puds in
batches. We need these batch-helpers to make the refactoring possible.
Reviewed-by: Anshuman Khandual
Signed-off-by: Ryan Roberts
---
 include/linux/page_table_check.h | 30 +++++++++++++++++-----------
 mm/page_table_check.c            | 34 +++++++++++++++++++-------------
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 6722941c7cb8..289620d4aad3 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -19,8 +19,10 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, pud_t pud);
 void __page_table_check_ptes_set(struct mm_struct *mm, pte_t *ptep, pte_t pte,
 		unsigned int nr);
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd);
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud);
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+		unsigned int nr);
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+		unsigned int nr);
 void __page_table_check_pte_clear_range(struct mm_struct *mm,
 					unsigned long addr, pmd_t pmd);
@@ -74,22 +76,22 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
 	__page_table_check_ptes_set(mm, ptep, pte, nr);
 }
 
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
-					    pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+		pmd_t *pmdp, pmd_t pmd, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pmd_set(mm, pmdp, pmd);
+	__page_table_check_pmds_set(mm, pmdp, pmd, nr);
 }
 
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
-					    pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+		pud_t *pudp, pud_t pud, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pud_set(mm, pudp, pud);
+	__page_table_check_puds_set(mm, pudp, pud, nr);
 }
 
 static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
@@ -129,13 +131,13 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
-					    pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+		pmd_t *pmdp, pmd_t pmd, unsigned int nr)
 {
 }
 
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
-					    pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+		pud_t *pudp, pud_t pud, unsigned int nr)
 {
 }
 
@@ -146,4 +148,8 @@ static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
 }
 
 #endif /* CONFIG_PAGE_TABLE_CHECK */
+
+#define page_table_check_pmd_set(mm, pmdp, pmd)	page_table_check_pmds_set(mm, pmdp, pmd, 1)
+#define page_table_check_pud_set(mm, pudp, pud)	page_table_check_puds_set(mm, pudp, pud, 1)
+
 #endif /* __LINUX_PAGE_TABLE_CHECK_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 509c6ef8de40..dae4a7d776b3 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -234,33 +234,39 @@ static inline void page_table_check_pmd_flags(pmd_t pmd)
 		WARN_ON_ONCE(swap_cached_writable(pmd_to_swp_entry(pmd)));
 }
 
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd)
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+		unsigned int nr)
 {
+	unsigned int i;
+	unsigned long stride = PMD_SIZE >> PAGE_SHIFT;
+
 	if (&init_mm == mm)
 		return;
 
 	page_table_check_pmd_flags(pmd);
 
-	__page_table_check_pmd_clear(mm, *pmdp);
-	if (pmd_user_accessible_page(pmd)) {
-		page_table_check_set(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT,
-				     pmd_write(pmd));
-	}
+	for (i = 0; i < nr; i++)
+		__page_table_check_pmd_clear(mm, *(pmdp + i));
+	if (pmd_user_accessible_page(pmd))
+		page_table_check_set(pmd_pfn(pmd), stride * nr,
+				     pmd_write(pmd));
 }
-EXPORT_SYMBOL(__page_table_check_pmd_set);
+EXPORT_SYMBOL(__page_table_check_pmds_set);
 
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud)
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+		unsigned int nr)
 {
+	unsigned int i;
+	unsigned long stride = PUD_SIZE >> PAGE_SHIFT;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pud_clear(mm, *pudp);
-	if (pud_user_accessible_page(pud)) {
-		page_table_check_set(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT,
-				     pud_write(pud));
-	}
+	for (i = 0; i < nr; i++)
+		__page_table_check_pud_clear(mm, *(pudp + i));
+	if (pud_user_accessible_page(pud))
+		page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud));
 }
-EXPORT_SYMBOL(__page_table_check_pud_set);
+EXPORT_SYMBOL(__page_table_check_puds_set);
 
 void __page_table_check_pte_clear_range(struct mm_struct *mm,
 					unsigned long addr,
-- 
2.43.0