From nobody Fri Dec 19 13:50:35 2025
From: Huang Ying <ying.huang@linux.alibaba.com>
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand
Cc: Huang Ying, Lorenzo Stoakes, Vlastimil Babka, Zi Yan, Baolin Wang,
	Ryan Roberts, Yang Shi, "Christoph Lameter (Ampere)", Dev Jain,
	Barry Song, Anshuman Khandual, Yicong Yang, Kefeng Wang,
	Kevin Brodsky, Yin Fengwei, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH -v2 1/2] mm: add spurious fault fixing support for huge pmd
Date: Mon, 13 Oct 2025 17:20:37 +0800
Message-Id: <20251013092038.6963-2-ying.huang@linux.alibaba.com>
In-Reply-To: <20251013092038.6963-1-ying.huang@linux.alibaba.com>
References: <20251013092038.6963-1-ying.huang@linux.alibaba.com>

In the current kernel, there is spurious fault fixing support for
pte, but not for huge pmd, because no architecture has needed it so
far.
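For reference, the pte-level mechanism that this patch mirrors lives
in handle_pte_fault(). A trimmed, paraphrased sketch of the relevant
logic (not the exact upstream code) looks like this:

	/* Paraphrased from handle_pte_fault(), trimmed for brevity. */
	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		update_mmu_cache_range(vmf, vmf->vma, vmf->address, vmf->pte, 1);
	} else {
		/* Skip the spurious TLB flush for retried page faults. */
		if (vmf->flags & FAULT_FLAG_TRIED)
			goto unlock;
		/*
		 * The PTE did not change, so the fault was most likely
		 * caused by a stale, more-restrictive TLB entry on this
		 * CPU; flush it.
		 */
		if (vmf->flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vmf->vma, vmf->address,
						     vmf->pte);
	}

The patch below gives huge pmds the same treatment.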
But in the next patch in the series, we will change the write
protection fault handling logic on arm64, so that some stale huge pmd
entries may remain in the TLB. These entries need to be flushed via
the huge pmd spurious fault fixing mechanism.

Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: Vlastimil Babka
Cc: Zi Yan
Cc: Baolin Wang
Cc: Ryan Roberts
Cc: Yang Shi
Cc: "Christoph Lameter (Ampere)"
Cc: Dev Jain
Cc: Barry Song
Cc: Anshuman Khandual
Cc: Yicong Yang
Cc: Kefeng Wang
Cc: Kevin Brodsky
Cc: Yin Fengwei
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/pgtable.h |  4 ++++
 mm/huge_memory.c        | 22 +++++++++++++++++-----
 mm/internal.h           |  4 ++--
 3 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 32e8457ad535..341622ec80e4 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1232,6 +1232,10 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 #define flush_tlb_fix_spurious_fault(vma, address, ptep) flush_tlb_page(vma, address)
 #endif
 
+#ifndef flush_tlb_fix_spurious_fault_pmd
+#define flush_tlb_fix_spurious_fault_pmd(vma, address, ptep) do { } while (0)
+#endif
+
 /*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier. Although no
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1b81680b4225..8533457c52b7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1641,17 +1641,22 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
 EXPORT_SYMBOL_GPL(vmf_insert_folio_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-	       pmd_t *pmd, bool write)
+/* Returns whether the PMD entry is changed */
+int touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	      pmd_t *pmd, bool write)
 {
+	int changed;
 	pmd_t _pmd;
 
 	_pmd = pmd_mkyoung(*pmd);
 	if (write)
 		_pmd = pmd_mkdirty(_pmd);
-	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
-				  pmd, _pmd, write))
+	changed = pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
+					pmd, _pmd, write);
+	if (changed)
 		update_mmu_cache_pmd(vma, addr, pmd);
+
+	return changed;
 }
 
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -1849,7 +1854,14 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
 	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd)))
 		goto unlock;
 
-	touch_pmd(vmf->vma, vmf->address, vmf->pmd, write);
+	if (!touch_pmd(vmf->vma, vmf->address, vmf->pmd, write)) {
+		/* See corresponding comments in handle_pte_fault(). */
+		if (vmf->flags & FAULT_FLAG_TRIED)
+			goto unlock;
+		if (vmf->flags & FAULT_FLAG_WRITE)
+			flush_tlb_fix_spurious_fault_pmd(vmf->vma, vmf->address,
+							 vmf->pmd);
+	}
 
 unlock:
 	spin_unlock(vmf->ptl);
diff --git a/mm/internal.h b/mm/internal.h
index 1561fc2ff5b8..8b58ab00a7cd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1402,8 +1402,8 @@ int __must_check try_grab_folio(struct folio *folio, int refs,
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-	       pmd_t *pmd, bool write);
+int touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	      pmd_t *pmd, bool write);
 
 /*
  * Parses a string with mem suffixes into its order. Useful to parse kernel
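(Not part of the patch itself, but for illustration: an architecture
that wants spurious huge-pmd faults fixed with a local TLB flush would
override the default no-op in its asm/pgtable.h, along the lines of
what patch 2/2 in this series does for arm64:

	#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp)	\
		local_flush_tlb_page_nonotify(vma, address)

where local_flush_tlb_page_nonotify() is whatever local-only
invalidation primitive the architecture provides.)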
-- 
2.39.5

From nobody Fri Dec 19 13:50:35 2025
From: Huang Ying <ying.huang@linux.alibaba.com>
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand
Cc: Huang Ying, Lorenzo Stoakes, Vlastimil Babka, Zi Yan, Baolin Wang,
	Ryan Roberts, Yang Shi, "Christoph Lameter (Ampere)", Dev Jain,
	Barry Song, Anshuman Khandual, Yicong Yang, Kefeng Wang,
	Kevin Brodsky, Yin Fengwei, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH -v2 2/2] arm64, tlbflush: don't TLBI broadcast if page reused in write fault
Date: Mon, 13 Oct 2025 17:20:38 +0800
Message-Id: <20251013092038.6963-3-ying.huang@linux.alibaba.com>
In-Reply-To: <20251013092038.6963-1-ying.huang@linux.alibaba.com>
References: <20251013092038.6963-1-ying.huang@linux.alibaba.com>

A multi-threaded customer workload with a large memory footprint uses
fork()/exec() to run some external programs every tens of seconds.
When running the workload on an arm64 server machine, we observed
that a significant share of CPU cycles is spent in the TLB flushing
functions, while this is not the case on an x86_64 server machine.
As a result, the performance on arm64 is much worse than that on
x86_64.

While the workload runs, fork()/exec() write-protects all pages in
the parent process, so subsequent memory writes in the parent process
cause write protection faults. The page fault handler then makes the
PTE/PDE writable if the page can be reused, which is almost always
true in this workload. On arm64, to avoid write protection faults on
other CPUs, the page fault handler flushes the TLB globally with TLBI
broadcast after changing the PTE/PDE. However, this isn't always
necessary. Firstly, it's safe to leave some stale read-only TLB
entries behind as long as they are eventually flushed. Secondly, it's
quite possible that the original read-only PTE/PDEs aren't cached in
the remote TLBs at all if the memory footprint is large. In fact, on
x86_64 the page fault handler doesn't flush the remote TLBs in this
situation, which benefits performance considerably.

To improve the performance on arm64, make the write protection fault
handler flush the TLB locally instead of globally via TLBI broadcast
after making the PTE/PDE writable. If stale read-only TLB entries
remain on remote CPUs, the page fault handler on those CPUs will
regard the page fault as spurious and flush the stale TLB entries.

To test the patchset, usemem.c from vm-scalability
(https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git)
was extended to support calling fork()/exec() periodically (the
change has been merged). To mimic the behavior of the customer
workload, usemem was run with 4 threads, accessing 100GB of memory
and calling fork()/exec() every 40 seconds. Test results show that
with the patchset the usemem score improves by ~40.6%, and the
cycles% of the TLB flush functions in the perf profile drops from
~50.5% to ~0.3%.

Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: Vlastimil Babka
Cc: Zi Yan
Cc: Baolin Wang
Cc: Ryan Roberts
Cc: Yang Shi
Cc: "Christoph Lameter (Ampere)"
Cc: Dev Jain
Cc: Barry Song
Cc: Anshuman Khandual
Cc: Yicong Yang
Cc: Kefeng Wang
Cc: Kevin Brodsky
Cc: Yin Fengwei
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 arch/arm64/include/asm/pgtable.h  | 14 +++++---
 arch/arm64/include/asm/tlbflush.h | 56 +++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c           |  3 +-
 arch/arm64/mm/fault.c             |  2 +-
 4 files changed, 67 insertions(+), 8 deletions(-)
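For reviewers, the intended interplay between the local flush below
and the spurious fault fixing added in patch 1/2, sketched as a
timeline (illustrative comments only, not code from this series):

	/*
	 * CPU 0 (parent writes a reused, write-protected page):
	 *   - takes a write protection fault;
	 *   - __ptep_set_access_flags() makes the PTE writable;
	 *   - flushes only its local TLB (this patch), no TLBI broadcast.
	 *
	 * CPU 1 (may still hold a stale read-only TLB entry):
	 *   - a later write hits the stale entry and faults;
	 *   - ptep_set_access_flags() finds the PTE already up to date
	 *     and returns 0, so the fault is treated as spurious;
	 *   - flush_tlb_fix_spurious_fault() now performs a local
	 *     invalidation (this patch), evicting the stale entry.
	 */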
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index aa89c2e67ebc..35bae2e4bcfe 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -130,12 +130,16 @@ static inline void arch_leave_lazy_mmu_mode(void)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*
- * Outside of a few very special situations (e.g. hibernation), we always
- * use broadcast TLB invalidation instructions, therefore a spurious page
- * fault on one CPU which has been handled concurrently by another CPU
- * does not need to perform additional invalidation.
+ * We use a local TLB invalidation instruction when reusing a page in
+ * the write protection fault handler to avoid TLBI broadcast in the
+ * hot path. This will cause spurious page faults if stale read-only
+ * TLB entries exist.
  */
-#define flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
+#define flush_tlb_fix_spurious_fault(vma, address, ptep) \
+	local_flush_tlb_page_nonotify(vma, address)
+
+#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
+	local_flush_tlb_page_nonotify(vma, address)
 
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 18a5dc0c9a54..651b31fd18bb 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -249,6 +249,18 @@ static inline unsigned long get_trans_granule(void)
 *	cannot be easily determined, the value TLBI_TTL_UNKNOWN will
 *	perform a non-hinted invalidation.
 *
+ *	local_flush_tlb_page(vma, addr)
+ *		Local variant of flush_tlb_page(). Stale TLB entries may
+ *		remain in remote CPUs.
+ *
+ *	local_flush_tlb_page_nonotify(vma, addr)
+ *		Same as local_flush_tlb_page() except MMU notifier will not be
+ *		called.
+ *
+ *	local_flush_tlb_contpte_range(vma, start, end)
+ *		Invalidate the virtual-address range '[start, end)' mapped with
+ *		contpte on local CPU for the user address space corresponding
+ *		to 'vma->mm'. Stale TLB entries may remain in remote CPUs.
 *
 * Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
 * on top of these routines, since that is our interface to the mmu_gather
@@ -282,6 +294,33 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }
 
+static inline void __local_flush_tlb_page_nonotify_nosync(
+	struct mm_struct *mm, unsigned long uaddr)
+{
+	unsigned long addr;
+
+	dsb(nshst);
+	addr = __TLBI_VADDR(uaddr, ASID(mm));
+	__tlbi(vale1, addr);
+	__tlbi_user(vale1, addr);
+}
+
+static inline void local_flush_tlb_page_nonotify(
+	struct vm_area_struct *vma, unsigned long uaddr)
+{
+	__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
+	dsb(nsh);
+}
+
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long uaddr)
+{
+	__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
+	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, uaddr & PAGE_MASK,
+						    (uaddr & PAGE_MASK) + PAGE_SIZE);
+	dsb(nsh);
+}
+
 static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
 					   unsigned long uaddr)
 {
@@ -472,6 +511,23 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+static inline void local_flush_tlb_contpte_range(struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	unsigned long asid, pages;
+
+	start = round_down(start, PAGE_SIZE);
+	end = round_up(end, PAGE_SIZE);
+	pages = (end - start) >> PAGE_SHIFT;
+
+	dsb(nshst);
+	asid = ASID(vma->vm_mm);
+	__flush_tlb_range_op(vale1, start, pages, PAGE_SIZE, asid,
+			     3, true, lpa2_is_enabled());
+	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
+	dsb(nsh);
+}
+
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index c0557945939c..0f9bbb7224dc 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -622,8 +622,7 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 			__ptep_set_access_flags(vma, addr, ptep, entry, 0);
 
 		if (dirty)
-			__flush_tlb_range(vma, start_addr, addr,
-					  PAGE_SIZE, true, 3);
+			local_flush_tlb_contpte_range(vma, start_addr, addr);
 	} else {
 		__contpte_try_unfold(vma->vm_mm, addr, ptep, orig_pte);
 		__ptep_set_access_flags(vma, addr, ptep, entry, dirty);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d816ff44faff..22f54f5afe3f 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -235,7 +235,7 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
 
 	/* Invalidate a stale read-only entry */
 	if (dirty)
-		flush_tlb_page(vma, address);
+		local_flush_tlb_page(vma, address);
 	return 1;
 }
 
-- 
2.39.5