From: Lance Yang
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, david@kernel.org, dave.hansen@intel.com, dave.hansen@linux.intel.com, ypodemsk@redhat.com, hughd@google.com, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de, lorenzo.stoakes@oracle.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, shy828301@gmail.com, riel@surriel.com, jannh@google.com, jgross@suse.com, seanjc@google.com, pbonzini@redhat.com, boris.ostrovsky@oracle.com, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ioworker0@gmail.com, Lance Yang
Subject: [PATCH v8 1/2] mm/mmu_gather: prepare to skip redundant sync IPIs
Date: Tue, 24 Mar 2026 16:52:37 +0800
Message-ID: <20260324085238.44477-2-lance.yang@linux.dev>
In-Reply-To: <20260324085238.44477-1-lance.yang@linux.dev>
References: <20260324085238.44477-1-lance.yang@linux.dev>

When page table operations require synchronization with software/lockless
walkers, they call
tlb_remove_table_sync_{one,rcu}() after flushing the TLB
(tlb->freed_tables or tlb->unshared_tables). On architectures where the
TLB flush already sends IPIs to all target CPUs, the subsequent sync IPI
broadcast is redundant. This is not only costly on large systems, where
it disrupts all CPUs even for single-process page table operations, but
has also been reported to hurt RT workloads[1].

Introduce tlb_table_flush_implies_ipi_broadcast() to check whether the
prior TLB flush already provided the necessary synchronization. When it
returns true, the sync calls can return early.

A few cases rely on this synchronization:

1) hugetlb PMD unshare[2]: the problem is not the freeing but the reuse
   of the PMD table for other purposes by the last remaining user after
   unsharing.

2) khugepaged collapse[3]: ensure there is no concurrent GUP-fast before
   collapsing and (possibly) freeing the page table / re-depositing it.

The helper currently always returns false (no behavior change). The
follow-up patch enables the optimization for x86.
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
[2] https://lore.kernel.org/linux-mm/6a364356-5fea-4a6c-b959-ba3b22ce9c88@kernel.org/
[3] https://lore.kernel.org/linux-mm/2cb4503d-3a3f-4f6c-8038-7b3d1c74b3c2@kernel.org/

Suggested-by: David Hildenbrand (Arm)
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Lance Yang
---
 include/asm-generic/tlb.h | 17 +++++++++++++++++
 mm/mmu_gather.c           | 15 +++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index bdcc2778ac64..cb41cc6a0024 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -240,6 +240,23 @@ static inline void tlb_remove_table(struct mmu_gather *tlb, void *table)
 }
 #endif /* CONFIG_MMU_GATHER_TABLE_FREE */
 
+/**
+ * tlb_table_flush_implies_ipi_broadcast - does TLB flush imply IPI sync
+ *
+ * When page table operations require synchronization with software/lockless
+ * walkers, they flush the TLB (tlb->freed_tables or tlb->unshared_tables)
+ * then call tlb_remove_table_sync_{one,rcu}(). If the flush already sent
+ * IPIs to all CPUs, the sync call is redundant.
+ *
+ * Returns false by default. Architectures can override by defining this.
+ */
+#ifndef tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+	return false;
+}
+#endif
+
 #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 /*
  * This allows an architecture that does not use the linux page-tables for
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 3985d856de7f..37a6a711c37e 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -283,6 +283,14 @@ void tlb_remove_table_sync_one(void)
 	 * It is however sufficient for software page-table walkers that rely on
 	 * IRQ disabling.
 	 */
+
+	/*
+	 * Skip IPI if the preceding TLB flush already synchronized with
+	 * all CPUs that could be doing software/lockless page table walks.
+	 */
+	if (tlb_table_flush_implies_ipi_broadcast())
+		return;
+
 	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
 }
 
@@ -312,6 +320,13 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
  */
 void tlb_remove_table_sync_rcu(void)
 {
+	/*
+	 * Skip RCU wait if the preceding TLB flush already synchronized
+	 * with all CPUs that could be doing software/lockless page table walks.
+	 */
+	if (tlb_table_flush_implies_ipi_broadcast())
+		return;
+
 	synchronize_rcu();
 }
-- 
2.49.0

From: Lance Yang
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, david@kernel.org, dave.hansen@intel.com, dave.hansen@linux.intel.com, ypodemsk@redhat.com, hughd@google.com, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de, lorenzo.stoakes@oracle.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, shy828301@gmail.com, riel@surriel.com, jannh@google.com, jgross@suse.com, seanjc@google.com, pbonzini@redhat.com, boris.ostrovsky@oracle.com, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ioworker0@gmail.com, Lance Yang
Subject: [PATCH v8 2/2] x86/tlb: skip redundant sync IPIs for native TLB flush
Date: Tue, 24 Mar 2026 16:52:38 +0800
Message-ID: <20260324085238.44477-3-lance.yang@linux.dev>
In-Reply-To: <20260324085238.44477-1-lance.yang@linux.dev>
References: <20260324085238.44477-1-lance.yang@linux.dev>
Some page table operations need to synchronize with software/lockless
walkers after a TLB flush by calling tlb_remove_table_sync_{one,rcu}().
On x86, that extra synchronization is redundant when the preceding TLB
flush already broadcast IPIs to all relevant CPUs.

native_pv_tlb_init() checks whether native_flush_tlb_multi() is in use.
On CONFIG_PARAVIRT systems, it checks pv_ops; on non-PARAVIRT systems,
the native flush is always in use. It decides once at boot whether to
enable the optimization: if the native TLB flush is in use and INVLPGB
is not supported, we know IPIs were sent and can skip the redundant
sync. The decision is fixed via a static key, as Peter suggested[1].
PV backends (KVM, Xen, Hyper-V) typically have their own implementations
and do not call native_flush_tlb_multi() directly, so they cannot be
trusted to provide the IPI guarantees we need.

Also treat unshared_tables like freed_tables when issuing the TLB flush,
so lazy-TLB CPUs receive IPIs during unsharing of page tables as well.
This allows us to safely implement
tlb_table_flush_implies_ipi_broadcast().

Two-step plan, as David suggested[2]:

Step 1 (this patch): skip the redundant sync when we are 100% certain
the TLB flush sent IPIs. INVLPGB is excluded because, when it is
supported, we cannot guarantee IPIs were sent; this keeps things clean
and simple.

Step 2 (future work): send targeted IPIs only to CPUs actually doing
software/lockless page table walks, benefiting all architectures. Step 2
only applies to setups where Step 1 does not, such as x86 with INVLPGB
or arm64.
[1] https://lore.kernel.org/linux-mm/20260302145652.GH1395266@noisy.programming.kicks-ass.net/
[2] https://lore.kernel.org/linux-mm/bbfdf226-4660-4949-b17b-0d209ee4ef8c@kernel.org/

Suggested-by: Peter Zijlstra
Suggested-by: David Hildenbrand (Arm)
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Lance Yang
---
 arch/x86/include/asm/tlb.h      | 18 +++++++++++++++++-
 arch/x86/include/asm/tlbflush.h |  2 ++
 arch/x86/kernel/smpboot.c       |  1 +
 arch/x86/mm/tlb.c               | 15 +++++++++++++++
 4 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..fc586ec8e768 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,11 +5,21 @@
 #define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);
 
+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
 #include
 #include
 #include
 #include
 
+DECLARE_STATIC_KEY_FALSE(tlb_ipi_broadcast_key);
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+	return static_branch_likely(&tlb_ipi_broadcast_key);
+}
+
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
@@ -20,7 +30,13 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		end = tlb->end;
 	}
 
-	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+	/*
+	 * Treat unshared_tables just like freed_tables, such that lazy-TLB
+	 * CPUs also receive IPIs during unsharing of page tables, allowing
+	 * us to safely implement tlb_table_flush_implies_ipi_broadcast().
+	 */
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+			   tlb->freed_tables || tlb->unshared_tables);
 }
 
 static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 5a3cdc439e38..8ba853154b46 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -18,6 +18,8 @@
 
 DECLARE_PER_CPU(u64, tlbstate_untag_mask);
 
+void __init native_pv_tlb_init(void);
+
 void __flush_tlb_all(void);
 
 #define TLB_FLUSH_ALL	-1UL
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 294a8ea60298..df776b645a9c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1256,6 +1256,7 @@ void __init native_smp_prepare_boot_cpu(void)
 	switch_gdt_and_percpu_base(me);
 
 	native_pv_lock_init();
+	native_pv_tlb_init();
 }
 
 void __init native_smp_cpus_done(unsigned int max_cpus)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 621e09d049cb..8f5585ebaf09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -26,6 +26,8 @@
 
 #include "mm_internal.h"
 
+DEFINE_STATIC_KEY_FALSE(tlb_ipi_broadcast_key);
+
 #ifdef CONFIG_PARAVIRT
 # define STATIC_NOPV
 #else
@@ -1834,3 +1836,16 @@ static int __init create_tlb_single_page_flush_ceiling(void)
 	return 0;
 }
 late_initcall(create_tlb_single_page_flush_ceiling);
+
+void __init native_pv_tlb_init(void)
+{
+#ifdef CONFIG_PARAVIRT
+	if (pv_ops.mmu.flush_tlb_multi != native_flush_tlb_multi)
+		return;
+#endif
+
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return;
+
+	static_branch_enable(&tlb_ipi_broadcast_key);
+}
-- 
2.49.0