From: Borislav Petkov
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Dave Hansen, Borislav Petkov
Subject: [PATCH v15 01/11] x86/mm: Consolidate full flush threshold decision
Date: Tue, 4 Mar 2025 14:58:06 +0100
Message-ID: <20250304135816.12356-2-bp@kernel.org>

From: Rik van Riel

Reduce code duplication by consolidating the decision point for whether to do
individual invalidations or a full flush inside get_flush_tlb_info().
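For illustration, the consolidated check amounts to the following stand-alone
sketch (not kernel code; it assumes the default tlb_single_page_flush_ceiling
of 33, which is a runtime tunable in the real kernel):

/* Stand-alone sketch of the threshold decision, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Assumed default value; the kernel exposes this as a runtime tunable. */
static unsigned long tlb_single_page_flush_ceiling = 33;

/* Mirrors the check that now lives in get_flush_tlb_info(). */
static bool wants_full_flush(unsigned long start, unsigned long end,
			     unsigned int stride_shift)
{
	return ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling;
}

int main(void)
{
	/* 32 x 4 KiB pages: still flushed page by page */
	printf("128 KiB -> full flush: %d\n",
	       wants_full_flush(0, 128 << 10, PAGE_SHIFT));
	/* 64 x 4 KiB pages: promoted to a full TLB flush */
	printf("256 KiB -> full flush: %d\n",
	       wants_full_flush(0, 256 << 10, PAGE_SHIFT));
	return 0;
}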
Suggested-by: Dave Hansen
Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Reviewed-by: Borislav Petkov (AMD)
Acked-by: Dave Hansen
Link: https://lore.kernel.org/r/20250226030129.530345-2-riel@surriel.com
---
 arch/x86/mm/tlb.c | 41 +++++++++++++++++++----------------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ffc25b348041..dbcb5c968ff9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1000,6 +1000,15 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
 #endif
 
+	/*
+	 * If the number of flushes is so large that a full flush
+	 * would be faster, do a full flush.
+	 */
+	if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
+		start = 0;
+		end = TLB_FLUSH_ALL;
+	}
+
 	info->start = start;
 	info->end = end;
 	info->mm = mm;
@@ -1026,17 +1035,8 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				bool freed_tables)
 {
 	struct flush_tlb_info *info;
+	int cpu = get_cpu();
 	u64 new_tlb_gen;
-	int cpu;
-
-	cpu = get_cpu();
-
-	/* Should we flush just the requested range? */
-	if ((end == TLB_FLUSH_ALL) ||
-	    ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
-		start = 0;
-		end = TLB_FLUSH_ALL;
-	}
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
@@ -1089,22 +1089,19 @@ static void do_kernel_range_flush(void *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	/* Balance as user space task's flush, a bit conservative */
-	if (end == TLB_FLUSH_ALL ||
-	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
-	} else {
-		struct flush_tlb_info *info;
+	struct flush_tlb_info *info;
+
+	guard(preempt)();
 
-		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false,
-					  TLB_GENERATION_INVALID);
+	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
+				  TLB_GENERATION_INVALID);
 
+	if (info->end == TLB_FLUSH_ALL)
+		on_each_cpu(do_flush_tlb_all, NULL, 1);
+	else
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
-		put_flush_tlb_info();
-		preempt_enable();
-	}
+	put_flush_tlb_info();
 }
 
 /*
-- 
2.43.0
From: Borislav Petkov
Subject: [PATCH v15 02/11] x86/mm: Add INVLPGB feature and Kconfig entry
Date: Tue, 4 Mar 2025 14:58:07 +0100
Message-ID: <20250304135816.12356-3-bp@kernel.org>

From: Rik van Riel

In addition, the CPU advertises the maximum number of pages that can be
shot down with one INVLPGB instruction in CPUID. Save that information
for later use.

[ bp: use cpu_has(), typos, massage. ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20250226030129.530345-3-riel@surriel.com
---
 arch/x86/Kconfig.cpu                     | 4 ++++
 arch/x86/include/asm/cpufeatures.h       | 1 +
 arch/x86/include/asm/disabled-features.h | 8 +++++++-
 arch/x86/include/asm/tlbflush.h          | 3 +++
 arch/x86/kernel/cpu/amd.c                | 6 ++++++
 5 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 2a7279d80460..25c55cc17c5e 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -401,6 +401,10 @@ menuconfig PROCESSOR_SELECT
 	  This lets you choose what x86 vendor support code your kernel
 	  will include.
=20 +config BROADCAST_TLB_FLUSH + def_bool y + depends on CPU_SUP_AMD && 64BIT + config CPU_SUP_INTEL default y bool "Support Intel processors" if PROCESSOR_SELECT diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpuf= eatures.h index 508c0dad116b..8770dc185fe9 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -338,6 +338,7 @@ #define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */ #define X86_FEATURE_IRPERF (13*32+ 1) /* "irperf" Instructions Retired Co= unt */ #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* "xsaveerptr" Always save/res= tore FP error pointers */ +#define X86_FEATURE_INVLPGB (13*32+ 3) /* INVLPGB and TLBSYNC instruction= s supported */ #define X86_FEATURE_RDPRU (13*32+ 4) /* "rdpru" Read processor register a= t user level */ #define X86_FEATURE_WBNOINVD (13*32+ 9) /* "wbnoinvd" WBNOINVD instructio= n */ #define X86_FEATURE_AMD_IBPB (13*32+12) /* Indirect Branch Prediction Bar= rier */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/as= m/disabled-features.h index c492bdc97b05..625a89259968 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -129,6 +129,12 @@ #define DISABLE_SEV_SNP (1 << (X86_FEATURE_SEV_SNP & 31)) #endif =20 +#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH +#define DISABLE_INVLPGB 0 +#else +#define DISABLE_INVLPGB (1 << (X86_FEATURE_INVLPGB & 31)) +#endif + /* * Make sure to add features to the correct mask */ @@ -146,7 +152,7 @@ #define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET| \ DISABLE_CALL_DEPTH_TRACKING|DISABLE_USER_SHSTK) #define DISABLED_MASK12 (DISABLE_FRED|DISABLE_LAM) -#define DISABLED_MASK13 0 +#define DISABLED_MASK13 (DISABLE_INVLPGB) #define DISABLED_MASK14 0 #define DISABLED_MASK15 0 #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UM= IP| \ diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflus= h.h index 3da645139748..855c13da2045 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -183,6 +183,9 @@ static inline void cr4_init_shadow(void) extern unsigned long mmu_cr4_features; extern u32 *trampoline_cr4_features; =20 +/* How many pages can be invalidated with one INVLPGB. 
*/
+extern u16 invlpgb_count_max;
+
 extern void initialize_tlbstate_and_flush(void);
 
 /*
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 54194f5995de..7a72ef47a983 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -29,6 +29,8 @@
 
 #include "cpu.h"
 
+u16 invlpgb_count_max __ro_after_init;
+
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
 	u32 gprs[8] = { 0 };
@@ -1139,6 +1141,10 @@ static void cpu_detect_tlb_amd(struct cpuinfo_x86 *c)
 		tlb_lli_2m[ENTRIES] = eax & mask;
 
 	tlb_lli_4m[ENTRIES] = tlb_lli_2m[ENTRIES] >> 1;
+
+	/* Max number of pages INVLPGB can invalidate in one shot */
+	if (cpu_has(c, X86_FEATURE_INVLPGB))
+		invlpgb_count_max = (cpuid_edx(0x80000008) & 0xffff) + 1;
 }
 
 static const struct cpu_dev amd_cpu_dev = {
-- 
2.43.0

From: Borislav Petkov
Subject: [PATCH v15 03/11] x86/mm: Add INVLPGB support code
Date: Tue, 4 Mar 2025 14:58:08 +0100
Message-ID: <20250304135816.12356-4-bp@kernel.org>

From: Rik van Riel

Add helper functions and definitions needed to use broadcast TLB
invalidation on AMD CPUs.

[ bp:
   - Cleanup commit message
   - Improve and expand comments
   - push the preemption guards inside the invlpgb* helpers
   - merge improvements from dhansen
   - add !CONFIG_BROADCAST_TLB_FLUSH function stubs because Clang
     can't do DCE properly yet and looks at the inline asm and
     complains about it getting a u64 argument on 32-bit code ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20250226030129.530345-4-riel@surriel.com
---
 arch/x86/include/asm/tlb.h | 126 +++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 77f52bc1578a..8ffcae7beb55 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,6 +6,9 @@ static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include
+#include
+#include
+#include
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
@@ -25,4 +28,127 @@ static inline void invlpg(unsigned long addr)
 	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
 }
 
+enum addr_stride {
+	PTE_STRIDE = 0,
+	PMD_STRIDE = 1
+};
+
+#ifdef CONFIG_BROADCAST_TLB_FLUSH
+/*
+ * INVLPGB does broadcast TLB invalidation across all the CPUs in the system.
+ *
+ * The INVLPGB instruction is weakly ordered, and a batch of invalidations can
+ * be done in a parallel fashion.
+ *
+ * The instruction takes the number of extra pages to invalidate, beyond
+ * the first page, while __invlpgb gets the more human readable number of
+ * pages to invalidate.
+ *
+ * The bits in rax[0:2] determine respectively which components of the address
+ * (VA, PCID, ASID) get compared when flushing. If none of the bits are set,
+ * *any* address in the specified range matches.
+ *
+ * TLBSYNC is used to ensure that pending INVLPGB invalidations initiated from
+ * this CPU have completed.
+ */
+static inline void __invlpgb(unsigned long asid, unsigned long pcid,
+			     unsigned long addr, u16 nr_pages,
+			     enum addr_stride stride, u8 flags)
+{
+	u32 edx = (pcid << 16) | asid;
+	u32 ecx = (stride << 31) | (nr_pages - 1);
+	u64 rax = addr | flags;
+
+	/* The low bits in rax are for flags. Verify addr is clean. */
+	VM_WARN_ON_ONCE(addr & ~PAGE_MASK);
+
+	/* INVLPGB; supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xfe" :: "a" (rax), "c" (ecx), "d" (edx));
+}
+
+static inline void __tlbsync(void)
+{
+	/*
+	 * TLBSYNC waits for INVLPGB instructions originating on the same CPU
+	 * to have completed. Print a warning if the task has been migrated,
+	 * and might not be waiting on all the INVLPGBs issued during this TLB
+	 * invalidation sequence.
+	 */
+	cant_migrate();
+
+	/* TLBSYNC: supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xff" ::: "memory");
+}
+#else
+/* Some compilers (I'm looking at you clang!) simply can't do DCE */
+static inline void __invlpgb(unsigned long asid, unsigned long pcid,
+			     unsigned long addr, u16 nr_pages,
+			     enum addr_stride s, u8 flags) { }
+static inline void __tlbsync(void) { }
+#endif
+
+/*
+ * INVLPGB can be targeted by virtual address, PCID, ASID, or any combination
+ * of the three.
For example: + * - FLAG_VA | FLAG_INCLUDE_GLOBAL: invalidate all TLB entries at the addr= ess + * - FLAG_PCID: invalidate all TLB entries matching the PCID + * + * The first is used to invalidate (kernel) mappings at a particular + * address across all processes. + * + * The latter invalidates all TLB entries matching a PCID. + */ +#define INVLPGB_FLAG_VA BIT(0) +#define INVLPGB_FLAG_PCID BIT(1) +#define INVLPGB_FLAG_ASID BIT(2) +#define INVLPGB_FLAG_INCLUDE_GLOBAL BIT(3) +#define INVLPGB_FLAG_FINAL_ONLY BIT(4) +#define INVLPGB_FLAG_INCLUDE_NESTED BIT(5) + +/* The implied mode when all bits are clear: */ +#define INVLPGB_MODE_ALL_NONGLOBALS 0UL + +static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid, + unsigned long addr, + u16 nr, bool stride) +{ + enum addr_stride str =3D stride ? PMD_STRIDE : PTE_STRIDE; + u8 flags =3D INVLPGB_FLAG_PCID | INVLPGB_FLAG_VA; + + __invlpgb(0, pcid, addr, nr, str, flags); +} + +/* Flush all mappings for a given PCID, not including globals. */ +static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid) +{ + __invlpgb(0, pcid, 0, 1, PTE_STRIDE, INVLPGB_FLAG_PCID); +} + +/* Flush all mappings, including globals, for all PCIDs. */ +static inline void invlpgb_flush_all(void) +{ + /* + * TLBSYNC at the end needs to make sure all flushes done on the + * current CPU have been executed system-wide. Therefore, make + * sure nothing gets migrated in-between but disable preemption + * as it is cheaper. + */ + guard(preempt)(); + __invlpgb(0, 0, 0, 1, PTE_STRIDE, INVLPGB_FLAG_INCLUDE_GLOBAL); + __tlbsync(); +} + +/* Flush addr, including globals, for all PCIDs. */ +static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr) +{ + __invlpgb(0, 0, addr, nr, PTE_STRIDE, INVLPGB_FLAG_INCLUDE_GLOBAL); +} + +/* Flush all mappings for all PCIDs except globals. 
*/
+static inline void invlpgb_flush_all_nonglobals(void)
+{
+	guard(preempt)();
+	__invlpgb(0, 0, 0, 1, PTE_STRIDE, INVLPGB_MODE_ALL_NONGLOBALS);
+	__tlbsync();
+}
 #endif /* _ASM_X86_TLB_H */
-- 
2.43.0

From: Borislav Petkov
Subject: [PATCH v15 04/11] x86/mm: Use INVLPGB for kernel TLB flushes
Date: Tue, 4 Mar 2025 14:58:09 +0100
Message-ID: <20250304135816.12356-5-bp@kernel.org>

From: Rik van Riel

Use broadcast TLB invalidation for kernel addresses when available.

Remove the need to send IPIs for kernel TLB flushes.

[ bp: Integrate dhansen's comments additions, merge the flush_tlb_all()
  change into this one too.
] Signed-off-by: Rik van Riel Signed-off-by: Borislav Petkov (AMD) Link: https://lore.kernel.org/r/20250226030129.530345-5-riel@surriel.com --- arch/x86/mm/tlb.c | 48 +++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 44 insertions(+), 4 deletions(-) diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index dbcb5c968ff9..8cd084bc3d98 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -1064,7 +1064,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigne= d long start, mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } =20 - static void do_flush_tlb_all(void *info) { count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED); @@ -1074,7 +1073,32 @@ static void do_flush_tlb_all(void *info) void flush_tlb_all(void) { count_vm_tlb_event(NR_TLB_REMOTE_FLUSH); - on_each_cpu(do_flush_tlb_all, NULL, 1); + + /* First try (faster) hardware-assisted TLB invalidation. */ + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) + invlpgb_flush_all(); + else + /* Fall back to the IPI-based invalidation. */ + on_each_cpu(do_flush_tlb_all, NULL, 1); +} + +/* Flush an arbitrarily large range of memory with INVLPGB. */ +static void invlpgb_kernel_range_flush(struct flush_tlb_info *info) +{ + unsigned long addr, nr; + + for (addr =3D info->start; addr < info->end; addr +=3D nr << PAGE_SHIFT) { + nr =3D (info->end - addr) >> PAGE_SHIFT; + + /* + * INVLPGB has a limit on the size of ranges it can + * flush. Break up large flushes. + */ + nr =3D clamp_val(nr, 1, invlpgb_count_max); + + invlpgb_flush_addr_nosync(addr, nr); + } + __tlbsync(); } =20 static void do_kernel_range_flush(void *info) @@ -1087,6 +1111,22 @@ static void do_kernel_range_flush(void *info) flush_tlb_one_kernel(addr); } =20 +static void kernel_tlb_flush_all(struct flush_tlb_info *info) +{ + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) + invlpgb_flush_all(); + else + on_each_cpu(do_flush_tlb_all, NULL, 1); +} + +static void kernel_tlb_flush_range(struct flush_tlb_info *info) +{ + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) + invlpgb_kernel_range_flush(info); + else + on_each_cpu(do_kernel_range_flush, info, 1); +} + void flush_tlb_kernel_range(unsigned long start, unsigned long end) { struct flush_tlb_info *info; @@ -1097,9 +1137,9 @@ void flush_tlb_kernel_range(unsigned long start, unsi= gned long end) TLB_GENERATION_INVALID); =20 if (info->end =3D=3D TLB_FLUSH_ALL) - on_each_cpu(do_flush_tlb_all, NULL, 1); + kernel_tlb_flush_all(info); else - on_each_cpu(do_kernel_range_flush, info, 1); + kernel_tlb_flush_range(info); =20 put_flush_tlb_info(); } --=20 2.43.0 From nobody Sun Dec 14 13:56:10 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2FF12204C3F for ; Tue, 4 Mar 2025 13:58:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741096720; cv=none; b=B+Xg5jJXkqLKNq2cgKBNwqglkelDQsd2jFhtbCmeua0dHa+sS1bKfQJsUl6zQutImuhMnQqxcx6G9A42FlxoEpjLkXEgCjEbSeRub4885KwIrEIyHCJ8vM5RJqvYyRqes5xdwJa4ztm1ynEJjkgl+r1F9qrQWxXvjkOQlnil+X0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741096720; c=relaxed/simple; bh=txoN3V9rWwXFEyWDRXIoMCIFbMoUCiBnnRfSGjS9m3M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: Borislav Petkov
Subject: [PATCH v15 05/11] x86/mm: Use broadcast TLB flushing in page reclaim
Date: Tue, 4 Mar 2025 14:58:10 +0100
Message-ID: <20250304135816.12356-6-bp@kernel.org>

From: Rik van Riel

Page reclaim tracks only the CPU(s) where the TLB needs to be flushed, rather
than all the individual mappings that may be getting invalidated.

Use broadcast TLB flushing when that is available.

[ bp: Massage commit message. ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20250226030129.530345-7-riel@surriel.com
---
 arch/x86/mm/tlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 8cd084bc3d98..76b4a88afb56 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1320,7 +1320,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
*/
-	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+		invlpgb_flush_all_nonglobals();
+	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
 		flush_tlb_multi(&batch->cpumask, info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
-- 
2.43.0

From: Borislav Petkov
Subject: [PATCH v15 06/11] x86/mm: Add global ASID allocation helper functions
Date: Tue, 4 Mar 2025 14:58:11 +0100
Message-ID: <20250304135816.12356-7-bp@kernel.org>

From: Rik van Riel

Add functions to manage global ASID space. Multithreaded processes that are
simultaneously active on 4 or more CPUs can get a global ASID, resulting in
the same PCID being used for that process on every CPU.
This in turn will allow the kernel to use hardware-assisted TLB flushing through AMD INVLPGB or Intel RAR for these processes. [ bp: - Extend use_global_asid() comment - s/X86_BROADCAST_TLB_FLUSH/BROADCAST_TLB_FLUSH/g - other touchups ] Signed-off-by: Rik van Riel Signed-off-by: Borislav Petkov (AMD) Link: https://lore.kernel.org/r/20250226030129.530345-8-riel@surriel.com --- arch/x86/include/asm/mmu.h | 12 +++ arch/x86/include/asm/mmu_context.h | 2 + arch/x86/include/asm/tlbflush.h | 37 +++++++ arch/x86/mm/tlb.c | 154 ++++++++++++++++++++++++++++- 4 files changed, 202 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h index 3b496cdcb74b..8b8055a8eb9e 100644 --- a/arch/x86/include/asm/mmu.h +++ b/arch/x86/include/asm/mmu.h @@ -69,6 +69,18 @@ typedef struct { u16 pkey_allocation_map; s16 execute_only_pkey; #endif + +#ifdef CONFIG_BROADCAST_TLB_FLUSH + /* + * The global ASID will be a non-zero value when the process has + * the same ASID across all CPUs, allowing it to make use of + * hardware-assisted remote TLB invalidation like AMD INVLPGB. + */ + u16 global_asid; + + /* The process is transitioning to a new global ASID number. */ + bool asid_transition; +#endif } mm_context_t; =20 #define INIT_MM_CONTEXT(mm) \ diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_= context.h index 795fdd53bd0a..a2c70e495b1b 100644 --- a/arch/x86/include/asm/mmu_context.h +++ b/arch/x86/include/asm/mmu_context.h @@ -139,6 +139,8 @@ static inline void mm_reset_untag_mask(struct mm_struct= *mm) #define enter_lazy_tlb enter_lazy_tlb extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk); =20 +extern void mm_free_global_asid(struct mm_struct *mm); + /* * Init a new mm. Used on mm copies, like at fork() * and on mm's that are brand-new, like at execve(). diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflus= h.h index 855c13da2045..f7b374bcdc0b 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -6,6 +6,7 @@ #include #include =20 +#include #include #include #include @@ -234,6 +235,42 @@ void flush_tlb_one_kernel(unsigned long addr); void flush_tlb_multi(const struct cpumask *cpumask, const struct flush_tlb_info *info); =20 +static inline bool is_dyn_asid(u16 asid) +{ + return asid < TLB_NR_DYN_ASIDS; +} + +#ifdef CONFIG_BROADCAST_TLB_FLUSH +static inline u16 mm_global_asid(struct mm_struct *mm) +{ + u16 asid; + + if (!cpu_feature_enabled(X86_FEATURE_INVLPGB)) + return 0; + + asid =3D smp_load_acquire(&mm->context.global_asid); + + /* mm->context.global_asid is either 0, or a global ASID */ + VM_WARN_ON_ONCE(asid && is_dyn_asid(asid)); + + return asid; +} + +static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) +{ + /* + * Notably flush_tlb_mm_range() -> broadcast_tlb_flush() -> + * finish_asid_transition() needs to observe asid_transition =3D true + * once it observes global_asid. 
+ */ + mm->context.asid_transition =3D true; + smp_store_release(&mm->context.global_asid, asid); +} +#else +static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; } +static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) {= } +#endif /* CONFIG_BROADCAST_TLB_FLUSH */ + #ifdef CONFIG_PARAVIRT #include #endif diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 76b4a88afb56..6c24d967b77d 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -74,13 +74,15 @@ * use different names for each of them: * * ASID - [0, TLB_NR_DYN_ASIDS-1] - * the canonical identifier for an mm + * the canonical identifier for an mm, dynamically allocated on ea= ch CPU + * [TLB_NR_DYN_ASIDS, MAX_ASID_AVAILABLE-1] + * the canonical, global identifier for an mm, identical across al= l CPUs * - * kPCID - [1, TLB_NR_DYN_ASIDS] + * kPCID - [1, MAX_ASID_AVAILABLE] * the value we write into the PCID part of CR3; corresponds to the * ASID+1, because PCID 0 is special. * - * uPCID - [2048 + 1, 2048 + TLB_NR_DYN_ASIDS] + * uPCID - [2048 + 1, 2048 + MAX_ASID_AVAILABLE] * for KPTI each mm has two address spaces and thus needs two * PCID values, but we can still do with a single ASID denomination * for each mm. Corresponds to kPCID + 2048. @@ -251,6 +253,152 @@ static void choose_new_asid(struct mm_struct *next, u= 64 next_tlb_gen, *need_flush =3D true; } =20 +/* + * Global ASIDs are allocated for multi-threaded processes that are + * active on multiple CPUs simultaneously, giving each of those + * processes the same PCID on every CPU, for use with hardware-assisted + * TLB shootdown on remote CPUs, like AMD INVLPGB or Intel RAR. + * + * These global ASIDs are held for the lifetime of the process. + */ +static DEFINE_RAW_SPINLOCK(global_asid_lock); +static u16 last_global_asid =3D MAX_ASID_AVAILABLE; +static DECLARE_BITMAP(global_asid_used, MAX_ASID_AVAILABLE); +static DECLARE_BITMAP(global_asid_freed, MAX_ASID_AVAILABLE); +static int global_asid_available =3D MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS= - 1; + +/* + * When the search for a free ASID in the global ASID space reaches + * MAX_ASID_AVAILABLE, a global TLB flush guarantees that previously + * freed global ASIDs are safe to re-use. + * + * This way the global flush only needs to happen at ASID rollover + * time, and not at ASID allocation time. + */ +static void reset_global_asid_space(void) +{ + lockdep_assert_held(&global_asid_lock); + + invlpgb_flush_all_nonglobals(); + + /* + * The TLB flush above makes it safe to re-use the previously + * freed global ASIDs. + */ + bitmap_andnot(global_asid_used, global_asid_used, + global_asid_freed, MAX_ASID_AVAILABLE); + bitmap_clear(global_asid_freed, 0, MAX_ASID_AVAILABLE); + + /* Restart the search from the start of global ASID space. */ + last_global_asid =3D TLB_NR_DYN_ASIDS; +} + +static u16 allocate_global_asid(void) +{ + u16 asid; + + lockdep_assert_held(&global_asid_lock); + + /* The previous allocation hit the edge of available address space */ + if (last_global_asid >=3D MAX_ASID_AVAILABLE - 1) + reset_global_asid_space(); + + asid =3D find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, last_gl= obal_asid); + + if (asid >=3D MAX_ASID_AVAILABLE && !global_asid_available) { + /* This should never happen. */ + VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n", + global_asid_available); + return 0; + } + + /* Claim this global ASID. 
*/ + __set_bit(asid, global_asid_used); + last_global_asid =3D asid; + global_asid_available--; + return asid; +} + +/* + * Check whether a process is currently active on more than @threshold CPU= s. + * This is a cheap estimation on whether or not it may make sense to assign + * a global ASID to this process, and use broadcast TLB invalidation. + */ +static bool mm_active_cpus_exceeds(struct mm_struct *mm, int threshold) +{ + int count =3D 0; + int cpu; + + /* This quick check should eliminate most single threaded programs. */ + if (cpumask_weight(mm_cpumask(mm)) <=3D threshold) + return false; + + /* Slower check to make sure. */ + for_each_cpu(cpu, mm_cpumask(mm)) { + /* Skip the CPUs that aren't really running this process. */ + if (per_cpu(cpu_tlbstate.loaded_mm, cpu) !=3D mm) + continue; + + if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu)) + continue; + + if (++count > threshold) + return true; + } + return false; +} + +/* + * Assign a global ASID to the current process, protecting against + * races between multiple threads in the process. + */ +static void use_global_asid(struct mm_struct *mm) +{ + u16 asid; + + guard(raw_spinlock_irqsave)(&global_asid_lock); + + /* This process is already using broadcast TLB invalidation. */ + if (mm_global_asid(mm)) + return; + + /* + * The last global ASID was consumed while waiting for the lock. + * + * If this fires, a more aggressive ASID reuse scheme might be + * needed. + */ + if (!global_asid_available) { + VM_WARN_ONCE(1, "Ran out of global ASIDs\n"); + return; + } + + asid =3D allocate_global_asid(); + if (!asid) + return; + + mm_assign_global_asid(mm, asid); +} + +void mm_free_global_asid(struct mm_struct *mm) +{ + if (!cpu_feature_enabled(X86_FEATURE_INVLPGB)) + return; + + if (!mm_global_asid(mm)) + return; + + guard(raw_spinlock_irqsave)(&global_asid_lock); + + /* The global ASID can be re-used only after flush at wrap-around. */ +#ifdef CONFIG_BROADCAST_TLB_FLUSH + __set_bit(mm->context.global_asid, global_asid_freed); + + mm->context.global_asid =3D 0; + global_asid_available++; +#endif +} + /* * Given an ASID, flush the corresponding user ASID. We can delay this * until the next time we switch to it. 
-- 
2.43.0

From: Borislav Petkov
Subject: [PATCH v15 07/11] x86/mm: Handle global ASID context switch and TLB flush
Date: Tue, 4 Mar 2025 14:58:12 +0100
Message-ID: <20250304135816.12356-8-bp@kernel.org>

From: Rik van Riel

Do context switch and TLB flush support for processes that use a global ASID
and PCID across all CPUs.

At both context switch time and TLB flush time, it needs to be checked whether
a task is switching to a global ASID, and, if so, reload the TLB with the new
ASID as appropriate.

In both code paths, the TLB flush is avoided if a global ASID is used, because
the global ASIDs are always kept up to date across CPUs, even when the process
is not running on a CPU.
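Schematically, the fast path this adds boils down to the following
(a self-contained model, not the kernel code; see the choose_new_asid() and
switch_mm_irqs_off() hunks below for the real logic):

/* Stand-alone model of the ASID choice, not kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mm_model {
	uint16_t global_asid;		/* 0: no global ASID assigned */
};

/* Pick the ASID to load on a context switch; say whether a flush is needed. */
static uint16_t choose_asid(const struct mm_model *mm, bool *need_flush)
{
	if (mm->global_asid) {
		/* INVLPGB keeps global ASIDs up to date: no local flush. */
		*need_flush = false;
		return mm->global_asid;
	}

	/* Dynamic per-CPU ASID path; assume the worst case here. */
	*need_flush = true;
	return 1;			/* placeholder dynamic ASID */
}

int main(void)
{
	struct mm_model st = { .global_asid = 0 }, mt = { .global_asid = 100 };
	bool flush;

	choose_asid(&st, &flush);
	printf("dynamic ASID: flush needed = %d\n", flush);	/* 1 */
	choose_asid(&mt, &flush);
	printf("global ASID:  flush needed = %d\n", flush);	/* 0 */
	return 0;
}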
[ bp: - Massage - :%s/\/cpu_feature_enabled/cgi ] Signed-off-by: Rik van Riel Signed-off-by: Borislav Petkov (AMD) Link: https://lore.kernel.org/r/20250226030129.530345-9-riel@surriel.com --- arch/x86/include/asm/tlbflush.h | 14 ++++++ arch/x86/mm/tlb.c | 77 ++++++++++++++++++++++++++++++--- 2 files changed, 84 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflus= h.h index f7b374bcdc0b..1f61a39a8776 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -240,6 +240,11 @@ static inline bool is_dyn_asid(u16 asid) return asid < TLB_NR_DYN_ASIDS; } =20 +static inline bool is_global_asid(u16 asid) +{ + return !is_dyn_asid(asid); +} + #ifdef CONFIG_BROADCAST_TLB_FLUSH static inline u16 mm_global_asid(struct mm_struct *mm) { @@ -266,9 +271,18 @@ static inline void mm_assign_global_asid(struct mm_str= uct *mm, u16 asid) mm->context.asid_transition =3D true; smp_store_release(&mm->context.global_asid, asid); } + +static inline bool mm_in_asid_transition(struct mm_struct *mm) +{ + if (!cpu_feature_enabled(X86_FEATURE_INVLPGB)) + return false; + + return mm && READ_ONCE(mm->context.asid_transition); +} #else static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; } static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) {= } +static inline bool mm_in_asid_transition(struct mm_struct *mm) { return fa= lse; } #endif /* CONFIG_BROADCAST_TLB_FLUSH */ =20 #ifdef CONFIG_PARAVIRT diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 6c24d967b77d..b5681e6f2333 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -227,6 +227,20 @@ static void choose_new_asid(struct mm_struct *next, u6= 4 next_tlb_gen, return; } =20 + /* + * TLB consistency for global ASIDs is maintained with hardware assisted + * remote TLB flushing. Global ASIDs are always up to date. + */ + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) { + u16 global_asid =3D mm_global_asid(next); + + if (global_asid) { + *new_asid =3D global_asid; + *need_flush =3D false; + return; + } + } + if (this_cpu_read(cpu_tlbstate.invalidate_other)) clear_asid_other(); =20 @@ -399,6 +413,23 @@ void mm_free_global_asid(struct mm_struct *mm) #endif } =20 +/* + * Is the mm transitioning from a CPU-local ASID to a global ASID? + */ +static bool mm_needs_global_asid(struct mm_struct *mm, u16 asid) +{ + u16 global_asid =3D mm_global_asid(mm); + + if (!cpu_feature_enabled(X86_FEATURE_INVLPGB)) + return false; + + /* Process is transitioning to a global ASID */ + if (global_asid && asid !=3D global_asid) + return true; + + return false; +} + /* * Given an ASID, flush the corresponding user ASID. We can delay this * until the next time we switch to it. 
@@ -704,7 +735,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struc= t mm_struct *next, */ if (prev =3D=3D next) { /* Not actually switching mm's */ - VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=3D + VM_WARN_ON(is_dyn_asid(prev_asid) && + this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=3D next->context.ctx_id); =20 /* @@ -721,6 +753,20 @@ void switch_mm_irqs_off(struct mm_struct *unused, stru= ct mm_struct *next, !cpumask_test_cpu(cpu, mm_cpumask(next)))) cpumask_set_cpu(cpu, mm_cpumask(next)); =20 + /* Check if the current mm is transitioning to a global ASID */ + if (mm_needs_global_asid(next, prev_asid)) { + next_tlb_gen =3D atomic64_read(&next->context.tlb_gen); + choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush); + goto reload_tlb; + } + + /* + * Broadcast TLB invalidation keeps this ASID up to date + * all the time. + */ + if (is_global_asid(prev_asid)) + return; + /* * If the CPU is not in lazy TLB mode, we are just switching * from one thread in a process to another thread in the same @@ -754,6 +800,13 @@ void switch_mm_irqs_off(struct mm_struct *unused, stru= ct mm_struct *next, */ cond_mitigation(tsk); =20 + /* + * Let nmi_uaccess_okay() and finish_asid_transition() + * know that CR3 is changing. + */ + this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING); + barrier(); + /* * Leave this CPU in prev's mm_cpumask. Atomic writes to * mm_cpumask can be expensive under contention. The CPU @@ -768,14 +821,12 @@ void switch_mm_irqs_off(struct mm_struct *unused, str= uct mm_struct *next, next_tlb_gen =3D atomic64_read(&next->context.tlb_gen); =20 choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush); - - /* Let nmi_uaccess_okay() know that we're changing CR3. */ - this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING); - barrier(); } =20 +reload_tlb: new_lam =3D mm_lam_cr3_mask(next); if (need_flush) { + VM_WARN_ON_ONCE(is_global_asid(new_asid)); this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id); this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen); load_new_mm_cr3(next->pgd, new_asid, new_lam, true); @@ -894,7 +945,7 @@ static void flush_tlb_func(void *info) const struct flush_tlb_info *f =3D info; struct mm_struct *loaded_mm =3D this_cpu_read(cpu_tlbstate.loaded_mm); u32 loaded_mm_asid =3D this_cpu_read(cpu_tlbstate.loaded_mm_asid); - u64 local_tlb_gen =3D this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb= _gen); + u64 local_tlb_gen; bool local =3D smp_processor_id() =3D=3D f->initiating_cpu; unsigned long nr_invalidate =3D 0; u64 mm_tlb_gen; @@ -917,6 +968,16 @@ static void flush_tlb_func(void *info) if (unlikely(loaded_mm =3D=3D &init_mm)) return; =20 + /* Reload the ASID if transitioning into or out of a global ASID */ + if (mm_needs_global_asid(loaded_mm, loaded_mm_asid)) { + switch_mm_irqs_off(NULL, loaded_mm, NULL); + loaded_mm_asid =3D this_cpu_read(cpu_tlbstate.loaded_mm_asid); + } + + /* Broadcast ASIDs are always kept up to date with INVLPGB. 
*/
+	if (is_global_asid(loaded_mm_asid))
+		return;
+
 	VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].ctx_id) !=
 		   loaded_mm->context.ctx_id);
 
@@ -934,6 +995,8 @@ static void flush_tlb_func(void *info)
 		return;
 	}
 
+	local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+
 	if (unlikely(f->new_tlb_gen != TLB_GENERATION_INVALID &&
 		     f->new_tlb_gen <= local_tlb_gen)) {
 		/*
@@ -1101,7 +1164,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
 	 * up on the new contents of what used to be page tables, while
 	 * doing a speculative memory access.
 	 */
-	if (info->freed_tables)
+	if (info->freed_tables || mm_in_asid_transition(info->mm))
 		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
 	else
 		on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
-- 
2.43.0

From: Borislav Petkov
Subject: [PATCH v15 08/11] x86/mm: Add global ASID process exit helpers
Date: Tue, 4 Mar 2025 14:58:13 +0100
Message-ID: <20250304135816.12356-9-bp@kernel.org>

From: Rik van Riel

A global ASID is allocated for the lifetime of a process. Free the global ASID
at process exit time.

[ bp: Massage, create helpers, hide details inside them. ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20250226030129.530345-10-riel@surriel.com
---
 arch/x86/include/asm/mmu_context.h | 8 +++++++-
 arch/x86/include/asm/tlbflush.h    | 9 +++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index a2c70e495b1b..2398058b6e83 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -2,7 +2,6 @@
 #ifndef _ASM_X86_MMU_CONTEXT_H
 #define _ASM_X86_MMU_CONTEXT_H
 
-#include
 #include
 #include
 #include
@@ -13,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 extern atomic64_t last_mm_ctx_id;
 
@@ -139,6 +139,9 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 #define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
 
+#define mm_init_global_asid mm_init_global_asid
+extern void mm_init_global_asid(struct mm_struct *mm);
+
 extern void mm_free_global_asid(struct mm_struct *mm);
 
 /*
@@ -163,6 +166,8 @@ static inline int init_new_context(struct task_struct *tsk,
 		mm->context.execute_only_pkey = -1;
 	}
 #endif
+
+	mm_init_global_asid(mm);
 	mm_reset_untag_mask(mm);
 	init_new_context_ldt(mm);
 	return 0;
@@ -172,6 +177,7 @@ static inline void destroy_context(struct mm_struct *mm)
 {
 	destroy_context_ldt(mm);
+	mm_free_global_asid(mm);
 }
 
 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 1f61a39a8776..e6c3be06dd21 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -261,6 +261,14 @@ static inline u16 mm_global_asid(struct mm_struct *mm)
 	return asid;
 }
 
+static inline void mm_init_global_asid(struct mm_struct *mm)
+{
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+		mm->context.global_asid = 0;
+		mm->context.asid_transition = false;
+	}
+}
+
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
 {
 	/*
@@ -281,6 +289,7 @@ static inline bool mm_in_asid_transition(struct mm_struct *mm)
 }
 #else
 static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; }
+static inline void mm_init_global_asid(struct mm_struct *mm) { }
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) { }
 static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
 #endif /* CONFIG_BROADCAST_TLB_FLUSH */
-- 
2.43.0

From: Borislav Petkov
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Borislav Petkov
Subject: [PATCH v15 09/11] x86/mm: Enable broadcast TLB invalidation for multi-threaded processes
Date: Tue, 4 Mar 2025 14:58:14 +0100
Message-ID: <20250304135816.12356-10-bp@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>
References: <20250304135816.12356-1-bp@kernel.org>

From: Rik van Riel

There is not enough room in the 12-bit ASID address space to hand out
broadcast ASIDs to every process. Only hand out broadcast ASIDs to processes
when they are observed to be simultaneously running on 4 or more CPUs.

This also allows single-threaded processes to continue using the cheaper,
local TLB invalidation instructions like INVLPG.

Due to the structure of flush_tlb_mm_range(), the INVLPGB flushing is done in
a generically named broadcast_tlb_flush() function which can later also be
used for Intel RAR.
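In outline, the resulting flush path distinguishes three cases. The condensed
sketch below only mirrors the flush_tlb_mm_range() hunk at the end of this
patch (the wrapper name flush_dispatch_sketch() is made up for illustration,
and the IRQ-disable dance and flush_tlb_info bookkeeping of the real code are
omitted); it is not the authoritative implementation:

	/*
	 * Condensed sketch of the dispatch described above; see the
	 * tlb.c hunk in this patch for the real flush_tlb_mm_range().
	 */
	static void flush_dispatch_sketch(struct mm_struct *mm,
					  struct flush_tlb_info *info, int cpu)
	{
		if (mm_global_asid(mm)) {
			/* mm already owns a global ASID: broadcast INVLPGB flush. */
			broadcast_tlb_flush(info);
		} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
			/* Several CPUs run this mm: IPI-based flush as before ... */
			flush_tlb_multi(mm_cpumask(mm), info);
			/* ... and maybe promote it to a global ASID for next time. */
			consider_global_asid(mm);
		} else {
			/* Only the local CPU uses this mm: flush locally. */
			flush_tlb_func(info);
		}
	}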
Combined with the removal of unnecessary lru_add_drain() calls (see
https://lore.kernel.org/r/20241219153253.3da9e8aa@fangorn) this results in
a nice performance boost for the will-it-scale tlb_flush2_threads test on an
AMD Milan system with 36 cores:

  - vanilla kernel:           527k loops/second
  - lru_add_drain removal:    731k loops/second
  - only INVLPGB:             527k loops/second
  - lru_add_drain + INVLPGB: 1157k loops/second

Profiling with only the INVLPGB changes showed that while TLB invalidation
went down from 40% of the total CPU time to only around 4% of CPU time, the
contention simply moved to the LRU lock. Fixing both at the same time about
doubles the number of iterations per second from this case.

Comparing will-it-scale tlb_flush2_threads with several different numbers of
threads on a 72 CPU AMD Milan shows similar results. The number represents the
total number of loops per second across all the threads:

  threads     tip    INVLPGB
        1    315k       304k
        2    423k       424k
        4    644k      1032k
        8    652k      1267k
       16    737k      1368k
       32    759k      1199k
       64    636k      1094k
       72    609k       993k

1 and 2 thread performance is similar with and without INVLPGB, because
INVLPGB is only used on processes using 4 or more CPUs simultaneously.
The number is the median across 5 runs.

Some numbers closer to real world performance can be found at Phoronix, thanks
to Michael:

  https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits

[ bp: - Massage
      - :%s/\/cpu_feature_enabled/cgi
      - :%s/\/mm_clear_asid_transition/cgi
      - Fold in a 0day bot fix:
        https://lore.kernel.org/oe-kbuild-all/202503040000.GtiWUsBm-lkp@intel.com ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Reviewed-by: Nadav Amit
Link: https://lore.kernel.org/r/20250226030129.530345-11-riel@surriel.com
---
 arch/x86/include/asm/tlbflush.h |   6 ++
 arch/x86/mm/tlb.c               | 104 +++++++++++++++++++++++++++++++-
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e6c3be06dd21..7cad283d502d 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -280,6 +280,11 @@ static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
 	smp_store_release(&mm->context.global_asid, asid);
 }
 
+static inline void mm_clear_asid_transition(struct mm_struct *mm)
+{
+	WRITE_ONCE(mm->context.asid_transition, false);
+}
+
 static inline bool mm_in_asid_transition(struct mm_struct *mm)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
@@ -291,6 +296,7 @@ static inline bool mm_in_asid_transition(struct mm_struct *mm)
 static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; }
 static inline void mm_init_global_asid(struct mm_struct *mm) { }
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) { }
+static inline void mm_clear_asid_transition(struct mm_struct *mm) { }
 static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
 #endif /* CONFIG_BROADCAST_TLB_FLUSH */
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b5681e6f2333..0efd99053c09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -430,6 +430,105 @@ static bool mm_needs_global_asid(struct mm_struct *mm, u16 asid)
 	return false;
 }
 
+/*
+ * x86 has 4k ASIDs (2k when compiled with KPTI), but the largest x86
+ * systems have over 8k CPUs. Because of this potential ASID shortage,
+ * global ASIDs are handed out to processes that have frequent TLB
+ * flushes and are active on 4 or more CPUs simultaneously.
+ */ +static void consider_global_asid(struct mm_struct *mm) +{ + if (!cpu_feature_enabled(X86_FEATURE_INVLPGB)) + return; + + /* Check every once in a while. */ + if ((current->pid & 0x1f) !=3D (jiffies & 0x1f)) + return; + + /* + * Assign a global ASID if the process is active on + * 4 or more CPUs simultaneously. + */ + if (mm_active_cpus_exceeds(mm, 3)) + use_global_asid(mm); +} + +static void finish_asid_transition(struct flush_tlb_info *info) +{ + struct mm_struct *mm =3D info->mm; + int bc_asid =3D mm_global_asid(mm); + int cpu; + + if (!mm_in_asid_transition(mm)) + return; + + for_each_cpu(cpu, mm_cpumask(mm)) { + /* + * The remote CPU is context switching. Wait for that to + * finish, to catch the unlikely case of it switching to + * the target mm with an out of date ASID. + */ + while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) =3D=3D LOADED_MM_= SWITCHING) + cpu_relax(); + + if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) !=3D mm) + continue; + + /* + * If at least one CPU is not using the global ASID yet, + * send a TLB flush IPI. The IPI should cause stragglers + * to transition soon. + * + * This can race with the CPU switching to another task; + * that results in a (harmless) extra IPI. + */ + if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) !=3D bc_asid) { + flush_tlb_multi(mm_cpumask(info->mm), info); + return; + } + } + + /* All the CPUs running this process are using the global ASID. */ + mm_clear_asid_transition(mm); +} + +static void broadcast_tlb_flush(struct flush_tlb_info *info) +{ + bool pmd =3D info->stride_shift =3D=3D PMD_SHIFT; + unsigned long asid =3D mm_global_asid(info->mm); + unsigned long addr =3D info->start; + + /* + * TLB flushes with INVLPGB are kicked off asynchronously. + * The inc_mm_tlb_gen() guarantees page table updates are done + * before these TLB flushes happen. + */ + if (info->end =3D=3D TLB_FLUSH_ALL) { + invlpgb_flush_single_pcid_nosync(kern_pcid(asid)); + /* Do any CPUs supporting INVLPGB need PTI? */ + if (cpu_feature_enabled(X86_FEATURE_PTI)) + invlpgb_flush_single_pcid_nosync(user_pcid(asid)); + } else do { + unsigned long nr =3D 1; + + if (info->stride_shift <=3D PMD_SHIFT) { + nr =3D (info->end - addr) >> info->stride_shift; + nr =3D clamp_val(nr, 1, invlpgb_count_max); + } + + invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd); + if (cpu_feature_enabled(X86_FEATURE_PTI)) + invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd); + + addr +=3D nr << info->stride_shift; + } while (addr < info->end); + + finish_asid_transition(info); + + /* Wait for the INVLPGBs kicked off above to finish. */ + __tlbsync(); +} + /* * Given an ASID, flush the corresponding user ASID. We can delay this * until the next time we switch to it. @@ -1260,9 +1359,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsign= ed long start, * a local TLB flush is needed. Optimize this use-case by calling * flush_tlb_func_local() directly in this case. 
 	 */
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+	if (mm_global_asid(mm)) {
+		broadcast_tlb_flush(info);
+	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
 		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
+		consider_global_asid(mm);
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
--
2.43.0

From nobody Sun Dec 14 13:56:10 2025
From: Borislav Petkov
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Borislav Petkov
Subject: [PATCH v15 10/11] x86/mm: Do targeted broadcast flushing from tlbbatch code
Date: Tue, 4 Mar 2025 14:58:15 +0100
Message-ID: <20250304135816.12356-11-bp@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>
References: <20250304135816.12356-1-bp@kernel.org>

From: Rik van Riel

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush(), queue up
asynchronous, targeted flushes from
arch_tlbbatch_add_pending(). This also allows to avoid adding the CPUs of processes using broadcast flushing to the batch->cpumask, and will hopefully further reduce TLB flush= ing from the reclaim and compaction paths. [ bp: - Massage - :%s/\/cpu_feature_enabled/cgi - merge in improvements from dhansen ] Signed-off-by: Rik van Riel Signed-off-by: Borislav Petkov (AMD) Link: https://lore.kernel.org/r/20250226030129.530345-12-riel@surriel.com --- arch/x86/include/asm/tlb.h | 10 ++-- arch/x86/include/asm/tlbflush.h | 27 ++++++---- arch/x86/mm/tlb.c | 88 +++++++++++++++++++++++++++++++-- 3 files changed, 108 insertions(+), 17 deletions(-) diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 8ffcae7beb55..e8561a846754 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -108,9 +108,9 @@ static inline void __tlbsync(void) { } /* The implied mode when all bits are clear: */ #define INVLPGB_MODE_ALL_NONGLOBALS 0UL =20 -static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid, - unsigned long addr, - u16 nr, bool stride) +static inline void __invlpgb_flush_user_nr_nosync(unsigned long pcid, + unsigned long addr, + u16 nr, bool stride) { enum addr_stride str =3D stride ? PMD_STRIDE : PTE_STRIDE; u8 flags =3D INVLPGB_FLAG_PCID | INVLPGB_FLAG_VA; @@ -119,7 +119,7 @@ static inline void invlpgb_flush_user_nr_nosync(unsigne= d long pcid, } =20 /* Flush all mappings for a given PCID, not including globals. */ -static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid) +static inline void __invlpgb_flush_single_pcid_nosync(unsigned long pcid) { __invlpgb(0, pcid, 0, 1, PTE_STRIDE, INVLPGB_FLAG_PCID); } @@ -139,7 +139,7 @@ static inline void invlpgb_flush_all(void) } =20 /* Flush addr, including globals, for all PCIDs. */ -static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr) +static inline void __invlpgb_flush_addr_nosync(unsigned long addr, u16 nr) { __invlpgb(0, 0, addr, nr, PTE_STRIDE, INVLPGB_FLAG_INCLUDE_GLOBAL); } diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflus= h.h index 7cad283d502d..214d912ac148 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -105,6 +105,9 @@ struct tlb_state { * need to be invalidated. 
*/ bool invalidate_other; +#ifdef CONFIG_BROADCAST_TLB_FLUSH + bool need_tlbsync; +#endif =20 #ifdef CONFIG_ADDRESS_MASKING /* @@ -292,12 +295,24 @@ static inline bool mm_in_asid_transition(struct mm_st= ruct *mm) =20 return mm && READ_ONCE(mm->context.asid_transition); } + +static inline bool cpu_need_tlbsync(void) +{ + return this_cpu_read(cpu_tlbstate.need_tlbsync); +} + +static inline void cpu_set_tlbsync(bool state) +{ + this_cpu_write(cpu_tlbstate.need_tlbsync, state); +} #else static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; } static inline void mm_init_global_asid(struct mm_struct *mm) { } static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) {= } static inline void mm_clear_asid_transition(struct mm_struct *mm) { } static inline bool mm_in_asid_transition(struct mm_struct *mm) { return fa= lse; } +static inline bool cpu_need_tlbsync(void) { return false; } +static inline void cpu_set_tlbsync(bool state) { } #endif /* CONFIG_BROADCAST_TLB_FLUSH */ =20 #ifdef CONFIG_PARAVIRT @@ -347,21 +362,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm) return atomic64_inc_return(&mm->context.tlb_gen); } =20 -static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_ba= tch *batch, - struct mm_struct *mm, - unsigned long uaddr) -{ - inc_mm_tlb_gen(mm); - cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm)); - mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); -} - static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm) { flush_tlb_mm(mm); } =20 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch); +extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *ba= tch, + struct mm_struct *mm, + unsigned long uaddr); =20 static inline bool pte_flags_need_flush(unsigned long oldflags, unsigned long newflags, diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 0efd99053c09..61065975c139 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -492,6 +492,37 @@ static void finish_asid_transition(struct flush_tlb_in= fo *info) mm_clear_asid_transition(mm); } =20 +static inline void tlbsync(void) +{ + if (cpu_need_tlbsync()) { + __tlbsync(); + cpu_set_tlbsync(false); + } +} + +static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid, + unsigned long addr, + u16 nr, bool pmd_stride) +{ + __invlpgb_flush_user_nr_nosync(pcid, addr, nr, pmd_stride); + if (!cpu_need_tlbsync()) + cpu_set_tlbsync(true); +} + +static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid) +{ + __invlpgb_flush_single_pcid_nosync(pcid); + if (!cpu_need_tlbsync()) + cpu_set_tlbsync(true); +} + +static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr) +{ + __invlpgb_flush_addr_nosync(addr, nr); + if (!cpu_need_tlbsync()) + cpu_set_tlbsync(true); +} + static void broadcast_tlb_flush(struct flush_tlb_info *info) { bool pmd =3D info->stride_shift =3D=3D PMD_SHIFT; @@ -790,6 +821,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struc= t mm_struct *next, if (IS_ENABLED(CONFIG_PROVE_LOCKING)) WARN_ON_ONCE(!irqs_disabled()); =20 + tlbsync(); + /* * Verify that CR3 is what we think it is. 
This will catch * hypothetical buggy code that directly switches to swapper_pg_dir @@ -966,6 +999,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struc= t mm_struct *next, */ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) { + tlbsync(); + if (this_cpu_read(cpu_tlbstate.loaded_mm) =3D=3D &init_mm) return; =20 @@ -1633,9 +1668,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_b= atch *batch) * a local TLB flush is needed. Optimize this use-case by calling * flush_tlb_func_local() directly in this case. */ - if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) { - invlpgb_flush_all_nonglobals(); - } else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) { + if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) { flush_tlb_multi(&batch->cpumask, info); } else if (cpumask_test_cpu(cpu, &batch->cpumask)) { lockdep_assert_irqs_enabled(); @@ -1644,12 +1677,61 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap= _batch *batch) local_irq_enable(); } =20 + /* + * Wait for outstanding INVLPGB flushes. batch->cpumask will + * be empty when the batch was handled completely by INVLPGB. + * Note that mm_in_asid_transition() mm's may use INVLPGB and + * the flush_tlb_multi() IPIs at the same time. + */ + tlbsync(); + cpumask_clear(&batch->cpumask); =20 put_flush_tlb_info(); put_cpu(); } =20 +void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch, + struct mm_struct *mm, unsigned long uaddr) +{ + u16 global_asid =3D mm_global_asid(mm); + + if (global_asid) { + /* + * Global ASIDs can be flushed with INVLPGB. Flush + * now instead of batching them for later. A later + * tlbsync() is required to ensure these completed. + */ + invlpgb_flush_user_nr_nosync(kern_pcid(global_asid), uaddr, 1, false); + /* Do any CPUs supporting INVLPGB need PTI? */ + if (cpu_feature_enabled(X86_FEATURE_PTI)) + invlpgb_flush_user_nr_nosync(user_pcid(global_asid), uaddr, 1, false); + + /* + * Some CPUs might still be using a local ASID for this + * process, and require IPIs, while others are using the + * global ASID. + * + * In this corner case, both broadcast TLB invalidation + * and IPIs need to be sent. The IPIs will help + * stragglers transition to the broadcast ASID. + */ + if (mm_in_asid_transition(mm)) + global_asid =3D 0; + } + + if (!global_asid) { + /* + * Mark the mm and the CPU so that + * the TLB gets flushed later. 
+		 */
+		inc_mm_tlb_gen(mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	}
+
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
--
2.43.0

From nobody Sun Dec 14 13:56:10 2025
From: Borislav Petkov
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Borislav Petkov
Subject: [PATCH v15 11/11] x86/mm: Enable AMD translation cache extensions
Date: Tue, 4 Mar 2025 14:58:16 +0100
Message-ID: <20250304135816.12356-12-bp@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>
References: <20250304135816.12356-1-bp@kernel.org>

From: Rik van Riel

With AMD TCE (translation cache extensions) only the intermediate mappings
that cover the address range zapped by INVLPG / INVLPGB get invalidated,
rather than all intermediate mappings getting zapped at every TLB invalidation.
This can help reduce the TLB miss rate, by keeping more intermediate mappin= gs in the cache. From the AMD manual: Translation Cache Extension (TCE) Bit. Bit 15, read/write. Setting this bit= to 1 changes how the INVLPG, INVLPGB, and INVPCID instructions operate on TLB entries. When this bit is 0, these instructions remove the target PTE from = the TLB as well as all upper-level table entries that are cached in the TLB, whether or not they are associated with the target PTE. When this bit is s= et, these instructions will remove the target PTE and only those upper-level entries that lead to the target PTE in the page table hierarchy, leaving unrelated upper-level entries intact. [ bp: use cpu_has()... I know, it is a mess. ] Signed-off-by: Rik van Riel Signed-off-by: Borislav Petkov (AMD) Link: https://lore.kernel.org/r/20250226030129.530345-13-riel@surriel.com --- arch/x86/include/asm/msr-index.h | 2 ++ arch/x86/kernel/cpu/amd.c | 4 ++++ tools/arch/x86/include/asm/msr-index.h | 2 ++ 3 files changed, 8 insertions(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-in= dex.h index 72765b2fe0d8..1aacd6b68fab 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -25,6 +25,7 @@ #define _EFER_SVME 12 /* Enable virtualization */ #define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */ #define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */ +#define _EFER_TCE 15 /* Enable Translation Cache Extensions */ #define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */ =20 #define EFER_SCE (1<<_EFER_SCE) @@ -34,6 +35,7 @@ #define EFER_SVME (1<<_EFER_SVME) #define EFER_LMSLE (1<<_EFER_LMSLE) #define EFER_FFXSR (1<<_EFER_FFXSR) +#define EFER_TCE (1<<_EFER_TCE) #define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS) =20 /* diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 7a72ef47a983..705853315c0d 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -1075,6 +1075,10 @@ static void init_amd(struct cpuinfo_x86 *c) =20 /* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */ clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE); + + /* Enable Translation Cache Extension */ + if (cpu_has(c, X86_FEATURE_TCE)) + msr_set_bit(MSR_EFER, _EFER_TCE); } =20 #ifdef CONFIG_X86_32 diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/includ= e/asm/msr-index.h index 3ae84c3b8e6d..dc1c1057f26e 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -25,6 +25,7 @@ #define _EFER_SVME 12 /* Enable virtualization */ #define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */ #define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */ +#define _EFER_TCE 15 /* Enable Translation Cache Extensions */ #define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */ =20 #define EFER_SCE (1<<_EFER_SCE) @@ -34,6 +35,7 @@ #define EFER_SVME (1<<_EFER_SVME) #define EFER_LMSLE (1<<_EFER_LMSLE) #define EFER_FFXSR (1<<_EFER_FFXSR) +#define EFER_TCE (1<<_EFER_TCE) #define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS) =20 /* --=20 2.43.0
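As a quick sanity check that TCE actually ended up enabled on a running
system, EFER (MSR 0xC0000080) can be read back through the msr driver and
bit 15 tested. A minimal userspace sketch, assuming the msr module is loaded
and the program runs with the privileges needed to open /dev/cpu/0/msr; this
helper is illustrative only and not part of this series:

	/* Read EFER on CPU 0 via /dev/cpu/0/msr and report the TCE bit. */
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	#define MSR_EFER	0xc0000080
	#define EFER_TCE	(1ULL << 15)

	int main(void)
	{
		uint64_t efer;
		int fd = open("/dev/cpu/0/msr", O_RDONLY);

		/* The msr driver uses the MSR index as the file offset. */
		if (fd < 0 || pread(fd, &efer, sizeof(efer), MSR_EFER) != sizeof(efer)) {
			perror("reading EFER");
			return 1;
		}

		printf("EFER.TCE: %s\n", (efer & EFER_TCE) ? "set" : "clear");
		close(fd);
		return 0;
	}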