__write_ibpb() does IBPB, which (among other things) flushes branch type
predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
has been disabled, branch type flushing isn't needed, in which case the
lighter-weight SBPB can be used.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/entry/entry.S     | 2 +-
 arch/x86/kernel/cpu/bugs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index 3a53319988b9..a5b421ec19c0 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -21,7 +21,7 @@
 SYM_FUNC_START(__write_ibpb)
 	ANNOTATE_NOENDBR
 	movl	$MSR_IA32_PRED_CMD, %ecx
-	movl	$PRED_CMD_IBPB, %eax
+	movl	_ASM_RIP(x86_pred_cmd), %eax
 	xorl	%edx, %edx
 	wrmsr
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 310cb3f7139c..c8b8dc829046 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
 DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 
-u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
+u32 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
 EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static u64 __ro_after_init x86_arch_cap_msr;
--
2.48.1
On Wed, Apr 2, 2025 at 11:20 AM Josh Poimboeuf <jpoimboe@kernel.org> wrote:
>
> __write_ibpb() does IBPB, which (among other things) flushes branch type
> predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
> has been disabled, branch type flushing isn't needed, in which case the
> lighter-weight SBPB can be used.

When nested SVM is not supported, should KVM "promote"
SRSO_USER_KERNEL_NO on the host to SRSO_NO in KVM_GET_SUPPORTED_CPUID?
Or is a Linux guest clever enough to do the promotion itself if
CPUID.80000001H:ECX.SVM[bit 2] is clear?
On Wed, Apr 02, 2025 at 02:04:04PM -0700, Jim Mattson wrote:
> On Wed, Apr 2, 2025 at 11:20 AM Josh Poimboeuf <jpoimboe@kernel.org> wrote:
> >
> > __write_ibpb() does IBPB, which (among other things) flushes branch type
> > predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
> > has been disabled, branch type flushing isn't needed, in which case the
> > lighter-weight SBPB can be used.
>
> When nested SVM is not supported, should KVM "promote"
> SRSO_USER_KERNEL_NO on the host to SRSO_NO in KVM_GET_SUPPORTED_CPUID?
> Or is a Linux guest clever enough to do the promotion itself if
> CPUID.80000001H:ECX.SVM[bit 2] is clear?

I'm afraid that question is beyond my pay grade, maybe some AMD or virt
folks can chime in.

-- 
Josh
On 4/2/25 13:19, Josh Poimboeuf wrote:
> __write_ibpb() does IBPB, which (among other things) flushes branch type
> predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
> has been disabled, branch type flushing isn't needed, in which case the
> lighter-weight SBPB can be used.

Maybe add something here that indicates the x86_pred_cmd variable tracks
this optimization so switch to using that variable vs the hardcoded IBPB?

Thanks,
Tom

>
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/entry/entry.S     | 2 +-
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
> index 3a53319988b9..a5b421ec19c0 100644
> --- a/arch/x86/entry/entry.S
> +++ b/arch/x86/entry/entry.S
> @@ -21,7 +21,7 @@
>  SYM_FUNC_START(__write_ibpb)
>  	ANNOTATE_NOENDBR
>  	movl	$MSR_IA32_PRED_CMD, %ecx
> -	movl	$PRED_CMD_IBPB, %eax
> +	movl	_ASM_RIP(x86_pred_cmd), %eax
>  	xorl	%edx, %edx
>  	wrmsr
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 310cb3f7139c..c8b8dc829046 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
>  DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
>  EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
>
> -u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
> +u32 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
>  EXPORT_SYMBOL_GPL(x86_pred_cmd);
>
>  static u64 __ro_after_init x86_arch_cap_msr;
On Wed, Apr 02, 2025 at 03:41:25PM -0500, Tom Lendacky wrote:
> On 4/2/25 13:19, Josh Poimboeuf wrote:
> > __write_ibpb() does IBPB, which (among other things) flushes branch type
> > predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
> > has been disabled, branch type flushing isn't needed, in which case the
> > lighter-weight SBPB can be used.
>
> Maybe add something here that indicates the x86_pred_cmd variable tracks
> this optimization so switch to using that variable vs the hardcoded IBPB?

Indeed, adding a second paragraph to clarify that:

  x86/bugs: Use SBPB in write_ibpb() if applicable

  write_ibpb() does IBPB, which (among other things) flushes branch type
  predictions on AMD. If the CPU has SRSO_NO, or if the SRSO mitigation
  has been disabled, branch type flushing isn't needed, in which case the
  lighter-weight SBPB can be used.

  The 'x86_pred_cmd' variable already keeps track of whether IBPB or SBPB
  should be used.  Use that instead of hardcoding IBPB.

-- 
Josh