From: Amit Shah <amit.shah@amd.com>
Remove superfluous RSB filling after a VMEXIT when the CPU has already
flushed the RSB, i.e. when AutoIBRS is enabled.

The initial retpoline implementation added an ALTERNATIVE-based sequence
for filling the RSB after a VMEXIT in
commit 117cc7a908c836 ("x86/retpoline: Fill return stack buffer on vmexit").
Later, X86_FEATURE_RSB_VMEXIT was added in
commit 2b129932201673 ("x86/speculation: Add RSB VM Exit protections").
The AutoIBRS feature (on AMD CPUs), implemented in
commit e7862eda309ecf ("x86/cpu: Support AMD Automatic IBRS"),
reused the existing eIBRS logic in
spectre_v2_determine_rsb_fill_type_on_vmexit() -- but did not update the
code at VMEXIT to act on the mode selected in that function -- resulting
in VMEXITs continuing to clear the RSB when retpolines are enabled,
despite the presence of AutoIBRS.
Signed-off-by: Amit Shah <amit.shah@amd.com>
---
v2:
- tweak commit message re: Boris's comments.
---
arch/x86/kvm/svm/vmenter.S | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index a0c8eb37d3e1..2ed80aea3bb1 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -209,10 +209,8 @@ SYM_FUNC_START(__svm_vcpu_run)
7: vmload %_ASM_AX
8:
-#ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
- FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
-#endif
+ FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
/* Clobbers RAX, RCX, RDX. */
RESTORE_HOST_SPEC_CTRL
@@ -348,10 +346,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
2: cli
-#ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
- FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
-#endif
+ FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
/* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */
RESTORE_HOST_SPEC_CTRL
--
2.45.2
On Wed, Jun 26, 2024, Amit Shah wrote:
> ---
>  arch/x86/kvm/svm/vmenter.S | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
> index a0c8eb37d3e1..2ed80aea3bb1 100644
> --- a/arch/x86/kvm/svm/vmenter.S
> +++ b/arch/x86/kvm/svm/vmenter.S
> @@ -209,10 +209,8 @@ SYM_FUNC_START(__svm_vcpu_run)
>  7:	vmload %_ASM_AX
>  8:
>
> -#ifdef CONFIG_MITIGATION_RETPOLINE
>  	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
> -	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
> -#endif
> +	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT

Out of an abundance of paranoia, shouldn't this be?

	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
			   X86_FEATURE_RSB_VMEXIT_LITE

Hmm, but it looks like that would incorrectly trigger the "lite" flavor for
families 0xf - 0x12.  I assume those old CPUs aren't affected by whatever on
earth EIBRS_PBRSB is.

	/* AMD Family 0xf - 0x12 */
	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),

	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB | NO_BHI),
	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB | NO_BHI),

>
>  	/* Clobbers RAX, RCX, RDX. */
>  	RESTORE_HOST_SPEC_CTRL
> @@ -348,10 +346,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
>
>  2:	cli
>
> -#ifdef CONFIG_MITIGATION_RETPOLINE
>  	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
> -	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
> -#endif
> +	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
>
>  	/* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */
>  	RESTORE_HOST_SPEC_CTRL
> --
> 2.45.2
>
On Fri, Jun 28, 2024 at 09:09:15AM -0700, Sean Christopherson wrote:
> Hmm, but it looks like that would incorrectly trigger the "lite" flavor for
> families 0xf - 0x12. I assume those old CPUs aren't affected by whatever on earth
> EIBRS_PBRSB is.
https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/post-barrier-return-stack-buffer-predictions.html
We could add NO_EIBRS_PBRSB to those old families too. Amit, feel free
to send a separate patch.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On Fri, Jun 28, 2024 at 9:09 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jun 26, 2024, Amit Shah wrote:
> > ---
> > arch/x86/kvm/svm/vmenter.S | 8 ++------
> > 1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
> > index a0c8eb37d3e1..2ed80aea3bb1 100644
> > --- a/arch/x86/kvm/svm/vmenter.S
> > +++ b/arch/x86/kvm/svm/vmenter.S
> > @@ -209,10 +209,8 @@ SYM_FUNC_START(__svm_vcpu_run)
> > 7: vmload %_ASM_AX
> > 8:
> >
> > -#ifdef CONFIG_MITIGATION_RETPOLINE
> > /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
> > - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
> > -#endif
> > + FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
>
> Out of an abundance of paranoia, shouldn't this be?
>
> FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
> X86_FEATURE_RSB_VMEXIT_LITE
>
> Hmm, but it looks like that would incorrectly trigger the "lite" flavor for
> families 0xf - 0x12. I assume those old CPUs aren't affected by whatever on earth
> EIBRS_PBRSB is.
>
> /* AMD Family 0xf - 0x12 */
> VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
>
> /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
> VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB | NO_BHI),
> VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB | NO_BHI),
Your assumption is correct. As for why the cpu_vuln_whitelist[]
doesn't say so explicitly, you need to read between the lines...
> /*
> * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
> * flag and protect from vendor-specific bugs via the whitelist.
> *
> * Don't use AutoIBRS when SNP is enabled because it degrades host
> * userspace indirect branch performance.
> */
> if ((x86_arch_cap_msr & ARCH_CAP_IBRS_ALL) ||
> (cpu_has(c, X86_FEATURE_AUTOIBRS) &&
> !cpu_feature_enabled(X86_FEATURE_SEV_SNP))) {
> setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
> if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
> !(x86_arch_cap_msr & ARCH_CAP_PBRSB_NO))
> setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
> }
Families 0FH through 12H don't have EIBRS or AutoIBRS, so there's no
cpu_vuln_whitelist[] lookup. Hence, no need to set the NO_EIBRS_PBRSB
bit, even if it is accurate.
> >
> > /* Clobbers RAX, RCX, RDX. */
> > RESTORE_HOST_SPEC_CTRL
> > @@ -348,10 +346,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
> >
> > 2: cli
> >
> > -#ifdef CONFIG_MITIGATION_RETPOLINE
> > /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
> > - FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
> > -#endif
> > + FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
> >
> > /* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */
> > RESTORE_HOST_SPEC_CTRL
> > --
> > 2.45.2
> >
>
On Fri, 2024-06-28 at 11:48 -0700, Jim Mattson wrote:
> On Fri, Jun 28, 2024 at 9:09 AM Sean Christopherson
> <seanjc@google.com> wrote:
> >
> > On Wed, Jun 26, 2024, Amit Shah wrote:
> > > ---
> > > arch/x86/kvm/svm/vmenter.S | 8 ++------
> > > 1 file changed, 2 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/svm/vmenter.S
> > > b/arch/x86/kvm/svm/vmenter.S
> > > index a0c8eb37d3e1..2ed80aea3bb1 100644
> > > --- a/arch/x86/kvm/svm/vmenter.S
> > > +++ b/arch/x86/kvm/svm/vmenter.S
> > > @@ -209,10 +209,8 @@ SYM_FUNC_START(__svm_vcpu_run)
> > > 7: vmload %_ASM_AX
> > > 8:
> > >
> > > -#ifdef CONFIG_MITIGATION_RETPOLINE
> > > /* IMPORTANT: Stuff the RSB immediately after VM-Exit,
> > > before RET! */
> > > - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS,
> > > X86_FEATURE_RETPOLINE
> > > -#endif
> > > + FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS,
> > > X86_FEATURE_RSB_VMEXIT
> >
> > Out of an abundance of paranoia, shouldn't this be?
> >
> > FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS,
> > X86_FEATURE_RSB_VMEXIT,\
> > X86_FEATURE_RSB_VMEXIT_LITE
> >
> > Hmm, but it looks like that would incorrectly trigger the "lite"
> > flavor for
> > families 0xf - 0x12. I assume those old CPUs aren't affected by
> > whatever on earth
> > EIBRS_PBRSB is.
> >
> > /* AMD Family 0xf - 0x12 */
> > VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> > VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> > VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> > VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_BHI),
> >
> > /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches
> > won't work */
> > VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB |
> > NO_BHI),
> > VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF |
> > NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB |
> > NO_BHI),
>
> Your assumption is correct. As for why the cpu_vuln_whitelist[]
> doesn't say so explicitly, you need to read between the lines...
>
> > /*
> > * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the
> > Intel feature
> > * flag and protect from vendor-specific bugs via the
> > whitelist.
> > *
> > * Don't use AutoIBRS when SNP is enabled because it
> > degrades host
> > * userspace indirect branch performance.
> > */
> > if ((x86_arch_cap_msr & ARCH_CAP_IBRS_ALL) ||
> > (cpu_has(c, X86_FEATURE_AUTOIBRS) &&
> > !cpu_feature_enabled(X86_FEATURE_SEV_SNP))) {
> > setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
> > if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB)
> > &&
> > !(x86_arch_cap_msr & ARCH_CAP_PBRSB_NO))
> > setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
> > }
>
> Families 0FH through 12H don't have EIBRS or AutoIBRS, so there's no
> cpu_vuln_whitelist[] lookup. Hence, no need to set the NO_EIBRS_PBRSB
> bit, even if it is accurate.
The commit that adds the RSB_VMEXIT_LITE feature flag does describe the
bug in a good amount of detail:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2b1299322016731d56807aa49254a5ea3080b6b3
I've not seen any indication this is required for AMD CPUs.
David, do you agree we don't need this?
Amit
>