[PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch

[PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Roger Pau Monne 3 years, 3 months ago
The current logic for AMD SSBD context switches it on every
vm{entry,exit} if the Xen and guest selections don't match.  This is
expensive when not using SPEC_CTRL, and hence should be avoided as
much as possible.

When SSBD is not being set from SPEC_CTRL on AMD, don't context switch
it at vm{entry,exit} and instead only context switch SSBD when switching
vCPUs.  This has the side effect of running Xen code with the guest's
selection of SSBD; the documentation is updated to note this behavior.
Also note that when `ssbd` is selected on the command line, guest SSBD
selection will have no effect, and the hypervisor will run with SSBD
unconditionally enabled when not using SPEC_CTRL itself.

This fixes an issue with running C code in a GIF=0 region, which is
problematic when using UBSAN or other instrumentation techniques.

As a result of no longer running the code to set SSBD in a GIF=0
region, the locking in amd_set_legacy_ssbd() can be done using normal
spinlocks, and some more checks can be added to ensure it works as
intended.

Finally, it's also worth noting that, since the guest SSBD selection is
no longer set on vmentry, the VIRT_SPEC_CTRL MSR handling needs to
propagate the value to the hardware as part of handling the wrmsr.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Fix calling set_reg unconditionally.
 - Fix comment.
 - Call amd_set_ssbd() from guest_wrmsr().

Changes since v1:
 - Just check virt_spec_ctrl value != 0 on context switch.
 - Remove stray asm newline.
 - Use val in svm_set_reg().
 - Fix style in amd.c.
 - Do not clear ssbd
---
 docs/misc/xen-command-line.pandoc | 10 +++---
 xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
 xen/arch/x86/hvm/svm/entry.S      |  6 ----
 xen/arch/x86/hvm/svm/svm.c        | 45 ++++++++++---------------
 xen/arch/x86/include/asm/amd.h    |  2 +-
 xen/arch/x86/msr.c                |  9 +++++
 6 files changed, 63 insertions(+), 64 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 0fbdcb574f..424b12cfb2 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2372,10 +2372,12 @@ By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
 when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself.  On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use.  On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself.  The
+feature is virtualised for guests, independently of Xen's choice of setting.
+On AMD hardware, disabling Xen's SSBD usage on the command line (`ssbd=0`,
+the default) can lead to Xen running with the guest's SSBD selection,
+depending on hardware support.  On the same hardware, setting `ssbd=1` will
+result in SSBD always being enabled, regardless of guest choice.
 
 On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
 option can be used to force or prevent Xen using the feature itself.  By
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 98c52d0686..05d72c6501 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -742,7 +742,7 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 }
 
 static struct ssbd_ls_cfg {
-    bool locked;
+    spinlock_t lock;
     unsigned int count;
 } __cacheline_aligned *ssbd_ls_cfg;
 static unsigned int __ro_after_init ssbd_max_cores;
@@ -753,7 +753,7 @@ bool __init amd_setup_legacy_ssbd(void)
 	unsigned int i;
 
 	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
-	    boot_cpu_data.x86_num_siblings <= 1)
+	    boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
 		return true;
 
 	/*
@@ -776,46 +776,51 @@ bool __init amd_setup_legacy_ssbd(void)
 	if (!ssbd_ls_cfg)
 		return false;
 
-	if (opt_ssbd)
-		for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
-			/* Set initial state, applies to any (hotplug) CPU. */
-			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		spin_lock_init(&ssbd_ls_cfg[i].lock);
 
 	return true;
 }
 
-/*
- * Executed from GIF==0 context: avoid using BUG/ASSERT or other functionality
- * that relies on exceptions as those are not expected to run in GIF==0
- * context.
- */
-void amd_set_legacy_ssbd(bool enable)
+static void core_set_legacy_ssbd(bool enable)
 {
 	const struct cpuinfo_x86 *c = &current_cpu_data;
 	struct ssbd_ls_cfg *status;
+	unsigned long flags;
 
 	if ((c->x86 != 0x17 && c->x86 != 0x18) || c->x86_num_siblings <= 1) {
-		set_legacy_ssbd(c, enable);
+		BUG_ON(!set_legacy_ssbd(c, enable));
 		return;
 	}
 
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
 	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
 	                      c->cpu_core_id];
 
-	/*
-	 * Open code a very simple spinlock: this function is used with GIF==0
-	 * and different IF values, so would trigger the checklock detector.
-	 * Instead of trying to workaround the detector, use a very simple lock
-	 * implementation: it's better to reduce the amount of code executed
-	 * with GIF==0.
-	 */
-	while (test_and_set_bool(status->locked))
-		cpu_relax();
+	spin_lock_irqsave(&status->lock, flags);
 	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
 	if (enable ? status->count == 1 : !status->count)
-		set_legacy_ssbd(c, enable);
-	barrier();
-	write_atomic(&status->locked, false);
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&status->lock, flags);
+}
+
+void amd_set_ssbd(bool enable)
+{
+	if (opt_ssbd)
+		/*
+		 * Ignore attempts to turn off SSBD, it's hardcoded on the
+		 * command line.
+		 */
+		return;
+
+	if (cpu_has_virt_ssbd)
+		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
+	else if (amd_legacy_ssbd)
+		core_set_legacy_ssbd(enable);
+	else
+		ASSERT_UNREACHABLE();
 }
 
 /*
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index a26589aa9a..981cd82e7c 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,9 +59,6 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
-        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -131,9 +128,6 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /*
          * STGI is executed unconditionally, and is sufficiently serialising
          * to safely resolve any Spectre-v1 concerns in the above logic.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 1aeaabcb13..8b101d4f27 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     enable_each_ist(idt_tables[cpu]);
+
+    /*
+     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
+     * is already cleared by svm_vmexit_spec_ctrl.
+     */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(false);
+    }
 }
 
 static void cf_check svm_ctxt_switch_to(struct vcpu *v)
@@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    /* Load SSBD if set by the guest. */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(true);
+    }
 }
 
 static void noreturn cf_check svm_do_resume(void)
@@ -3116,34 +3133,6 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
-/* Called with GIF=0. */
-void vmexit_virt_spec_ctrl(void)
-{
-    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
-
-    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
-/* Called with GIF=0. */
-void vmentry_virt_spec_ctrl(void)
-{
-    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
-
-    if ( val == (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index 6a42f68542..81ed71710f 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -153,6 +153,6 @@ void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
 bool amd_setup_legacy_ssbd(void);
-void amd_set_legacy_ssbd(bool enable);
+void amd_set_ssbd(bool enable);
 
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 95416995a5..5609b71e99 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -24,6 +24,7 @@
 #include <xen/nospec.h>
 #include <xen/sched.h>
 
+#include <asm/amd.h>
 #include <asm/debugreg.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
@@ -697,7 +698,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                 msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
         }
         else
+        {
             msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+            if ( v == curr )
+                /*
+                 * Propagate the value to hardware, as it won't be set on guest
+                 * resume path.
+                 */
+                amd_set_ssbd(val & SPEC_CTRL_SSBD);
+        }
         break;
 
     case MSR_AMD64_DE_CFG:
-- 
2.37.3


Re: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Andrew Cooper 3 years, 2 months ago
On 03/11/2022 17:02, Roger Pau Monne wrote:
> The current logic for AMD SSBD context switches it on every
> vm{entry,exit} if the Xen and guest selections don't match.  This is
> expensive when not using SPEC_CTRL, and hence should be avoided as
> much as possible.
>
> When SSBD is not being set from SPEC_CTRL on AMD don't context switch
> at vm{entry,exit} and instead only context switch SSBD when switching
> vCPUs.  This has the side effect of running Xen code with the guest
> selection of SSBD, the documentation is updated to note this behavior.
> Also note that then when `ssbd` is selected on the command line guest
> SSBD selection will not have an effect, and the hypervisor will run
> with SSBD unconditionally enabled when not using SPEC_CTRL itself.
>
> This fixes an issue with running C code in a GIF=0 region, that's
> problematic when using UBSAN or other instrumentation techniques.

This paragraph needs to be at the top, because it's the reason why this
is a blocker bug for 4.17.  Everything else is discussing why we take
the approach we take.

(and to be clear, it's slow even with MSR_SPEC_CTRL.  It's just that it's
a whole lot less slow than with the LS_CFG MSR.)

>
> As a result of no longer running the code to set SSBD in a GIF=0
> region the locking of amd_set_legacy_ssbd() can be done using normal
> spinlocks, and some more checks can be added to assure it works as
> intended.
>
> Finally it's also worth noticing that since the guest SSBD selection
> is no longer set on vmentry the VIRT_SPEC_MSR handling needs to
> propagate the value to the hardware as part of handling the wrmsr.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - Fix calling set_reg unconditionally.
>  - Fix comment.
>  - Call amd_set_ssbd() from guest_wrmsr().
>
> Changes since v1:
>  - Just check virt_spec_ctrl value != 0 on context switch.
>  - Remove stray asm newline.
>  - Use val in svm_set_reg().
>  - Fix style in amd.c.
>  - Do not clear ssbd
> ---
>  docs/misc/xen-command-line.pandoc | 10 +++---
>  xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
>  xen/arch/x86/hvm/svm/entry.S      |  6 ----
>  xen/arch/x86/hvm/svm/svm.c        | 45 ++++++++++---------------
>  xen/arch/x86/include/asm/amd.h    |  2 +-
>  xen/arch/x86/msr.c                |  9 +++++

Need to patch msr.h now that the semantics of virt_spec_ctrl have changed.


> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 98c52d0686..05d72c6501 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> <snip>
> +void amd_set_ssbd(bool enable)
> +{
> +	if (opt_ssbd)
> +		/*
> +		 * Ignore attempts to turn off SSBD, it's hardcoded on the
> +		 * command line.
> +		 */
> +		return;
> +
> +	if (cpu_has_virt_ssbd)
> +		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
> +	else if (amd_legacy_ssbd)
> +		core_set_legacy_ssbd(enable);
> +	else
> +		ASSERT_UNREACHABLE();

This assert is reachable on Fam14 and older, I think.

> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 1aeaabcb13..8b101d4f27 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>  
>      /* Resume use of ISTs now that the host TR is reinstated. */
>      enable_each_ist(idt_tables[cpu]);
> +
> +    /*
> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> +     * is already cleared by svm_vmexit_spec_ctrl.
> +     */
> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> +    {
> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> +        amd_set_ssbd(false);
> +    }
>  }
>  
>  static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>  
>      if ( cpu_has_msr_tsc_aux )
>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> +
> +    /* Load SSBD if set by the guest. */
> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> +    {
> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> +        amd_set_ssbd(true);
> +    }

While this functions, it's still a perf problem.  You now flip the bit
twice when switching between vcpus with legacy SSBD.

This wouldn't be so bad if you'd also fixed the inner function to not do
a read/modify/write on the very slow MSR, because then we'd only be
touching it twice, not 4 times.

This isn't critical to fix for 4.17, but will need addressing in due course.
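
For illustration only, a rough sketch of what the hot path could look
like once the inner function stops doing the read/modify/write.  The
names set_legacy_ssbd_fast(), ls_cfg_base and ls_cfg_ssbd_mask are
hypothetical (not code from this patch); the idea is to latch the MSR
value and the family-specific SSBD bit once at setup, so toggling
becomes a single WRMSR:

    /* Hypothetical sketch, assuming both values are captured at setup. */
    static uint64_t __ro_after_init ls_cfg_base;      /* boot-time LS_CFG   */
    static uint64_t __ro_after_init ls_cfg_ssbd_mask; /* family SSBD bit    */

    static void set_legacy_ssbd_fast(bool enable)
    {
        wrmsrl(MSR_AMD64_LS_CFG,
               enable ? ls_cfg_base | ls_cfg_ssbd_mask
                      : ls_cfg_base & ~ls_cfg_ssbd_mask);
    }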

However, as the patch does need a respin, amd_set_ssbd() is too
generic.  Its previous name, amd_set_legacy_ssbd(), was more
appropriate, as it clearly highlights the fact that it's the
non-MSR_SPEC_CTRL path.

~Andrew
Re: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Roger Pau Monné 3 years, 2 months ago
On Mon, Nov 14, 2022 at 09:39:50PM +0000, Andrew Cooper wrote:
> On 03/11/2022 17:02, Roger Pau Monne wrote:
> > The current logic for AMD SSBD context switches it on every
> > vm{entry,exit} if the Xen and guest selections don't match.  This is
> > expensive when not using SPEC_CTRL, and hence should be avoided as
> > much as possible.
> >
> > When SSBD is not being set from SPEC_CTRL on AMD don't context switch
> > at vm{entry,exit} and instead only context switch SSBD when switching
> > vCPUs.  This has the side effect of running Xen code with the guest
> > selection of SSBD, the documentation is updated to note this behavior.
> > Also note that then when `ssbd` is selected on the command line guest
> > SSBD selection will not have an effect, and the hypervisor will run
> > with SSBD unconditionally enabled when not using SPEC_CTRL itself.
> >
> > This fixes an issue with running C code in a GIF=0 region, that's
> > problematic when using UBSAN or other instrumentation techniques.
> 
> This paragraph needs to be at the top, because it's the reason why this
> is a blocker bug for 4.17.  Everything else is discussing why we take
> the approach we take.

Done.

> (and to be clear, it's slow even with MSR_SPEC_CTRL.  It's just that its
> a whole lot less slow than with the LS_CFG MSR.)
> 
> >
> > As a result of no longer running the code to set SSBD in a GIF=0
> > region the locking of amd_set_legacy_ssbd() can be done using normal
> > spinlocks, and some more checks can be added to assure it works as
> > intended.
> >
> > Finally it's also worth noticing that since the guest SSBD selection
> > is no longer set on vmentry the VIRT_SPEC_MSR handling needs to
> > propagate the value to the hardware as part of handling the wrmsr.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v2:
> >  - Fix calling set_reg unconditionally.
> >  - Fix comment.
> >  - Call amd_set_ssbd() from guest_wrmsr().
> >
> > Changes since v1:
> >  - Just check virt_spec_ctrl value != 0 on context switch.
> >  - Remove stray asm newline.
> >  - Use val in svm_set_reg().
> >  - Fix style in amd.c.
> >  - Do not clear ssbd
> > ---
> >  docs/misc/xen-command-line.pandoc | 10 +++---
> >  xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
> >  xen/arch/x86/hvm/svm/entry.S      |  6 ----
> >  xen/arch/x86/hvm/svm/svm.c        | 45 ++++++++++---------------
> >  xen/arch/x86/include/asm/amd.h    |  2 +-
> >  xen/arch/x86/msr.c                |  9 +++++
> 
> Need to patch msr.h now that the semantics of virt_spec_ctrl have changed.

Sure, will adjust the comment there.
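
Something along these lines, maybe (illustrative wording only; the exact
text and layout of the comment in msr.h may differ):

    /*
     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
     *
     * AMD only.  Guest-selected value, now loaded when switching to the
     * vCPU (and reverted when switching away) rather than being context
     * switched on every vm{entry,exit}.
     */
    struct {
        uint32_t raw;
    } virt_spec_ctrl;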

> 
> > diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> > index 98c52d0686..05d72c6501 100644
> > --- a/xen/arch/x86/cpu/amd.c
> > +++ b/xen/arch/x86/cpu/amd.c
> > <snip>
> > +void amd_set_ssbd(bool enable)
> > +{
> > +	if (opt_ssbd)
> > +		/*
> > +		 * Ignore attempts to turn off SSBD, it's hardcoded on the
> > +		 * command line.
> > +		 */
> > +		return;
> > +
> > +	if (cpu_has_virt_ssbd)
> > +		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
> > +	else if (amd_legacy_ssbd)
> > +		core_set_legacy_ssbd(enable);
> > +	else
> > +		ASSERT_UNREACHABLE();
> 
> This assert is reachable on Fam14 and older, I think.

Hm, I'm unsure how.  Calls to this function are gated on the vCPU
having virt_ssbd set in the CPUID policy, and that can only happen if
there's a way to set SSBD.

Can you elaborate on the path that you think can trigger this?

As I would think that's the path that needs fixing, rather than
removing the assert here.

> > diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> > index 1aeaabcb13..8b101d4f27 100644
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
> >  
> >      /* Resume use of ISTs now that the host TR is reinstated. */
> >      enable_each_ist(idt_tables[cpu]);
> > +
> > +    /*
> > +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> > +     * is already cleared by svm_vmexit_spec_ctrl.
> > +     */
> > +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> > +    {
> > +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> > +        amd_set_ssbd(false);
> > +    }
> >  }
> >  
> >  static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> > @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> >  
> >      if ( cpu_has_msr_tsc_aux )
> >          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> > +
> > +    /* Load SSBD if set by the guest. */
> > +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> > +    {
> > +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> > +        amd_set_ssbd(true);
> > +    }
> 
> While this functions, it's still a perf problem.  You now flip the bit
> twice when switching between vcpus with legacy SSBD.
> 
> This wouldn't be so bad if you'd also fixed the inner function to not do
> a read/modify/write on the very slow MSR, because then we'd only be
> touching it twice, not 4 times.
> 
> This isn't critical to fix for 4.17, but will need addressing in due course.

Indeed.  I know about this, but didn't consider it critical enough to
fix in the current release status.

> However, as the patch does need a respin, amd_set_ssbd() is too
> generic.  It's previous name, amd_set_legacy_ssbd(), was more
> appropriate, as it clearly highlights the fact that it's the
> non-MSR_SPEC_CTRL path.

Can adjust the function name, not a problem.

Thanks, Roger.

Re: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Jan Beulich 3 years, 3 months ago
On 03.11.2022 18:02, Roger Pau Monne wrote:
> The current logic for AMD SSBD context switches it on every
> vm{entry,exit} if the Xen and guest selections don't match.  This is
> expensive when not using SPEC_CTRL, and hence should be avoided as
> much as possible.
> 
> When SSBD is not being set from SPEC_CTRL on AMD don't context switch
> at vm{entry,exit} and instead only context switch SSBD when switching
> vCPUs.  This has the side effect of running Xen code with the guest
> selection of SSBD, the documentation is updated to note this behavior.
> Also note that then when `ssbd` is selected on the command line guest
> SSBD selection will not have an effect, and the hypervisor will run
> with SSBD unconditionally enabled when not using SPEC_CTRL itself.
> 
> This fixes an issue with running C code in a GIF=0 region, that's
> problematic when using UBSAN or other instrumentation techniques.
> 
> As a result of no longer running the code to set SSBD in a GIF=0
> region the locking of amd_set_legacy_ssbd() can be done using normal
> spinlocks, and some more checks can be added to assure it works as
> intended.
> 
> Finally it's also worth noticing that since the guest SSBD selection
> is no longer set on vmentry the VIRT_SPEC_MSR handling needs to
> propagate the value to the hardware as part of handling the wrmsr.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one further remark:

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>  
>      /* Resume use of ISTs now that the host TR is reinstated. */
>      enable_each_ist(idt_tables[cpu]);
> +
> +    /*
> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> +     * is already cleared by svm_vmexit_spec_ctrl.
> +     */
> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> +    {
> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> +        amd_set_ssbd(false);
> +    }
>  }

Is "cleared" in the comment correct when "spec-ctrl=ssbd"? I think "suitably
set" or "cleared/set" or some such would be wanted. This could certainly be
adjusted while committing (if you agree), but I will want to give Andrew some
time anyway before putting it in, to avoid there again being objections after
a change in this area has gone in.

Jan

RE: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Henry Wang 3 years, 3 months ago
Hi Andrew,

> Subject: Re: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context
> switch
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one further remark:
> 
> Is "cleared" in the comment correct when "spec-ctrl=ssbd"? I think "suitably
> set" or "cleared/set" or some such would be wanted. This could certainly be
> adjusted while committing (if you agree), but I will want to give Andrew some
> time anyway before putting it in, to avoid there again being objections after
> a change in this area has gone in.

Also this one please :) Any feedback would be appreciated.

Kind regards,
Henry

> 
> Jan
Re: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Roger Pau Monné 3 years, 3 months ago
On Fri, Nov 04, 2022 at 09:10:21AM +0100, Jan Beulich wrote:
> On 03.11.2022 18:02, Roger Pau Monne wrote:
> > The current logic for AMD SSBD context switches it on every
> > vm{entry,exit} if the Xen and guest selections don't match.  This is
> > expensive when not using SPEC_CTRL, and hence should be avoided as
> > much as possible.
> > 
> > When SSBD is not being set from SPEC_CTRL on AMD don't context switch
> > at vm{entry,exit} and instead only context switch SSBD when switching
> > vCPUs.  This has the side effect of running Xen code with the guest
> > selection of SSBD, the documentation is updated to note this behavior.
> > Also note that then when `ssbd` is selected on the command line guest
> > SSBD selection will not have an effect, and the hypervisor will run
> > with SSBD unconditionally enabled when not using SPEC_CTRL itself.
> > 
> > This fixes an issue with running C code in a GIF=0 region, that's
> > problematic when using UBSAN or other instrumentation techniques.
> > 
> > As a result of no longer running the code to set SSBD in a GIF=0
> > region the locking of amd_set_legacy_ssbd() can be done using normal
> > spinlocks, and some more checks can be added to assure it works as
> > intended.
> > 
> > Finally it's also worth noticing that since the guest SSBD selection
> > is no longer set on vmentry the VIRT_SPEC_MSR handling needs to
> > propagate the value to the hardware as part of handling the wrmsr.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one further remark:
> 
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
> >  
> >      /* Resume use of ISTs now that the host TR is reinstated. */
> >      enable_each_ist(idt_tables[cpu]);
> > +
> > +    /*
> > +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> > +     * is already cleared by svm_vmexit_spec_ctrl.
> > +     */
> > +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> > +    {
> > +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> > +        amd_set_ssbd(false);
> > +    }
> >  }
> 
> Is "cleared" in the comment correct when "spec-ctrl=ssbd"? I think "suitably
> set" or "cleared/set" or some such would be wanted. This could certainly be
> adjusted while committing (if you agree), but I will want to give Andrew some
> time anyway before putting it in, to avoid there again being objections after
> a change in this area has gone in.

Hm, indeed, maybe "already handled" to avoid getting into the
set/clear nomenclature.
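
I.e. a possible rewording (sketch only):

    /*
     * Clear the previous guest selection of SSBD if set.  Note that
     * SPEC_CTRL.SSBD is already handled by svm_vmexit_spec_ctrl.
     */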

Thanks, Roger.

RE: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context switch
Posted by Henry Wang 3 years, 3 months ago
Hi Roger,

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH for-4.17 v3 1/2] amd/virt_ssbd: set SSBD at vCPU context
> switch
> 
> The current logic for AMD SSBD context switches it on every
> vm{entry,exit} if the Xen and guest selections don't match.  This is
> expensive when not using SPEC_CTRL, and hence should be avoided as
> much as possible.
> 
> When SSBD is not being set from SPEC_CTRL on AMD don't context switch
> at vm{entry,exit} and instead only context switch SSBD when switching
> vCPUs.  This has the side effect of running Xen code with the guest
> selection of SSBD, the documentation is updated to note this behavior.
> Also note that then when `ssbd` is selected on the command line guest
> SSBD selection will not have an effect, and the hypervisor will run
> with SSBD unconditionally enabled when not using SPEC_CTRL itself.
> 
> This fixes an issue with running C code in a GIF=0 region, that's
> problematic when using UBSAN or other instrumentation techniques.
> 
> As a result of no longer running the code to set SSBD in a GIF=0
> region the locking of amd_set_legacy_ssbd() can be done using normal
> spinlocks, and some more checks can be added to assure it works as
> intended.
> 
> Finally it's also worth noticing that since the guest SSBD selection
> is no longer set on vmentry the VIRT_SPEC_MSR handling needs to
> propagate the value to the hardware as part of handling the wrmsr.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks for respinning the patch!

Release-acked-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry