Hello,

The following series aims to remove the running of C code with GIF=0 on
the AMD vm{entry,exit} paths.  As a result, the context switching of
SSBD is done when context switching vCPUs, and hence Xen code is run
with the guest selection of SSBD.

The first patch is a bugfix for a missing VIRT_SPEC_CTRL MSR load, while
the second takes care of removing the loading of VIRT_SPEC_CTRL on
guest/hypervisor context switch.  The last patch is a cleanup that's
already reviewed.

I tested on Naples and Milan CPUs (and migrating from Naples to Milan
correctly carries the VIRT_SSBD bit), but I haven't tested on a platform
that exposes VIRT_SSBD itself.  I think that path is sufficiently
similar to the legacy one.  Currently running a gitlab CI loop in order
to check everything is OK.

Roger Pau Monne (3):
  hvm/msr: load VIRT_SPEC_CTRL
  amd/virt_ssbd: set SSBD at vCPU context switch
  amd: remove VIRT_SC_MSR_HVM synthetic feature

 docs/misc/xen-command-line.pandoc      | 10 +++--
 xen/arch/x86/cpu/amd.c                 | 56 ++++++++++++++------------
 xen/arch/x86/cpuid.c                   |  9 +++--
 xen/arch/x86/hvm/hvm.c                 |  1 +
 xen/arch/x86/hvm/svm/entry.S           |  6 ---
 xen/arch/x86/hvm/svm/svm.c             | 49 ++++++++++------------
 xen/arch/x86/include/asm/amd.h         |  3 +-
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 xen/arch/x86/msr.c                     |  7 ++++
 xen/arch/x86/spec_ctrl.c               |  8 ++--
 10 files changed, 78 insertions(+), 73 deletions(-)

-- 
2.37.3
Add MSR_VIRT_SPEC_CTRL to the list of MSRs handled by
hvm_load_cpu_msrs(), or else its value would be lost across
save/restore.

Fixes: 8ffd5496f4 ('amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I'm confused as to why we have two different lists of MSRs to send and
load, one in msrs_to_send[] and the other open-coded in
hvm_load_cpu_msrs(), but given the release status it's no time to clean
that up.
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/hvm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -XXX,XX +XXX,XX @@ static int cf_check hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
     case MSR_INTEL_MISC_FEATURES_ENABLES:
     case MSR_IA32_BNDCFGS:
     case MSR_IA32_XSS:
+    case MSR_VIRT_SPEC_CTRL:
     case MSR_AMD64_DR0_ADDRESS_MASK:
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
-- 
2.37.3
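[Aside: to illustrate the asymmetry the remark above complains about (a
table-driven save side but an open-coded load side), here is a minimal
C sketch.  The types and helper names are simplified stand-ins, not the
real hvm.c code; only MSR_VIRT_SPEC_CTRL's role matches the patch.]

    #include <errno.h>
    #include <stdint.h>

    #define MSR_IA32_XSS       0x00000da0
    #define MSR_VIRT_SPEC_CTRL 0xc001011f

    struct msr_entry { uint32_t index; uint64_t val; };

    /* Save side: table-driven, so listing an MSR here suffices to send it. */
    static const uint32_t msrs_to_send[] = {
        MSR_IA32_XSS,
        MSR_VIRT_SPEC_CTRL,
    };

    /*
     * Load side: an open-coded allow-list.  An MSR the save side sent but
     * this switch doesn't know about is rejected, so its value is lost on
     * restore - the asymmetry that let the missing case go unnoticed.
     */
    static int load_one_msr(const struct msr_entry *e)
    {
        switch ( e->index )
        {
        case MSR_IA32_XSS:
        case MSR_VIRT_SPEC_CTRL: /* the case this patch adds */
            return 0;            /* stand-in for guest_wrmsr(v, ...) */

        default:
            return -ENXIO;
        }
    }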
The current logic for AMD SSBD context switches it on every
vm{entry,exit} if the Xen and guest selections don't match.  This is
expensive when not using SPEC_CTRL, and hence should be avoided as much
as possible.

When SSBD is not being set from SPEC_CTRL on AMD don't context switch
at vm{entry,exit} and instead only context switch SSBD when switching
vCPUs.  This has the side effect of running Xen code with the guest
selection of SSBD; the documentation is updated to note this behavior.
Also note that when `ssbd` is selected on the command line guest SSBD
selection will not have an effect, and the hypervisor will run with
SSBD unconditionally enabled when not using SPEC_CTRL itself.

This fixes an issue with running C code in a GIF=0 region, which is
problematic when using UBSAN or other instrumentation techniques.

As a result of no longer running the code to set SSBD in a GIF=0 region
the locking of amd_set_legacy_ssbd() can be done using normal
spinlocks, and some more checks can be added to ensure it works as
intended.

Finally, it's also worth noticing that since the guest SSBD selection
is no longer set on vmentry the VIRT_SPEC_CTRL MSR handling needs to
propagate the value to the hardware as part of handling the wrmsr.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Just check virt_spec_ctrl value != 0 on context switch.
 - Remove stray asm newline.
 - Use val in svm_set_reg().
 - Fix style in amd.c.
 - Do not clear ssbd
---
 docs/misc/xen-command-line.pandoc | 10 +++---
 xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
 xen/arch/x86/hvm/svm/entry.S      |  6 ----
 xen/arch/x86/hvm/svm/svm.c        | 49 ++++++++++++---------------
 xen/arch/x86/include/asm/amd.h    |  2 +-
 xen/arch/x86/msr.c                |  7 ++++
 6 files changed, 65 insertions(+), 64 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index XXXXXXX..XXXXXXX 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -XXX,XX +XXX,XX @@
 By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
 when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself. On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use. On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself. The
+feature is virtualised for guests, independently of Xen's choice of setting.
+On AMD hardware, disabling Xen SSBD usage on the command line (`ssbd=0` which
+is the default value) can lead to Xen running with the guest SSBD selection
+depending on hardware support, on the same hardware setting `ssbd=1` will
+result in SSBD always being enabled, regardless of guest choice.
 
 On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
 option can be used to force or prevent Xen using the feature itself.  By
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -XXX,XX +XXX,XX @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 }
 
 static struct ssbd_ls_cfg {
-	bool locked;
+	spinlock_t lock;
 	unsigned int count;
 } __cacheline_aligned *ssbd_ls_cfg;
 static unsigned int __ro_after_init ssbd_max_cores;
@@ -XXX,XX +XXX,XX @@ bool __init amd_setup_legacy_ssbd(void)
 	unsigned int i;
 
 	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
-	    boot_cpu_data.x86_num_siblings <= 1)
+	    boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
 		return true;
 
 	/*
@@ -XXX,XX +XXX,XX @@ bool __init amd_setup_legacy_ssbd(void)
 	if (!ssbd_ls_cfg)
 		return false;
 
-	if (opt_ssbd)
-		for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
-			/* Set initial state, applies to any (hotplug) CPU. */
-			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		spin_lock_init(&ssbd_ls_cfg[i].lock);
 
 	return true;
 }
 
-/*
- * Executed from GIF==0 context: avoid using BUG/ASSERT or other functionality
- * that relies on exceptions as those are not expected to run in GIF==0
- * context.
- */
-void amd_set_legacy_ssbd(bool enable)
+static void core_set_legacy_ssbd(bool enable)
 {
 	const struct cpuinfo_x86 *c = &current_cpu_data;
 	struct ssbd_ls_cfg *status;
+	unsigned long flags;
 
 	if ((c->x86 != 0x17 && c->x86 != 0x18) || c->x86_num_siblings <= 1) {
-		set_legacy_ssbd(c, enable);
+		BUG_ON(!set_legacy_ssbd(c, enable));
 		return;
 	}
 
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
 	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
 	                      c->cpu_core_id];
 
-	/*
-	 * Open code a very simple spinlock: this function is used with GIF==0
-	 * and different IF values, so would trigger the checklock detector.
-	 * Instead of trying to workaround the detector, use a very simple lock
-	 * implementation: it's better to reduce the amount of code executed
-	 * with GIF==0.
-	 */
-	while (test_and_set_bool(status->locked))
-		cpu_relax();
+	spin_lock_irqsave(&status->lock, flags);
 	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
 	if (enable ? status->count == 1 : !status->count)
-		set_legacy_ssbd(c, enable);
-	barrier();
-	write_atomic(&status->locked, false);
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&status->lock, flags);
+}
+
+void amd_set_ssbd(bool enable)
+{
+	if (opt_ssbd)
+		/*
+		 * Ignore attempts to turn off SSBD, it's hardcoded on the
+		 * command line.
+		 */
+		return;
+
+	if (cpu_has_virt_ssbd)
+		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
+	else if (amd_legacy_ssbd)
+		core_set_legacy_ssbd(enable);
+	else
+		ASSERT_UNREACHABLE();
 }
 
 /*
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -XXX,XX +XXX,XX @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
-        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
-                    X86_FEATURE_VIRT_SC_MSR_HVM
-
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
 
         /* SPEC_CTRL_EXIT_TO_SVM   Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -XXX,XX +XXX,XX @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
-                    X86_FEATURE_VIRT_SC_MSR_HVM
-
         /*
          * STGI is executed unconditionally, and is sufficiently serialising
          * to safely resolve any Spectre-v1 concerns in the above logic.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -XXX,XX +XXX,XX @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     enable_each_ist(idt_tables[cpu]);
+
+    /*
+     * Clear previous guest selection of SSBD if set. Note that SPEC_CTRL.SSBD
+     * is already cleared by svm_vmexit_spec_ctrl.
+     */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(false);
+    }
 }
 
 static void cf_check svm_ctxt_switch_to(struct vcpu *v)
@@ -XXX,XX +XXX,XX @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    /* Load SSBD if set by the guest. */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(true);
+    }
 }
 
 static void noreturn cf_check svm_do_resume(void)
@@ -XXX,XX +XXX,XX @@ static void cf_check svm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
         vmcb->spec_ctrl = val;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        amd_set_ssbd(val & SPEC_CTRL_SSBD);
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
@@ -XXX,XX +XXX,XX @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         vmcb_set_vintr(vmcb, intr);
     }
 
-/* Called with GIF=0. */
-void vmexit_virt_spec_ctrl(void)
-{
-    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
-
-    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
-/* Called with GIF=0. */
-void vmentry_virt_spec_ctrl(void)
-{
-    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
-
-    if ( val == (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -XXX,XX +XXX,XX @@ void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
 bool amd_setup_legacy_ssbd(void);
-void amd_set_legacy_ssbd(bool enable);
+void amd_set_ssbd(bool enable);
 
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -XXX,XX +XXX,XX @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                 msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
         }
         else
+        {
             msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+            /*
+             * Propagate the value to hardware, as it won't be context switched
+             * on vmentry.
+             */
+            goto set_reg;
+        }
         break;
 
     case MSR_AMD64_DE_CFG:
-- 
2.37.3
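[Aside: a condensed sketch of the SSBD lifecycle the patch above
establishes, with stub types standing in for the real Xen structures;
the actual logic lives in the svm.c and msr.c hunks.  The point is that
the guest's selection is applied at wrmsr time and at vCPU context
switch, never inside the GIF=0 vm{entry,exit} window.]

    #include <stdbool.h>
    #include <stdint.h>

    #define SPEC_CTRL_SSBD (1u << 2)

    struct vcpu { uint32_t virt_spec_ctrl; }; /* stub for the real struct */

    /* Stand-in for amd_set_ssbd(): VIRT_SPEC_CTRL wrmsr or legacy LS_CFG. */
    static void set_hw_ssbd(bool enable) { (void)enable; }

    /* Guest wrmsr: record the selection and apply it immediately,
     * since vmentry no longer loads it. */
    static void virt_spec_ctrl_wrmsr(struct vcpu *v, uint64_t val)
    {
        v->virt_spec_ctrl = val & SPEC_CTRL_SSBD;
        set_hw_ssbd(val & SPEC_CTRL_SSBD);
    }

    /* Descheduling the vCPU drops its selection from the hardware... */
    static void ctxt_switch_from(const struct vcpu *v)
    {
        if ( v->virt_spec_ctrl & SPEC_CTRL_SSBD )
            set_hw_ssbd(false);
    }

    /* ...and scheduling it back re-applies it, so while a vCPU runs,
     * Xen code on that pCPU runs with the guest's SSBD choice. */
    static void ctxt_switch_to(const struct vcpu *v)
    {
        if ( v->virt_spec_ctrl & SPEC_CTRL_SSBD )
            set_hw_ssbd(true);
    }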
Since the VIRT_SPEC_CTRL.SSBD selection is no longer context switched
on vm{entry,exit} there's no need to use a synthetic feature bit for it
anymore.  Remove the bit and instead use a global variable.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpu/amd.c                 | 1 +
 xen/arch/x86/cpuid.c                   | 9 +++++----
 xen/arch/x86/include/asm/amd.h         | 1 +
 xen/arch/x86/include/asm/cpufeatures.h | 2 +-
 xen/arch/x86/spec_ctrl.c               | 8 ++++----
 5 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -XXX,XX +XXX,XX @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
 bool __ro_after_init amd_legacy_ssbd;
+bool __ro_after_init amd_virt_spec_ctrl;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
                                  unsigned int *hi)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -XXX,XX +XXX,XX @@
 #include <xen/param.h>
 #include <xen/sched.h>
 #include <xen/nospec.h>
+#include <asm/amd.h>
 #include <asm/cpuid.h>
 #include <asm/hvm/hvm.h>
 #include <asm/hvm/nestedhvm.h>
@@ -XXX,XX +XXX,XX @@ static void __init calculate_hvm_max_policy(void)
 
     /*
      * VIRT_SSBD is exposed in the default policy as a result of
-     * VIRT_SC_MSR_HVM being set, it also needs exposing in the max policy.
+     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void __init calculate_hvm_def_policy(void)
 
     /*
      * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
-     * VIRT_SC_MSR_HVM is set.
+     * amd_virt_spec_ctrl is set.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -XXX,XX +XXX,XX @@ extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
+extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_ssbd(bool enable);
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -XXX,XX +XXX,XX @@ XEN_CPUFEATURE(APERFMPERF,        X86_SYNTH( 8)) /* APERFMPERF */
 XEN_CPUFEATURE(MFENCE_RDTSC,      X86_SYNTH( 9)) /* MFENCE synchronizes RDTSC */
 XEN_CPUFEATURE(XEN_SMEP,          X86_SYNTH(10)) /* SMEP gets used by Xen itself */
 XEN_CPUFEATURE(XEN_SMAP,          X86_SYNTH(11)) /* SMAP gets used by Xen itself */
-XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(12)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
+/* Bit 12 unused. */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,  X86_SYNTH(13)) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,     X86_SYNTH(14)) /* Use IND_THUNK_JMP */
 XEN_CPUFEATURE(SC_NO_BRANCH_HARDEN, X86_SYNTH(15)) /* (Disable) Conditional branch hardening */
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -XXX,XX +XXX,XX @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_IBPB_ENTRY_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
+            amd_virt_spec_ctrl ||
             opt_eager_fpu || opt_md_clear_hvm)       ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
-                                                       : "",
+            amd_virt_spec_ctrl)                      ? " MSR_VIRT_SPEC_CTRL"
+                                                     : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            opt_md_clear_hvm                          ? " MD_CLEAR"      : "",
@@ -XXX,XX +XXX,XX @@ void __init init_speculation_mitigations(void)
 
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
     if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
          (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
-        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+        amd_virt_spec_ctrl = true;
 
     /* Figure out default_xen_spec_ctrl. */
     if ( has_spec_ctrl && ibrs )
-- 
2.37.3
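[Aside on why a plain boolean can replace the synthetic bit: synthetic
CPU feature bits earn their keep when assembly must be live-patched on
them via ALTERNATIVE, as the entry.S hooks removed in the previous
patch were.  Once every remaining user is a C-level condition, a
boot-time boolean is equivalent and simpler.  An illustration with
hypothetical names, not Xen's actual macros:]

    #include <stdbool.h>

    /*
     * Before: a synthetic feature bit, consumed both by asm patching,
     *     ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
     *                 X86_FEATURE_VIRT_SC_MSR_HVM
     * and by C checks via boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM).
     *
     * After: only C checks remain, so a variable set once at boot works.
     */
    static bool amd_virt_spec_ctrl; /* set in init_speculation_mitigations() */

    static bool expose_virt_ssbd(void)
    {
        return amd_virt_spec_ctrl; /* replaces boot_cpu_has(...) */
    }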
Hello,

Just two patches remaining, and the last one is already Acked.

The first patch deals with moving the switching of SSBD from guest
vm{entry,exit} to vCPU context switch, and lets Xen run with the guest
SSBD selection under some circumstances by default.

Andrew has expressed reservations to me privately about patch 2/2, but
I'm still sending it so that comments can be made publicly (or the
patch applied).

Thanks, Roger.

Roger Pau Monne (2):
  amd/virt_ssbd: set SSBD at vCPU context switch
  amd: remove VIRT_SC_MSR_HVM synthetic feature

 docs/misc/xen-command-line.pandoc      | 10 +++--
 xen/arch/x86/cpu/amd.c                 | 56 ++++++++++++++------------
 xen/arch/x86/cpuid.c                   |  9 +++--
 xen/arch/x86/hvm/svm/entry.S           |  6 ---
 xen/arch/x86/hvm/svm/svm.c             | 45 ++++++++------------
 xen/arch/x86/include/asm/amd.h         |  1 +
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 xen/arch/x86/include/asm/msr.h         |  3 +-
 xen/arch/x86/msr.c                     |  9 +++++
 xen/arch/x86/spec_ctrl.c               |  8 ++--
 10 files changed, 75 insertions(+), 74 deletions(-)

-- 
2.37.3
This fixes an issue with running C code in a GIF=0 region, which is
problematic when using UBSAN or other instrumentation techniques.

The current logic for AMD SSBD context switches it on every
vm{entry,exit} if the Xen and guest selections don't match.  This is
expensive when not using SPEC_CTRL, and hence should be avoided as much
as possible.

When SSBD is not being set from SPEC_CTRL on AMD don't context switch
at vm{entry,exit} and instead only context switch SSBD when switching
vCPUs.  This has the side effect of running Xen code with the guest
selection of SSBD; the documentation is updated to note this behavior.
Also note that when `ssbd` is selected on the command line guest SSBD
selection will not have an effect, and the hypervisor will run with
SSBD unconditionally enabled when not using SPEC_CTRL itself.

As a result of no longer running the code to set SSBD in a GIF=0 region
the locking of amd_set_legacy_ssbd() can be done using normal
spinlocks, and some more checks can be added to ensure it works as
intended.

Finally, it's also worth noticing that since the guest SSBD selection
is no longer set on vmentry the VIRT_SPEC_CTRL MSR handling needs to
propagate the value to the hardware as part of handling the wrmsr.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v3:
 - Fix commit message order.
 - Remove msr.h comment about context switching virt_spec_ctrl.
 - s/amd_set_ssbd/amd_set_legacy_ssbd/.
 - Adjust comment about clearing SSBD in svm_ctxt_switch_from().

Changes since v2:
 - Fix calling set_reg unconditionally.
 - Fix comment.
 - Call amd_set_ssbd() from guest_wrmsr().

Changes since v1:
 - Just check virt_spec_ctrl value != 0 on context switch.
 - Remove stray asm newline.
 - Use val in svm_set_reg().
 - Fix style in amd.c.
 - Do not clear ssbd
---
 docs/misc/xen-command-line.pandoc | 10 +++---
 xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
 xen/arch/x86/hvm/svm/entry.S      |  6 ----
 xen/arch/x86/hvm/svm/svm.c        | 45 ++++++++++---------------
 xen/arch/x86/include/asm/msr.h    |  3 +-
 xen/arch/x86/msr.c                |  9 +++++
 6 files changed, 63 insertions(+), 65 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index XXXXXXX..XXXXXXX 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -XXX,XX +XXX,XX @@
 By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
 when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself. On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use. On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself. The
+feature is virtualised for guests, independently of Xen's choice of setting.
+On AMD hardware, disabling Xen SSBD usage on the command line (`ssbd=0` which
+is the default value) can lead to Xen running with the guest SSBD selection
+depending on hardware support, on the same hardware setting `ssbd=1` will
+result in SSBD always being enabled, regardless of guest choice.
 
 On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
 option can be used to force or prevent Xen using the feature itself.  By
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -XXX,XX +XXX,XX @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 }
 
 static struct ssbd_ls_cfg {
-	bool locked;
+	spinlock_t lock;
 	unsigned int count;
 } __cacheline_aligned *ssbd_ls_cfg;
 static unsigned int __ro_after_init ssbd_max_cores;
@@ -XXX,XX +XXX,XX @@ bool __init amd_setup_legacy_ssbd(void)
 	unsigned int i;
 
 	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
-	    boot_cpu_data.x86_num_siblings <= 1)
+	    boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
 		return true;
 
 	/*
@@ -XXX,XX +XXX,XX @@ bool __init amd_setup_legacy_ssbd(void)
 	if (!ssbd_ls_cfg)
 		return false;
 
-	if (opt_ssbd)
-		for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
-			/* Set initial state, applies to any (hotplug) CPU. */
-			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		spin_lock_init(&ssbd_ls_cfg[i].lock);
 
 	return true;
 }
 
-/*
- * Executed from GIF==0 context: avoid using BUG/ASSERT or other functionality
- * that relies on exceptions as those are not expected to run in GIF==0
- * context.
- */
-void amd_set_legacy_ssbd(bool enable)
+static void core_set_legacy_ssbd(bool enable)
 {
 	const struct cpuinfo_x86 *c = &current_cpu_data;
 	struct ssbd_ls_cfg *status;
+	unsigned long flags;
 
 	if ((c->x86 != 0x17 && c->x86 != 0x18) || c->x86_num_siblings <= 1) {
-		set_legacy_ssbd(c, enable);
+		BUG_ON(!set_legacy_ssbd(c, enable));
 		return;
 	}
 
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
 	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
 	                      c->cpu_core_id];
 
-	/*
-	 * Open code a very simple spinlock: this function is used with GIF==0
-	 * and different IF values, so would trigger the checklock detector.
-	 * Instead of trying to workaround the detector, use a very simple lock
-	 * implementation: it's better to reduce the amount of code executed
-	 * with GIF==0.
-	 */
-	while (test_and_set_bool(status->locked))
-		cpu_relax();
+	spin_lock_irqsave(&status->lock, flags);
 	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
 	if (enable ? status->count == 1 : !status->count)
-		set_legacy_ssbd(c, enable);
-	barrier();
-	write_atomic(&status->locked, false);
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&status->lock, flags);
+}
+
+void amd_set_legacy_ssbd(bool enable)
+{
+	if (opt_ssbd)
+		/*
+		 * Ignore attempts to turn off SSBD, it's hardcoded on the
+		 * command line.
+		 */
+		return;
+
+	if (cpu_has_virt_ssbd)
+		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
+	else if (amd_legacy_ssbd)
+		core_set_legacy_ssbd(enable);
+	else
+		ASSERT_UNREACHABLE();
 }
 
 /*
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -XXX,XX +XXX,XX @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
-        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
-                    X86_FEATURE_VIRT_SC_MSR_HVM
-
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
 
         /* SPEC_CTRL_EXIT_TO_SVM   Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -XXX,XX +XXX,XX @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
-                    X86_FEATURE_VIRT_SC_MSR_HVM
-
         /*
          * STGI is executed unconditionally, and is sufficiently serialising
          * to safely resolve any Spectre-v1 concerns in the above logic.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -XXX,XX +XXX,XX @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     enable_each_ist(idt_tables[cpu]);
+
+    /*
+     * Possibly clear previous guest selection of SSBD if set. Note that
+     * SPEC_CTRL.SSBD is already handled by svm_vmexit_spec_ctrl.
+     */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_legacy_ssbd(false);
+    }
 }
 
 static void cf_check svm_ctxt_switch_to(struct vcpu *v)
@@ -XXX,XX +XXX,XX @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    /* Load SSBD if set by the guest. */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_legacy_ssbd(true);
+    }
 }
 
 static void noreturn cf_check svm_do_resume(void)
@@ -XXX,XX +XXX,XX @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         vmcb_set_vintr(vmcb, intr);
     }
 
-/* Called with GIF=0. */
-void vmexit_virt_spec_ctrl(void)
-{
-    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
-
-    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
-/* Called with GIF=0. */
-void vmentry_virt_spec_ctrl(void)
-{
-    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
-
-    if ( val == (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -XXX,XX +XXX,XX @@ struct vcpu_msrs
     /*
      * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
      *
-     * AMD only. Guest selected value, context switched on guest VM
-     * entry/exit.
+     * AMD only. Guest selected value.
      */
     struct {
         uint32_t raw;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -XXX,XX +XXX,XX @@
 #include <xen/nospec.h>
 #include <xen/sched.h>
 
+#include <asm/amd.h>
 #include <asm/debugreg.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
@@ -XXX,XX +XXX,XX @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                 msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
         }
         else
+        {
             msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+            if ( v == curr )
+                /*
+                 * Propagate the value to hardware, as it won't be set on
+                 * guest resume path.
+                 */
+                amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
+        }
         break;
 
     case MSR_AMD64_DE_CFG:
-- 
2.37.3
Since the VIRT_SPEC_CTRL.SSBD selection is no longer context switched
on vm{entry,exit} there's no need to use a synthetic feature bit for it
anymore.  Remove the bit and instead use a global variable.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/amd.c                 | 1 +
 xen/arch/x86/cpuid.c                   | 9 +++++----
 xen/arch/x86/include/asm/amd.h         | 1 +
 xen/arch/x86/include/asm/cpufeatures.h | 2 +-
 xen/arch/x86/spec_ctrl.c               | 8 ++++----
 5 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -XXX,XX +XXX,XX @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
 bool __ro_after_init amd_legacy_ssbd;
+bool __ro_after_init amd_virt_spec_ctrl;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
                                  unsigned int *hi)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -XXX,XX +XXX,XX @@
 #include <xen/param.h>
 #include <xen/sched.h>
 #include <xen/nospec.h>
+#include <asm/amd.h>
 #include <asm/cpuid.h>
 #include <asm/hvm/hvm.h>
 #include <asm/hvm/nestedhvm.h>
@@ -XXX,XX +XXX,XX @@ static void __init calculate_hvm_max_policy(void)
 
     /*
      * VIRT_SSBD is exposed in the default policy as a result of
-     * VIRT_SC_MSR_HVM being set, it also needs exposing in the max policy.
+     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void __init calculate_hvm_def_policy(void)
 
     /*
      * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
-     * VIRT_SC_MSR_HVM is set.
+     * amd_virt_spec_ctrl is set.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -XXX,XX +XXX,XX @@ extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
+extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_legacy_ssbd(bool enable);
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -XXX,XX +XXX,XX @@ XEN_CPUFEATURE(APERFMPERF,        X86_SYNTH( 8)) /* APERFMPERF */
 XEN_CPUFEATURE(MFENCE_RDTSC,      X86_SYNTH( 9)) /* MFENCE synchronizes RDTSC */
 XEN_CPUFEATURE(XEN_SMEP,          X86_SYNTH(10)) /* SMEP gets used by Xen itself */
 XEN_CPUFEATURE(XEN_SMAP,          X86_SYNTH(11)) /* SMAP gets used by Xen itself */
-XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(12)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
+/* Bit 12 unused. */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,  X86_SYNTH(13)) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,     X86_SYNTH(14)) /* Use IND_THUNK_JMP */
 XEN_CPUFEATURE(SC_NO_BRANCH_HARDEN, X86_SYNTH(15)) /* (Disable) Conditional branch hardening */
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index XXXXXXX..XXXXXXX 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -XXX,XX +XXX,XX @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_IBPB_ENTRY_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
+            amd_virt_spec_ctrl ||
             opt_eager_fpu || opt_md_clear_hvm)       ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
-                                                       : "",
+            amd_virt_spec_ctrl)                      ? " MSR_VIRT_SPEC_CTRL"
+                                                     : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            opt_md_clear_hvm                          ? " MD_CLEAR"      : "",
@@ -XXX,XX +XXX,XX @@ void __init init_speculation_mitigations(void)
 
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
     if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
          (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
-        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+        amd_virt_spec_ctrl = true;
 
     /* Figure out default_xen_spec_ctrl. */
     if ( has_spec_ctrl && ibrs )
-- 
2.37.3