This makes the S3 BSP path consistent with the AP paths, and reduces the
amount of state which needs stashing specially. It also takes care of
re-setting up Xen's LBR configuration if requested, which was previously missing.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/acpi/suspend.c | 25 ++-----------------------
1 file changed, 2 insertions(+), 23 deletions(-)
diff --git a/xen/arch/x86/acpi/suspend.c b/xen/arch/x86/acpi/suspend.c
index 32d0f71ffd..1c2f1c470e 100644
--- a/xen/arch/x86/acpi/suspend.c
+++ b/xen/arch/x86/acpi/suspend.c
@@ -15,8 +15,6 @@
#include <asm/xstate.h>
#include <xen/hypercall.h>
-static unsigned long saved_lstar, saved_cstar;
-static unsigned long saved_sysenter_esp, saved_sysenter_eip;
static unsigned long saved_fs_base, saved_gs_base, saved_kernel_gs_base;
static uint64_t saved_xcr0;
@@ -25,14 +23,6 @@ void save_rest_processor_state(void)
saved_fs_base = rdfsbase();
saved_gs_base = rdgsbase();
rdmsrl(MSR_SHADOW_GS_BASE, saved_kernel_gs_base);
- rdmsrl(MSR_CSTAR, saved_cstar);
- rdmsrl(MSR_LSTAR, saved_lstar);
-
- if ( cpu_has_sep )
- {
- rdmsrl(MSR_IA32_SYSENTER_ESP, saved_sysenter_esp);
- rdmsrl(MSR_IA32_SYSENTER_EIP, saved_sysenter_eip);
- }
if ( cpu_has_xsave )
saved_xcr0 = get_xcr0();
@@ -46,24 +36,13 @@ void restore_rest_processor_state(void)
/* Restore full CR4 (inc MCE) now that the IDT is in place. */
write_cr4(mmu_cr4_features);
- /* Recover syscall MSRs */
- wrmsrl(MSR_LSTAR, saved_lstar);
- wrmsrl(MSR_CSTAR, saved_cstar);
- wrmsrl(MSR_STAR, XEN_MSR_STAR);
- wrmsrl(MSR_SYSCALL_MASK, XEN_SYSCALL_MASK);
+ /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
+ percpu_traps_init();
wrfsbase(saved_fs_base);
wrgsbase(saved_gs_base);
wrmsrl(MSR_SHADOW_GS_BASE, saved_kernel_gs_base);
- if ( cpu_has_sep )
- {
- /* Recover sysenter MSRs */
- wrmsrl(MSR_IA32_SYSENTER_ESP, saved_sysenter_esp);
- wrmsrl(MSR_IA32_SYSENTER_EIP, saved_sysenter_eip);
- wrmsr(MSR_IA32_SYSENTER_CS, __HYPERVISOR_CS, 0);
- }
-
if ( cpu_has_xsave && !set_xcr0(saved_xcr0) )
BUG();
--
2.11.0
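
For readers without the tree to hand: the commit message's claim that the
open-coded MSR writes (and the previously-missing LBR setup) are now covered
rests on the shape of percpu_traps_init(). The following is a rough,
from-memory sketch only; the exact body, and the opt_ler flag name, should be
checked against the tree at this commit:

void percpu_traps_init(void)
{
    /* (Re)writes the per-CPU SYSCALL entry stubs and the
     * SYSCALL/SYSENTER MSRs. */
    subarch_percpu_traps_init();

    /* Nothing more to do unless "ler" was requested on the command
     * line. */
    if ( !opt_ler )
        return;

    /* Vendor-specific selection of the last-branch-record MSRs and
     * enabling of LBR in MSR_IA32_DEBUGCTLMSR elided here. */
}

Calling this on the S3 BSP path therefore re-establishes the same state the
APs get via their normal bring-up path, which is the consistency the commit
message refers to.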
On 20.04.2020 16:59, Andrew Cooper wrote:
> @@ -46,24 +36,13 @@ void restore_rest_processor_state(void)
> /* Restore full CR4 (inc MCE) now that the IDT is in place. */
> write_cr4(mmu_cr4_features);
>
> - /* Recover syscall MSRs */
> - wrmsrl(MSR_LSTAR, saved_lstar);
> - wrmsrl(MSR_CSTAR, saved_cstar);
> - wrmsrl(MSR_STAR, XEN_MSR_STAR);
> - wrmsrl(MSR_SYSCALL_MASK, XEN_SYSCALL_MASK);
> + /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
> + percpu_traps_init();

Without tweaks to subarch_percpu_traps_init() this assumes that,
now and going forward, map_domain_page() will work reliably at
this early point of resume. I'm not opposed, i.e.
Acked-by: Jan Beulich <jbeulich@suse.com>
but it would feel less fragile if the redundant re-writing of
the stubs would be skipped in the BSP resume case (I didn't
check whether it's also redundant for APs, but I suspect it's
not, as the stub pages may get allocated anew).

Jan
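
The stub re-writing and map_domain_page() dependency referred to above live in
subarch_percpu_traps_init(). Below is a minimal, from-memory sketch of the
relevant part; helper and field names (stubs.addr/stubs.mfn,
write_stub_trampoline(), lstar_enter) are as remembered and may not match this
exact tree:

void subarch_percpu_traps_init(void)
{
    unsigned long stub_va = this_cpu(stubs.addr);
    unsigned char *stub_page;

    /* The map_domain_page() use in question: the per-CPU SYSCALL entry
     * stub is (re)written through a transient mapping of the stub page. */
    stub_page = map_domain_page(_mfn(this_cpu(stubs.mfn)));

    write_stub_trampoline(stub_page + (stub_va & ~PAGE_MASK), stub_va,
                          get_stack_bottom(), (unsigned long)lstar_enter);

    unmap_domain_page(stub_page);

    /* ... after which MSR_LSTAR/MSR_CSTAR are pointed at the stubs, and
     * MSR_STAR, MSR_SYSCALL_MASK plus (if cpu_has_sep) the SYSENTER MSRs
     * are written, i.e. everything restore_rest_processor_state() used
     * to reload by hand above. */
    wrmsrl(MSR_LSTAR, stub_va);
    /* Remaining MSR writes elided. */
}

On the BSP resume path the stub page already holds valid stubs, hence the
"redundant re-writing" remark; on APs the stubs may be written into
freshly-allocated pages, so the work is not redundant there.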
On 21/04/2020 08:24, Jan Beulich wrote:
> On 20.04.2020 16:59, Andrew Cooper wrote:
>> @@ -46,24 +36,13 @@ void restore_rest_processor_state(void)
>> /* Restore full CR4 (inc MCE) now that the IDT is in place. */
>> write_cr4(mmu_cr4_features);
>>
>> - /* Recover syscall MSRs */
>> - wrmsrl(MSR_LSTAR, saved_lstar);
>> - wrmsrl(MSR_CSTAR, saved_cstar);
>> - wrmsrl(MSR_STAR, XEN_MSR_STAR);
>> - wrmsrl(MSR_SYSCALL_MASK, XEN_SYSCALL_MASK);
>> + /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>> + percpu_traps_init();
> Without tweaks to subarch_percpu_traps_init() this assumes that,
> now and going forward, map_domain_page() will work reliably at
> this early point of resume. I'm not opposed, i.e.
> Acked-by: Jan Beulich <jbeulich@suse.com>
I think this is reasonable to expect, and robust going forward.
We are properly in d[IDLE]v0 context when it comes to pagetables, and
there is nothing interesting between this point and coming fully back
online.
That said, I could easily move the call to later in the resume path for
even more certainty.
diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 3ad7dfc9a3..d5a468eddd 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -297,6 +297,8 @@ static int enter_state(u32 state)
ci->spec_ctrl_flags |= (default_spec_ctrl_flags & SCF_ist_wrmsr);
spec_ctrl_exit_idle(ci);
+ /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
+ percpu_traps_init();
+
done:
spin_debug_enable();
local_irq_restore(flags);
In fact, I prefer this, because it works towards the low-priority goal
of deleting {save,restore}_rest_processor_state(), for which I've still
got a pending series.
Would your ack still stand if I tweak the patch in this way?
> but it would feel less fragile if the redundant re-writing of
> the stubs would be skipped in the BSP resume case (I didn't
> check whether it's also redundant for APs, but I suspect it's
> not, as the stub pages may get allocated anew).
I don't really agree. Symmetry (even if it is expected to be redundant)
is much easier to reason about in terms of robustness. S3 is not a
fast path.
~Andrew
On 23.04.2020 20:24, Andrew Cooper wrote:
> On 21/04/2020 08:24, Jan Beulich wrote:
>> On 20.04.2020 16:59, Andrew Cooper wrote:
>>> @@ -46,24 +36,13 @@ void restore_rest_processor_state(void)
>>> /* Restore full CR4 (inc MCE) now that the IDT is in place. */
>>> write_cr4(mmu_cr4_features);
>>>
>>> - /* Recover syscall MSRs */
>>> - wrmsrl(MSR_LSTAR, saved_lstar);
>>> - wrmsrl(MSR_CSTAR, saved_cstar);
>>> - wrmsrl(MSR_STAR, XEN_MSR_STAR);
>>> - wrmsrl(MSR_SYSCALL_MASK, XEN_SYSCALL_MASK);
>>> + /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>>> + percpu_traps_init();
>> Without tweaks to subarch_percpu_traps_init() this assumes that,
>> now and going forward, map_domain_page() will work reliably at
>> this early point of resume. I'm not opposed, i.e.
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> I think this is reasonable to expect, and robust going forward.
>
> We are properly in d[IDLE]v0 context when it comes to pagetables, and
> there is nothing interesting between this point and coming fully back
> online.
>
> That said, I could easily move the call to later in the resume path for
> even more certainty.
>
> diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
> index 3ad7dfc9a3..d5a468eddd 100644
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -297,6 +297,8 @@ static int enter_state(u32 state)
> ci->spec_ctrl_flags |= (default_spec_ctrl_flags & SCF_ist_wrmsr);
> spec_ctrl_exit_idle(ci);
>
> + /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
> + percpu_traps_init();
> +
> done:
> spin_debug_enable();
> local_irq_restore(flags);
>
> In fact, I prefer this, because it works towards the low-priority goal
> of deleting {save,restore}_rest_processor_state(), for which I've still
> got a pending series.
>
> Would your ack still stand if I tweak the patch in this way?
Yes.
Jan