[PATCH v2 0/4] x86: XSA-348 follow-up
Posted by Jan Beulich 3 years, 4 months ago
The changes for this XSA introduced an inefficiency and could
do with some further hardening. Addressing this wasn't sensible
as part of the XSA itself, though (but you may take this as an
explanation of why this series starts out as v2).

1: x86: verify function type (and maybe attribute) in switch_stack_and_jump()
2: x86: clobber registers in switch_stack_and_jump() when !LIVEPATCH
3: x86/PV: avoid double stack reset during schedule tail handling
4: livepatch: adjust a stale comment

Jan

[PATCH v2 1/4] x86: verify function type (and maybe attribute) in switch_stack_and_jump()
Posted by Jan Beulich 3 years, 4 months ago
It is imperative that the functions passed here take no arguments,
return no values, and don't return in the first place. While the type
can be checked uniformly, the attribute check is limited to gcc 9 and
newer (no clang support for this so far, afaict).
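
To illustrate what the two checks catch (a sketch with made-up
functions, relying on Xen's BUILD_BUG_ON(); not part of the patch):

  static void noreturn good(void) { for ( ; ; ); }
  static int bad(int x) { return x; }

  static void demo(void)
  {
      /* Type check: compare against a void (*)(void) null pointer. */
      (void)(good == (void (*)(void))NULL); /* fine - types match */
      (void)(bad == (void (*)(void))NULL);  /* "comparison of distinct
                                               pointer types" - fatal
                                               with -Werror */

      /* Attribute check, gcc 9 and newer only. */
      BUILD_BUG_ON(!__builtin_has_attribute(good, __noreturn__)); /* passes */
      BUILD_BUG_ON(!__builtin_has_attribute(bad, __noreturn__));  /* breaks
                                                                     the build */
  }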

Note that I didn't want to have the "true" fallback "implementation" of
__builtin_has_attribute(..., __noreturn__) generally available, as
"true" may not be a suitable fallback in other cases.

Note further that the noreturn addition to startup_cpu_idle_loop()'s
declaration requires adding unreachable() to Arm's
switch_stack_and_jump(), or else the build would break. I suppose this
should have been there already.
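
To illustrate the Arm issue (a simplified sketch, not code taken from
the tree): a noreturn function whose body ends in the jump needs the
compiler told that control stops there, along the lines of

  void noreturn startup_cpu_idle_loop(void)
  {
      ASSERT(is_idle_vcpu(current));
      reset_stack_and_jump(idle_loop);
      /*
       * gcc cannot see that the asm above never returns; without
       * unreachable() at the end of the macro it warns "'noreturn'
       * function does return", which -Werror turns into the build
       * break mentioned above.
       */
  }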

For vmx_asm_do_vmentry(), along with adding the attribute, also
restrict its scope.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
v2: Fix Arm build.

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -63,7 +63,7 @@
 #include <asm/monitor.h>
 #include <asm/xstate.h>
 
-void svm_asm_do_resume(void);
+void noreturn svm_asm_do_resume(void);
 
 u32 svm_feature_flags;
 
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1850,6 +1850,8 @@ void vmx_vmentry_failure(void)
     domain_crash(curr->domain);
 }
 
+void noreturn vmx_asm_do_vmentry(void);
+
 void vmx_do_resume(void)
 {
     struct vcpu *v = current;
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -619,7 +619,7 @@ static inline bool using_2M_mapping(void
            !l1_table_offset((unsigned long)__2M_rwdata_end);
 }
 
-static void noinline init_done(void)
+static void noreturn init_done(void)
 {
     void *va;
     unsigned long start, end;
--- a/xen/include/asm-arm/current.h
+++ b/xen/include/asm-arm/current.h
@@ -43,8 +43,10 @@ static inline struct cpu_info *get_cpu_i
 
 #define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
 
-#define switch_stack_and_jump(stack, fn)                                \
-    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" )
+#define switch_stack_and_jump(stack, fn) do {                           \
+    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" ); \
+    unreachable();                                                      \
+} while ( false )
 
 #define reset_stack_and_jump(fn) switch_stack_and_jump(get_cpu_info(), fn)
 
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -23,7 +23,7 @@ asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
 #include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
-void ret_from_intr(void);
+void noreturn ret_from_intr(void);
 
 /*
  * This output constraint should be used for any inline asm which has a "call"
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -155,9 +155,18 @@ unsigned long get_stack_dump_bottom (uns
 # define SHADOW_STACK_WORK ""
 #endif
 
+#if __GNUC__ >= 9
+# define ssaj_has_attr_noreturn(fn) __builtin_has_attribute(fn, __noreturn__)
+#else
+/* Simply can't check the property with older gcc. */
+# define ssaj_has_attr_noreturn(fn) true
+#endif
+
 #define switch_stack_and_jump(fn, instr, constr)                        \
     ({                                                                  \
         unsigned int tmp;                                               \
+        (void)((fn) == (void (*)(void))NULL);                           \
+        BUILD_BUG_ON(!ssaj_has_attr_noreturn(fn));                      \
         __asm__ __volatile__ (                                          \
             SHADOW_STACK_WORK                                           \
             "mov %[stk], %%rsp;"                                        \
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -93,7 +93,6 @@ typedef enum {
 #define PI_xAPIC_NDST_MASK      0xFF00
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
-void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
 void noreturn vmx_do_resume(void);
 void vmx_vlapic_msr_changed(struct vcpu *v);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -736,7 +736,7 @@ void sched_context_switched(struct vcpu
 void continue_running(
     struct vcpu *same);
 
-void startup_cpu_idle_loop(void);
+void noreturn startup_cpu_idle_loop(void);
 extern void (*pm_idle) (void);
 extern void (*dead_idle) (void);
 


Re: [PATCH v2 1/4] x86: verify function type (and maybe attribute) in switch_stack_and_jump()
Posted by Wei Liu 3 years, 4 months ago
On Tue, Dec 15, 2020 at 05:11:41PM +0100, Jan Beulich wrote:
> It is imperative that the functions passed here take no arguments,
> return no values, and don't return in the first place. While the type
> can be checked uniformly, the attribute check is limited to gcc 9 and
> newer (no clang support for this so far, afaict).
> 
> Note that I didn't want to have the "true" fallback "implementation" of
> __builtin_has_attribute(..., __noreturn__) generally available, as
> "true" may not be a suitable fallback in other cases.
> 
> Note further that the noreturn addition to startup_cpu_idle_loop()'s
> declaration requires adding unreachable() to Arm's
> switch_stack_and_jump(), or else the build would break. I suppose this
> should have been there already.
> 
> For vmx_asm_do_vmentry(), along with adding the attribute, also
> restrict its scope.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>

Re: [PATCH v2 1/4] x86: verify function type (and maybe attribute) in switch_stack_and_jump()
Posted by Julien Grall 3 years, 4 months ago
Hi Jan,

On 15/12/2020 16:11, Jan Beulich wrote:
> It is imperative that the functions passed here take no arguments,
> return no values, and don't return in the first place. While the type
> can be checked uniformly, the attribute check is limited to gcc 9 and
> newer (no clang support for this so far, afaict).
> 
> Note that I didn't want to have the "true" fallback "implementation" of
> __builtin_has_attribute(..., __noreturn__) generally available, as
> "true" may not be a suitable fallback in other cases.
> 
> Note further that the noreturn addition to startup_cpu_idle_loop()'s
> declaration requires adding unreachable() to Arm's
> switch_stack_and_jump(), or else the build would break. I suppose this
> should have been there already.
> 
> For vmx_asm_do_vmentry(), along with adding the attribute, also
> restrict its scope.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

For the Arm bits:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall

[PATCH v2 2/4] x86: clobber registers in switch_stack_and_jump() when !LIVEPATCH
Posted by Jan Beulich 3 years, 4 months ago
In order to have the same effect on registers as a call to
check_for_livepatch_work() may have, clobber all call-clobbered
registers in debug builds.
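
A side note (an observation, not spelled out in the patch): the set
poisoned here is exactly the call-clobbered GPRs of the SysV AMD64
ABI, and the constant is unusable as a pointer:

  /*
   * Call-clobbered GPRs per the SysV AMD64 ABI: rax, rcx, rdx, rsi,
   * rdi, and r8-r11 (rsp aside, which the stack switch rewrites
   * anyway).  0x1234567890abcdef is a non-canonical address, so any
   * dereference through a register wrongly assumed to survive the
   * jump faults immediately in debug builds, rather than silently
   * reading leftover data.
   */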

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -120,6 +120,14 @@ unsigned long get_stack_dump_bottom (uns
 
 #ifdef CONFIG_LIVEPATCH
 # define CHECK_FOR_LIVEPATCH_WORK "call check_for_livepatch_work;"
+#elif defined(CONFIG_DEBUG)
+/* Mimic the clobbering effect a call has on registers. */
+# define CHECK_FOR_LIVEPATCH_WORK \
+    "mov $0x1234567890abcdef, %%rax\n\t" \
+    "mov %%rax, %%rcx; mov %%rax, %%rdx\n\t" \
+    "mov %%rax, %%rsi; mov %%rax, %%rdi\n\t" \
+    "mov %%rax, %%r8; mov %%rax, %%r9\n\t" \
+    "mov %%rax, %%r10; mov %%rax, %%r11\n\t"
 #else
 # define CHECK_FOR_LIVEPATCH_WORK ""
 #endif


Re: [PATCH v2 2/4] x86: clobber registers in switch_stack_and_jump() when !LIVEPATCH
Posted by Wei Liu 3 years, 4 months ago
On Tue, Dec 15, 2020 at 05:12:12PM +0100, Jan Beulich wrote:
> In order to have the same effect on registers as a call to
> check_for_livepatch_work() may have, clobber all call-clobbered
> registers in debug builds.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>

[PATCH v2 3/4] x86/PV: avoid double stack reset during schedule tail handling
Posted by Jan Beulich 3 years, 4 months ago
Invoking check_wakeup_from_wait() from assembly allows the new
continue_pv_domain() to replace the prior continue_nonidle_domain() as
the tail hook, eliminating an extra reset_stack_and_jump().
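
To make the control-flow change concrete (a summary, assuming the tail
hook is itself entered via a reset_stack_and_jump() in the context
switch path):

  before:  .tail = continue_nonidle_domain
             reset_stack_and_jump(continue_nonidle_domain)   [reset 1]
               -> check_wakeup_from_wait()
               -> reset_stack_and_jump(ret_from_intr)        [reset 2]

  after:   .tail = continue_pv_domain
             reset_stack_and_jump(continue_pv_domain)        [reset 1]
               -> call check_wakeup_from_wait                (in entry.S)
               -> fall through into ret_from_intr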

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>

--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -110,12 +110,6 @@ static int parse_pcid(const char *s)
     return rc;
 }
 
-static void noreturn continue_nonidle_domain(void)
-{
-    check_wakeup_from_wait();
-    reset_stack_and_jump(ret_from_intr);
-}
-
 static int setup_compat_l4(struct vcpu *v)
 {
     struct page_info *pg;
@@ -341,13 +335,14 @@ void pv_domain_destroy(struct domain *d)
     FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
 }
 
+void noreturn continue_pv_domain(void);
 
 int pv_domain_initialise(struct domain *d)
 {
     static const struct arch_csw pv_csw = {
         .from = paravirt_ctxt_switch_from,
         .to   = paravirt_ctxt_switch_to,
-        .tail = continue_nonidle_domain,
+        .tail = continue_pv_domain,
     };
     int rc = -ENOMEM;
 
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -557,8 +557,10 @@ ENTRY(dom_crash_sync_extable)
         .text
 
 /* No special register assumptions. */
-ENTRY(ret_from_intr)
 #ifdef CONFIG_PV
+ENTRY(continue_pv_domain)
+        call  check_wakeup_from_wait
+ret_from_intr:
         GET_CURRENT(bx)
         testb $3, UREGS_cs(%rsp)
         jz    restore_all_xen
@@ -567,6 +569,7 @@ ENTRY(ret_from_intr)
         je    test_all_events
         jmp   compat_test_all_events
 #else
+ret_from_intr:
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
 #endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -23,7 +23,6 @@ asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
 #include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
-void noreturn ret_from_intr(void);
 
 /*
  * This output constraint should be used for any inline asm which has a "call"


Re: [PATCH v2 3/4] x86/PV: avoid double stack reset during schedule tail handling
Posted by Wei Liu 3 years, 4 months ago
On Tue, Dec 15, 2020 at 05:12:36PM +0100, Jan Beulich wrote:
> Invoking check_wakeup_from_wait() from assembly allows the new
> continue_pv_domain() to replace the prior continue_nonidle_domain() as
> the tail hook, eliminating an extra reset_stack_and_jump().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>

[PATCH v2 4/4] livepatch: adjust a stale comment
Posted by Jan Beulich 3 years, 4 months ago
As of 005de45c887e ("xen: do live patching only from main idle loop")
the comment ahead of livepatch_do_action() has been stale.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -1392,8 +1392,8 @@ static inline bool was_action_consistent
 }
 
 /*
- * This function is executed having all other CPUs with no deep stack (we may
- * have cpu_idle on it) and IRQs disabled.
+ * This function is executed having all other CPUs with no deep stack (when
+ * idle) and IRQs disabled.
  */
 static void livepatch_do_action(void)
 {


Re: [PATCH v2 4/4] livepatch: adjust a stale comment
Posted by Konrad Rzeszutek Wilk 3 years, 4 months ago
On Tue, Dec 15, 2020 at 05:13:43PM +0100, Jan Beulich wrote:
> As of 005de45c887e ("xen: do live patching only from main idle loop")
> the comment ahead of livepatch_do_action() has been stale.
> 
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -1392,8 +1392,8 @@ static inline bool was_action_consistent
>  }
>  
>  /*
> - * This function is executed having all other CPUs with no deep stack (we may
> - * have cpu_idle on it) and IRQs disabled.
> + * This function is executed having all other CPUs with no deep stack (when
> + * idle) and IRQs disabled.
>   */
>  static void livepatch_do_action(void)
>  {
>