[Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Andrew Cooper posted 2 patches 13 weeks ago

[Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Andrew Cooper 13 weeks ago
The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
assistance with instruction length.  As a result, any instruction-induced task
switch has the outgoing task's %eip pointing at the instruction that caused
the switch, rather than after it.

This causes explicit use of task gates to livelock (as when the task returns,
it executes the task-switching instruction again), and any restartable task to
become a nop after its first instantiation (the entry state points at the
ret/iret instruction used to exit the task).

32bit Windows in particular is known to use task gates for NMI handling, and
to use NMI IPIs.

In the task switch handler, distinguish instruction-induced from
interrupt/exception-induced task switches, and decode the instruction under
%rip to calculate its length.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>

The implementation of svm_get_task_switch_insn_len() is bug-compatible with
svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
time to address this more thoroughly.

AMD does permit TASK_SWITCH not to be intercepted and, I'm informed, does do
the right thing when it comes to a TSS crossing a page boundary.  However, it
is not actually safe to leave task switches unintercepted.  Any NPT or shadow
page fault, even from logdirty/paging/etc will corrupt guest state in an
unrecoverable manner.
---
 xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/svm/emulate.h |  1 +
 3 files changed, 92 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
index 3e52592847..176c25f60d 100644
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
 }
 
 /*
+ * TASK_SWITCH vmexits never provide an instruction length.  We must always
+ * decode under %rip to find the answer.
+ */
+unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
+{
+    struct hvm_emulate_ctxt ctxt;
+    struct x86_emulate_state *state;
+    unsigned int emul_len, modrm_reg;
+
+    ASSERT(v == current);
+    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
+    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
+    if ( IS_ERR_OR_NULL(state) )
+        return 0;
+
+    emul_len = x86_insn_length(state, &ctxt.ctxt);
+
+    /*
+     * Check for an instruction which can cause a task switch.  Any far
+     * jmp/call/ret, any software interrupt/exception, and iret.
+     */
+    switch ( ctxt.ctxt.opcode )
+    {
+    case 0xff: /* Grp 5 */
+        /* call / jmp (far, absolute indirect) */
+        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
+             (modrm_reg != 3 && modrm_reg != 5) )
+        {
+            /* Wrong instruction.  Throw #GP back for now. */
+    default:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            emul_len = 0;
+            break;
+        }
+        /* Fallthrough */
+    case 0x62: /* bound */
+    case 0x9a: /* call (far, absolute) */
+    case 0xca: /* ret imm16 (far) */
+    case 0xcb: /* ret (far) */
+    case 0xcc: /* int3 */
+    case 0xcd: /* int imm8 */
+    case 0xce: /* into */
+    case 0xcf: /* iret */
+    case 0xea: /* jmp (far, absolute) */
+    case 0xf1: /* icebp */
+        break;
+    }
+
+    x86_emulate_free_state(state);
+
+    return emul_len;
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 049b800e20..ba9c24a70c 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 
     case VMEXIT_TASK_SWITCH: {
         enum hvm_task_switch_reason reason;
-        int32_t errcode = -1;
+        int32_t errcode = -1, insn_len = -1;
+
+        /*
+         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
+         * never provided, even for instruction-induced task switches, but we
+         * need to know the instruction length in order to set %eip suitably
+         * in the outgoing TSS.
+         *
+         * For a task switch which vectored through the IDT, look at the type
+         * to distinguish interrupts/exceptions from instruction based
+         * switches.
+         */
+        if ( vmcb->eventinj.fields.v )
+        {
+            /*
+             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
+             * others are.
+             */
+            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
+                insn_len = 0;
+
+            /*
+             * Clobber the vectoring information, as we are going to emulate
+             * the task switch in full.
+             */
+            vmcb->eventinj.bytes = 0;
+        }
+
+        /*
+         * insn_len being -1 indicates that we have an instruction-induced
+         * task switch.  Decode under %rip to find its length.
+         */
+        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
+            break;
+
         if ( (vmcb->exitinfo2 >> 36) & 1 )
             reason = TSW_iret;
         else if ( (vmcb->exitinfo2 >> 38) & 1 )
@@ -2786,15 +2820,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         if ( (vmcb->exitinfo2 >> 44) & 1 )
             errcode = (uint32_t)vmcb->exitinfo2;
 
-        /*
-         * Some processors set the EXITINTINFO field when the task switch
-         * is caused by a task gate in the IDT. In this case we will be
-         * emulating the event injection, so we do not want the processor
-         * to re-inject the original event!
-         */
-        vmcb->eventinj.bytes = 0;
-
-        hvm_task_switch(vmcb->exitinfo1, reason, errcode, 0);
+        hvm_task_switch(vmcb->exitinfo1, reason, errcode, insn_len);
         break;
     }
 
diff --git a/xen/include/asm-x86/hvm/svm/emulate.h b/xen/include/asm-x86/hvm/svm/emulate.h
index 9af10061c5..d7364f774a 100644
--- a/xen/include/asm-x86/hvm/svm/emulate.h
+++ b/xen/include/asm-x86/hvm/svm/emulate.h
@@ -51,6 +51,7 @@
 struct vcpu;
 
 unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc);
+unsigned int svm_get_task_switch_insn_len(struct vcpu *v);
 
 #endif /* __ASM_X86_HVM_SVM_EMULATE_H__ */
 
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Roger Pau Monné 13 weeks ago
On Thu, Nov 21, 2019 at 10:15:51PM +0000, Andrew Cooper wrote:
> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
> assistance with instruction length.  As a result, any instruction-induced task
> switch has the outgoing task's %eip pointing at the instruction switch caused
                                                                  ^ that
> the switch, rather than after it.
> 
> This causes explicit use of task gates to livelock (as when the task returns,
> it executes the task-switching instruction again), and any restartable task to
> become a nop after its first instantiation (the entry state points at the
> ret/iret instruction used to exit the task).
> 
> 32bit Windows in particular is known to use task gates for NMI handling, and
> to use NMI IPIs.
> 
> In the task switch handler, distinguish instruction-induced from
> interrupt/exception-induced task switches, and decode the instruction under
> %rip to calculate its length.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> 
> The implementation of svm_get_task_switch_insn_len() is bug-compatible with
> svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
> time to address this more thoroughly.
> 
> AMD does permit TASK_SWITCH not to be intercepted and, I'm informed does do
> the right thing when it comes to a TSS crossing a page boundary.  However, it
> is not actually safe to leave task switches unintercepted.  Any NPT or shadow
> page fault, even from logdirty/paging/etc will corrupt guest state in an
> unrecoverable manner.
> ---
>  xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
>  xen/include/asm-x86/hvm/svm/emulate.h |  1 +
>  3 files changed, 92 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
> index 3e52592847..176c25f60d 100644
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>  }
>  
>  /*
> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
> + * decode under %rip to find the answer.
> + */
> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
> +{
> +    struct hvm_emulate_ctxt ctxt;
> +    struct x86_emulate_state *state;
> +    unsigned int emul_len, modrm_reg;
> +
> +    ASSERT(v == current);
> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
> +    if ( IS_ERR_OR_NULL(state) )

Maybe crash the guest in this case? Not advancing the instruction
pointer in a software induced task switch will create a loop AFAICT?

> +        return 0;
> +
> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
> +
> +    /*
> +     * Check for an instruction which can cause a task switch.  Any far
> +     * jmp/call/ret, any software interrupt/exception, and iret.
> +     */
> +    switch ( ctxt.ctxt.opcode )
> +    {
> +    case 0xff: /* Grp 5 */
> +        /* call / jmp (far, absolute indirect) */
> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
> +             (modrm_reg != 3 && modrm_reg != 5) )
> +        {
> +            /* Wrong instruction.  Throw #GP back for now. */
> +    default:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            emul_len = 0;
> +            break;
> +        }
> +        /* Fallthrough */
> +    case 0x62: /* bound */
> +    case 0x9a: /* call (far, absolute) */

I'm slightly lost here, in the case of call or jmp for example, don't
you need the instruction pointer to point to the destination of the
call/jmp instead of the next instruction?

> +    case 0xca: /* ret imm16 (far) */
> +    case 0xcb: /* ret (far) */
> +    case 0xcc: /* int3 */
> +    case 0xcd: /* int imm8 */
> +    case 0xce: /* into */
> +    case 0xcf: /* iret */
> +    case 0xea: /* jmp (far, absolute) */
> +    case 0xf1: /* icebp */
> +        break;
> +    }
> +
> +    x86_emulate_free_state(state);
> +
> +    return emul_len;
> +}
> +
> +/*
>   * Local variables:
>   * mode: C
>   * c-file-style: "BSD"
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 049b800e20..ba9c24a70c 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>  
>      case VMEXIT_TASK_SWITCH: {
>          enum hvm_task_switch_reason reason;
> -        int32_t errcode = -1;
> +        int32_t errcode = -1, insn_len = -1;

Plain int seem better for insn_len?

Also I'm not sure there's a reason that errcode uses int32_t, but
that's not introduced here anyway.

Thanks, Roger.


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Andrew Cooper 13 weeks ago
On 22/11/2019 13:59, Roger Pau Monné wrote:
> On Thu, Nov 21, 2019 at 10:15:51PM +0000, Andrew Cooper wrote:
>> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
>> assistance with instruction length.  As a result, any instruction-induced task
>> switch has the outgoing task's %eip pointing at the instruction switch caused
>                                                                   ^ that
>> the switch, rather than after it.
>>
>> This causes explicit use of task gates to livelock (as when the task returns,
>> it executes the task-switching instruction again), and any restartable task to
>> become a nop after its first instantiation (the entry state points at the
>> ret/iret instruction used to exit the task).
>>
>> 32bit Windows in particular is known to use task gates for NMI handling, and
>> to use NMI IPIs.
>>
>> In the task switch handler, distinguish instruction-induced from
>> interrupt/exception-induced task switches, and decode the instruction under
>> %rip to calculate its length.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Juergen Gross <jgross@suse.com>
>>
>> The implementation of svm_get_task_switch_insn_len() is bug-compatible with
>> svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
>> time to address this more thoroughly.
>>
>> AMD does permit TASK_SWITCH not to be intercepted and, I'm informed does do
>> the right thing when it comes to a TSS crossing a page boundary.  However, it
>> is not actually safe to leave task switches unintercepted.  Any NPT or shadow
>> page fault, even from logdirty/paging/etc will corrupt guest state in an
>> unrecoverable manner.
>> ---
>>  xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
>>  xen/include/asm-x86/hvm/svm/emulate.h |  1 +
>>  3 files changed, 92 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
>> index 3e52592847..176c25f60d 100644
>> --- a/xen/arch/x86/hvm/svm/emulate.c
>> +++ b/xen/arch/x86/hvm/svm/emulate.c
>> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>>  }
>>  
>>  /*
>> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
>> + * decode under %rip to find the answer.
>> + */
>> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
>> +{
>> +    struct hvm_emulate_ctxt ctxt;
>> +    struct x86_emulate_state *state;
>> +    unsigned int emul_len, modrm_reg;
>> +
>> +    ASSERT(v == current);
>> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
>> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
>> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
>> +    if ( IS_ERR_OR_NULL(state) )
> Maybe crash the guest in this case? Not advancing the instruction
> pointer in a software induced task switch will create a loop AFAICT?

Your analysis is correct, but crashing the guest would be a user=>kernel
DoS, which is worse than a livelock.

We do have some logic to try and cope with this in svm.c, and I think
I've got a better idea of how to make use of it.

>
>> +        return 0;
>> +
>> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
>> +
>> +    /*
>> +     * Check for an instruction which can cause a task switch.  Any far
>> +     * jmp/call/ret, any software interrupt/exception, and iret.
>> +     */
>> +    switch ( ctxt.ctxt.opcode )
>> +    {
>> +    case 0xff: /* Grp 5 */
>> +        /* call / jmp (far, absolute indirect) */
>> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
>> +             (modrm_reg != 3 && modrm_reg != 5) )
>> +        {
>> +            /* Wrong instruction.  Throw #GP back for now. */
>> +    default:
>> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> +            emul_len = 0;
>> +            break;
>> +        }
>> +        /* Fallthrough */
>> +    case 0x62: /* bound */
>> +    case 0x9a: /* call (far, absolute) */
> I'm slightly loss here, in the case of call or jmp for example, don't
> you need the instruction pointer to point to the destination of the
> call/jmp instead of the next instruction?

No, but that is by design.

Far calls provide a selector:offset pair (either imm or mem operands),
rather than a displacement within the same code segment.

The selector may be a new code selector, at which point the offset is
important, and execution continues at %cs:%rip.  This case isn't interesting
for us, and doesn't vmexit in the first place.

When the selector is a Task State Segment or Task Gate selector, a task
switch occurs (subject to cpl checks, etc).

In this case, the entrypoint of the new task is stashed in the new task's
TSS (cs and eip fields).  The offset from the original call/jmp
instruction is discarded as it isn't relevant.  (After all,
particularly on a privilege level transition task switch, you don't want
the unprivileged caller able to start executing from somewhere which
isn't the designated entrypoint.)

Just to complete the set, the selector may also be a Call Gate selector,
which is far lighter weight than a fully blown task switch, and whose
entry point is part of the Call Gate descriptor itself.

>> +    case 0xca: /* ret imm16 (far) */
>> +    case 0xcb: /* ret (far) */
>> +    case 0xcc: /* int3 */
>> +    case 0xcd: /* int imm8 */
>> +    case 0xce: /* into */
>> +    case 0xcf: /* iret */
>> +    case 0xea: /* jmp (far, absolute) */
>> +    case 0xf1: /* icebp */
>> +        break;
>> +    }
>> +
>> +    x86_emulate_free_state(state);
>> +
>> +    return emul_len;
>> +}
>> +
>> +/*
>>   * Local variables:
>>   * mode: C
>>   * c-file-style: "BSD"
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 049b800e20..ba9c24a70c 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>  
>>      case VMEXIT_TASK_SWITCH: {
>>          enum hvm_task_switch_reason reason;
>> -        int32_t errcode = -1;
>> +        int32_t errcode = -1, insn_len = -1;
> Plain int seem better for insn_len?
>
> Also I'm not sure there's a reason that errcode uses int32_t, but
> that's not introduced here anyway.

I was just using what was already here.  I'm not sure why it is int32_t
either, but this is consistent throughout the task switch infrastructure.

~Andrew


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Jan Beulich 13 weeks ago
On 21.11.2019 23:15, Andrew Cooper wrote:
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>  }
>  
>  /*
> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
> + * decode under %rip to find the answer.
> + */
> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
> +{
> +    struct hvm_emulate_ctxt ctxt;
> +    struct x86_emulate_state *state;
> +    unsigned int emul_len, modrm_reg;
> +
> +    ASSERT(v == current);

You look to be using v here just for this ASSERT() - is this really
worth it? By making the function take "void" it would be quite obvious
that it would act on the current vCPU only.

> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
> +    if ( IS_ERR_OR_NULL(state) )
> +        return 0;
> +
> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
> +
> +    /*
> +     * Check for an instruction which can cause a task switch.  Any far
> +     * jmp/call/ret, any software interrupt/exception, and iret.
> +     */
> +    switch ( ctxt.ctxt.opcode )
> +    {
> +    case 0xff: /* Grp 5 */
> +        /* call / jmp (far, absolute indirect) */
> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||

DYM "== 3", to bail upon non-memory operands?

> +             (modrm_reg != 3 && modrm_reg != 5) )
> +        {
> +            /* Wrong instruction.  Throw #GP back for now. */
> +    default:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            emul_len = 0;
> +            break;
> +        }
> +        /* Fallthrough */
> +    case 0x62: /* bound */

Does "bound" really belong on this list? It raising #BR is like
insns raising random other exceptions, not like INTO / INT3,
where the IDT descriptor also has to have suitable DPL for the
exception to actually get delivered (rather than #GP). I.e. it
shouldn't make it here in the first place, due to the
X86_EVENTTYPE_HW_EXCEPTION check in the caller.

IOW if "bound" needs to be here, then all others need to be as
well, unless they can't cause any exception at all.

> +    case 0x9a: /* call (far, absolute) */
> +    case 0xca: /* ret imm16 (far) */
> +    case 0xcb: /* ret (far) */
> +    case 0xcc: /* int3 */
> +    case 0xcd: /* int imm8 */
> +    case 0xce: /* into */
> +    case 0xcf: /* iret */
> +    case 0xea: /* jmp (far, absolute) */
> +    case 0xf1: /* icebp */

Same perhaps for ICEBP, albeit I'm less certain here, as its
behavior is too poorly documented (if at all).

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>  
>      case VMEXIT_TASK_SWITCH: {
>          enum hvm_task_switch_reason reason;
> -        int32_t errcode = -1;
> +        int32_t errcode = -1, insn_len = -1;
> +
> +        /*
> +         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
> +         * never provided, even for instruction-induced task switches, but we
> +         * need to know the instruction length in order to set %eip suitably
> +         * in the outgoing TSS.
> +         *
> +         * For a task switch which vectored through the IDT, look at the type
> +         * to distinguish interrupts/exceptions from instruction based
> +         * switches.
> +         */
> +        if ( vmcb->eventinj.fields.v )
> +        {
> +            /*
> +             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
> +             * others are.
> +             */
> +            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
> +                insn_len = 0;
> +
> +            /*
> +             * Clobber the vectoring information, as we are going to emulate
> +             * the task switch in full.
> +             */
> +            vmcb->eventinj.bytes = 0;
> +        }
> +
> +        /*
> +         * insn_len being -1 indicates that we have an instruction-induced
> +         * task switch.  Decode under %rip to find its length.
> +         */
> +        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
> +            break;

Won't this live-lock the guest? I.e. isn't it better to e.g. crash it
if svm_get_task_switch_insn_len() didn't raise #GP(0)?

Jan


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Andrew Cooper 13 weeks ago
On 22/11/2019 13:31, Jan Beulich wrote:
> On 21.11.2019 23:15, Andrew Cooper wrote:
>> --- a/xen/arch/x86/hvm/svm/emulate.c
>> +++ b/xen/arch/x86/hvm/svm/emulate.c
>> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>>  }
>>  
>>  /*
>> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
>> + * decode under %rip to find the answer.
>> + */
>> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
>> +{
>> +    struct hvm_emulate_ctxt ctxt;
>> +    struct x86_emulate_state *state;
>> +    unsigned int emul_len, modrm_reg;
>> +
>> +    ASSERT(v == current);
> You look to be using v here just for this ASSERT() - is this really
> worth it? By making the function take "void" it would be quite obvious
> that it would act on the current vCPU only.

This was cribbed largely from svm_get_insn_len(), which also behaves the
same.

>
>> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
>> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
>> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
>> +    if ( IS_ERR_OR_NULL(state) )
>> +        return 0;
>> +
>> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
>> +
>> +    /*
>> +     * Check for an instruction which can cause a task switch.  Any far
>> +     * jmp/call/ret, any software interrupt/exception, and iret.
>> +     */
>> +    switch ( ctxt.ctxt.opcode )
>> +    {
>> +    case 0xff: /* Grp 5 */
>> +        /* call / jmp (far, absolute indirect) */
>> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
> DYM "== 3", to bail upon non-memory operands?

Ah yes (and this demonstrates that I really need to get an XTF test
sorted out soon).

>
>> +             (modrm_reg != 3 && modrm_reg != 5) )
>> +        {
>> +            /* Wrong instruction.  Throw #GP back for now. */
>> +    default:
>> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> +            emul_len = 0;
>> +            break;
>> +        }
>> +        /* Fallthrough */
>> +    case 0x62: /* bound */
> Does "bound" really belong on this list? It raising #BR is like
> insns raising random other exceptions, not like INTO / INT3,
> where the IDT descriptor also has to have suitable DPL for the
> exception to actually get delivered (rather than #GP). I.e. it
> shouldn't make it here in the first place, due to the
> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>
> IOW if "bound" needs to be here, then all others need to be as
> well, unless they can't cause any exception at all.

More experimentation required.  BOUND doesn't appear to be special cased
by SVM, but is by VT-x.  VT-x however does throw it in the same category
as #UD, and identifies it as a hardware exception.

I suspect you are right, and it doesn't want to be here.

>> +    case 0x9a: /* call (far, absolute) */
>> +    case 0xca: /* ret imm16 (far) */
>> +    case 0xcb: /* ret (far) */
>> +    case 0xcc: /* int3 */
>> +    case 0xcd: /* int imm8 */
>> +    case 0xce: /* into */
>> +    case 0xcf: /* iret */
>> +    case 0xea: /* jmp (far, absolute) */
>> +    case 0xf1: /* icebp */
> Same perhaps for ICEBP, albeit I'm less certain here, as its
> behavior is too poorly documented (if at all).

ICEBP's #DB is a trap, not a fault, so instruction length is important.

>
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>  
>>      case VMEXIT_TASK_SWITCH: {
>>          enum hvm_task_switch_reason reason;
>> -        int32_t errcode = -1;
>> +        int32_t errcode = -1, insn_len = -1;
>> +
>> +        /*
>> +         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
>> +         * never provided, even for instruction-induced task switches, but we
>> +         * need to know the instruction length in order to set %eip suitably
>> +         * in the outgoing TSS.
>> +         *
>> +         * For a task switch which vectored through the IDT, look at the type
>> +         * to distinguish interrupts/exceptions from instruction based
>> +         * switches.
>> +         */
>> +        if ( vmcb->eventinj.fields.v )
>> +        {
>> +            /*
>> +             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
>> +             * others are.
>> +             */
>> +            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
>> +                insn_len = 0;
>> +
>> +            /*
>> +             * Clobber the vectoring information, as we are going to emulate
>> +             * the task switch in full.
>> +             */
>> +            vmcb->eventinj.bytes = 0;
>> +        }
>> +
>> +        /*
>> +         * insn_len being -1 indicates that we have an instruction-induced
>> +         * task switch.  Decode under %rip to find its length.
>> +         */
>> +        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
>> +            break;
> Won't this live-lock the guest?

Potentially, yes.

> I.e. isn't it better to e.g. crash it
> if svm_get_task_switch_insn_len() didn't raise #GP(0)?

No - that would need an XSA if we got it wrong, as none of these are
privileged instructions.

However, it occurs to me that we are in a position to use
svm_crash_or_fault(), so I'll respin with that in mind.

~Andrew


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Jan Beulich 13 weeks ago
On 22.11.2019 14:55, Andrew Cooper wrote:
> On 22/11/2019 13:31, Jan Beulich wrote:
>> On 21.11.2019 23:15, Andrew Cooper wrote:
>>> +        /* Fallthrough */
>>> +    case 0x62: /* bound */
>> Does "bound" really belong on this list? It raising #BR is like
>> insns raising random other exceptions, not like INTO / INT3,
>> where the IDT descriptor also has to have suitable DPL for the
>> exception to actually get delivered (rather than #GP). I.e. it
>> shouldn't make it here in the first place, due to the
>> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>>
>> IOW if "bound" needs to be here, then all others need to be as
>> well, unless they can't cause any exception at all.
> 
> More experimentation required.  BOUND doesn't appear to be special cased
> by SVM, but is by VT-x.  VT-x however does throw it in the same category
> as #UD, and identify it to be a hardware exception.
> 
> I suspect you are right, and t doesn't want to be here.
> 
>>> +    case 0x9a: /* call (far, absolute) */
>>> +    case 0xca: /* ret imm16 (far) */
>>> +    case 0xcb: /* ret (far) */
>>> +    case 0xcc: /* int3 */
>>> +    case 0xcd: /* int imm8 */
>>> +    case 0xce: /* into */
>>> +    case 0xcf: /* iret */
>>> +    case 0xea: /* jmp (far, absolute) */
>>> +    case 0xf1: /* icebp */
>> Same perhaps for ICEBP, albeit I'm less certain here, as its
>> behavior is too poorly documented (if at all).
> 
> ICEBP's #DB is a trap, not a fault, so instruction length is important.

Hmm, this may point at a bigger issue then: Single step and data
breakpoints are traps, too. But of course they can occur with
arbitrary insns. Do their intercepts occur with guest RIP already
updated? (They wouldn't currently make it here anyway because of
the X86_EVENTTYPE_HW_EXCEPTION check in the caller.) If they do,
are you sure ICEBP-#DB's doesn't?

Jan


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Andrew Cooper 13 weeks ago
On 22/11/2019 14:31, Jan Beulich wrote:
> On 22.11.2019 14:55, Andrew Cooper wrote:
>> On 22/11/2019 13:31, Jan Beulich wrote:
>>> On 21.11.2019 23:15, Andrew Cooper wrote:
>>>> +        /* Fallthrough */
>>>> +    case 0x62: /* bound */
>>> Does "bound" really belong on this list? It raising #BR is like
>>> insns raising random other exceptions, not like INTO / INT3,
>>> where the IDT descriptor also has to have suitable DPL for the
>>> exception to actually get delivered (rather than #GP). I.e. it
>>> shouldn't make it here in the first place, due to the
>>> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>>>
>>> IOW if "bound" needs to be here, then all others need to be as
>>> well, unless they can't cause any exception at all.
>> More experimentation required.  BOUND doesn't appear to be special cased
>> by SVM, but is by VT-x.  VT-x however does throw it in the same category
>> as #UD, and identify it to be a hardware exception.
>>
>> I suspect you are right, and t doesn't want to be here.
>>
>>>> +    case 0x9a: /* call (far, absolute) */
>>>> +    case 0xca: /* ret imm16 (far) */
>>>> +    case 0xcb: /* ret (far) */
>>>> +    case 0xcc: /* int3 */
>>>> +    case 0xcd: /* int imm8 */
>>>> +    case 0xce: /* into */
>>>> +    case 0xcf: /* iret */
>>>> +    case 0xea: /* jmp (far, absolute) */
>>>> +    case 0xf1: /* icebp */
>>> Same perhaps for ICEBP, albeit I'm less certain here, as its
>>> behavior is too poorly documented (if at all).
>> ICEBP's #DB is a trap, not a fault, so instruction length is important.
> Hmm, this may point at a bigger issue then: Single step and data
> breakpoints are traps, too. But of course they can occur with
> arbitrary insns. Do their intercepts occur with guest RIP already
> updated?

Based on other behaviour, I'm going to guess yes on SVM and no on VT-x.

We'll take the #DB intercept, re-inject, and should see a vectoring task
switch.  The type should match the re-inject, so will be SW_INT/EXC with
a length on VT-x, and be HW_EXCEPTION with no length on SVM.

Either way, I think the logic presented here will work correctly.

> (They wouldn't currently make it here anyway because of
> the X86_EVENTTYPE_HW_EXCEPTION check in the caller.) If they do,
> are you sure ICEBP-#DB's doesn't?

ICEBP itself doesn't get intercepted.  Only the resulting #DB does, which
will trigger a #DB-vectoring task switch, irrespective of its exact
origin.

~Andrew


Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task

Posted by Andrew Cooper 13 weeks ago
On 21/11/2019 22:15, Andrew Cooper wrote:
> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
> assistance with instruction length.  As a result, any instruction-induced task
> switch has the outgoing task's %eip pointing at the instruction switch caused
> the switch, rather than after it.
>
> This causes explicit use of task gates to livelock (as when the task returns,
> it executes the task-switching instruction again), and any restartable task to
> become a nop after its first instantiation (the entry state points at the
> ret/iret instruction used to exit the task).

FWIW, I've rewritten this paragraph as:

This causes callers of task gates to livelock (repeatedly execute the
call/jmp
to enter the task), and any restartable task to become a nop after its first
use (the (re)entry state points at the ret/iret used to exit the task).

~Andrew
