Date: Tue, 30 Aug 2022 23:16:01 +0000
Message-ID: <20220830231614.3580124-15-seanjc@google.com>
In-Reply-To: <20220830231614.3580124-1-seanjc@google.com>
References: <20220830231614.3580124-1-seanjc@google.com>
Subject: [PATCH v5 14/27] KVM: x86: Make kvm_queued_exception a properly named, visible struct
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson, Maxim Levitsky, Oliver Upton, Peter Shier

Move the definition of "struct kvm_queued_exception" out of kvm_vcpu_arch
in anticipation of adding a second instance in kvm_vcpu_arch to handle
exceptions that occur when vectoring an injected exception and are morphed
to VM-Exit instead of leading to #DF.  Opportunistically take advantage of
the churn to rename "nr" to "vector".

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h | 23 +++++-----
 arch/x86/kvm/svm/nested.c       | 47 ++++++++++---------
 arch/x86/kvm/svm/svm.c          | 14 +++---
 arch/x86/kvm/vmx/nested.c       | 42 +++++++++--------
 arch/x86/kvm/vmx/vmx.c          | 20 ++++-----
 arch/x86/kvm/x86.c              | 80 ++++++++++++++++-----------------
 arch/x86/kvm/x86.h              |  3 +-
 7 files changed, 113 insertions(+), 116 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 71b65b8bb8cc..624a0676a8f9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -639,6 +639,17 @@ struct kvm_vcpu_xen {
 	struct timer_list poll_timer;
 };
 
+struct kvm_queued_exception {
+	bool pending;
+	bool injected;
+	bool has_error_code;
+	u8 vector;
+	u32 error_code;
+	unsigned long payload;
+	bool has_payload;
+	u8 nested_apf;
+};
+
 struct kvm_vcpu_arch {
 	/*
 	 * rip and regs accesses must go through
@@ -737,16 +748,8 @@ struct kvm_vcpu_arch {
 
 	u8 event_exit_inst_len;
 
-	struct kvm_queued_exception {
-		bool pending;
-		bool injected;
-		bool has_error_code;
-		u8 nr;
-		u32 error_code;
-		unsigned long payload;
-		bool has_payload;
-		u8 nested_apf;
-	} exception;
+	/* Exceptions to be injected to the guest. */
+	struct kvm_queued_exception exception;
 
 	struct kvm_queued_interrupt {
 		bool injected;
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 76dcc8a3e849..8f991592d277 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -468,7 +468,7 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
 	unsigned int nr;
 
 	if (vcpu->arch.exception.injected) {
-		nr = vcpu->arch.exception.nr;
+		nr = vcpu->arch.exception.vector;
 		exit_int_info = nr | SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_EXEPT;
 
 		if (vcpu->arch.exception.has_error_code) {
@@ -1306,42 +1306,45 @@ int nested_svm_check_permissions(struct kvm_vcpu *vcpu)
 
 static bool nested_exit_on_exception(struct vcpu_svm *svm)
 {
-	unsigned int nr = svm->vcpu.arch.exception.nr;
+	unsigned int vector = svm->vcpu.arch.exception.vector;
 
-	return (svm->nested.ctl.intercepts[INTERCEPT_EXCEPTION] & BIT(nr));
+	return (svm->nested.ctl.intercepts[INTERCEPT_EXCEPTION] & BIT(vector));
 }
 
-static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm)
+static void nested_svm_inject_exception_vmexit(struct kvm_vcpu *vcpu)
 {
-	unsigned int nr = svm->vcpu.arch.exception.nr;
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
+	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 
-	vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
+	vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + ex->vector;
 	vmcb->control.exit_code_hi = 0;
 
-	if (svm->vcpu.arch.exception.has_error_code)
-		vmcb->control.exit_info_1 = svm->vcpu.arch.exception.error_code;
+	if (ex->has_error_code)
+		vmcb->control.exit_info_1 = ex->error_code;
 
 	/*
 	 * EXITINFO2 is undefined for all exception intercepts other
 	 * than #PF.
 	 */
-	if (nr == PF_VECTOR) {
-		if (svm->vcpu.arch.exception.nested_apf)
-			vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
-		else if (svm->vcpu.arch.exception.has_payload)
-			vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload;
+	if (ex->vector == PF_VECTOR) {
+		if (ex->nested_apf)
+			vmcb->control.exit_info_2 = vcpu->arch.apf.nested_apf_token;
+		else if (ex->has_payload)
+			vmcb->control.exit_info_2 = ex->payload;
 		else
-			vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
-	} else if (nr == DB_VECTOR) {
+			vmcb->control.exit_info_2 = vcpu->arch.cr2;
+	} else if (ex->vector == DB_VECTOR) {
 		/* See inject_pending_event.  */
-		kvm_deliver_exception_payload(&svm->vcpu);
-		if (svm->vcpu.arch.dr7 & DR7_GD) {
-			svm->vcpu.arch.dr7 &= ~DR7_GD;
-			kvm_update_dr7(&svm->vcpu);
+		kvm_deliver_exception_payload(vcpu, ex);
+
+		if (vcpu->arch.dr7 & DR7_GD) {
+			vcpu->arch.dr7 &= ~DR7_GD;
+			kvm_update_dr7(vcpu);
 		}
-	} else
-		WARN_ON(svm->vcpu.arch.exception.has_payload);
+	} else {
+		WARN_ON(ex->has_payload);
+	}
 
 	nested_svm_vmexit(svm);
 }
@@ -1379,7 +1382,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 			return -EBUSY;
 		if (!nested_exit_on_exception(svm))
 			return 0;
-		nested_svm_inject_exception_vmexit(svm);
+		nested_svm_inject_exception_vmexit(vcpu);
 		return 0;
 	}
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a9d3d5a5137f..dbd10d61f29d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -463,22 +463,20 @@ static int svm_update_soft_interrupt_rip(struct kvm_vcpu *vcpu)
 
 static void svm_inject_exception(struct kvm_vcpu *vcpu)
 {
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
 	struct vcpu_svm *svm = to_svm(vcpu);
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_error_code = vcpu->arch.exception.has_error_code;
-	u32 error_code = vcpu->arch.exception.error_code;
 
-	kvm_deliver_exception_payload(vcpu);
+	kvm_deliver_exception_payload(vcpu, ex);
 
-	if (kvm_exception_is_soft(nr) &&
+	if (kvm_exception_is_soft(ex->vector) &&
 	    svm_update_soft_interrupt_rip(vcpu))
 		return;
 
-	svm->vmcb->control.event_inj = nr
+	svm->vmcb->control.event_inj = ex->vector
 		| SVM_EVTINJ_VALID
-		| (has_error_code ? SVM_EVTINJ_VALID_ERR : 0)
+		| (ex->has_error_code ? SVM_EVTINJ_VALID_ERR : 0)
 		| SVM_EVTINJ_TYPE_EXEPT;
-	svm->vmcb->control.event_inj_err = error_code;
+	svm->vmcb->control.event_inj_err = ex->error_code;
 }
 
 static void svm_init_erratum_383(void)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 51005fef0148..cbbe62a84493 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -446,29 +446,27 @@ static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
  */
 static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit_qual)
 {
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
-	unsigned int nr = vcpu->arch.exception.nr;
-	bool has_payload = vcpu->arch.exception.has_payload;
-	unsigned long payload = vcpu->arch.exception.payload;
 
-	if (nr == PF_VECTOR) {
-		if (vcpu->arch.exception.nested_apf) {
+	if (ex->vector == PF_VECTOR) {
+		if (ex->nested_apf) {
 			*exit_qual = vcpu->arch.apf.nested_apf_token;
 			return 1;
 		}
-		if (nested_vmx_is_page_fault_vmexit(vmcs12,
-						    vcpu->arch.exception.error_code)) {
-			*exit_qual = has_payload ? payload : vcpu->arch.cr2;
+		if (nested_vmx_is_page_fault_vmexit(vmcs12, ex->error_code)) {
+			*exit_qual = ex->has_payload ? ex->payload : vcpu->arch.cr2;
 			return 1;
 		}
-	} else if (vmcs12->exception_bitmap & (1u << nr)) {
-		if (nr == DB_VECTOR) {
-			if (!has_payload) {
-				payload = vcpu->arch.dr6;
-				payload &= ~DR6_BT;
-				payload ^= DR6_ACTIVE_LOW;
+	} else if (vmcs12->exception_bitmap & (1u << ex->vector)) {
+		if (ex->vector == DB_VECTOR) {
+			if (ex->has_payload) {
+				*exit_qual = ex->payload;
+			} else {
+				*exit_qual = vcpu->arch.dr6;
+				*exit_qual &= ~DR6_BT;
+				*exit_qual ^= DR6_ACTIVE_LOW;
 			}
-			*exit_qual = payload;
 		} else
 			*exit_qual = 0;
 		return 1;
@@ -3718,7 +3716,7 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 		    is_double_fault(exit_intr_info))) {
 		vmcs12->idt_vectoring_info_field = 0;
 	} else if (vcpu->arch.exception.injected) {
-		nr = vcpu->arch.exception.nr;
+		nr = vcpu->arch.exception.vector;
 		idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
 
 		if (kvm_exception_is_soft(nr)) {
@@ -3822,11 +3820,11 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
 static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
 					       unsigned long exit_qual)
 {
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
+	u32 intr_info = ex->vector | INTR_INFO_VALID_MASK;
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
-	unsigned int nr = vcpu->arch.exception.nr;
-	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
-	if (vcpu->arch.exception.has_error_code) {
+	if (ex->has_error_code) {
 		/*
 		 * Intel CPUs do not generate error codes with bits 31:16 set,
 		 * and more importantly VMX disallows setting bits 31:16 in the
@@ -3836,11 +3834,11 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
 		 * generate "full" 32-bit error codes, so KVM allows userspace
 		 * to inject exception error codes with bits 31:16 set.
 		 */
-		vmcs12->vm_exit_intr_error_code = (u16)vcpu->arch.exception.error_code;
+		vmcs12->vm_exit_intr_error_code = (u16)ex->error_code;
 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
 	}
 
-	if (kvm_exception_is_soft(nr))
+	if (kvm_exception_is_soft(ex->vector))
 		intr_info |= INTR_TYPE_SOFT_EXCEPTION;
 	else
 		intr_info |= INTR_TYPE_HARD_EXCEPTION;
@@ -3871,7 +3869,7 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
 static inline unsigned long vmx_get_pending_dbg_trap(struct kvm_vcpu *vcpu)
 {
 	if (!vcpu->arch.exception.pending ||
-	    vcpu->arch.exception.nr != DB_VECTOR)
+	    vcpu->arch.exception.vector != DB_VECTOR)
 		return 0;
 
 	/* General Detect #DBs are always fault-like. */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index be4348fa176c..07c4246415e9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1659,7 +1659,7 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
 	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	     vcpu->arch.exception.vector == DB_VECTOR))
 		vmx->nested.mtf_pending = true;
 	else
 		vmx->nested.mtf_pending = false;
@@ -1686,15 +1686,13 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 
 static void vmx_inject_exception(struct kvm_vcpu *vcpu)
 {
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
+	u32 intr_info = ex->vector | INTR_INFO_VALID_MASK;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_error_code = vcpu->arch.exception.has_error_code;
-	u32 error_code = vcpu->arch.exception.error_code;
-	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
-	kvm_deliver_exception_payload(vcpu);
+	kvm_deliver_exception_payload(vcpu, ex);
 
-	if (has_error_code) {
+	if (ex->has_error_code) {
 		/*
 		 * Despite the error code being architecturally defined as 32
 		 * bits, and the VMCS field being 32 bits, Intel CPUs and thus
@@ -1705,21 +1703,21 @@ static void vmx_inject_exception(struct kvm_vcpu *vcpu)
 		 * the upper bits to avoid VM-Fail, losing information that
 		 * does't really exist is preferable to killing the VM.
 		 */
-		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, (u16)error_code);
+		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, (u16)ex->error_code);
 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
 	}
 
 	if (vmx->rmode.vm86_active) {
 		int inc_eip = 0;
-		if (kvm_exception_is_soft(nr))
+		if (kvm_exception_is_soft(ex->vector))
 			inc_eip = vcpu->arch.event_exit_inst_len;
-		kvm_inject_realmode_interrupt(vcpu, nr, inc_eip);
+		kvm_inject_realmode_interrupt(vcpu, ex->vector, inc_eip);
 		return;
 	}
 
 	WARN_ON_ONCE(vmx->emulation_required);
 
-	if (kvm_exception_is_soft(nr)) {
+	if (kvm_exception_is_soft(ex->vector)) {
 		vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
 			     vmx->vcpu.arch.event_exit_inst_len);
 		intr_info |= INTR_TYPE_SOFT_EXCEPTION;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 24b538b8b0ee..bed42a75b515 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -561,16 +561,13 @@ static int exception_type(int vector)
 	return EXCPT_FAULT;
 }
 
-void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
+void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu,
+				   struct kvm_queued_exception *ex)
 {
-	unsigned nr = vcpu->arch.exception.nr;
-	bool has_payload = vcpu->arch.exception.has_payload;
-	unsigned long payload = vcpu->arch.exception.payload;
-
-	if (!has_payload)
+	if (!ex->has_payload)
 		return;
 
-	switch (nr) {
+	switch (ex->vector) {
 	case DB_VECTOR:
 		/*
 		 * "Certain debug exceptions may clear bit 0-3.  The
@@ -595,8 +592,8 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 		 * So they need to be flipped for DR6.
 		 */
 		vcpu->arch.dr6 |= DR6_ACTIVE_LOW;
-		vcpu->arch.dr6 |= payload;
-		vcpu->arch.dr6 ^= payload & DR6_ACTIVE_LOW;
+		vcpu->arch.dr6 |= ex->payload;
+		vcpu->arch.dr6 ^= ex->payload & DR6_ACTIVE_LOW;
 
 		/*
 		 * The #DB payload is defined as compatible with the 'pending
@@ -607,12 +604,12 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 		vcpu->arch.dr6 &= ~BIT(12);
 		break;
 	case PF_VECTOR:
-		vcpu->arch.cr2 = payload;
+		vcpu->arch.cr2 = ex->payload;
 		break;
 	}
 
-	vcpu->arch.exception.has_payload = false;
-	vcpu->arch.exception.payload = 0;
+	ex->has_payload = false;
+	ex->payload = 0;
 }
 EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
 
@@ -651,17 +648,18 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 			vcpu->arch.exception.injected = false;
 		}
 		vcpu->arch.exception.has_error_code = has_error;
-		vcpu->arch.exception.nr = nr;
+		vcpu->arch.exception.vector = nr;
 		vcpu->arch.exception.error_code = error_code;
 		vcpu->arch.exception.has_payload = has_payload;
 		vcpu->arch.exception.payload = payload;
 		if (!is_guest_mode(vcpu))
-			kvm_deliver_exception_payload(vcpu);
+			kvm_deliver_exception_payload(vcpu,
+						      &vcpu->arch.exception);
 		return;
 	}
 
 	/* to check exception */
-	prev_nr = vcpu->arch.exception.nr;
+	prev_nr = vcpu->arch.exception.vector;
 	if (prev_nr == DF_VECTOR) {
 		/* triple fault -> shutdown */
 		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
@@ -679,7 +677,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		vcpu->arch.exception.pending = true;
 		vcpu->arch.exception.injected = false;
 		vcpu->arch.exception.has_error_code = true;
-		vcpu->arch.exception.nr = DF_VECTOR;
+		vcpu->arch.exception.vector = DF_VECTOR;
 		vcpu->arch.exception.error_code = 0;
 		vcpu->arch.exception.has_payload = false;
 		vcpu->arch.exception.payload = 0;
@@ -5015,25 +5013,24 @@ static int kvm_vcpu_ioctl_x86_set_mce(struct kvm_vcpu *vcpu,
 static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 					       struct kvm_vcpu_events *events)
 {
+	struct kvm_queued_exception *ex = &vcpu->arch.exception;
+
 	process_nmi(vcpu);
 
 	if (kvm_check_request(KVM_REQ_SMI, vcpu))
 		process_smi(vcpu);
 
 	/*
-	 * In guest mode, payload delivery should be deferred,
-	 * so that the L1 hypervisor can intercept #PF before
-	 * CR2 is modified (or intercept #DB before DR6 is
-	 * modified under nVMX). Unless the per-VM capability,
-	 * KVM_CAP_EXCEPTION_PAYLOAD, is set, we may not defer the delivery of
-	 * an exception payload and handle after a KVM_GET_VCPU_EVENTS. Since we
-	 * opportunistically defer the exception payload, deliver it if the
-	 * capability hasn't been requested before processing a
-	 * KVM_GET_VCPU_EVENTS.
+	 * In guest mode, payload delivery should be deferred if the exception
+	 * will be intercepted by L1, e.g. KVM should not modify CR2 if L1
+	 * intercepts #PF, ditto for DR6 and #DBs.  If the per-VM capability,
+	 * KVM_CAP_EXCEPTION_PAYLOAD, is not set, userspace may or may not
+	 * propagate the payload and so it cannot be safely deferred.  Deliver
+	 * the payload if the capability hasn't been requested.
 	 */
 	if (!vcpu->kvm->arch.exception_payload_enabled &&
-	    vcpu->arch.exception.pending && vcpu->arch.exception.has_payload)
-		kvm_deliver_exception_payload(vcpu);
+	    ex->pending && ex->has_payload)
+		kvm_deliver_exception_payload(vcpu, ex);
 
 	/*
 	 * The API doesn't provide the instruction length for software
@@ -5041,26 +5038,25 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	 * isn't advanced, we should expect to encounter the exception
 	 * again.
 	 */
-	if (kvm_exception_is_soft(vcpu->arch.exception.nr)) {
+	if (kvm_exception_is_soft(ex->vector)) {
 		events->exception.injected = 0;
 		events->exception.pending = 0;
 	} else {
-		events->exception.injected = vcpu->arch.exception.injected;
-		events->exception.pending = vcpu->arch.exception.pending;
+		events->exception.injected = ex->injected;
+		events->exception.pending = ex->pending;
 		/*
 		 * For ABI compatibility, deliberately conflate
 		 * pending and injected exceptions when
 		 * KVM_CAP_EXCEPTION_PAYLOAD isn't enabled.
 		 */
 		if (!vcpu->kvm->arch.exception_payload_enabled)
-			events->exception.injected |=
-				vcpu->arch.exception.pending;
+			events->exception.injected |= ex->pending;
 	}
-	events->exception.nr = vcpu->arch.exception.nr;
-	events->exception.has_error_code = vcpu->arch.exception.has_error_code;
-	events->exception.error_code = vcpu->arch.exception.error_code;
-	events->exception_has_payload = vcpu->arch.exception.has_payload;
-	events->exception_payload = vcpu->arch.exception.payload;
+	events->exception.nr = ex->vector;
+	events->exception.has_error_code = ex->has_error_code;
+	events->exception.error_code = ex->error_code;
+	events->exception_has_payload = ex->has_payload;
+	events->exception_payload = ex->payload;
 
 	events->interrupt.injected =
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
@@ -5132,7 +5128,7 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	process_nmi(vcpu);
 	vcpu->arch.exception.injected = events->exception.injected;
 	vcpu->arch.exception.pending = events->exception.pending;
-	vcpu->arch.exception.nr = events->exception.nr;
+	vcpu->arch.exception.vector = events->exception.nr;
 	vcpu->arch.exception.has_error_code = events->exception.has_error_code;
 	vcpu->arch.exception.error_code = events->exception.error_code;
 	vcpu->arch.exception.has_payload = events->exception_has_payload;
@@ -9706,7 +9702,7 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu)
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
-	trace_kvm_inj_exception(vcpu->arch.exception.nr,
+	trace_kvm_inj_exception(vcpu->arch.exception.vector,
 				vcpu->arch.exception.has_error_code,
 				vcpu->arch.exception.error_code,
 				vcpu->arch.exception.injected);
@@ -9778,12 +9774,12 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 		 * describe the behavior of General Detect #DBs, which are
 		 * fault-like.  They do _not_ set RF, a la code breakpoints.
 		 */
-		if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
+		if (exception_type(vcpu->arch.exception.vector) == EXCPT_FAULT)
 			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
 					     X86_EFLAGS_RF);
 
-		if (vcpu->arch.exception.nr == DB_VECTOR) {
-			kvm_deliver_exception_payload(vcpu);
+		if (vcpu->arch.exception.vector == DB_VECTOR) {
+			kvm_deliver_exception_payload(vcpu, &vcpu->arch.exception);
 			if (vcpu->arch.dr7 & DR7_GD) {
 				vcpu->arch.dr7 &= ~DR7_GD;
 				kvm_update_dr7(vcpu);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1926d2cb8e79..4147d27f9fbc 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -286,7 +286,8 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu,
 
 int handle_ud(struct kvm_vcpu *vcpu);
 
-void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu);
+void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu,
+				   struct kvm_queued_exception *ex);
 
 void kvm_vcpu_mtrr_init(struct kvm_vcpu *vcpu);
 u8 kvm_mtrr_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn);
-- 
2.37.2.672.g94769d06f0-goog