From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Dave Hansen, x86@kernel.org, linux-kernel@vger.kernel.org,
    "H. Peter Anvin", Sean Christopherson, Paolo Bonzini, Ingo Molnar,
    Thomas Gleixner, Borislav Petkov, Maxim Levitsky
Subject: [PATCH v2 1/4] KVM: x86: refactor req_immediate_exit logic
Date: Sun, 24 Sep 2023 15:44:07 +0300
Message-Id: <20230924124410.897646-2-mlevitsk@redhat.com>
In-Reply-To: <20230924124410.897646-1-mlevitsk@redhat.com>
References: <20230924124410.897646-1-mlevitsk@redhat.com>

- Move the req_immediate_exit variable from arch-specific to common code.
- Remove the arch-specific callback .request_immediate_exit and move its
  code down into each vendor's vcpu_run path.

No functional change is intended.
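For orientation, below is a minimal userspace model of the refactored flow.
It is an illustration only, not the kernel code in the diff that follows:
the stub types and the printf stand in for the real KVM structures and for
the rescheduling IPI, while the flag handling mirrors the patch.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real KVM structures (model only). */
struct kvm_vcpu_arch { bool req_immediate_exit; };
struct kvm_vcpu { int cpu; struct kvm_vcpu_arch arch; };

/* Models the rescheduling self-IPI that forces the immediate exit. */
static void send_reschedule_ipi(int cpu)
{
	printf("IPI to cpu %d -> guest exits right after VM entry\n", cpu);
}

/* Common-code side: instead of an out parameter, set the per-vCPU flag. */
static void common_request_immediate_exit(struct kvm_vcpu *vcpu)
{
	vcpu->arch.req_immediate_exit = true;
}

/* Vendor vcpu_run side: consumes the flag where the old
 * .request_immediate_exit callback used to be invoked, and clears it
 * once the exit has happened. */
static void vendor_vcpu_run(struct kvm_vcpu *vcpu)
{
	if (vcpu->arch.req_immediate_exit)
		send_reschedule_ipi(vcpu->cpu);
	/* ... enter the guest, take the exit ... */
	vcpu->arch.req_immediate_exit = false;
}

int main(void)
{
	struct kvm_vcpu vcpu = { .cpu = 0 };

	common_request_immediate_exit(&vcpu);
	vendor_vcpu_run(&vcpu);
	return 0;
}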
Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    |  5 ++---
 arch/x86/kvm/svm/svm.c             |  5 +++--
 arch/x86/kvm/vmx/vmx.c             | 18 ++++++-----------
 arch/x86/kvm/vmx/vmx.h             |  2 --
 arch/x86/kvm/x86.c                 | 31 +++++++++++++-----------------
 6 files changed, 24 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index e3054e3e46d52d..f654a7f4cc8c0c 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -101,7 +101,6 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
 KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
 KVM_X86_OP_OPTIONAL(vcpu_blocking)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 17715cb8731d5d..383a1d0cc0743b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1011,6 +1011,8 @@ struct kvm_vcpu_arch {
 	 */
 	bool pdptrs_from_userspace;
 
+	bool req_immediate_exit;
+
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t hv_root_tdp;
 #endif
@@ -1690,8 +1692,6 @@ struct kvm_x86_ops {
 			       struct x86_exception *exception);
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
 
-	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
-
 	void (*sched_in)(struct kvm_vcpu *kvm, int cpu);
 
 	/*
@@ -2176,7 +2176,6 @@ extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
 void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9507df93f410a6..60b130b7f9d510 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4176,6 +4176,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);
 
+	if (vcpu->arch.req_immediate_exit)
+		smp_send_reschedule(vcpu->cpu);
+
 	kvm_wait_lapic_expire(vcpu);
 
 	/*
@@ -5004,8 +5007,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
 
-	.request_immediate_exit = __kvm_request_immediate_exit,
-
 	.sched_in = svm_sched_in,
 
 	.nested_ops = &svm_nested_ops,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f36935c..eb7e42235e8811 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -67,6 +67,8 @@
 #include "x86.h"
 #include "smm.h"
 
+#include
+
 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
 
@@ -1288,8 +1290,6 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	u16 fs_sel, gs_sel;
 	int i;
 
-	vmx->req_immediate_exit = false;
-
 	/*
 	 * Note that guest MSRs to be saved/restored can also be changed
 	 * when guest state is loaded. This happens when guest transitions
@@ -5996,7 +5996,7 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (!vmx->req_immediate_exit &&
+	if (!vcpu->arch.req_immediate_exit &&
 	    !unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled)) {
 		kvm_lapic_expired_hv_timer(vcpu);
 		return EXIT_FASTPATH_REENTER_GUEST;
@@ -7154,7 +7154,7 @@ static void vmx_update_hv_timer(struct kvm_vcpu *vcpu)
 	u64 tscl;
 	u32 delta_tsc;
 
-	if (vmx->req_immediate_exit) {
+	if (vcpu->arch.req_immediate_exit) {
 		vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, 0);
 		vmx->loaded_vmcs->hv_timer_soft_disabled = false;
 	} else if (vmx->hv_deadline_tsc != -1) {
@@ -7357,6 +7357,8 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	if (enable_preemption_timer)
 		vmx_update_hv_timer(vcpu);
+	else if (vcpu->arch.req_immediate_exit)
+		smp_send_reschedule(vcpu->cpu);
 
 	kvm_wait_lapic_expire(vcpu);
 
@@ -7902,11 +7904,6 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
 }
 
-static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	to_vmx(vcpu)->req_immediate_exit = true;
-}
-
 static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 				  struct x86_instruction_info *info)
 {
@@ -8315,8 +8312,6 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
-
 	.sched_in = vmx_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
@@ -8574,7 +8569,6 @@ static __init int hardware_setup(void)
 	if (!enable_preemption_timer) {
 		vmx_x86_ops.set_hv_timer = NULL;
 		vmx_x86_ops.cancel_hv_timer = NULL;
-		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
 	}
 
 	kvm_caps.supported_mce_cap |= MCG_LMCE_P;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c2130d2c8e24bb..4dabd16a3d7180 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -330,8 +330,6 @@ struct vcpu_vmx {
 	unsigned int ple_window;
 	bool ple_window_dirty;
 
-	bool req_immediate_exit;
-
 	/* Support for PML */
 #define PML_ENTITY_NUM 512
 	struct page *pml_pg;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f18b06bbda66b..dfb7d25ed94f26 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10049,8 +10049,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  * ordering between that side effect, the instruction completing, _and_ the
  * delivery of the asynchronous event.
  */
-static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
-				       bool *req_immediate_exit)
+static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu)
 {
 	bool can_inject;
 	int r;
@@ -10227,8 +10226,9 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
 
 	if (is_guest_mode(vcpu) &&
 	    kvm_x86_ops.nested_ops->has_events &&
-	    kvm_x86_ops.nested_ops->has_events(vcpu))
-		*req_immediate_exit = true;
+	    kvm_x86_ops.nested_ops->has_events(vcpu)) {
+		vcpu->arch.req_immediate_exit = true;
+	}
 
 	/*
 	 * KVM must never queue a new exception while injecting an event; KVM
@@ -10245,10 +10245,9 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
 	WARN_ON_ONCE(vcpu->arch.exception.pending ||
 		     vcpu->arch.exception_vmexit.pending);
 	return 0;
-
 out:
 	if (r == -EBUSY) {
-		*req_immediate_exit = true;
+		vcpu->arch.req_immediate_exit = true;
 		r = 0;
 	}
 	return r;
@@ -10475,12 +10474,6 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
 }
 
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	smp_send_reschedule(vcpu->cpu);
-}
-EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit);
-
 /*
  * Called within kvm->srcu read side.
  * Returns 1 to let vcpu_run() continue the guest execution loop without
@@ -10495,7 +10488,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_cpu_accept_dm_intr(vcpu);
 	fastpath_t exit_fastpath;
 
-	bool req_immediate_exit = false;
 
 	if (kvm_request_pending(vcpu)) {
 		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
@@ -10657,7 +10649,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	r = kvm_check_and_inject_events(vcpu, &req_immediate_exit);
+	r = kvm_check_and_inject_events(vcpu);
 	if (r < 0) {
 		r = 0;
 		goto out;
@@ -10726,10 +10718,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
-	if (req_immediate_exit) {
+
+	if (vcpu->arch.req_immediate_exit)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		static_call(kvm_x86_request_immediate_exit)(vcpu);
-	}
 
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
@@ -10761,6 +10752,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
 
 		exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
+
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
@@ -10776,6 +10768,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		++vcpu->stat.exits;
 	}
 
+	vcpu->arch.req_immediate_exit = false;
 	/*
 	 * Do this here before restoring debug registers on the host. And
	 * since we do this before handling the vmexit, a DR access vmexit
@@ -10863,8 +10856,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	return r;
 
 cancel_injection:
-	if (req_immediate_exit)
+	if (vcpu->arch.req_immediate_exit) {
+		vcpu->arch.req_immediate_exit = false;
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	}
 	static_call(kvm_x86_cancel_injection)(vcpu);
 	if (unlikely(vcpu->arch.apic_attention))
 		kvm_lapic_sync_from_vapic(vcpu);
-- 
2.26.3

From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Dave Hansen, x86@kernel.org, linux-kernel@vger.kernel.org,
    "H. Peter Anvin", Sean Christopherson, Paolo Bonzini, Ingo Molnar,
    Thomas Gleixner, Borislav Petkov, Maxim Levitsky
Subject: [PATCH v2 2/4] KVM: x86: add more information to the kvm_entry tracepoint
Date: Sun, 24 Sep 2023 15:44:08 +0300
Message-Id: <20230924124410.897646-3-mlevitsk@redhat.com>
In-Reply-To: <20230924124410.897646-1-mlevitsk@redhat.com>
References: <20230924124410.897646-1-mlevitsk@redhat.com>

Add:
- A flag showing that the VM is in guest mode on entry.
- A flag showing that an immediate VM exit is set to happen after
  the entry.
- VMX/SVM-specific interrupt injection info (as in a VM exit).
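Purely for illustration, the standalone program below prints a sample line
using the same format string that the new TP_printk() in this patch uses;
all values are made up, not captured trace output.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	unsigned int vcpu_id = 0;                 /* hypothetical values */
	unsigned long rip = 0xffffffff81000000UL;
	unsigned int inj_info = 0x800000ec;
	unsigned int inj_info_err = 0;
	bool req_imm_exit = false;
	bool guest_mode = true;

	/* same format string as the TP_printk() added below */
	printf("vcpu %u, rip 0x%lx inj 0x%08x inj_error_code 0x%08x%s%s\n",
	       vcpu_id, rip, inj_info, inj_info_err,
	       req_imm_exit ? " [imm exit]" : "",
	       guest_mode ? " [guest]" : "");
	return 0;
}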
Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  5 ++++-
 arch/x86/kvm/svm/svm.c             | 17 +++++++++++++++++
 arch/x86/kvm/trace.h               | 19 +++++++++++++++++--
 arch/x86/kvm/vmx/vmx.c             | 12 ++++++++++++
 5 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index f654a7f4cc8c0c..346fed6e3c33aa 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -99,6 +99,7 @@ KVM_X86_OP(get_l2_tsc_multiplier)
 KVM_X86_OP(write_tsc_offset)
 KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
+KVM_X86_OP(get_entry_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
 KVM_X86_OP(sched_in)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 383a1d0cc0743b..321721813474f7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1679,13 +1679,16 @@ struct kvm_x86_ops {
 	void (*write_tsc_multiplier)(struct kvm_vcpu *vcpu);
 
 	/*
-	 * Retrieve somewhat arbitrary exit information. Intended to
+	 * Retrieve somewhat arbitrary exit/entry information. Intended to
 	 * be used only from within tracepoints or error paths.
 	 */
 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u32 *reason,
 			      u64 *info1, u64 *info2,
 			      u32 *exit_int_info, u32 *exit_int_info_err_code);
 
+	void (*get_entry_info)(struct kvm_vcpu *vcpu,
+			       u32 *inj_info, u32 *inj_info_error_code);
+
 	int (*check_intercept)(struct kvm_vcpu *vcpu,
 			       struct x86_instruction_info *info,
 			       enum x86_intercept_stage stage,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 60b130b7f9d510..cd65c04be3d0e2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3504,6 +3504,22 @@ static void svm_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 		*error_code = 0;
 }
 
+static void svm_get_entry_info(struct kvm_vcpu *vcpu,
+			       u32 *inj_info,
+			       u32 *inj_info_error_code)
+{
+	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
+
+	*inj_info = control->event_inj;
+
+	if ((*inj_info & SVM_EXITINTINFO_VALID) &&
+	    (*inj_info & SVM_EXITINTINFO_VALID_ERR))
+		*inj_info_error_code = control->event_inj_err;
+	else
+		*inj_info_error_code = 0;
+
+}
+
 static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4992,6 +5008,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.required_apicv_inhibits = AVIC_REQUIRED_APICV_INHIBITS,
 
 	.get_exit_info = svm_get_exit_info,
+	.get_entry_info = svm_get_entry_info,
 
 	.vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
 
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 83843379813ee3..f4c56f59f5c11b 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -21,14 +21,29 @@ TRACE_EVENT(kvm_entry,
 	TP_STRUCT__entry(
 		__field( unsigned int, vcpu_id )
 		__field( unsigned long, rip )
-	),
+		__field( u32, inj_info )
+		__field( u32, inj_info_err )
+		__field( bool, guest_mode )
+		__field( bool, req_imm_exit )
+	),
 
 	TP_fast_assign(
 		__entry->vcpu_id = vcpu->vcpu_id;
 		__entry->rip = kvm_rip_read(vcpu);
+
+		static_call(kvm_x86_get_entry_info)(vcpu,
+						    &__entry->inj_info,
+						    &__entry->inj_info_err);
+
+		__entry->req_imm_exit = vcpu->arch.req_immediate_exit;
+		__entry->guest_mode = is_guest_mode(vcpu);
 	),
 
-	TP_printk("vcpu %u, rip 0x%lx", __entry->vcpu_id, __entry->rip)
+	TP_printk("vcpu %u, rip 0x%lx inj 0x%08x inj_error_code 0x%08x%s%s",
+		  __entry->vcpu_id, __entry->rip,
+		  __entry->inj_info, __entry->inj_info_err,
+		  __entry->req_imm_exit ? " [imm exit]" : "",
+		  __entry->guest_mode ? " [guest]" : "")
 );
 
 /*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index eb7e42235e8811..9dd13f52d4999c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6156,6 +6156,17 @@ static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 	}
 }
 
+static void vmx_get_entry_info(struct kvm_vcpu *vcpu,
+			       u32 *inj_info,
+			       u32 *inj_info_error_code)
+{
+	*inj_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);
+	if (is_exception_with_error_code(*inj_info))
+		*inj_info_error_code = vmcs_read32(VM_ENTRY_EXCEPTION_ERROR_CODE);
+	else
+		*inj_info_error_code = 0;
+}
+
 static void vmx_destroy_pml_buffer(struct vcpu_vmx *vmx)
 {
 	if (vmx->pml_pg) {
@@ -8297,6 +8308,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.get_mt_mask = vmx_get_mt_mask,
 
 	.get_exit_info = vmx_get_exit_info,
+	.get_entry_info = vmx_get_entry_info,
 
 	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
 
-- 
2.26.3

From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Dave Hansen, x86@kernel.org, linux-kernel@vger.kernel.org,
    "H. Peter Anvin", Sean Christopherson, Paolo Bonzini, Ingo Molnar,
    Thomas Gleixner, Borislav Petkov, Maxim Levitsky
Subject: [PATCH v2 3/4] KVM: x86: add more information to kvm_exit tracepoint
Date: Sun, 24 Sep 2023 15:44:09 +0300
Message-Id: <20230924124410.897646-4-mlevitsk@redhat.com>
In-Reply-To: <20230924124410.897646-1-mlevitsk@redhat.com>
References: <20230924124410.897646-1-mlevitsk@redhat.com>

Add:
- A flag that shows that the VM is in guest mode.
- A bitmap of pending KVM requests.
- A flag showing that this VM exit is due to a request for an
  immediate VM exit.
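A note on consuming the new "requests" field: it is the raw vcpu->requests
bitmap, and mapping a bit back to a KVM_REQ_* name requires the headers of
the kernel being traced, since the numeric values differ between versions.
Below is a minimal decoder sketch; the sample value is hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Print which bits are set in a kvm_exit "requests" value.  Only the raw
 * bit numbers are printed; look up the matching KVM_REQ_* names in the
 * headers of the traced kernel tree. */
static void dump_requests(uint64_t requests)
{
	for (unsigned int bit = 0; bit < 64; bit++)
		if (requests & (1ULL << bit))
			printf("request bit %u pending\n", bit);
}

int main(void)
{
	dump_requests(0x0000000000000005ULL);	/* hypothetical sample value */
	return 0;
}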
" [imm exit]" : "") \ ) =20 /* --=20 2.26.3 From nobody Fri Feb 13 14:04:20 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2F49CE7A8F for ; Sun, 24 Sep 2023 12:45:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229770AbjIXMqE (ORCPT ); Sun, 24 Sep 2023 08:46:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229632AbjIXMp7 (ORCPT ); Sun, 24 Sep 2023 08:45:59 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9FDB10D for ; Sun, 24 Sep 2023 05:44:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695559469; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RG2dgZRGZETUP2WAWYwTl0IqdCaYPeuaDthozK2XncQ=; b=EYCyk7byJB8BVQ9rHmp5/OoEqCGJsV4tE5+Um1VXJizeC/6tdEe0LBL4Ezjh3duaackSuy GAj+0xQjSGu4bgbxM6Kjgfhbj8WiRV5RQ3LKRhV5osWlu718bXnHYL9C7tejqZOZ15EsMi 4f0c473TUAON6NwSMEM4pwRJS1YdwFw= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-461-GejbuHlGNWefbidFW3DvAQ-1; Sun, 24 Sep 2023 08:44:25 -0400 X-MC-Unique: GejbuHlGNWefbidFW3DvAQ-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 680DC29AA2C5; Sun, 24 Sep 2023 12:44:25 +0000 (UTC) Received: from localhost.localdomain (unknown [10.45.226.141]) by smtp.corp.redhat.com (Postfix) with ESMTP id 1EFA340C6EA8; Sun, 24 Sep 2023 12:44:22 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Dave Hansen , x86@kernel.org, linux-kernel@vger.kernel.org, "H. Peter Anvin" , Sean Christopherson , Paolo Bonzini , Ingo Molnar , Thomas Gleixner , Borislav Petkov , Maxim Levitsky Subject: [PATCH v2 4/4] KVM: x86: add new nested vmexit tracepoints Date: Sun, 24 Sep 2023 15:44:10 +0300 Message-Id: <20230924124410.897646-5-mlevitsk@redhat.com> In-Reply-To: <20230924124410.897646-1-mlevitsk@redhat.com> References: <20230924124410.897646-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add 3 new tracepoints for nested VM exits which are intended to capture extra information to gain insights about the nested guest behavior. The new tracepoints are: - kvm_nested_msr - kvm_nested_hypercall These tracepoints capture extra register state to be able to know which MSR or which hypercall was done. - kvm_nested_page_fault This tracepoint allows to capture extra info about which host pagefault error code caused the nested page fault. 
Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/nested.c       | 22 +++++++++
 arch/x86/kvm/trace.h            | 82 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/nested.c       | 21 +++++++++
 arch/x86/kvm/vmx/vmx.c          |  1 +
 arch/x86/kvm/x86.c              |  3 ++
 6 files changed, 127 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 321721813474f7..64c195fb8789f7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -966,6 +966,7 @@ struct kvm_vcpu_arch {
 
 	/* set at EPT violation at this point */
 	unsigned long exit_qualification;
+	u32 ept_fault_error_code;
 
 	/* pv related host specific info */
 	struct {
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dd496c9e5f91f2..1cd9c3ab60ab3a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -38,6 +38,8 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
+	u64 host_error_code = vmcb->control.exit_info_1;
+
 
 	if (vmcb->control.exit_code != SVM_EXIT_NPF) {
 		/*
@@ -48,11 +50,15 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 		vmcb->control.exit_code_hi = 0;
 		vmcb->control.exit_info_1 = (1ULL << 32);
 		vmcb->control.exit_info_2 = fault->address;
+		host_error_code = 0;
 	}
 
 	vmcb->control.exit_info_1 &= ~0xffffffffULL;
 	vmcb->control.exit_info_1 |= fault->error_code;
 
+	trace_kvm_nested_page_fault(fault->address, host_error_code,
+				    fault->error_code);
+
 	nested_svm_vmexit(svm);
 }
 
@@ -1139,6 +1145,22 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 					       vmcb12->control.exit_int_info_err,
 					       KVM_ISA_SVM);
 
+	/* Collect some info about nested VM exits */
+	switch (vmcb12->control.exit_code) {
+	case SVM_EXIT_MSR:
+		trace_kvm_nested_msr(vmcb12->control.exit_info_1 == 1,
+				     kvm_rcx_read(vcpu),
+				     (vmcb12->save.rax & -1u) |
+				     (((u64)(kvm_rdx_read(vcpu) & -1u) << 32)));
+		break;
+	case SVM_EXIT_VMMCALL:
+		trace_kvm_nested_hypercall(vmcb12->save.rax,
+					   kvm_rbx_read(vcpu),
+					   kvm_rcx_read(vcpu),
+					   kvm_rdx_read(vcpu));
+		break;
+	}
+
 	kvm_vcpu_unmap(vcpu, &map, true);
 
 	nested_svm_transition_tlb_flush(vcpu);
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 0657a3a348b4ae..d08ae87c536324 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -620,7 +620,7 @@ TRACE_EVENT(kvm_pv_eoi,
 );
 
 /*
- * Tracepoint for nested VMRUN
+ * Tracepoint for nested VMRUN/VMENTER
  */
 TRACE_EVENT(kvm_nested_vmenter,
 	TP_PROTO(__u64 rip, __u64 vmcb, __u64 nested_rip, __u32 int_ctl,
@@ -753,8 +753,84 @@ TRACE_EVENT(kvm_nested_intr_vmexit,
 	TP_printk("rip: 0x%016llx", __entry->rip)
 );
 
+
 /*
- * Tracepoint for nested #vmexit because of interrupt pending
+ * Tracepoint for nested guest MSR access.
+ */
+TRACE_EVENT(kvm_nested_msr,
+	TP_PROTO(bool write, u32 ecx, u64 data),
+	TP_ARGS(write, ecx, data),
+
+	TP_STRUCT__entry(
+		__field( bool, write )
+		__field( u32, ecx )
+		__field( u64, data )
+	),
+
+	TP_fast_assign(
+		__entry->write = write;
+		__entry->ecx = ecx;
+		__entry->data = data;
+	),
+
+	TP_printk("msr_%s %x = 0x%llx",
+		  __entry->write ? "write" : "read",
+		  __entry->ecx, __entry->data)
+);
+
+/*
+ * Tracepoint for nested hypercalls, capturing generic info about the
+ * hypercall
+ */
+
+TRACE_EVENT(kvm_nested_hypercall,
+	TP_PROTO(u64 rax, u64 rbx, u64 rcx, u64 rdx),
+	TP_ARGS(rax, rbx, rcx, rdx),
+
+	TP_STRUCT__entry(
+		__field( u64, rax )
+		__field( u64, rbx )
+		__field( u64, rcx )
+		__field( u64, rdx )
+	),
+
+	TP_fast_assign(
+		__entry->rax = rax;
+		__entry->rbx = rbx;
+		__entry->rcx = rcx;
+		__entry->rdx = rdx;
+	),
+
+	TP_printk("rax 0x%llx rbx 0x%llx rcx 0x%llx rdx 0x%llx",
+		  __entry->rax, __entry->rbx, __entry->rcx, __entry->rdx)
+);
+
+
+TRACE_EVENT(kvm_nested_page_fault,
+	TP_PROTO(u64 gpa, u64 host_error_code, u64 guest_error_code),
+	TP_ARGS(gpa, host_error_code, guest_error_code),
+
+	TP_STRUCT__entry(
+		__field( u64, gpa )
+		__field( u64, host_error_code )
+		__field( u64, guest_error_code )
+	),
+
+	TP_fast_assign(
+		__entry->gpa = gpa;
+		__entry->host_error_code = host_error_code;
+		__entry->guest_error_code = guest_error_code;
+	),
+
+	TP_printk("gpa 0x%llx host err 0x%llx guest err 0x%llx",
+		  __entry->gpa,
+		  __entry->host_error_code,
+		  __entry->guest_error_code)
+);
+
+
+/*
+ * Tracepoint for invlpga
  */
 TRACE_EVENT(kvm_invlpga,
 	TP_PROTO(__u64 rip, int asid, u64 address),
@@ -777,7 +853,7 @@ TRACE_EVENT(kvm_invlpga,
 );
 
 /*
- * Tracepoint for nested #vmexit because of interrupt pending
+ * Tracepoint for skinit
  */
 TRACE_EVENT(kvm_skinit,
 	TP_PROTO(__u64 rip, __u32 slb),
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c5ec0ef51ff78f..b3b89d5152cd39 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -402,6 +402,10 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
 		 */
 		nested_ept_invalidate_addr(vcpu, vmcs12->ept_pointer,
 					   fault->address);
+
+		trace_kvm_nested_page_fault(fault->address,
+					    vcpu->arch.ept_fault_error_code,
+					    fault->error_code);
 	}
 
 	nested_vmx_vmexit(vcpu, vm_exit_reason, 0, exit_qualification);
@@ -4877,6 +4881,23 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 					       vmcs12->vm_exit_intr_error_code,
 					       KVM_ISA_VMX);
 
+	switch ((u16)vmcs12->vm_exit_reason) {
+	case EXIT_REASON_MSR_READ:
+	case EXIT_REASON_MSR_WRITE:
+		trace_kvm_nested_msr(vmcs12->vm_exit_reason == EXIT_REASON_MSR_WRITE,
+				     kvm_rcx_read(vcpu),
+				     (kvm_rax_read(vcpu) & -1u) |
+				     (((u64)(kvm_rdx_read(vcpu) & -1u) << 32)));
+		break;
+	case EXIT_REASON_VMCALL:
+		trace_kvm_nested_hypercall(kvm_rax_read(vcpu),
+					   kvm_rbx_read(vcpu),
+					   kvm_rcx_read(vcpu),
+					   kvm_rdx_read(vcpu));
+		break;
+
+	}
+
 	load_vmcs12_host_state(vcpu, vmcs12);
 
 	return;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9dd13f52d4999c..05fadcd38fde75 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5773,6 +5773,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
 	vcpu->arch.exit_qualification = exit_qualification;
+	vcpu->arch.ept_fault_error_code = error_code;
 
 	/*
 	 * Check that the GPA doesn't exceed physical memory limits, as that is
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dfb7d25ed94f26..766d0dc333eac3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13637,6 +13637,9 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmenter);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmexit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmexit_inject);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_intr_vmexit);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_hypercall);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_page_fault);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_msr);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmenter_failed);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_invlpga);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_skinit);
-- 
2.26.3