From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [PATCH v11 084/113] KVM: TDX: Add a placeholder to handle TDX VM exit
Date: Thu, 12 Jan 2023 08:32:32 -0800

From: Isaku Yamahata <isaku.yamahata@intel.com>

Wire up the handle_exit and handle_exit_irqoff methods and add a
placeholder to handle TDX VM exits.  Add helper functions to retrieve
the exit information: for a TD, the exit qualification, extended exit
qualification, guest physical address and interrupt information are
reported in guest GPRs (RCX, RDX, R8 and R9) at TD exit rather than in
VMCS fields the VMM can read, so the helpers simply read those
registers.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini
---
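Not part of the patch, just context for reviewers: until individual exit
reasons are wired up, an unexpected or non-recoverable TD exit is punted
to userspace as KVM_EXIT_INTERNAL_ERROR.  A minimal sketch of how a VMM
could consume that report, assuming only the uapi <linux/kvm.h> layout
(the function name and message wording are illustrative, not from this
series):

  #include <stdio.h>
  #include <linux/kvm.h>

  /*
   * tdx_handle_exit() stores the raw TDX exit reason in internal.data[0]
   * and the last VM-entry CPU in internal.data[1].
   */
  void report_unhandled_td_exit(const struct kvm_run *run)
  {
          if (run->exit_reason != KVM_EXIT_INTERNAL_ERROR ||
              run->internal.suberror != KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON)
                  return;

          fprintf(stderr, "unhandled TD exit: reason 0x%llx, last vm-entry cpu %llu\n",
                  (unsigned long long)run->internal.data[0],
                  (unsigned long long)run->internal.data[1]);
  }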
 arch/x86/kvm/vmx/main.c    | 33 ++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 88 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h | 10 +++++
 3 files changed, 128 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 2b31ae11f46d..f9339d8f95eb 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -166,6 +166,23 @@ static bool vt_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	return tdx_protected_apic_has_interrupt(vcpu);
 }
 
+static int vt_handle_exit(struct kvm_vcpu *vcpu,
+			  enum exit_fastpath_completion fastpath)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_handle_exit(vcpu, fastpath);
+
+	return vmx_handle_exit(vcpu, fastpath);
+}
+
+static void vt_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_handle_exit_irqoff(vcpu);
+
+	vmx_handle_exit_irqoff(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -367,6 +384,16 @@ static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
 	vmx_request_immediate_exit(vcpu);
 }
 
+static void vt_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+			     u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_get_exit_info(vcpu, reason, info1, info2, intr_info,
+					 error_code);
+
+	return vmx_get_exit_info(vcpu, reason, info1, info2, intr_info, error_code);
+}
+
 static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	if (is_td_vcpu(vcpu))
@@ -452,7 +479,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.vcpu_pre_run = vt_vcpu_pre_run,
 	.vcpu_run = vt_vcpu_run,
-	.handle_exit = vmx_handle_exit,
+	.handle_exit = vt_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
 	.set_interrupt_shadow = vt_set_interrupt_shadow,
@@ -487,7 +514,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_identity_map_addr = vmx_set_identity_map_addr,
 	.get_mt_mask = vt_get_mt_mask,
 
-	.get_exit_info = vmx_get_exit_info,
+	.get_exit_info = vt_get_exit_info,
 
 	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
 
@@ -501,7 +528,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.load_mmu_pgd = vt_load_mmu_pgd,
 
 	.check_intercept = vmx_check_intercept,
-	.handle_exit_irqoff = vmx_handle_exit_irqoff,
+	.handle_exit_irqoff = vt_handle_exit_irqoff,
 
 	.request_immediate_exit = vt_request_immediate_exit,
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 95c9a1906b62..964154f7bc60 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -65,6 +65,26 @@ static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 	return pa | ((hpa_t)hkid << boot_cpu_data.x86_phys_bits);
 }
 
+static __always_inline unsigned long tdexit_exit_qual(struct kvm_vcpu *vcpu)
+{
+	return kvm_rcx_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_ext_exit_qual(struct kvm_vcpu *vcpu)
+{
+	return kvm_rdx_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_gpa(struct kvm_vcpu *vcpu)
+{
+	return kvm_r8_read(vcpu);
+}
+
+static __always_inline unsigned long tdexit_intr_info(struct kvm_vcpu *vcpu)
+{
+	return kvm_r9_read(vcpu);
+}
+
 static inline bool is_td_vcpu_created(struct vcpu_tdx *tdx)
 {
 	return tdx->tdvpr_pa;
@@ -710,6 +730,25 @@ void tdx_inject_nmi(struct kvm_vcpu *vcpu)
 	td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
 }
 
+void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	u16 exit_reason = tdx->exit_reason.basic;
+
+	if (exit_reason == EXIT_REASON_EXCEPTION_NMI)
+		vmx_handle_exception_nmi_irqoff(vcpu, tdexit_intr_info(vcpu));
+	else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
+		vmx_handle_external_interrupt_irqoff(vcpu,
+						     tdexit_intr_info(vcpu));
+}
+
+static int tdx_handle_triple_fault(struct kvm_vcpu *vcpu)
+{
+	vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
+	vcpu->mmio_needed = 0;
+	return 0;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
@@ -1036,6 +1075,55 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
 }
 
+int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
+{
+	union tdx_exit_reason exit_reason = to_tdx(vcpu)->exit_reason;
+
+	/* See the comment of tdh_sept_seamcall(). */
+	if (unlikely(exit_reason.full == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_SEPT)))
+		return 1;
+
+	if (unlikely(exit_reason.non_recoverable || exit_reason.error)) {
+		if (exit_reason.basic == EXIT_REASON_TRIPLE_FAULT)
+			return tdx_handle_triple_fault(vcpu);
+
+		kvm_pr_unimpl("TD exit 0x%llx, %d hkid 0x%x hkid pa 0x%llx\n",
+			      exit_reason.full, exit_reason.basic,
+			      to_kvm_tdx(vcpu->kvm)->hkid,
+			      set_hkid_to_hpa(0, to_kvm_tdx(vcpu->kvm)->hkid));
+		goto unhandled_exit;
+	}
+
+	WARN_ON_ONCE(fastpath != EXIT_FASTPATH_NONE);
+
+	switch (exit_reason.basic) {
+	default:
+		break;
+	}
+
+unhandled_exit:
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
+	vcpu->run->internal.ndata = 2;
+	vcpu->run->internal.data[0] = exit_reason.full;
+	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+	return 0;
+}
+
+void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		       u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	*reason = tdx->exit_reason.full;
+
+	*info1 = tdexit_exit_qual(vcpu);
+	*info2 = tdexit_ext_exit_qual(vcpu);
+
+	*intr_info = tdexit_intr_info(vcpu);
+	*error_code = 0;
+}
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index a05ae400f1ae..38fd5c3eee2f 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -161,11 +161,16 @@ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
+void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu);
+int tdx_handle_exit(struct kvm_vcpu *vcpu,
+		    enum exit_fastpath_completion fastpath);
 u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 			   int trig_mode, int vector);
 void tdx_inject_nmi(struct kvm_vcpu *vcpu);
+void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
+		       u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -195,11 +200,16 @@ static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { return false; }
+static inline void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu) {}
+static inline int tdx_handle_exit(struct kvm_vcpu *vcpu,
+				  enum exit_fastpath_completion fastpath) { return 0; }
 static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 
 static inline void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 					 int trig_mode, int vector) {}
 static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
+static inline void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason, u64 *info1,
+				     u64 *info2, u32 *intr_info, u32 *error_code) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1