From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Subject: [RFC PATCH v5 072/104] KVM: TDX: handle vcpu migration over logical processor
Date: Fri, 4 Mar 2022 11:49:28 -0800
Message-Id: <3dd2729b27d1db696c61c7f7acf5a2c8ecaa1502.1646422845.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

For vcpu migration, in the case of VMX, the VMCS is flushed on the source pCPU and loaded on the target pCPU. There are corresponding TDX SEAMCALL APIs; call them on vcpu migration. The logic is mostly the same as VMX, except that the TDX SEAMCALLs are used.
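The part that differs from a plain function call is that the flush has to run on the pCPU the vCPU last ran on, so it is dispatched there as a synchronous IPI. Below is a minimal sketch of that dispatch pattern with illustrative names (the patch's real helpers are tdx_flush_vp() and tdx_flush_vp_on_cpu(); the flush step itself is elided here):

	/*
	 * Sketch only: flush a vCPU's per-CPU state on the pCPU it last
	 * ran on, then mark it as no longer associated with any pCPU.
	 */
	static void example_flush_vp(void *arg)
	{
		struct kvm_vcpu *vcpu = arg;

		/* Task migration can race with CPU offlining; recheck on the IPI. */
		if (vcpu->cpu != raw_smp_processor_id())
			return;

		/* Flush the per-CPU vCPU context here (TDH.VP.FLUSH for TDX). */

		/* Order the flush and list update before publishing vcpu->cpu == -1. */
		smp_wmb();
		vcpu->cpu = -1;
	}

	static void example_flush_vp_on_cpu(struct kvm_vcpu *vcpu)
	{
		if (unlikely(vcpu->cpu == -1))
			return;

		/* Run example_flush_vp() on the last pCPU and wait for completion. */
		smp_call_function_single(vcpu->cpu, example_flush_vp, vcpu, 1);
	}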
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/main.c    | 20 +++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 51 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f9d43f2de145..2cd5ba0e8788 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -121,6 +121,14 @@ static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
 	return vmx_vcpu_run(vcpu);
 }
 
+static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_load(vcpu, cpu);
+
+	return vmx_vcpu_load(vcpu, cpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -162,6 +170,14 @@ static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
 }
 
+static void vt_sched_in(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_sched_in(vcpu, cpu);
+}
+
 static int vt_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -199,7 +215,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_reset = vt_vcpu_reset,
 
 	.prepare_guest_switch = vt_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
+	.vcpu_load = vt_vcpu_load,
 	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
@@ -285,7 +301,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.request_immediate_exit = vmx_request_immediate_exit,
 
-	.sched_in = vmx_sched_in,
+	.sched_in = vt_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
 	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 37cf7d43435d..a6b1a8ce888d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -85,6 +85,18 @@ static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
 	return kvm_tdx->finalized;
 }
 
+static inline void tdx_disassociate_vp(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Ensure tdx->cpu_list is updated before setting vcpu->cpu to -1,
+	 * otherwise, a different CPU can see vcpu->cpu = -1 and add the vCPU
+	 * to its list before it's deleted from this CPU's list.
+	 */
+	smp_wmb();
+
+	vcpu->cpu = -1;
+}
+
 static void tdx_clear_page(unsigned long page)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -155,6 +167,39 @@ static void tdx_reclaim_td_page(struct tdx_td_page *page)
 	free_page(page->va);
 }
 
+static void tdx_flush_vp(void *arg)
+{
+	struct kvm_vcpu *vcpu = arg;
+	u64 err;
+
+	/* Task migration can race with CPU offlining. */
+	if (vcpu->cpu != raw_smp_processor_id())
+		return;
+
+	/*
+	 * No need to do TDH_VP_FLUSH if the vCPU hasn't been initialized. The
+	 * list tracking still needs to be updated so that it's correct if/when
+	 * the vCPU does get initialized.
+	 */
+	if (is_td_vcpu_created(to_tdx(vcpu))) {
+		err = tdh_vp_flush(to_tdx(vcpu)->tdvpr.pa);
+		if (unlikely(err && err != TDX_VCPU_NOT_ASSOCIATED)) {
+			if (WARN_ON_ONCE(err))
+				pr_tdx_error(TDH_VP_FLUSH, err, NULL);
+		}
+	}
+
+	tdx_disassociate_vp(vcpu);
+}
+
+static void tdx_flush_vp_on_cpu(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(vcpu->cpu == -1))
+		return;
+
+	smp_call_function_single(vcpu->cpu, tdx_flush_vp, vcpu, 1);
+}
+
 static int tdx_do_tdh_phymem_cache_wb(void *param)
 {
 	u64 err = 0;
@@ -425,6 +470,12 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (vcpu->cpu != cpu)
+		tdx_flush_vp_on_cpu(vcpu);
+}
+
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 8b871c5f52cf..ceafd6e18f4e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -143,6 +143,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -166,6 +167,7 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1