From: Isaku Yamahata <isaku.yamahata@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Subject: [RFC PATCH v5 065/104] KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
Date: Fri, 4 Mar 2022 11:49:21 -0800
Message-Id: <47bfde64180fc00ed236a2e13b25423c984a0eef.1646422845.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

The CPU state that is preserved or clobbered when entering/exiting a TDX
vCPU differs from the VMX case.  Add TDX hooks to save/restore host and
guest CPU state.  For now, save/restore the kernel GS base MSR.

Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
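Note: the main.c hunks below follow the dispatch pattern used throughout
vmx/main.c: a vt_* wrapper checks is_td_vcpu() and forwards to the TDX or
VMX implementation.  As a standalone illustration of that shape (struct
vcpu, its is_td field, and the stand-in function bodies are hypothetical,
not the kernel code), compiled as ordinary user-space C:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct kvm_vcpu; only the type flag matters. */
struct vcpu {
	bool is_td;
};

static bool is_td_vcpu(struct vcpu *vcpu)
{
	return vcpu->is_td;
}

/* Stand-ins for the real tdx_/vmx_ implementations. */
static void tdx_prepare_switch_to_guest(struct vcpu *vcpu)
{
	(void)vcpu;
	puts("tdx path");
}

static void vmx_prepare_switch_to_guest(struct vcpu *vcpu)
{
	(void)vcpu;
	puts("vmx path");
}

/* The vt_* wrapper: dispatch on the vCPU type, as in vmx/main.c. */
static void vt_prepare_switch_to_guest(struct vcpu *vcpu)
{
	if (is_td_vcpu(vcpu)) {
		tdx_prepare_switch_to_guest(vcpu);
		return;
	}

	vmx_prepare_switch_to_guest(vcpu);
}

int main(void)
{
	struct vcpu td = { .is_td = true };
	struct vcpu vmx = { .is_td = false };

	vt_prepare_switch_to_guest(&td);	/* prints "tdx path" */
	vt_prepare_switch_to_guest(&vmx);	/* prints "vmx path" */
	return 0;
}
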
 arch/x86/kvm/vmx/main.c    | 28 ++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 39 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |  4 ++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 2e5a7a72d560..f9d43f2de145 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -89,6 +89,30 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static void vt_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * All host state is saved/restored across SEAMCALL/SEAMRET, and the
+	 * guest state of a TD is obviously off limits.  Deferring MSRs and DRs
+	 * is pointless because the TDX module needs to load *something* so as
+	 * not to expose guest state.
+	 */
+	if (is_td_vcpu(vcpu)) {
+		tdx_prepare_switch_to_guest(vcpu);
+		return;
+	}
+
+	vmx_prepare_switch_to_guest(vcpu);
+}
+
+static void vt_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_put(vcpu);
+
+	return vmx_vcpu_put(vcpu);
+}
+
 static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -174,9 +198,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_free = vt_vcpu_free,
 	.vcpu_reset = vt_vcpu_reset,
 
-	.prepare_guest_switch = vmx_prepare_switch_to_guest,
+	.prepare_guest_switch = vt_prepare_switch_to_guest,
 	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
+	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
 	.get_msr_feature = vmx_get_msr_feature,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index ebe4f9bf19e7..7a288aae03ba 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/cpu.h>
+#include <linux/mmu_context.h>
 
 #include <asm/tdx.h>
 
@@ -407,6 +408,9 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_state_protected =
 		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
 
+	tdx->host_state_need_save = true;
+	tdx->host_state_need_restore = false;
+
 	return 0;
 
 free_tdvpx:
@@ -420,6 +424,39 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (!tdx->host_state_need_save)
+		return;
+
+	if (likely(is_64bit_mm(current->mm)))
+		tdx->msr_host_kernel_gs_base = current->thread.gsbase;
+	else
+		tdx->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
+
+	tdx->host_state_need_save = false;
+}
+
+static void tdx_prepare_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	tdx->host_state_need_save = true;
+	if (!tdx->host_state_need_restore)
+		return;
+
+	wrmsrl(MSR_KERNEL_GS_BASE, tdx->msr_host_kernel_gs_base);
+	tdx->host_state_need_restore = false;
+}
+
+void tdx_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	vmx_vcpu_pi_put(vcpu);
+	tdx_prepare_switch_to_host(vcpu);
+}
+
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -535,6 +572,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
+	tdx->host_state_need_restore = true;
+
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 	trace_kvm_exit(vcpu, KVM_ISA_VMX);
 
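Note: the tdx.c changes above boil down to a two-flag protocol: host
state is saved at most once between vcpu_load and vcpu_put
(host_state_need_save), and restored only if the vCPU actually entered
the guest (host_state_need_restore).  Below is a minimal user-space
sketch of that protocol; fake_msr, vcpu_run() and friends are
hypothetical stand-ins for MSR_KERNEL_GS_BASE and the kernel entry
points.  The remaining hunks after the sketch add the new fields and
declarations.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for MSR_KERNEL_GS_BASE; a plain variable instead of a real MSR. */
static uint64_t fake_msr = 0x1000;

struct vcpu_state {
	bool host_state_need_save;
	bool host_state_need_restore;
	uint64_t msr_host_kernel_gs_base;
};

/* Save the host value at most once per load, like tdx_prepare_switch_to_guest(). */
static void prepare_switch_to_guest(struct vcpu_state *s)
{
	if (!s->host_state_need_save)
		return;

	s->msr_host_kernel_gs_base = fake_msr;
	s->host_state_need_save = false;
}

/* Entering the guest clobbers the MSR, so flag that a restore is now needed. */
static void vcpu_run(struct vcpu_state *s)
{
	fake_msr = 0xdead;	/* models TD entry/exit clobbering the MSR */
	s->host_state_need_restore = true;
}

/* Restore only if the guest actually ran, like tdx_prepare_switch_to_host(). */
static void vcpu_put(struct vcpu_state *s)
{
	s->host_state_need_save = true;
	if (!s->host_state_need_restore)
		return;

	fake_msr = s->msr_host_kernel_gs_base;
	s->host_state_need_restore = false;
}

int main(void)
{
	struct vcpu_state s = { .host_state_need_save = true };

	prepare_switch_to_guest(&s);	/* saves 0x1000 */
	vcpu_run(&s);			/* clobbers the "MSR" */
	vcpu_put(&s);			/* restores 0x1000 */
	printf("msr = 0x%llx\n", (unsigned long long)fake_msr);
	return 0;
}
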
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index e950404ce5de..8b1cf9c158e3 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -84,6 +84,10 @@ struct vcpu_tdx {
 	union tdx_exit_reason exit_reason;
 
 	bool initialized;
+
+	bool host_state_need_save;
+	bool host_state_need_restore;
+	u64 msr_host_kernel_gs_base;
 };
 
 static inline bool is_td(struct kvm *kvm)
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 44404dd25737..8b871c5f52cf 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -141,6 +141,8 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -162,6 +164,8 @@ static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
+static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1
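Note on the save path in tdx_prepare_switch_to_guest(): for 64-bit tasks
the kernel already tracks the kernel GS base in current->thread.gsbase,
so the common case avoids an MSR read; only 32-bit tasks fall back to
read_msr().  A minimal sketch of that read-side fast path, with a
hypothetical struct task standing in for current and made-up values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the relevant bits of the x86 thread struct:
 * for 64-bit tasks the kernel keeps the kernel GS base cached here.
 */
struct task {
	bool mm_is_64bit;
	uint64_t gsbase;	/* valid only when mm_is_64bit */
};

/* Models the RDMSR fallback; reading the MSR is slower than a cached load. */
static uint64_t read_msr_kernel_gs_base(void)
{
	return 0x2000;
}

static uint64_t host_kernel_gs_base(const struct task *t)
{
	/* Fast path for 64-bit tasks, slow MSR read otherwise. */
	if (t->mm_is_64bit)
		return t->gsbase;

	return read_msr_kernel_gs_base();
}

int main(void)
{
	struct task t64 = { .mm_is_64bit = true, .gsbase = 0x1000 };
	struct task t32 = { .mm_is_64bit = false };

	printf("64-bit task: 0x%llx\n",
	       (unsigned long long)host_kernel_gs_base(&t64));
	printf("32-bit task: 0x%llx\n",
	       (unsigned long long)host_kernel_gs_base(&t32));
	return 0;
}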