From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar,
	David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com,
	hang.yuan@intel.com, tina.zhang@intel.com, Yang Weijiang
Subject: [PATCH v17 067/116] KVM: TDX: Add TSX_CTRL msr into uret_msrs list
Date: Tue, 7 Nov 2023 06:56:33 -0800
X-Mailer: git-send-email 2.25.1

From: Yang Weijiang

The TDX module resets the TSX_CTRL MSR to 0 on TD exit if TSX is enabled
for the TD; if TSX is disabled for the TD, the MSR is preserved.  The VMM
can therefore rely on the uret_msrs mechanism to defer reloading the host
value until the vCPU exits to user space.

Signed-off-by: Yang Weijiang
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 33 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.h |  8 ++++++++
 2 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index fbc3a1920f79..3ee65df99421 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -521,14 +521,21 @@ static struct tdx_uret_msr tdx_uret_msrs[] = {
 	{.msr = MSR_LSTAR,},
 	{.msr = MSR_TSC_AUX,},
 };
+static unsigned int tdx_uret_tsx_ctrl_slot;
 
-static void tdx_user_return_update_cache(void)
+static void tdx_user_return_update_cache(struct kvm_vcpu *vcpu)
 {
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++)
 		kvm_user_return_update_cache(tdx_uret_msrs[i].slot,
 					     tdx_uret_msrs[i].defval);
+	/*
+	 * TSX_CTRL is reset to 0 if guest TSX is supported. Otherwise
+	 * preserved.
+	 */
+	if (to_kvm_tdx(vcpu->kvm)->tsx_supported && tdx_uret_tsx_ctrl_slot != -1)
+		kvm_user_return_update_cache(tdx_uret_tsx_ctrl_slot, 0);
 }
 
 static void tdx_restore_host_xsave_state(struct kvm_vcpu *vcpu)
@@ -623,7 +630,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(tdx);
 
-	tdx_user_return_update_cache();
+	tdx_user_return_update_cache(vcpu);
 	tdx_restore_host_xsave_state(vcpu);
 	tdx->host_state_need_restore = true;
 
@@ -1149,6 +1156,22 @@ static int setup_tdparams_xfam(struct kvm_cpuid2 *cpuid, struct td_params *td_pa
 	return 0;
 }
 
+static bool tdparams_tsx_supported(struct kvm_cpuid2 *cpuid)
+{
+	const struct kvm_cpuid_entry2 *entry;
+	u64 mask;
+	u32 ebx;
+
+	entry = kvm_find_cpuid_entry2(cpuid->entries, cpuid->nent, 0x7, 0);
+	if (entry)
+		ebx = entry->ebx;
+	else
+		ebx = 0;
+
+	mask = __feature_bit(X86_FEATURE_HLE) | __feature_bit(X86_FEATURE_RTM);
+	return ebx & mask;
+}
+
 static int setup_tdparams(struct kvm *kvm, struct td_params *td_params,
 			  struct kvm_tdx_init_vm *init_vm)
 {
@@ -1194,6 +1217,7 @@ static int setup_tdparams(struct kvm *kvm, struct td_params *td_params,
 	MEMCPY_SAME_SIZE(td_params->mrowner, init_vm->mrowner);
 	MEMCPY_SAME_SIZE(td_params->mrownerconfig, init_vm->mrownerconfig);
 
+	to_kvm_tdx(kvm)->tsx_supported = tdparams_tsx_supported(cpuid);
 	return 0;
 }
 
@@ -1857,6 +1881,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 			return -EIO;
 		}
 	}
+	tdx_uret_tsx_ctrl_slot = kvm_find_user_return_msr(MSR_IA32_TSX_CTRL);
+	if (tdx_uret_tsx_ctrl_slot == -1 && boot_cpu_has(X86_FEATURE_MSR_TSX_CTRL)) {
+		pr_err("MSR_IA32_TSX_CTRL isn't included by kvm_find_user_return_msr\n");
+		return -EIO;
+	}
 
 	max_pkgs = topology_max_packages();
 	tdx_mng_key_config_lock = kcalloc(max_pkgs, sizeof(*tdx_mng_key_config_lock),
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 610bd3f4e952..45f5c2744d78 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -17,6 +17,14 @@ struct kvm_tdx {
 	u64 xfam;
 	int hkid;
 
+	/*
+	 * Used on each TD exit; see tdx_user_return_update_cache().
+	 * TSX_CTRL value on TD exit:
+	 * - reset to 0 if guest TSX is enabled
+	 * - preserved if guest TSX is disabled
+	 */
+	bool tsx_supported;
+
 	hpa_t source_pa;
 
 	bool finalized;
-- 
2.25.1