From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen <chen.zhang@intel.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 1/9] x86/speculation: Introduce Intel SPEC_CTRL BHI related definition
Date: Sun, 11 Dec 2022 00:00:38 +0800
Message-Id: <20221210160046.2608762-2-chen.zhang@intel.com>

Define the BHI_NO bit and the new BHI hardware-mitigation control in IA32_SPEC_CTRL. These definitions are used by the following KVM patches to determine whether to enforce BHI hardware mitigations for guests transparently.

BHI_NO means the processor is not vulnerable to BHI attacks.

BHI_DIS_S is a new indirect-predictor control. Once enabled, BHI_DIS_S prevents predicted targets of indirect branches executed in CPL0/1/2 from being selected based on branch history from branches executed in CPL3. When set in VMX root operation, it also prevents predicted targets executed in CPL0 from being selected based on branch history from branches executed in VMX non-root operation.

Branch History Injection (BHI) describes a specific form of intra-mode BTI, where an attacker may manipulate branch history before transitioning from user to supervisor mode (or from VMX non-root/guest to root mode) in an effort to cause an indirect branch predictor to select a specific predictor entry for an indirect branch, so that a disclosure gadget at the predicted target transiently executes.
This may be possible since the relevant branch history may contain branches taken in previous security contexts, and in particular, in other predictor modes. Refer to the link below for more information:

https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html

Signed-off-by: Zhang Chen <chen.zhang@intel.com>
---
 arch/x86/include/asm/msr-index.h       | 6 ++++++
 tools/arch/x86/include/asm/msr-index.h | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4a2af82553e4..1143ac9400c3 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -53,6 +53,8 @@
 #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
 #define SPEC_CTRL_RRSBA_DIS_S_SHIFT	6	   /* Disable RRSBA behavior */
 #define SPEC_CTRL_RRSBA_DIS_S		BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+#define SPEC_CTRL_BHI_DIS_S_SHIFT	10	   /* Enable BHI_DIS_S behavior */
+#define SPEC_CTRL_BHI_DIS_S		BIT(SPEC_CTRL_BHI_DIS_S_SHIFT)
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
@@ -150,6 +152,10 @@
 						 * are restricted to targets in
 						 * kernel.
 						 */
+#define ARCH_CAP_BHI_NO			BIT(20)	/*
+						 * Not susceptible to Branch History
+						 * Injection.
+						 */
 #define ARCH_CAP_PBRSB_NO		BIT(24)	/*
 						 * Not susceptible to Post-Barrier
 						 * Return Stack Buffer Predictions.
 						 */
diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
index f17ade084720..aed18b76dee0 100644
--- a/tools/arch/x86/include/asm/msr-index.h
+++ b/tools/arch/x86/include/asm/msr-index.h
@@ -53,6 +53,8 @@
 #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
 #define SPEC_CTRL_RRSBA_DIS_S_SHIFT	6	   /* Disable RRSBA behavior */
 #define SPEC_CTRL_RRSBA_DIS_S		BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+#define SPEC_CTRL_BHI_DIS_S_SHIFT	10	   /* Enable BHI_DIS_S behavior */
+#define SPEC_CTRL_BHI_DIS_S		BIT(SPEC_CTRL_BHI_DIS_S_SHIFT)
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
@@ -150,6 +152,10 @@
 						 * are restricted to targets in
 						 * kernel.
 						 */
+#define ARCH_CAP_BHI_NO			BIT(20)	/*
+						 * Not susceptible to Branch History
+						 * Injection.
+						 */
 #define ARCH_CAP_PBRSB_NO		BIT(24)	/*
 						 * Not susceptible to Post-Barrier
 						 * Return Stack Buffer Predictions.
 						 */
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen <chen.zhang@intel.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 2/9] KVM: x86: Add a kvm-only leaf for RRSBA_CTRL
Date: Sun, 11 Dec 2022 00:00:39 +0800
Message-Id: <20221210160046.2608762-3-chen.zhang@intel.com>

KVM needs to check whether guests can see RRSBA_CTRL.
If a guest is using retpoline, cannot see RRSBA_CTRL, and the host enumerates RRSBA, KVM is responsible for setting RRSBA_DIS_S for the guest. This allows VM migration from parts that do not enumerate RRSBA to those that do.

Signed-off-by: Zhang Chen <chen.zhang@intel.com>
---
 arch/x86/kvm/cpuid.c         | 4 ++++
 arch/x86/kvm/reverse_cpuid.h | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 62bc7a01cecc..8d45bc0b4b7c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -668,6 +668,10 @@ void kvm_set_cpu_caps(void)
 		SF(SGX1) | SF(SGX2)
 	);
 
+	kvm_cpu_cap_init_scattered(CPUID_7_2_EDX,
+		SF(RRSBA_CTRL)
+	);
+
 	kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
 		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
 		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index a19d473d0184..4c38ed61c505 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -13,6 +13,7 @@
  */
 enum kvm_only_cpuid_leafs {
 	CPUID_12_EAX	= NCAPINTS,
+	CPUID_7_2_EDX,
 	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@@ -24,6 +25,9 @@ enum kvm_only_cpuid_leafs {
 #define KVM_X86_FEATURE_SGX1	KVM_X86_FEATURE(CPUID_12_EAX, 0)
 #define KVM_X86_FEATURE_SGX2	KVM_X86_FEATURE(CPUID_12_EAX, 1)
 
+/* Intel-defined sub-features, CPUID level 0x00000007:2 (EDX) */
+#define KVM_X86_FEATURE_RRSBA_CTRL	KVM_X86_FEATURE(CPUID_7_2_EDX, 2)
+
 struct cpuid_reg {
 	u32 function;
 	u32 index;
@@ -46,6 +50,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
 	[CPUID_8000_0007_EBX] = {0x80000007, 0, CPUID_EBX},
 	[CPUID_7_EDX]         = {         7, 0, CPUID_EDX},
 	[CPUID_7_1_EAX]       = {         7, 1, CPUID_EAX},
+	[CPUID_7_2_EDX]       = {         7, 2, CPUID_EDX},
 	[CPUID_12_EAX]        = {0x00000012, 0, CPUID_EAX},
 	[CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX},
 };
@@ -78,6 +83,8 @@ static __always_inline u32 __feature_translate(int x86_feature)
 		return KVM_X86_FEATURE_SGX1;
 	else if (x86_feature == X86_FEATURE_SGX2)
 		return KVM_X86_FEATURE_SGX2;
+	else if (x86_feature == X86_FEATURE_RRSBA_CTRL)
+		return KVM_X86_FEATURE_RRSBA_CTRL;
 
 	return x86_feature;
 }
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen <chen.zhang@intel.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 3/9] KVM: x86: Add a kvm-only leaf for BHI_CTRL
Date: Sun, 11 Dec 2022 00:00:40 +0800
Message-Id: <20221210160046.2608762-4-chen.zhang@intel.com>

KVM needs to check whether guests can see BHI_CTRL. If a guest is using the BHB-clearing sequence, cannot see BHI_CTRL, and the host enumerates BHI, KVM is responsible for setting BHI_DIS_S for the guest. This allows VM migration from parts that do not enumerate BHI to those that do.
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
---
 arch/x86/kvm/cpuid.c         | 2 +-
 arch/x86/kvm/reverse_cpuid.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 8d45bc0b4b7c..91af27cc57e5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -669,7 +669,7 @@ void kvm_set_cpu_caps(void)
 	);
 
 	kvm_cpu_cap_init_scattered(CPUID_7_2_EDX,
-		SF(RRSBA_CTRL)
+		SF(RRSBA_CTRL) | F(BHI_CTRL)
 	);
 
 	kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index 4c38ed61c505..cf4e209ce2f6 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -27,6 +27,8 @@ enum kvm_only_cpuid_leafs {
 
 /* Intel-defined sub-features, CPUID level 0x00000007:2 (EDX) */
 #define KVM_X86_FEATURE_RRSBA_CTRL	KVM_X86_FEATURE(CPUID_7_2_EDX, 2)
+/* X86_FEATURE_BHI_CTRL only used by KVM */
+#define X86_FEATURE_BHI_CTRL	KVM_X86_FEATURE(CPUID_7_2_EDX, 4)
 
 struct cpuid_reg {
 	u32 function;
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen <chen.zhang@intel.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 4/9] x86/kvm/vmx: Virtualize Intel IA32_SPEC_CTRL
Date: Sun, 11 Dec 2022 00:00:41 +0800
Message-Id: <20221210160046.2608762-5-chen.zhang@intel.com>

Currently, KVM disables interception of IA32_SPEC_CTRL after the guest writes a non-zero value to IA32_SPEC_CTRL.
From then on, the guest is allowed to write any value to hardware.

"Virtualize IA32_SPEC_CTRL" is a new tertiary VM-execution control. It gives the VMM the capability to restrict the value of IA32_SPEC_CTRL in hardware even when the MSR is not intercepted (e.g., to prevent VMs from changing some bits in IA32_SPEC_CTRL) in an efficient way.

Two new fields are added to the VMCS:

IA32_SPEC_CTRL_MASK: setting a bit in this field prevents guest software from modifying the corresponding bit in the IA32_SPEC_CTRL MSR.

IA32_SPEC_CTRL_SHADOW: this field contains the value that guest software expects to be in the IA32_SPEC_CTRL MSR.

In VMX non-root mode, when IA32_SPEC_CTRL is not intercepted by the VMM, guest accesses to IA32_SPEC_CTRL are virtualized by the processor according to the two new fields: RDMSR of IA32_SPEC_CTRL returns the shadow value; WRMSR to IA32_SPEC_CTRL writes EDX:EAX to the shadow field, computes a new value from the guest value (EDX:EAX), the current value of the IA32_SPEC_CTRL MSR, and the IA32_SPEC_CTRL_MASK field (specifically, (cur_val & mask) | (guest_val & ~mask)), and writes it to the IA32_SPEC_CTRL MSR.

Enable "virtual IA32_SPEC_CTRL" if it is supported. With "virtual IA32_SPEC_CTRL" enabled, the IA32_SPEC_CTRL MSR value seen from the guest's point of view differs from the value in hardware while the guest is running. We refer to the two values as follows:

1. The effective value of IA32_SPEC_CTRL: the value programmed in hardware when the vCPU is running.
2. The shadow value of IA32_SPEC_CTRL: the value returned when RDMSR is used inside a guest to read IA32_SPEC_CTRL. This value does not affect the CPU's enablement of indirect predictor controls.

In KVM, vmx->spec_ctrl always stores the effective value of IA32_SPEC_CTRL when the guest is running (even when "virtual IA32_SPEC_CTRL" is disabled; in that case, the shadow value equals the effective one). When "virtual IA32_SPEC_CTRL" is enabled, the shadow value of IA32_SPEC_CTRL is stored in the IA32_SPEC_CTRL_SHADOW field in the VMCS.
IA32_SPEC_CTRL_MASK is always 0 for now, meaning all bits supported in hardware may be toggled by the guest's WRMSR. The mask will be changed by the following patches.

Note that "virtual IA32_SPEC_CTRL" is for now used by the VMM to force some bits of IA32_SPEC_CTRL to 1 (i.e., to enable some hardware mitigations transparently for guests). In theory, the VMM could also disable some hardware mitigations behind the guest's back, but to keep this series simple we leave that for future work.

Co-developed-by: Chao Gao
Signed-off-by: Chao Gao
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
---
 arch/x86/include/asm/vmx.h         |  5 +++++
 arch/x86/include/asm/vmxfeatures.h |  2 ++
 arch/x86/kvm/vmx/capabilities.h    |  5 +++++
 arch/x86/kvm/vmx/vmx.c             | 30 ++++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/vmx.h             | 10 +++++++++-
 5 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 498dc600bd5c..c2efdad491c1 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -81,6 +81,7 @@
  * Definitions of Tertiary Processor-Based VM-Execution Controls.
  */
 #define TERTIARY_EXEC_IPI_VIRT			VMCS_CONTROL_BIT(IPI_VIRT)
+#define TERTIARY_EXEC_VIRT_SPEC_CTRL		VMCS_CONTROL_BIT(VIRT_SPEC_CTRL)
 
 #define PIN_BASED_EXT_INTR_MASK			VMCS_CONTROL_BIT(INTR_EXITING)
 #define PIN_BASED_NMI_EXITING			VMCS_CONTROL_BIT(NMI_EXITING)
@@ -233,6 +234,10 @@ enum vmcs_field {
 	TERTIARY_VM_EXEC_CONTROL_HIGH	= 0x00002035,
 	PID_POINTER_TABLE		= 0x00002042,
 	PID_POINTER_TABLE_HIGH		= 0x00002043,
+	IA32_SPEC_CTRL_MASK		= 0x0000204A,
+	IA32_SPEC_CTRL_MASK_HIGH	= 0x0000204B,
+	IA32_SPEC_CTRL_SHADOW		= 0x0000204C,
+	IA32_SPEC_CTRL_SHADOW_HIGH	= 0x0000204D,
 	GUEST_PHYSICAL_ADDRESS		= 0x00002400,
 	GUEST_PHYSICAL_ADDRESS_HIGH	= 0x00002401,
 	VMCS_LINK_POINTER		= 0x00002800,
diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h
index c6a7eed03914..d3b7237d9c42 100644
--- a/arch/x86/include/asm/vmxfeatures.h
+++ b/arch/x86/include/asm/vmxfeatures.h
@@ -89,4 +89,6 @@
 
 /* Tertiary Processor-Based VM-Execution Controls, word 3 */
 #define VMX_FEATURE_IPI_VIRT		( 3*32+  4) /* Enable IPI virtualization */
+#define VMX_FEATURE_VIRT_SPEC_CTRL	( 3*32+  7) /* Virtualize IA32_SPEC_CTRL */
+
 #endif /* _ASM_X86_VMXFEATURES_H */
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 07254314f3dd..a9a0adcd403b 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -138,6 +138,11 @@ static inline bool cpu_has_tertiary_exec_ctrls(void)
 	       CPU_BASED_ACTIVATE_TERTIARY_CONTROLS;
 }
 
+static inline bool cpu_has_virt_spec_ctrl(void)
+{
+	return vmcs_config.cpu_based_3rd_exec_ctrl & TERTIARY_EXEC_VIRT_SPEC_CTRL;
+}
+
 static inline bool cpu_has_vmx_virtualize_apic_accesses(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 63247c57c72c..407061b369b4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1898,7 +1898,10 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    !guest_has_spec_ctrl_msr(vcpu))
 			return 1;
 
-		msr_info->data = to_vmx(vcpu)->spec_ctrl;
+		if (cpu_has_virt_spec_ctrl())
+			msr_info->data = vmcs_read64(IA32_SPEC_CTRL_SHADOW);
+		else
+			msr_info->data = to_vmx(vcpu)->spec_ctrl;
 		break;
 	case MSR_IA32_SYSENTER_CS:
 		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
@@ -2160,10 +2163,22 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    !guest_has_spec_ctrl_msr(vcpu))
 			return 1;
 
-		if (kvm_spec_ctrl_test_value(data))
+		if (kvm_spec_ctrl_test_value(data | vmx->spec_ctrl_mask))
 			return 1;
 
 		vmx->spec_ctrl = data;
+
+		if (cpu_has_virt_spec_ctrl()) {
+			vmcs_write64(IA32_SPEC_CTRL_SHADOW, data);
+			/*
+			 * Some bits are allowed to be toggled by guest.
+			 * Update the effective value of IA32_SPEC_CTRL
+			 * MSR according to the value written by guest
+			 * but keep bits in the mask set.
+			 */
+			vmx->spec_ctrl = data | vmx->spec_ctrl_mask;
+		}
+
 		if (!data)
 			break;
 
@@ -4673,6 +4688,11 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_xsaves())
 		vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
 
+	if (cpu_has_virt_spec_ctrl()) {
+		vmcs_write64(IA32_SPEC_CTRL_SHADOW, 0);
+		vmcs_write64(IA32_SPEC_CTRL_MASK, vmx->spec_ctrl_mask);
+	}
+
 	if (enable_pml) {
 		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
 		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
@@ -4738,6 +4758,12 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	__vmx_vcpu_reset(vcpu);
 
 	vmx->rmode.vm86_active = 0;
+
+	if (cpu_has_virt_spec_ctrl()) {
+		vmx->spec_ctrl_mask = 0;
+		vmcs_write64(IA32_SPEC_CTRL_MASK, vmx->spec_ctrl_mask);
+		vmcs_write64(IA32_SPEC_CTRL_SHADOW, 0);
+	}
 	vmx->spec_ctrl = 0;
 
 	vmx->msr_ia32_umwait_control = 0;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a3da84f4ea45..c5a41ae14237 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -290,7 +290,14 @@ struct vcpu_vmx {
 	u64		      msr_guest_kernel_gs_base;
 #endif
 
+	/* The value of hardware IA32_SPEC_CTRL MSR when guest is running */
 	u64		      spec_ctrl;
+	/*
+	 * The bits KVM doesn't allow guest to toggle.
+	 * A bit set in the mask should always be set in guest
+	 * IA32_SPEC_CTRL_MSR.
+	 */
+	u64		      spec_ctrl_mask;
 	u32		      msr_ia32_umwait_control;
 
 	/*
@@ -589,7 +596,8 @@ static inline u8 vmx_get_rvi(void)
 
 #define KVM_REQUIRED_VMX_TERTIARY_VM_EXEC_CONTROL 0
 #define KVM_OPTIONAL_VMX_TERTIARY_VM_EXEC_CONTROL	\
-	(TERTIARY_EXEC_IPI_VIRT)
+	(TERTIARY_EXEC_IPI_VIRT |			\
+	 TERTIARY_EXEC_VIRT_SPEC_CTRL)
 
 #define BUILD_CONTROLS_SHADOW(lname, uname, bits)				    \
 static inline void lname##_controls_set(struct vcpu_vmx *vmx, u##bits val)  \
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen <chen.zhang@intel.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 5/9] x86/bugs: Use Virtual MSRs to request hardware mitigations
Date: Sun, 11 Dec 2022 00:00:42 +0800
Message-Id: <20221210160046.2608762-6-chen.zhang@intel.com>

From: Pawan Gupta

Guests that have a different family/model than the host may not be aware of hardware mitigations (such as RRSBA_DIS_S) available on the host. This is particularly true when guests migrate. To solve this problem, Intel processors have added a virtual MSR interface through which guests can report their mitigation status and request the VMM to deploy relevant hardware mitigations.
Use this virtualized MSR interface to request relevant hardware controls for the retpoline mitigation.

Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/msr-index.h | 23 +++++++++++++++++++++++
 arch/x86/kernel/cpu/bugs.c       | 24 ++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1143ac9400c3..1166b472377c 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -165,6 +165,7 @@
 						 * IA32_XAPIC_DISABLE_STATUS MSR
 						 * supported
 						 */
+#define ARCH_CAP_VIRTUAL_ENUM		BIT(63)	/* MSR_VIRTUAL_ENUMERATION supported */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
@@ -1062,6 +1063,28 @@
 #define MSR_IA32_VMX_MISC_INTEL_PT                 (1ULL << 14)
 #define MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS (1ULL << 29)
 #define MSR_IA32_VMX_MISC_PREEMPTION_TIMER_SCALE   0x1F
+
+/* Intel virtual MSRs */
+#define MSR_VIRTUAL_ENUMERATION		0x50000000
+#define VIRT_ENUM_MITIGATION_CTRL_SUPPORT	BIT(0)	/*
+							 * Mitigation ctrl via virtual
+							 * MSRs supported
+							 */
+
+#define MSR_VIRTUAL_MITIGATION_ENUM	0x50000001
+#define MITI_ENUM_BHB_CLEAR_SEQ_S_SUPPORT	BIT(0)	/* VMM supports BHI_DIS_S */
+#define MITI_ENUM_RETPOLINE_S_SUPPORT		BIT(1)	/* VMM supports RRSBA_DIS_S */
+
+#define MSR_VIRTUAL_MITIGATION_CTRL	0x50000002
+#define MITI_CTRL_BHB_CLEAR_SEQ_S_USED	BIT(0)	/*
+						 * Request VMM to deploy
+						 * BHI_DIS_S mitigation
+						 */
+#define MITI_CTRL_RETPOLINE_S_USED	BIT(1)	/*
+						 * Request VMM to deploy
+						 * RRSBA_DIS_S mitigation
+						 */
+
 /* AMD-V MSRs */
 
 #define MSR_VM_CR			0xc0010114
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3e3230cccaa7..a9e869f568ee 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1379,6 +1379,28 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
 	dump_stack();
 }
 
+/* Speculation control using virtualized MSRs */
+static void __init spec_ctrl_setup_virtualized_msr(void)
+{
+	u64 msr_virt_enum, msr_mitigation_enum, msr_mitigation_ctrl;
+
+	if (!(x86_read_arch_cap_msr() & ARCH_CAP_VIRTUAL_ENUM))
+		return;
+
+	rdmsrl(MSR_VIRTUAL_ENUMERATION, msr_virt_enum);
+	if (!(msr_virt_enum & VIRT_ENUM_MITIGATION_CTRL_SUPPORT))
+		return;
+
+	rdmsrl(MSR_VIRTUAL_MITIGATION_ENUM, msr_mitigation_enum);
+	/* When retpoline is being used, request relevant hardware controls */
+	if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
+	    msr_mitigation_enum & MITI_ENUM_RETPOLINE_S_SUPPORT) {
+		rdmsrl(MSR_VIRTUAL_MITIGATION_CTRL, msr_mitigation_ctrl);
+		msr_mitigation_ctrl |= MITI_CTRL_RETPOLINE_S_USED;
+		wrmsrl(MSR_VIRTUAL_MITIGATION_CTRL, msr_mitigation_ctrl);
+	}
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -1485,6 +1507,8 @@ static void __init spectre_v2_select_mitigation(void)
 	    mode == SPECTRE_V2_RETPOLINE)
 		spec_ctrl_disable_kernel_rrsba();
 
+	spec_ctrl_setup_virtualized_msr();
+
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Zhang Chen, Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 6/9] kvm/x86: Add ARCH_CAP_VIRTUAL_ENUM for guest MSR_IA32_ARCH_CAPABILITIES
Date: Sun, 11 Dec 2022 00:00:43 +0800
Message-Id: <20221210160046.2608762-7-chen.zhang@intel.com>
In-Reply-To: <20221210160046.2608762-1-chen.zhang@intel.com>
References: <20221210160046.2608762-1-chen.zhang@intel.com>

Set bit 63 in MSR_IA32_ARCH_CAPABILITIES to enumerate support for the
virtual MSRs. Virtual MSRs allow guests to notify the VMM whether or not
they are using a specific software mitigation, allowing a VMM to enable
the corresponding hardware control only where necessary.

As defined in the Intel specification, expose the virtual MSR interface
to the guest so that it can query virtual MSR 0x50000000.

Signed-off-by: Zhang Chen
---
 arch/x86/kvm/vmx/vmx.c | 15 +++++++++++++++
 arch/x86/kvm/vmx/vmx.h |  1 +
 arch/x86/kvm/x86.c     | 16 +++++++++++++++-
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 407061b369b4..6ed6b743be0e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2001,6 +2001,12 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DEBUGCTLMSR:
 		msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
 		break;
+	case MSR_VIRTUAL_ENUMERATION:
+		if (!msr_info->host_initiated &&
+		    !(vcpu->arch.arch_capabilities & ARCH_CAP_VIRTUAL_ENUM))
+			return 1;
+		msr_info->data = vmx->msr_virtual_enumeration;
+		break;
 	default:
 	find_uret_msr:
 		msr = vmx_find_uret_msr(vmx, msr_info->index);
@@ -2375,6 +2381,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		}
 		ret = kvm_set_msr_common(vcpu, msr_info);
 		break;
+	case MSR_VIRTUAL_ENUMERATION:
+		if (msr_info->host_initiated &&
+		    !(vcpu->arch.arch_capabilities & ARCH_CAP_VIRTUAL_ENUM))
+			return 1;
+		if (data & ~VIRT_ENUM_MITIGATION_CTRL_SUPPORT)
+			return 1;
+		vmx->msr_virtual_enumeration = data &
+					       VIRT_ENUM_MITIGATION_CTRL_SUPPORT;
+		break;

 	default:
 	find_uret_msr:
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c5a41ae14237..fc873cf45f70 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -298,6 +298,7 @@ struct vcpu_vmx {
 	 * IA32_SPEC_CTRL_MSR.
 	 */
 	u64 spec_ctrl_mask;
+	u64 msr_virtual_enumeration;
 	u32 msr_ia32_umwait_control;

 	/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2835bd796639..6be0a3f1281f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1532,6 +1532,8 @@ static const u32 emulated_msrs_all[] = {
 	MSR_IA32_VMX_EPT_VPID_CAP,
 	MSR_IA32_VMX_VMFUNC,

+	MSR_VIRTUAL_ENUMERATION,
+
 	MSR_K7_HWCR,
 	MSR_KVM_POLL_CONTROL,
 };
@@ -1567,6 +1569,7 @@ static const u32 msr_based_features_all[] = {
 	MSR_IA32_UCODE_REV,
 	MSR_IA32_ARCH_CAPABILITIES,
 	MSR_IA32_PERF_CAPABILITIES,
+	MSR_VIRTUAL_ENUMERATION,
 };

 static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
@@ -1588,7 +1591,8 @@ static unsigned int num_msr_based_features;
	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
-	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO)
+	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | \
+	 ARCH_CAP_VIRTUAL_ENUM)

 static u64 kvm_get_arch_capabilities(void)
 {
@@ -1607,6 +1611,13 @@ static u64 kvm_get_arch_capabilities(void)
	 */
	data |= ARCH_CAP_PSCHANGE_MC_NO;

+	/*
+	 * Virtual MSRs allow guests to notify the VMM whether or not
+	 * they are using a specific software mitigation, allowing a VMM
+	 * to enable the corresponding hardware control only where necessary.
+	 */
+	data |= ARCH_CAP_VIRTUAL_ENUM;
+
	/*
	 * If we're doing cache flushes (either "always" or "cond")
	 * we will do one whenever the guest does a vmlaunch/vmresume.
@@ -1657,6 +1668,9 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
	case MSR_IA32_UCODE_REV:
		rdmsrl_safe(msr->index, &msr->data);
		break;
+	case MSR_VIRTUAL_ENUMERATION:
+		msr->data = VIRT_ENUM_MITIGATION_CTRL_SUPPORT;
+		break;
	default:
		return static_call(kvm_x86_get_msr_feature)(msr);
	}
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Zhang Chen, Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 7/9] kvm/x86: Add MSR_VIRTUAL_MITIGATION_ENUM/CTRL emulation
Date: Sun, 11 Dec 2022 00:00:44 +0800
Message-Id: <20221210160046.2608762-8-chen.zhang@intel.com>
In-Reply-To: <20221210160046.2608762-1-chen.zhang@intel.com>
References: <20221210160046.2608762-1-chen.zhang@intel.com>

Introduce the Intel virtual MSRs MSR_VIRTUAL_MITIGATION_ENUM (0x50000001)
and MSR_VIRTUAL_MITIGATION_CTRL (0x50000002). MSR_VIRTUAL_MITIGATION_ENUM
tells the guest which mitigations the VMM supports, and
MSR_VIRTUAL_MITIGATION_CTRL lets the guest request them; the VMM then sets
up the virtual spec_ctrl mask for SPEC_CTRL_RRSBA_DIS_S and
SPEC_CTRL_BHI_DIS_S according to the guest's needs.

Signed-off-by: Zhang Chen
---
 arch/x86/kvm/vmx/vmx.h | 16 ++++++++++++++++
 arch/x86/kvm/x86.c     |  7 +++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index fc873cf45f70..6abda05cc426 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -29,6 +29,10 @@
 #endif

 #define MAX_NR_LOADSTORE_MSRS	8
+#define MITI_ENUM_SUPPORTED	(MITI_ENUM_BHB_CLEAR_SEQ_S_SUPPORT | \
+				 MITI_ENUM_RETPOLINE_S_SUPPORT)
+#define MITI_CTRL_USED		(MITI_CTRL_BHB_CLEAR_SEQ_S_USED | \
+				 MITI_CTRL_RETPOLINE_S_USED)

 struct vmx_msrs {
 	unsigned int nr;
@@ -301,6 +305,18 @@ struct vcpu_vmx {
 	u64 msr_virtual_enumeration;
 	u32 msr_ia32_umwait_control;

+	/*
+	 * Guest read only. Only available if MITIGATION_CTRL_SUPPORT
+	 * is enumerated.
+	 */
+	u64 msr_virtual_mitigation_enum;
+
+	/*
+	 * Read/write. Only available if MITIGATION_CTRL_SUPPORT
+	 * is enumerated.
+	 */
+	u64 msr_virtual_mitigation_ctrl;
+
 	/*
 	 * loaded_vmcs points to the VMCS currently used in this vcpu. For a
 	 * non-nested (L1) guest, it always points to vmcs01. For a nested
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6be0a3f1281f..f6c314def6a8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1533,6 +1533,8 @@ static const u32 emulated_msrs_all[] = {
 	MSR_IA32_VMX_VMFUNC,

 	MSR_VIRTUAL_ENUMERATION,
+	MSR_VIRTUAL_MITIGATION_ENUM,
+	MSR_VIRTUAL_MITIGATION_CTRL,

 	MSR_K7_HWCR,
 	MSR_KVM_POLL_CONTROL,
@@ -1570,6 +1572,7 @@ static const u32 msr_based_features_all[] = {
 	MSR_IA32_ARCH_CAPABILITIES,
 	MSR_IA32_PERF_CAPABILITIES,
 	MSR_VIRTUAL_ENUMERATION,
+	MSR_VIRTUAL_MITIGATION_ENUM,
 };

 static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
@@ -1671,6 +1674,10 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 	case MSR_VIRTUAL_ENUMERATION:
 		msr->data = VIRT_ENUM_MITIGATION_CTRL_SUPPORT;
 		break;
+	case MSR_VIRTUAL_MITIGATION_ENUM:
+		msr->data = MITI_ENUM_BHB_CLEAR_SEQ_S_SUPPORT |
+			    MITI_ENUM_RETPOLINE_S_SUPPORT;
+		break;
 	default:
 		return static_call(kvm_x86_get_msr_feature)(msr);
 	}
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Zhang Chen, Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 8/9] x86/kvm/vmx: Initialize SPEC_CTRL MASK for RRSBA
Date: Sun, 11 Dec 2022 00:00:45 +0800
Message-Id: <20221210160046.2608762-9-chen.zhang@intel.com>
In-Reply-To: <20221210160046.2608762-1-chen.zhang@intel.com>
References: <20221210160046.2608762-1-chen.zhang@intel.com>

VMMs can address mitigation issues in a migration pool by applying the
needed controls whenever the guest is operating on a newer processor.
If a guest is using retpoline to mitigate intra-mode BTI in CPL0, the
VMM can set RRSBA_DIS_S when the guest runs on hardware which
enumerates RRSBA.

Signed-off-by: Zhang Chen
---
 arch/x86/kvm/vmx/vmx.c | 57 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6ed6b743be0e..fb0f3b1639b9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2007,6 +2007,20 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		msr_info->data = vmx->msr_virtual_enumeration;
 		break;
+	case MSR_VIRTUAL_MITIGATION_ENUM:
+		if (!msr_info->host_initiated &&
+		    !(vmx->msr_virtual_enumeration &
+		      VIRT_ENUM_MITIGATION_CTRL_SUPPORT))
+			return 1;
+		msr_info->data = vmx->msr_virtual_mitigation_enum;
+		break;
+	case MSR_VIRTUAL_MITIGATION_CTRL:
+		if (!msr_info->host_initiated &&
+		    !(vmx->msr_virtual_enumeration &
+		      VIRT_ENUM_MITIGATION_CTRL_SUPPORT))
+			return 1;
+		msr_info->data = vmx->msr_virtual_mitigation_ctrl;
+		break;
 	default:
 	find_uret_msr:
 		msr = vmx_find_uret_msr(vmx, msr_info->index);
@@ -2056,7 +2070,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	struct vmx_uret_msr *msr;
 	int ret = 0;
 	u32 msr_index = msr_info->index;
-	u64 data = msr_info->data;
+	u64 data = msr_info->data, arch_msr = 0;
 	u32 index;

 	switch (msr_index) {
@@ -2390,6 +2404,46 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vmx->msr_virtual_enumeration = data &
 					       VIRT_ENUM_MITIGATION_CTRL_SUPPORT;
 		break;
+	case MSR_VIRTUAL_MITIGATION_ENUM:
+		if (msr_info->host_initiated &&
+		    !(vmx->msr_virtual_enumeration &
+		      VIRT_ENUM_MITIGATION_CTRL_SUPPORT))
+			return 1;
+		if (data & ~MITI_ENUM_SUPPORTED)
+			return 1;
+		vmx->msr_virtual_mitigation_enum = data;
+		break;
+	case MSR_VIRTUAL_MITIGATION_CTRL:
+		if (!msr_info->host_initiated &&
+		    !(vmx->msr_virtual_enumeration &
+		      VIRT_ENUM_MITIGATION_CTRL_SUPPORT))
+			return 1;
+		if (data & ~MITI_CTRL_USED)
+			return 1;
+
+		if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+			rdmsrl(MSR_IA32_ARCH_CAPABILITIES, arch_msr);
+
+		if (data & MITI_CTRL_RETPOLINE_S_USED &&
+		    boot_cpu_has(X86_FEATURE_RRSBA_CTRL) &&
+		    arch_msr & ARCH_CAP_RRSBA)
+			vmx->spec_ctrl_mask |= SPEC_CTRL_RRSBA_DIS_S;
+		else
+			vmx->spec_ctrl_mask &= ~SPEC_CTRL_RRSBA_DIS_S;
+
+		if (cpu_has_virt_spec_ctrl()) {
+			vmcs_write64(IA32_SPEC_CTRL_MASK, vmx->spec_ctrl_mask);
+		} else if (vmx->spec_ctrl_mask) {
+			pr_err_once("Virtual spec ctrl is missing. Cannot keep "
+				    "bits in %llx always set\n",
+				    vmx->spec_ctrl_mask);
+			vmx->spec_ctrl_mask = 0;
+		}
+
+		vmx->spec_ctrl = vmx->spec_ctrl | vmx->spec_ctrl_mask;
+
+		vmx->msr_virtual_mitigation_ctrl = data;
+		break;

 	default:
 	find_uret_msr:
@@ -4774,6 +4828,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)

 	vmx->rmode.vm86_active = 0;

+	vmx->msr_virtual_mitigation_ctrl = 0;
 	if (cpu_has_virt_spec_ctrl()) {
 		vmx->spec_ctrl_mask = 0;
 		vmcs_write64(IA32_SPEC_CTRL_MASK, vmx->spec_ctrl_mask);
-- 
2.25.1

From nobody Thu Sep 18 03:59:31 2025
From: Zhang Chen
To: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Zhang Chen, Chao Gao, Pawan Gupta, Paolo Bonzini, Sean Christopherson, "H. Peter Anvin", Dave Hansen, Borislav Petkov, Ingo Molnar, Thomas Gleixner
Subject: [RFC PATCH 9/9] x86/kvm/vmx: Initialize SPEC_CTRL MASK for BHI
Date: Sun, 11 Dec 2022 00:00:46 +0800
Message-Id: <20221210160046.2608762-10-chen.zhang@intel.com>
In-Reply-To: <20221210160046.2608762-1-chen.zhang@intel.com>
References: <20221210160046.2608762-1-chen.zhang@intel.com>

VMMs can address mitigation issues in a migration pool by applying the
needed controls whenever the guest is operating on a newer processor.
If a guest is using the BHB-clearing sequence on transitions into CPL0
to mitigate BHI, the VMM can use the "virtual IA32_SPEC_CTRL"
VM-execution control to set BHI_DIS_S on newer hardware which does not
enumerate BHI_NO.

Signed-off-by: Zhang Chen
---
 arch/x86/kvm/vmx/vmx.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fb0f3b1639b9..980d1ace9718 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2431,6 +2431,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		else
 			vmx->spec_ctrl_mask &= ~SPEC_CTRL_RRSBA_DIS_S;

+		if (data & MITI_CTRL_BHB_CLEAR_SEQ_S_USED &&
+		    kvm_cpu_cap_has(X86_FEATURE_BHI_CTRL) &&
+		    !(arch_msr & ARCH_CAP_BHI_NO))
+			vmx->spec_ctrl_mask |= SPEC_CTRL_BHI_DIS_S;
+		else
+			vmx->spec_ctrl_mask &= ~SPEC_CTRL_BHI_DIS_S;
+
 		if (cpu_has_virt_spec_ctrl()) {
 			vmcs_write64(IA32_SPEC_CTRL_MASK, vmx->spec_ctrl_mask);
 		} else if (vmx->spec_ctrl_mask) {
-- 
2.25.1