From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:39 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-2-seanjc@google.com>
Subject: [PATCH v3 01/15] KVM: x86: Add a framework for enabling KVM-governed x86 features
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Introduce yet another X86_FEATURE flag framework to manage and cache KVM
governed features (for lack of a better name).  "Governed" in this case
means that KVM has some level of involvement and/or vested interest in
whether or not an X86_FEATURE can be used by the guest.  The intent of
the framework is twofold: to simplify caching of guest CPUID flags that
KVM needs to frequently query, and to add clarity to such caching, e.g.
it isn't immediately obvious that SVM's bundle of flags for "optional
nested SVM features" tracks whether or not a flag is exposed to L1.

Begrudgingly define KVM_MAX_NR_GOVERNED_FEATURES for the size of the
bitmap to avoid exposing governed_features.h in arch/x86/include/asm/,
but add a FIXME to call out that it can and should be cleaned up once
"struct kvm_vcpu_arch" is no longer exposed to the kernel at large.

Cc: Zeng Guang
Signed-off-by: Sean Christopherson
Reviewed-by: Binbin Wu
Reviewed-by: Kai Huang
Reviewed-by: Yuan Yao
---
 arch/x86/include/asm/kvm_host.h  | 19 +++++++++++++
 arch/x86/kvm/cpuid.c             |  4 +++
 arch/x86/kvm/cpuid.h             | 46 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/governed_features.h |  9 +++++++
 4 files changed, 78 insertions(+)
 create mode 100644 arch/x86/kvm/governed_features.h

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 19d64f019240..60d430b4650f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -831,6 +831,25 @@ struct kvm_vcpu_arch {
 	struct kvm_cpuid_entry2 *cpuid_entries;
 	struct kvm_hypervisor_cpuid kvm_cpuid;
 
+	/*
+	 * FIXME: Drop this macro and use KVM_NR_GOVERNED_FEATURES directly
+	 * when "struct kvm_vcpu_arch" is no longer defined in an
+	 * arch/x86/include/asm header.  The max is mostly arbitrary, i.e.
+	 * can be increased as necessary.
+	 */
+#define KVM_MAX_NR_GOVERNED_FEATURES BITS_PER_LONG
+
+	/*
+	 * Track whether or not the guest is allowed to use features that are
+	 * governed by KVM, where "governed" means KVM needs to manage state
+	 * and/or explicitly enable the feature in hardware.  Typically, but
+	 * not always, governed features can be used by the guest if and only
+	 * if both KVM and userspace want to expose the feature to the guest.
+	 */
+	struct {
+		DECLARE_BITMAP(enabled, KVM_MAX_NR_GOVERNED_FEATURES);
+	} governed_features;
+
 	u64 reserved_gpa_bits;
 	int maxphyaddr;
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 5a88affb2e1a..4ba43ae008cb 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -313,6 +313,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	struct kvm_lapic *apic = vcpu->arch.apic;
 	struct kvm_cpuid_entry2 *best;
 
+	BUILD_BUG_ON(KVM_NR_GOVERNED_FEATURES > KVM_MAX_NR_GOVERNED_FEATURES);
+	bitmap_zero(vcpu->arch.governed_features.enabled,
+		    KVM_MAX_NR_GOVERNED_FEATURES);
+
 	best = kvm_find_cpuid_entry(vcpu, 1);
 	if (best && apic) {
 		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index b1658c0de847..284fa4704553 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -232,4 +232,50 @@ static __always_inline bool guest_pv_has(struct kvm_vcpu *vcpu,
 	return vcpu->arch.pv_cpuid.features & (1u << kvm_feature);
 }
 
+enum kvm_governed_features {
+#define KVM_GOVERNED_FEATURE(x) KVM_GOVERNED_##x,
+#include "governed_features.h"
+	KVM_NR_GOVERNED_FEATURES
+};
+
+static __always_inline int kvm_governed_feature_index(unsigned int x86_feature)
+{
+	switch (x86_feature) {
+#define KVM_GOVERNED_FEATURE(x) case x: return KVM_GOVERNED_##x;
+#include "governed_features.h"
+	default:
+		return -1;
+	}
+}
+
+static __always_inline bool kvm_is_governed_feature(unsigned int x86_feature)
+{
+	return kvm_governed_feature_index(x86_feature) >= 0;
+}
+
+static __always_inline void kvm_governed_feature_set(struct kvm_vcpu *vcpu,
+						     unsigned int x86_feature)
+{
+	BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature));
+
+	__set_bit(kvm_governed_feature_index(x86_feature),
+		  vcpu->arch.governed_features.enabled);
+}
+
+static __always_inline void kvm_governed_feature_check_and_set(struct kvm_vcpu *vcpu,
+							       unsigned int x86_feature)
+{
+	if (kvm_cpu_cap_has(x86_feature) && guest_cpuid_has(vcpu, x86_feature))
+		kvm_governed_feature_set(vcpu, x86_feature);
+}
+
+static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
+					  unsigned int x86_feature)
+{
+	BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature));
+
+	return test_bit(kvm_governed_feature_index(x86_feature),
+			vcpu->arch.governed_features.enabled);
+}
+
 #endif
diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
new file mode 100644
index 000000000000..40ce8e6608cd
--- /dev/null
+++ b/arch/x86/kvm/governed_features.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#if !defined(KVM_GOVERNED_FEATURE) || defined(KVM_GOVERNED_X86_FEATURE)
+BUILD_BUG()
+#endif
+
+#define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
+
+#undef KVM_GOVERNED_X86_FEATURE
+#undef KVM_GOVERNED_FEATURE
-- 
2.41.0.694.ge786442a9b-goog
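The governed_features.h header above is an X-macro: it is included twice from
cpuid.h with different definitions of KVM_GOVERNED_FEATURE(), once to build a
dense enum and once to build the X86_FEATURE -> index switch.  As an
illustration only (not KVM code), here is a minimal, self-contained user-space
sketch of the same idea; the feature numbers and names (X86_FEATURE_GBPAGES,
GOVERNED_FEATURES, etc.) are made up, a plain unsigned long stands in for
DECLARE_BITMAP(), and a list macro stands in for the repeatedly included header:

/* toy_governed.c - sketch of the governed-feature pattern, not KVM code */
#include <stdio.h>

/* Stand-ins for the sparse X86_FEATURE_* word/bit numbers. */
enum { X86_FEATURE_GBPAGES = 100, X86_FEATURE_XSAVES = 101 };

/* "governed_features.h" equivalent: one entry per governed feature. */
#define GOVERNED_FEATURES(OP)	\
	OP(GBPAGES)		\
	OP(XSAVES)

/* First expansion: a dense enum used to index the per-vCPU bitmap. */
enum governed_features {
#define OP(x) GOVERNED_##x,
	GOVERNED_FEATURES(OP)
#undef OP
	NR_GOVERNED_FEATURES
};

/* Second expansion: map a sparse X86_FEATURE_* value to the dense index. */
static int governed_feature_index(unsigned int x86_feature)
{
	switch (x86_feature) {
#define OP(x) case X86_FEATURE_##x: return GOVERNED_##x;
	GOVERNED_FEATURES(OP)
#undef OP
	default:
		return -1;
	}
}

int main(void)
{
	unsigned long enabled = 0;	/* the per-vCPU bitmap in the real code */

	/* "kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES)" */
	enabled |= 1ul << governed_feature_index(X86_FEATURE_XSAVES);

	/* "guest_can_use(vcpu, ...)" is then a single bit test. */
	printf("XSAVES usable: %d\n",
	       !!(enabled & (1ul << governed_feature_index(X86_FEATURE_XSAVES))));
	printf("GBPAGES usable: %d\n",
	       !!(enabled & (1ul << governed_feature_index(X86_FEATURE_GBPAGES))));
	return 0;
}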
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:40 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-3-seanjc@google.com>
Subject: [PATCH v3 02/15] KVM: x86/mmu: Use KVM-governed feature framework to track "GBPAGES enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Use the governed feature framework to track whether or not the guest can
use 1GiB pages, and drop the one-off helper that wraps the surprisingly
non-trivial logic surrounding 1GiB page usage in the guest.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/cpuid.c             | 17 +++++++++++++++++
 arch/x86/kvm/governed_features.h |  2 ++
 arch/x86/kvm/mmu/mmu.c           | 20 +++-----------------
 3 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 4ba43ae008cb..67e9f79fe059 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -312,11 +312,28 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
 	struct kvm_cpuid_entry2 *best;
+	bool allow_gbpages;
 
 	BUILD_BUG_ON(KVM_NR_GOVERNED_FEATURES > KVM_MAX_NR_GOVERNED_FEATURES);
 	bitmap_zero(vcpu->arch.governed_features.enabled,
 		    KVM_MAX_NR_GOVERNED_FEATURES);
 
+	/*
+	 * If TDP is enabled, let the guest use GBPAGES if they're supported in
+	 * hardware.  The hardware page walker doesn't let KVM disable GBPAGES,
+	 * i.e. won't treat them as reserved, and KVM doesn't redo the GVA->GPA
+	 * walk for performance and complexity reasons.  Not to mention KVM
+	 * _can't_ solve the problem because GVA->GPA walks aren't visible to
+	 * KVM once a TDP translation is installed.  Mimic hardware behavior so
+	 * that KVM's is at least consistent, i.e. doesn't randomly inject #PF.
+	 * If TDP is disabled, honor *only* guest CPUID as KVM has full control
+	 * and can install smaller shadow pages if the host lacks 1GiB support.
+	 */
+	allow_gbpages = tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) :
+				      guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES);
+	if (allow_gbpages)
+		kvm_governed_feature_set(vcpu, X86_FEATURE_GBPAGES);
+
 	best = kvm_find_cpuid_entry(vcpu, 1);
 	if (best && apic) {
 		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 40ce8e6608cd..b29c15d5e038 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -5,5 +5,7 @@ BUILD_BUG()
 
 #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
 
+KVM_GOVERNED_X86_FEATURE(GBPAGES)
+
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5bdda75bfd10..9e4cd8b4a202 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4779,28 +4779,13 @@ static void __reset_rsvds_bits_mask(struct rsvd_bits_validate *rsvd_check,
 	}
 }
 
-static bool guest_can_use_gbpages(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * If TDP is enabled, let the guest use GBPAGES if they're supported in
-	 * hardware.  The hardware page walker doesn't let KVM disable GBPAGES,
-	 * i.e. won't treat them as reserved, and KVM doesn't redo the GVA->GPA
-	 * walk for performance and complexity reasons.  Not to mention KVM
-	 * _can't_ solve the problem because GVA->GPA walks aren't visible to
-	 * KVM once a TDP translation is installed.  Mimic hardware behavior so
-	 * that KVM's is at least consistent, i.e. doesn't randomly inject #PF.
-	 */
-	return tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) :
-			     guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES);
-}
-
 static void reset_guest_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 					struct kvm_mmu *context)
 {
 	__reset_rsvds_bits_mask(&context->guest_rsvd_check,
 				vcpu->arch.reserved_gpa_bits,
 				context->cpu_role.base.level, is_efer_nx(context),
-				guest_can_use_gbpages(vcpu),
+				guest_can_use(vcpu, X86_FEATURE_GBPAGES),
 				is_cr4_pse(context),
 				guest_cpuid_is_amd_or_hygon(vcpu));
 }
@@ -4877,7 +4862,8 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 	__reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
 				context->root_role.level,
 				context->root_role.efer_nx,
-				guest_can_use_gbpages(vcpu), is_pse, is_amd);
+				guest_can_use(vcpu, X86_FEATURE_GBPAGES),
+				is_pse, is_amd);
 
 	if (!shadow_me_mask)
 		return;
-- 
2.41.0.694.ge786442a9b-goog
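For readers skimming the hunk above, the allow_gbpages decision boils down to a
small predicate.  A hedged, stand-alone restatement (toy booleans only, not KVM
code; allow_gbpages() here is an illustrative helper, not a kernel function):

/* gbpages_rule.c - restatement of the allow_gbpages rule, not KVM code */
#include <stdbool.h>
#include <stdio.h>

/*
 * With TDP, hardware walks the guest page tables and cannot be told to treat
 * 1GiB entries as reserved, so only host support matters.  With shadow
 * paging, KVM does the walk and can emulate 1GiB mappings with smaller
 * shadow pages, so only the guest's CPUID matters.
 */
static bool allow_gbpages(bool tdp_enabled, bool host_has_gbpages,
			  bool guest_cpuid_has_gbpages)
{
	return tdp_enabled ? host_has_gbpages : guest_cpuid_has_gbpages;
}

int main(void)
{
	/* TDP on, host lacks 1GiB pages: the guest cannot use them. */
	printf("%d\n", allow_gbpages(true, false, true));	/* prints 0 */
	/* Shadow paging, guest CPUID advertises 1GiB pages: allowed. */
	printf("%d\n", allow_gbpages(false, false, true));	/* prints 1 */
	return 0;
}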
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:41 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-4-seanjc@google.com>
Subject: [PATCH v3 03/15] KVM: VMX: Recompute "XSAVES enabled" only after CPUID update
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Recompute whether or not XSAVES is enabled for the guest only if the
guest's CPUID model changes instead of redoing the computation every time
KVM generates vmcs01's secondary execution controls.  The boot_cpu_has()
and cpu_has_vmx_xsaves() checks should never change after KVM is loaded,
and if they do the kernel/KVM is hosed.

Opportunistically add a comment explaining _why_ XSAVES is effectively
exposed to the guest if and only if XSAVE is also exposed to the guest.

Practically speaking, no functional change intended (KVM will do fewer
computations, but should still see the same xsaves_enabled value whenever
KVM looks at it).

Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/vmx/vmx.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 434bf524e712..1bf85bd53416 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4612,19 +4612,10 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!enable_pml || !atomic_read(&vcpu->kvm->nr_memslots_dirty_logging))
 		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
-	if (cpu_has_vmx_xsaves()) {
-		/* Exposing XSAVES only when XSAVE is exposed */
-		bool xsaves_enabled =
-			boot_cpu_has(X86_FEATURE_XSAVE) &&
-			guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-			guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
-
-		vcpu->arch.xsaves_enabled = xsaves_enabled;
-
+	if (cpu_has_vmx_xsaves())
 		vmx_adjust_secondary_exec_control(vmx, &exec_control,
 						  SECONDARY_EXEC_XSAVES,
-						  xsaves_enabled, false);
-	}
+						  vcpu->arch.xsaves_enabled, false);
 
 	/*
 	 * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either
@@ -7749,8 +7740,15 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	/* xsaves_enabled is recomputed in vmx_compute_secondary_exec_control(). */
-	vcpu->arch.xsaves_enabled = false;
+	/*
+	 * XSAVES is effectively enabled if and only if XSAVE is also exposed
+	 * to the guest.  XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be
+	 * set if and only if XSAVE is supported.
+	 */
+	vcpu->arch.xsaves_enabled = cpu_has_vmx_xsaves() &&
+				    boot_cpu_has(X86_FEATURE_XSAVE) &&
+				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
 
 	vmx_setup_uret_msrs(vmx);
 
-- 
2.41.0.694.ge786442a9b-goog
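The "if and only if" reasoning above can be restated as a plain boolean
predicate.  The sketch below is an illustration only (toy booleans, not KVM
code); xsaves_effectively_enabled() is a made-up name for this note:

/* xsaves_rule.c - restatement of the XSAVES-enable condition, not KVM code */
#include <stdbool.h>
#include <stdio.h>

/*
 * XSAVES faults (#UD) if CR4.OSXSAVE=0, and the guest can only set
 * CR4.OSXSAVE if XSAVE is exposed to it, so "XSAVES enabled" necessarily
 * implies "XSAVE exposed" on top of host and VMX-control support.
 */
static bool xsaves_effectively_enabled(bool vmx_xsaves_ctrl, bool host_xsave,
				       bool guest_xsave, bool guest_xsaves)
{
	return vmx_xsaves_ctrl && host_xsave && guest_xsave && guest_xsaves;
}

int main(void)
{
	/* Guest CPUID has XSAVES but not XSAVE: effectively disabled. */
	printf("%d\n", xsaves_effectively_enabled(true, true, false, true)); /* 0 */
	return 0;
}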
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:42 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-5-seanjc@google.com>
Subject: [PATCH v3 04/15] KVM: VMX: Check KVM CPU caps, not just VMX MSR support, for XSAVE enabling
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Check KVM CPU capabilities instead of raw VMX support for XSAVES when
determining whether or not XSAVES can/should be exposed to the guest.
Practically speaking, it's nonsensical/impossible for a CPU to support
"enable XSAVES" without XSAVES being supported natively.  The real
motivation for checking kvm_cpu_cap_has() is to allow using the governed
feature's standard check-and-set logic.

Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1bf85bd53416..78f292b7e2c5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7745,7 +7745,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * to the guest.  XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be
 	 * set if and only if XSAVE is supported.
 	 */
-	vcpu->arch.xsaves_enabled = cpu_has_vmx_xsaves() &&
+	vcpu->arch.xsaves_enabled = kvm_cpu_cap_has(X86_FEATURE_XSAVES) &&
 				    boot_cpu_has(X86_FEATURE_XSAVE) &&
 				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
-- 
2.41.0.694.ge786442a9b-goog
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:43 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-6-seanjc@google.com>
Subject: [PATCH v3 05/15] KVM: VMX: Rename XSAVES control to follow KVM's preferred "ENABLE_XYZ"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Rename the XSAVES secondary execution control to follow KVM's preferred
style so that XSAVES related logic can use common macros that depend on
KVM's preferred style.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/vmx.h      | 2 +-
 arch/x86/kvm/vmx/capabilities.h | 2 +-
 arch/x86/kvm/vmx/hyperv.c       | 2 +-
 arch/x86/kvm/vmx/nested.c       | 6 +++---
 arch/x86/kvm/vmx/nested.h       | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 arch/x86/kvm/vmx/vmx.h          | 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 0d02c4aafa6f..0e73616b82f3 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -71,7 +71,7 @@
 #define SECONDARY_EXEC_RDSEED_EXITING		VMCS_CONTROL_BIT(RDSEED_EXITING)
 #define SECONDARY_EXEC_ENABLE_PML		VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
 #define SECONDARY_EXEC_PT_CONCEAL_VMX		VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
-#define SECONDARY_EXEC_XSAVES			VMCS_CONTROL_BIT(XSAVES)
+#define SECONDARY_EXEC_ENABLE_XSAVES		VMCS_CONTROL_BIT(XSAVES)
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
 #define SECONDARY_EXEC_PT_USE_GPA		VMCS_CONTROL_BIT(PT_USE_GPA)
 #define SECONDARY_EXEC_TSC_SCALING		VMCS_CONTROL_BIT(TSC_SCALING)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index d0abee35d7ba..41a4533f9989 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -252,7 +252,7 @@ static inline bool cpu_has_vmx_pml(void)
 static inline bool cpu_has_vmx_xsaves(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
-		SECONDARY_EXEC_XSAVES;
+		SECONDARY_EXEC_ENABLE_XSAVES;
 }
 
 static inline bool cpu_has_vmx_waitpkg(void)
diff --git a/arch/x86/kvm/vmx/hyperv.c b/arch/x86/kvm/vmx/hyperv.c
index 79450e1ed7cf..313b8bb5b8a7 100644
--- a/arch/x86/kvm/vmx/hyperv.c
+++ b/arch/x86/kvm/vmx/hyperv.c
@@ -78,7 +78,7 @@
 	 SECONDARY_EXEC_DESC |					\
 	 SECONDARY_EXEC_ENABLE_RDTSCP |				\
 	 SECONDARY_EXEC_ENABLE_INVPCID |			\
-	 SECONDARY_EXEC_XSAVES |				\
+	 SECONDARY_EXEC_ENABLE_XSAVES |				\
	 SECONDARY_EXEC_RDSEED_EXITING |			\
 	 SECONDARY_EXEC_RDRAND_EXITING |			\
 	 SECONDARY_EXEC_TSC_SCALING |				\
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 516391cc0d64..22e08d30baef 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2307,7 +2307,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 			  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
 			  SECONDARY_EXEC_ENABLE_INVPCID |
 			  SECONDARY_EXEC_ENABLE_RDTSCP |
-			  SECONDARY_EXEC_XSAVES |
+			  SECONDARY_EXEC_ENABLE_XSAVES |
 			  SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE |
 			  SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
 			  SECONDARY_EXEC_APIC_REGISTER_VIRT |
@@ -6331,7 +6331,7 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
 		 * If if it were, XSS would have to be checked against
 		 * the XSS exit bitmap in vmcs12.
 		 */
-		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
+		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES);
 	case EXIT_REASON_UMWAIT:
 	case EXIT_REASON_TPAUSE:
 		return nested_cpu_has2(vmcs12,
@@ -6874,7 +6874,7 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
 		SECONDARY_EXEC_ENABLE_INVPCID |
 		SECONDARY_EXEC_ENABLE_VMFUNC |
 		SECONDARY_EXEC_RDSEED_EXITING |
-		SECONDARY_EXEC_XSAVES |
+		SECONDARY_EXEC_ENABLE_XSAVES |
 		SECONDARY_EXEC_TSC_SCALING |
 		SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
 
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 96952263b029..b4b9d51438c6 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -168,7 +168,7 @@ static inline int nested_cpu_has_ept(struct vmcs12 *vmcs12)
 
 static inline bool nested_cpu_has_xsaves(struct vmcs12 *vmcs12)
 {
-	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
+	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES);
 }
 
 static inline bool nested_cpu_has_pml(struct vmcs12 *vmcs12)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 78f292b7e2c5..22975cc949b7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4614,7 +4614,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 
 	if (cpu_has_vmx_xsaves())
 		vmx_adjust_secondary_exec_control(vmx, &exec_control,
-						  SECONDARY_EXEC_XSAVES,
+						  SECONDARY_EXEC_ENABLE_XSAVES,
 						  vcpu->arch.xsaves_enabled, false);
 
 	/*
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 32384ba38499..cde902b44d97 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -562,7 +562,7 @@ static inline u8 vmx_get_rvi(void)
 	 SECONDARY_EXEC_APIC_REGISTER_VIRT |			\
 	 SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |			\
 	 SECONDARY_EXEC_SHADOW_VMCS |				\
-	 SECONDARY_EXEC_XSAVES |				\
+	 SECONDARY_EXEC_ENABLE_XSAVES |				\
 	 SECONDARY_EXEC_RDSEED_EXITING |			\
 	 SECONDARY_EXEC_RDRAND_EXITING |			\
 	 SECONDARY_EXEC_ENABLE_PML |				\
-- 
2.41.0.694.ge786442a9b-goog
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:44 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-7-seanjc@google.com>
Subject: [PATCH v3 06/15] KVM: x86: Use KVM-governed feature framework to track "XSAVES enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Use the governed feature framework to track if XSAVES is "enabled", i.e.
if XSAVES can be used by the guest.  Add a comment in the SVM code to
explain the very unintuitive logic of deliberately NOT checking if XSAVES
is enumerated in the guest CPUID model.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/include/asm/kvm_host.h  |  1 -
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/svm/svm.c           | 17 ++++++++++++---
 arch/x86/kvm/vmx/vmx.c           | 36 ++++++++++++++++----------------
 arch/x86/kvm/x86.c               |  4 ++--
 5 files changed, 35 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 60d430b4650f..9f57aa33798b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -746,7 +746,6 @@ struct kvm_vcpu_arch {
 	u64 smi_count;
 	bool at_instruction_boundary;
 	bool tpr_access_reporting;
-	bool xsaves_enabled;
 	bool xfd_no_write_intercept;
 	u64 ia32_xss;
 	u64 microcode_version;
diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index b29c15d5e038..b896a64e4ac3 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -6,6 +6,7 @@ BUILD_BUG()
 #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
 
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
+KVM_GOVERNED_X86_FEATURE(XSAVES)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6aaa3c7b4578..d67f6e23dcd2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4273,9 +4273,20 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_cpuid_entry2 *best;
 
-	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-				    boot_cpu_has(X86_FEATURE_XSAVE) &&
-				    boot_cpu_has(X86_FEATURE_XSAVES);
+	/*
+	 * SVM doesn't provide a way to disable just XSAVES in the guest, KVM
+	 * can only disable all variants of by disallowing CR4.OSXSAVE from
+	 * being set.  As a result, if the host has XSAVE and XSAVES, and the
+	 * guest has XSAVE enabled, the guest can execute XSAVES without
+	 * faulting.  Treat XSAVES as enabled in this case regardless of
+	 * whether it's advertised to the guest so that KVM context switches
+	 * XSS on VM-Enter/VM-Exit.  Failure to do so would effectively give
+	 * the guest read/write access to the host's XSS.
+	 */
+	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
+	    boot_cpu_has(X86_FEATURE_XSAVES) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
+		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);
 
 	/* Update nrips enabled cache */
 	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 22975cc949b7..6314ca32a5cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4543,16 +4543,19 @@ vmx_adjust_secondary_exec_control(struct vcpu_vmx *vmx, u32 *exec_control,
  * based on a single guest CPUID bit, with a dedicated feature bit.  This also
  * verifies that the control is actually supported by KVM and hardware.
  */
-#define vmx_adjust_sec_exec_control(vmx, exec_control, name, feat_name, ctrl_name, exiting) \
-({									\
-	bool __enabled;							\
-									\
-	if (cpu_has_vmx_##name()) {					\
-		__enabled = guest_cpuid_has(&(vmx)->vcpu,		\
-					    X86_FEATURE_##feat_name);	\
-		vmx_adjust_secondary_exec_control(vmx, exec_control,	\
-			SECONDARY_EXEC_##ctrl_name, __enabled, exiting); \
-	}								\
+#define vmx_adjust_sec_exec_control(vmx, exec_control, name, feat_name, ctrl_name, exiting)	\
+({												\
+	struct kvm_vcpu *__vcpu = &(vmx)->vcpu;							\
+	bool __enabled;										\
+												\
+	if (cpu_has_vmx_##name()) {								\
+		if (kvm_is_governed_feature(X86_FEATURE_##feat_name))				\
+			__enabled = guest_can_use(__vcpu, X86_FEATURE_##feat_name);		\
+		else										\
+			__enabled = guest_cpuid_has(__vcpu, X86_FEATURE_##feat_name);		\
+		vmx_adjust_secondary_exec_control(vmx, exec_control, SECONDARY_EXEC_##ctrl_name,\
+						  __enabled, exiting);				\
+	}											\
 })
 
 /* More macro magic for ENABLE_/opt-in versus _EXITING/opt-out controls. */
@@ -4612,10 +4615,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!enable_pml || !atomic_read(&vcpu->kvm->nr_memslots_dirty_logging))
 		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
-	if (cpu_has_vmx_xsaves())
-		vmx_adjust_secondary_exec_control(vmx, &exec_control,
-						  SECONDARY_EXEC_ENABLE_XSAVES,
-						  vcpu->arch.xsaves_enabled, false);
+	vmx_adjust_sec_exec_feature(vmx, &exec_control, xsaves, XSAVES);
 
 	/*
 	 * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either
@@ -4634,6 +4634,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 						  SECONDARY_EXEC_ENABLE_RDTSCP,
 						  rdpid_or_rdtscp_enabled, false);
 	}
+
 	vmx_adjust_sec_exec_feature(vmx, &exec_control, invpcid, INVPCID);
 
 	vmx_adjust_sec_exec_exiting(vmx, &exec_control, rdrand, RDRAND);
@@ -7745,10 +7746,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * to the guest.  XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be
 	 * set if and only if XSAVE is supported.
 	 */
-	vcpu->arch.xsaves_enabled = kvm_cpu_cap_has(X86_FEATURE_XSAVES) &&
-				    boot_cpu_has(X86_FEATURE_XSAVE) &&
-				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-				    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
+	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
+		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_XSAVES);
 
 	vmx_setup_uret_msrs(vmx);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eba35d43e3fe..34945c7dba38 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1016,7 +1016,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.xcr0 != host_xcr0)
 		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 
-	if (vcpu->arch.xsaves_enabled &&
+	if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
 	    vcpu->arch.ia32_xss != host_xss)
 		wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 }
@@ -1047,7 +1047,7 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.xcr0 != host_xcr0)
 		xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
 
-	if (vcpu->arch.xsaves_enabled &&
+	if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
 	    vcpu->arch.ia32_xss != host_xss)
 		wrmsrl(MSR_IA32_XSS, host_xss);
 }
-- 
2.41.0.694.ge786442a9b-goog
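The interesting part of the vmx_adjust_sec_exec_control() change above is that
governed features consult the cached per-vCPU bitmap while everything else
still falls back to a CPUID lookup, and the selection happens on a constant
feature argument so the untaken branch folds away.  A hedged user-space sketch
of that selection pattern (toy names and stub helpers, not KVM code):

/* governed_select.c - sketch of governed-vs-CPUID selection, not KVM code */
#include <stdbool.h>
#include <stdio.h>

enum { FEAT_XSAVES, FEAT_RDRAND };	/* toy feature IDs */

/* Only XSAVES is "governed" in this toy. */
static inline bool is_governed(unsigned int feat)
{
	return feat == FEAT_XSAVES;
}

/* Stubs standing in for the cached bitmap test and the raw CPUID scan. */
static bool guest_can_use_stub(unsigned int feat)   { (void)feat; return true;  }
static bool guest_cpuid_has_stub(unsigned int feat) { (void)feat; return false; }

/*
 * Mirrors the shape of the updated macro: governed features take the cached
 * path, everything else takes the CPUID path.  With a constant 'feat', the
 * compiler evaluates is_governed() at compile time and drops the dead branch.
 */
#define feature_enabled(feat) \
	(is_governed(feat) ? guest_can_use_stub(feat) : guest_cpuid_has_stub(feat))

int main(void)
{
	printf("XSAVES: %d\n", feature_enabled(FEAT_XSAVES));	/* bitmap path */
	printf("RDRAND: %d\n", feature_enabled(FEAT_RDRAND));	/* CPUID path */
	return 0;
}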
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:45 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-8-seanjc@google.com>
Subject: [PATCH v3 07/15] KVM: nVMX: Use KVM-governed feature framework to track "nested VMX enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Track "VMX exposed to L1" via a governed feature flag instead of using a
dedicated helper to provide the same functionality.  The main goal is to
drive convergence between VMX and SVM with respect to querying features
that are controllable via module param (SVM likes to cache nested
features); avoiding the guest CPUID lookups at runtime is just a bonus
and unlikely to provide any meaningful performance benefits.

No functional change intended.

Reviewed-by: Yuan Yao
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/governed_features.h |  1 +
 arch/x86/kvm/vmx/nested.c        |  7 ++++---
 arch/x86/kvm/vmx/vmx.c           | 21 ++++++---------------
 arch/x86/kvm/vmx/vmx.h           |  1 -
 4 files changed, 11 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index b896a64e4ac3..22446614bf49 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -7,6 +7,7 @@ BUILD_BUG()
 
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
+KVM_GOVERNED_X86_FEATURE(VMX)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 22e08d30baef..c5ec0ef51ff7 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6426,7 +6426,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 	vmx = to_vmx(vcpu);
 	vmcs12 = get_vmcs12(vcpu);
 
-	if (nested_vmx_allowed(vcpu) &&
+	if (guest_can_use(vcpu, X86_FEATURE_VMX) &&
 	    (vmx->nested.vmxon || vmx->nested.smm.vmxon)) {
 		kvm_state.hdr.vmx.vmxon_pa = vmx->nested.vmxon_ptr;
 		kvm_state.hdr.vmx.vmcs12_pa = vmx->nested.current_vmptr;
@@ -6567,7 +6567,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 		if (kvm_state->flags & ~KVM_STATE_NESTED_EVMCS)
 			return -EINVAL;
 	} else {
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
 			return -EINVAL;
 
 		if (!page_address_valid(vcpu, kvm_state->hdr.vmx.vmxon_pa))
@@ -6601,7 +6601,8 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	if ((kvm_state->flags & KVM_STATE_NESTED_EVMCS) &&
-	    (!nested_vmx_allowed(vcpu) || !vmx->nested.enlightened_vmcs_enabled))
+	    (!guest_can_use(vcpu, X86_FEATURE_VMX) ||
+	     !vmx->nested.enlightened_vmcs_enabled))
 		return -EINVAL;
 
 	vmx_leave_nested(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6314ca32a5cf..caeb415eb5a3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1908,17 +1908,6 @@ static void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu)
 	vmcs_write64(TSC_MULTIPLIER, vcpu->arch.tsc_scaling_ratio);
 }
 
-/*
- * nested_vmx_allowed() checks whether a guest should be allowed to use VMX
- * instructions and MSRs (i.e., nested VMX). Nested VMX is disabled for
- * all guests if the "nested" module option is off, and can also be disabled
- * for a single guest by disabling its VMX cpuid bit.
- */
-bool nested_vmx_allowed(struct kvm_vcpu *vcpu)
-{
-	return nested && guest_cpuid_has(vcpu, X86_FEATURE_VMX);
-}
-
 /*
  * Userspace is allowed to set any supported IA32_FEATURE_CONTROL regardless of
  * guest CPUID.  Note, KVM allows userspace to set "VMX in SMX" to maintain
@@ -2046,7 +2035,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			[msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0];
 		break;
 	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
 			return 1;
 		if (vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				    &msr_info->data))
@@ -2354,7 +2343,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
 		if (!msr_info->host_initiated)
 			return 1; /* they are read-only */
-		if (!nested_vmx_allowed(vcpu))
+		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
 	case MSR_IA32_RTIT_CTL:
@@ -7750,13 +7739,15 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
 		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_XSAVES);
 
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VMX);
+
 	vmx_setup_uret_msrs(vmx);
 
 	if (cpu_has_secondary_exec_ctrls())
 		vmcs_set_secondary_exec_control(vmx,
 						vmx_secondary_exec_control(vmx));
 
-	if (nested_vmx_allowed(vcpu))
+	if (guest_can_use(vcpu, X86_FEATURE_VMX))
 		vmx->msr_ia32_feature_control_valid_bits |=
 			FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 			FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
@@ -7765,7 +7756,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 			~(FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 			  FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
 
-	if (nested_vmx_allowed(vcpu))
+	if (guest_can_use(vcpu, X86_FEATURE_VMX))
 		nested_vmx_cr_fixed1_bits_update(vcpu);
 
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index cde902b44d97..c2130d2c8e24 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -374,7 +374,6 @@ struct kvm_vmx {
 	u64 *pid_table;
 };
 
-bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
 void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
 			struct loaded_vmcs *buddy);
 int allocate_vpid(void);
-- 
2.41.0.694.ge786442a9b-goog
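The "bonus" mentioned in the changelog is that guest_can_use() is a single bit
test against a bitmap filled in when CPUID is set, whereas guest_cpuid_has()
walks the vCPU's CPUID entries.  A hedged user-space sketch of that difference
(toy struct, a stand-in index, and CPUID.1:ECX bit 5 for VMX; not KVM code):

/* cpuid_vs_bitmap.c - CPUID scan vs. cached bitmap lookup, not KVM code */
#include <stdbool.h>
#include <stdio.h>

struct cpuid_entry { unsigned int function, index, ecx; };

/* guest_cpuid_has()-style lookup: linear scan over the CPUID entries. */
static bool cpuid_has_vmx(const struct cpuid_entry *entries, int nent)
{
	for (int i = 0; i < nent; i++) {
		if (entries[i].function == 1 && entries[i].index == 0)
			return entries[i].ecx & (1u << 5);	/* CPUID.1:ECX.VMX */
	}
	return false;
}

/* guest_can_use()-style lookup: one bit test, filled in at SET_CPUID time. */
static bool bitmap_has_vmx(unsigned long governed, int vmx_idx)
{
	return governed & (1ul << vmx_idx);
}

int main(void)
{
	struct cpuid_entry entries[] = { { 0, 0, 0 }, { 1, 0, 1u << 5 } };
	unsigned long governed = 0;
	int vmx_idx = 0;			/* toy index for "VMX" */

	if (cpuid_has_vmx(entries, 2))		/* done once, after SET_CPUID */
		governed |= 1ul << vmx_idx;

	printf("VMX usable: %d\n", bitmap_has_vmx(governed, vmx_idx));
	return 0;
}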
From nobody Thu Sep 11 14:00:53 2025
Reply-To: Sean Christopherson
Date: Tue, 15 Aug 2023 13:36:46 -0700
In-Reply-To: <20230815203653.519297-1-seanjc@google.com>
References: <20230815203653.519297-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog
Message-ID: <20230815203653.519297-9-seanjc@google.com>
Subject: [PATCH v3 08/15] KVM: nSVM: Use KVM-governed feature framework to track "NRIPS enabled"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang, Yuan Yao

Track "NRIPS exposed to L1" via a governed feature flag instead of using
a dedicated bit/flag in vcpu_svm.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/governed_features.h | 1 +
 arch/x86/kvm/svm/nested.c        | 6 +++---
 arch/x86/kvm/svm/svm.c           | 4 +---
 arch/x86/kvm/svm/svm.h           | 1 -
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 22446614bf49..722b66af412c 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -8,6 +8,7 @@ BUILD_BUG()
 KVM_GOVERNED_X86_FEATURE(GBPAGES)
 KVM_GOVERNED_X86_FEATURE(XSAVES)
 KVM_GOVERNED_X86_FEATURE(VMX)
+KVM_GOVERNED_X86_FEATURE(NRIPS)
 
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3342cc4a5189..9092f3f8dccf 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -716,7 +716,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	 * what a nrips=0 CPU would do (L1 is responsible for advancing RIP
 	 * prior to injecting the event).
 	 */
-	if (svm->nrips_enabled)
+	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
 		vmcb02->control.next_rip = svm->nested.ctl.next_rip;
 	else if (boot_cpu_has(X86_FEATURE_NRIPS))
 		vmcb02->control.next_rip = vmcb12_rip;
@@ -726,7 +726,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 		svm->soft_int_injected = true;
 		svm->soft_int_csbase = vmcb12_csbase;
 		svm->soft_int_old_rip = vmcb12_rip;
-		if (svm->nrips_enabled)
+		if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
 			svm->soft_int_next_rip = svm->nested.ctl.next_rip;
 		else
 			svm->soft_int_next_rip = vmcb12_rip;
@@ -1026,7 +1026,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (vmcb12->control.exit_code != SVM_EXIT_ERR)
 		nested_save_pending_event_to_vmcb12(svm, vmcb12);
 
-	if (svm->nrips_enabled)
+	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
 		vmcb12->control.next_rip = vmcb02->control.next_rip;
 
 	vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d67f6e23dcd2..c8b97cb3138c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4288,9 +4288,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
 		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);
 
-	/* Update nrips enabled cache */
-	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
-			     guest_cpuid_has(vcpu, X86_FEATURE_NRIPS);
+	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS);
 
 	svm->tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(vcpu, X86_FEATURE_TSCRATEMSR);
 	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5115b35a4d31..e147f2046ffa 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -259,7 +259,6 @@ struct vcpu_svm {
 	bool soft_int_injected;
 
 	/* optional nested SVM features that are enabled for this guest */
-	bool nrips_enabled : 1;
 	bool tsc_scaling_enabled : 1;
 	bool v_vmload_vmsave_enabled : 1;
 	bool lbrv_enabled : 1;
-- 
2.41.0.694.ge786442a9b-goog
16:38:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33672 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238950AbjHOUh7 (ORCPT ); Tue, 15 Aug 2023 16:37:59 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1E8F11FF9 for ; Tue, 15 Aug 2023 13:37:35 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-589fae40913so28344697b3.0 for ; Tue, 15 Aug 2023 13:37:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131832; x=1692736632; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=/gyHxemWXBUuNtn9G6ZmWdv+kLBQgF3gAw0dAQJK39U=; b=QjV9eR7WVBYauzi+3qEmMZRbGnogEfU+B0zFQOnCsCmKlA5Y7+WABAvNAKoUACRX2L FJ7bX6MfATSvoHUMEjiRdyOyWdK7TAlddykPKqbCqchQezNPP6N6uWjI6TcTnnONoQOo Y7zIs8TyBi4tOanPhjiQgenTT3IGnkss334NihSIzpzM+0JLpXHYP+bxavbA3mzM5b4e cQ5Tm47QyV+q1KcQgqcNOl3dUPAgOeWXukEWRpmrEqdobuhlJXP8E+uEKa/EOYHQPTBY ltSICRi8gyLsbi/nOUR5RjYVMs8l1CUTjh/OVD9xL22H1feIKA+awmeD6atr/MdsHmTo 2iGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131832; x=1692736632; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=/gyHxemWXBUuNtn9G6ZmWdv+kLBQgF3gAw0dAQJK39U=; b=AdvxGUSgb0y+Oq3xxFvro+Tv5YuQ6JUdgl3fM1r/hiIngF//xSqwM29jyX81Plg5nw Bk4Lic50SN6eAdFKrG3oeZeP7Vv7+WZ/Hg4+b8dX8UwcGOqRmXdqMcfeZ4SbLX/t5KBw 7iiIN2qYHkIT7bwC8VqcFa9PUtwK9m+wy3FoxKPOPM22/4HJlpLYqg/K/411vlKcoNhc zLZdQ7HbP8aMQCMELwSp32DdkJnGLA1Vbicf0hJeHL/mw8xxMjqx5z6wtYfiwKeayVco jHAL0acteFeWFMXSsQwKlUjqiK43avsxp7qt5JhhcsLt75mwX7HyLjQp7lcAqQiQEAje /0xA== X-Gm-Message-State: AOJu0YzvToCafLrSiWH8wu6x39j4mj5I6pWpPDGZG1fWEUrMNdSe74v+ H62DeNKTDzp/FNnh5OA+m6GMlQOs7fk= X-Google-Smtp-Source: AGHT+IGI0BUgnGQhn/IYqVCfzOsUxVZRnsC+Kr6E6SXC6uQOYKMqqfBcEmr5TP2ZI+7oiAkj1r2Qc7Km70Q= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6902:4c7:b0:d5d:511b:16da with SMTP id v7-20020a05690204c700b00d5d511b16damr180847ybs.2.1692131832254; Tue, 15 Aug 2023 13:37:12 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:47 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-10-seanjc@google.com> Subject: [PATCH v3 09/15] KVM: nSVM: Use KVM-governed feature framework to track "TSC scaling enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "TSC scaling exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, this fixes a benign bug where KVM would mark TSC scaling as exposed to L1 even if overall nested SVM supported is disabled, i.e. KVM would let L1 write MSR_AMD64_TSC_RATIO even when KVM didn't advertise TSCRATEMSR support to userspace. 
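The fix described above comes down to which inputs feed the flag: the old code keyed off the tsc_scaling module param, whereas kvm_governed_feature_check_and_set() keys off the capability KVM actually advertises. A minimal standalone sketch of that difference, with the capability, CPUID, and module param inputs reduced to plain booleans (illustrative only, not the kernel code):

/* Model of why keying off KVM's capability fixes the benign bug; not kernel code. */
#include <assert.h>
#include <stdbool.h>

static bool kvm_cap_tscratemsr;		/* what KVM advertises; false if nested SVM is off */
static bool guest_cpuid_tscratemsr;	/* what userspace put in guest CPUID */
static bool module_param_tsc_scaling = true;

/* Old behavior: tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(...). */
static bool old_flag(void)
{
	return module_param_tsc_scaling && guest_cpuid_tscratemsr;
}

/* New behavior: the governed bit is set only if KVM itself supports the feature. */
static bool governed_bit(void)
{
	return kvm_cap_tscratemsr && guest_cpuid_tscratemsr;
}

int main(void)
{
	/* Nested SVM disabled, so KVM does not advertise TSCRATEMSR ... */
	kvm_cap_tscratemsr = false;
	guest_cpuid_tscratemsr = true;

	assert(old_flag());		/* ... yet the old flag still ended up set, */
	assert(!governed_bit());	/* while the governed bit stays clear. */
	return 0;
}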
Signed-off-by: Sean Christopherson Reviewed-by: Yuan Yao --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/nested.c | 2 +- arch/x86/kvm/svm/svm.c | 10 ++++++---- arch/x86/kvm/svm/svm.h | 1 - 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 722b66af412c..32c0469cf952 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -9,6 +9,7 @@ KVM_GOVERNED_X86_FEATURE(GBPAGES) KVM_GOVERNED_X86_FEATURE(XSAVES) KVM_GOVERNED_X86_FEATURE(VMX) KVM_GOVERNED_X86_FEATURE(NRIPS) +KVM_GOVERNED_X86_FEATURE(TSCRATEMSR) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 9092f3f8dccf..da65948064dc 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -695,7 +695,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_s= vm *svm, =20 vmcb02->control.tsc_offset =3D vcpu->arch.tsc_offset; =20 - if (svm->tsc_scaling_enabled && + if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) && svm->tsc_ratio_msr !=3D kvm_caps.default_tsc_scaling_ratio) nested_svm_update_tsc_ratio_msr(vcpu); =20 diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index c8b97cb3138c..15c79457d8c5 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2809,7 +2809,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) =20 switch (msr_info->index) { case MSR_AMD64_TSC_RATIO: - if (!msr_info->host_initiated && !svm->tsc_scaling_enabled) + if (!msr_info->host_initiated && + !guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR)) return 1; msr_info->data =3D svm->tsc_ratio_msr; break; @@ -2959,7 +2960,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr) switch (ecx) { case MSR_AMD64_TSC_RATIO: =20 - if (!svm->tsc_scaling_enabled) { + if (!guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR)) { =20 if (!msr->host_initiated) return 1; @@ -2981,7 +2982,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr) =20 svm->tsc_ratio_msr =3D data; =20 - if (svm->tsc_scaling_enabled && is_guest_mode(vcpu)) + if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) && + is_guest_mode(vcpu)) nested_svm_update_tsc_ratio_msr(vcpu); =20 break; @@ -4289,8 +4291,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES); =20 kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR); =20 - svm->tsc_scaling_enabled =3D tsc_scaling && guest_cpuid_has(vcpu, X86_FEA= TURE_TSCRATEMSR); svm->lbrv_enabled =3D lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV); =20 svm->v_vmload_vmsave_enabled =3D vls && guest_cpuid_has(vcpu, X86_FEATURE= _V_VMSAVE_VMLOAD); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index e147f2046ffa..3696f10e2887 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -259,7 +259,6 @@ struct vcpu_svm { bool soft_int_injected; =20 /* optional nested SVM features that are enabled for this guest */ - bool tsc_scaling_enabled : 1; bool v_vmload_vmsave_enabled : 1; bool lbrv_enabled : 1; bool pause_filter_enabled : 1; --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
666F5C07E8E for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239043AbjHOUie (ORCPT ); Tue, 15 Aug 2023 16:38:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33718 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238990AbjHOUh7 (ORCPT ); Tue, 15 Aug 2023 16:37:59 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66E132102 for ; Tue, 15 Aug 2023 13:37:36 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-58c54f4e2a2so14340077b3.3 for ; Tue, 15 Aug 2023 13:37:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131834; x=1692736634; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=xweNaIPfBOW/m2asVm+iJf6F67YoA7+If7dzBXn/b2Y=; b=2oLSelu7YkNFbhZ/UQnbxUbVBAXwQYUifvFWTVXmtT4FzbqBb4rzm3HnWelumm7aaj ern2HGbKmvY5u7ctzw7P7dxSvh02STXDN7MwyP52+48bArlfGV6h58bZdNl0NGvDAziS Bw3xVpuEo7BfRpATSQfhfBQfH3FEqDGHxydQfz0n5B55+IIiEEKvX9azNbN+DCWXELE0 32SxhRkJiN1c5+hx5nRMdl3yf3zvYlSRALQoqwwwwTDzkRXioAQM8+RV3to588YmkUkQ xBSYkh8vt7x6Vn4zxrkpq+Ho2UepxYjfhJA1aOXeXQAQeBoaRPqCZ1oWdQ9iVSA1gsx3 Qxeg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131834; x=1692736634; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=xweNaIPfBOW/m2asVm+iJf6F67YoA7+If7dzBXn/b2Y=; b=Rl26+ojNor7fqkpsNxwj3aK9zW50Kwh8qdseD8PrB1ISnGzJMafbBUzteoRiRYn4x1 mwYwS14HLRBQ4p6ZgYJ2GccbXupEDe0OXbgaECu3km71Xbvs7TBUdyMTV1XYzkDURUN3 unIt6a1BHNwqCVCGz9jEOmltqPJ4fLCk88nRyTRZBd/j2ALPIx1swhNk8RZGAd3Rrjon /T9gHtMdSQgOOB5mGwNLYEHSFk31+YIx8QDcs2qzleHFDq3219AwqeGeJilbGkQTEOH2 T+YmfUh6SjIp1aT4D/yUHolm//ZAeyxDprecE7zDiThzeYM2j+JCam4gyiGvXvr7Lupu UJ7Q== X-Gm-Message-State: AOJu0Yw33WGKekm2r2uZpVnLK2MQAs48fY7wOiQ+082HZNEsUc0A4lXi QXgvMvVkl2xANUOsgCExdA8t0Ot28Ks= X-Google-Smtp-Source: AGHT+IE8HWufBzXiSTHPBdArXsit6jTp3EmYvkjHZ8UH0EAgkj5U6Kyx0wfq5t5UtKprhvbBJJaPc841cxQ= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a81:af17:0:b0:586:5d03:67c8 with SMTP id n23-20020a81af17000000b005865d0367c8mr197252ywh.3.1692131834240; Tue, 15 Aug 2023 13:37:14 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:48 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-11-seanjc@google.com> Subject: [PATCH v3 10/15] KVM: nSVM: Use KVM-governed feature framework to track "vVM{SAVE,LOAD} enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "virtual VMSAVE/VMLOAD exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Opportunistically add a comment explaining why KVM disallows virtual VMLOAD/VMSAVE when the vCPU model is Intel. No functional change intended. 
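A standalone sketch of the decision encoded in the hunks below: the V_VMSAVE_VMLOAD governed bit is set only for non-Intel vCPU models, so nested_vmcb_needs_vls_intercept() keeps the VMLOAD/VMSAVE intercepts whenever the guest is advertised as Intel. The sketch folds KVM's capability and CPUID checks into a single boolean and omits the nested-NPT condition, so treat it as illustrative only:

/* Model of the "intercept VMLOAD/VMSAVE for Intel vCPU models" rule; not kernel code. */
#include <assert.h>
#include <stdbool.h>

static bool vcpu_is_intel;	/* guest vendor per its CPUID model */
static bool can_use_vls;	/* stand-in for the governed V_VMSAVE_VMLOAD bit */

static void model_after_set_cpuid(bool cpuid_has_vls)
{
	can_use_vls = !vcpu_is_intel && cpuid_has_vls;
}

/* Mirrors the shape of nested_vmcb_needs_vls_intercept(), minus the NPT check. */
static bool needs_vls_intercept(void)
{
	return !can_use_vls;
}

int main(void)
{
	vcpu_is_intel = true;
	model_after_set_cpuid(true);
	assert(needs_vls_intercept());	/* Intel model: always intercept */

	vcpu_is_intel = false;
	model_after_set_cpuid(true);
	assert(!needs_vls_intercept());
	return 0;
}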
Signed-off-by: Sean Christopherson --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/nested.c | 2 +- arch/x86/kvm/svm/svm.c | 10 +++++++--- arch/x86/kvm/svm/svm.h | 1 - 4 files changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 32c0469cf952..f01a95fd0071 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -10,6 +10,7 @@ KVM_GOVERNED_X86_FEATURE(XSAVES) KVM_GOVERNED_X86_FEATURE(VMX) KVM_GOVERNED_X86_FEATURE(NRIPS) KVM_GOVERNED_X86_FEATURE(TSCRATEMSR) +KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index da65948064dc..24d47ebeb0e0 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -107,7 +107,7 @@ static void nested_svm_uninit_mmu_context(struct kvm_vc= pu *vcpu) =20 static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm) { - if (!svm->v_vmload_vmsave_enabled) + if (!guest_can_use(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD)) return true; =20 if (!nested_npt_enabled(svm)) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 15c79457d8c5..7cecbb58c60f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1201,8 +1201,6 @@ static inline void init_vmcb_after_set_cpuid(struct k= vm_vcpu *vcpu) =20 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0); set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0); - - svm->v_vmload_vmsave_enabled =3D false; } else { /* * If hardware supports Virtual VMLOAD VMSAVE then enable it @@ -4295,7 +4293,13 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu= *vcpu) =20 svm->lbrv_enabled =3D lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV); =20 - svm->v_vmload_vmsave_enabled =3D vls && guest_cpuid_has(vcpu, X86_FEATURE= _V_VMSAVE_VMLOAD); + /* + * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that + * VMLOAD drops bits 63:32 of SYSENTER (ignoring the fact that exposing + * SVM on Intel is bonkers and extremely unlikely to work). 
+ */ + if (!guest_cpuid_is_intel(vcpu)) + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD); =20 svm->pause_filter_enabled =3D kvm_cpu_cap_has(X86_FEATURE_PAUSEFILTER) && guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 3696f10e2887..b3fdaab57363 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -259,7 +259,6 @@ struct vcpu_svm { bool soft_int_injected; =20 /* optional nested SVM features that are enabled for this guest */ - bool v_vmload_vmsave_enabled : 1; bool lbrv_enabled : 1; bool pause_filter_enabled : 1; bool pause_threshold_enabled : 1; --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7AB84C07E8D for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239173AbjHOUig (ORCPT ); Tue, 15 Aug 2023 16:38:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239021AbjHOUiA (ORCPT ); Tue, 15 Aug 2023 16:38:00 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 206AC268D for ; Tue, 15 Aug 2023 13:37:38 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id 41be03b00d2f7-565c824a23bso2578758a12.3 for ; Tue, 15 Aug 2023 13:37:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131836; x=1692736636; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=TekLB3os5J/4eT6iwdBNnZ9rD2hON94ASFWDIn1alRU=; b=H6GvwrGk+BB0l/EI0GF172Oy1nU5hZOAw8Rvbk3K9BPaCaqP1yJZ65FQSaIVEL/FOH GSKXRjZqR9pmrmOEGndDEDv+D7c/TnCg25xjuG8Kb1nsZHOd/4dfhA0S/QKrn2X/adut CM933nBNHpsSXpEP4VlJ/f9wGdIcXj4cUW6tZPjUKfpoj+GoyFJaM0nhxk2NslblBr7T uaTZuaBiDL/2mzYe5lwBbwspnppSARUDgy39AbLXTl6SUDoyvKv9r64nARAWEKWjIpvB Ss34sv/naNdQx8PddpNniSHZzA/dRdvfFQCS/qjREizf6QH9fMjuMhQne8zzlpQb4rO5 Mstg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131836; x=1692736636; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=TekLB3os5J/4eT6iwdBNnZ9rD2hON94ASFWDIn1alRU=; b=N2gE+ZArM+GMBQlmIbRdHKDMcwak7VRPFBAgD1raP+vUfpuum8JiZB978o0VHR2TTW Sh+z8A7a0lUGOAgj36KxuORqvpPn3XMf4L3K7c8ziZoJSbvTU2L5cqMe8TCCRmVzl3Et 9algJ5dYAJnqAmrixtndk7SHcoKhHVeYUaFHybeNZsr9kpfHCzhTo6QfZ4P9hZV2Kp3P tkczG86v4onUuIHwNoQApCMiDZSI/ASzpFUkVSAx0OFuyRq55DLWJNn5IeWSea+zKA2s n2fGB5iiznETe8H21KPujF/r99W/qHoX5gR6hhLyo+cYB4XHHhbmaqEzOr5VwyiLbLP1 1gRQ== X-Gm-Message-State: AOJu0YxyCgeZv6O49P6Wn8NBuwZFOujhqplQTZbOSItyrRXPuDUVKaUj xs7UaOdCibKRk4AzUJjgC/Tqoh8Lnd4= X-Google-Smtp-Source: AGHT+IFa1+rQvY3jz+h72qM9pQPWLY1qxdEcfJZja1qdIu3F661j7bDBaAryA40O49aDagrx0u96PRbyO1o= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a63:8c5b:0:b0:564:2c32:360a with SMTP id q27-20020a638c5b000000b005642c32360amr2641629pgn.12.1692131836190; Tue, 15 Aug 2023 13:37:16 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:49 -0700 
In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-12-seanjc@google.com> Subject: [PATCH v3 11/15] KVM: nSVM: Use KVM-governed feature framework to track "LBRv enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "LBR virtualization exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "lbrv" param means that the code isn't strictly equivalent, as lbrv_enabled could have been set if nested=3Dfalse where as that the governed feature cannot. But that's a glorified nop as the feature/flag is consumed only by paths that are gated by nSVM being enabled. Signed-off-by: Sean Christopherson --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/nested.c | 23 +++++++++++++---------- arch/x86/kvm/svm/svm.c | 5 ++--- arch/x86/kvm/svm/svm.h | 1 - 4 files changed, 16 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index f01a95fd0071..3a4c0e40e1e0 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -11,6 +11,7 @@ KVM_GOVERNED_X86_FEATURE(VMX) KVM_GOVERNED_X86_FEATURE(NRIPS) KVM_GOVERNED_X86_FEATURE(TSCRATEMSR) KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) +KVM_GOVERNED_X86_FEATURE(LBRV) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 24d47ebeb0e0..f50f74b1a04e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -552,6 +552,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm = *svm, struct vmcb *vmcb12 bool new_vmcb12 =3D false; struct vmcb *vmcb01 =3D svm->vmcb01.ptr; struct vmcb *vmcb02 =3D svm->nested.vmcb02.ptr; + struct kvm_vcpu *vcpu =3D &svm->vcpu; =20 nested_vmcb02_compute_g_pat(svm); =20 @@ -577,18 +578,18 @@ static void nested_vmcb02_prepare_save(struct vcpu_sv= m *svm, struct vmcb *vmcb12 vmcb_mark_dirty(vmcb02, VMCB_DT); } =20 - kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED); + kvm_set_rflags(vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED); =20 - svm_set_efer(&svm->vcpu, svm->nested.save.efer); + svm_set_efer(vcpu, svm->nested.save.efer); =20 - svm_set_cr0(&svm->vcpu, svm->nested.save.cr0); - svm_set_cr4(&svm->vcpu, svm->nested.save.cr4); + svm_set_cr0(vcpu, svm->nested.save.cr0); + svm_set_cr4(vcpu, svm->nested.save.cr4); =20 svm->vcpu.arch.cr2 =3D vmcb12->save.cr2; =20 - kvm_rax_write(&svm->vcpu, vmcb12->save.rax); - kvm_rsp_write(&svm->vcpu, vmcb12->save.rsp); - kvm_rip_write(&svm->vcpu, vmcb12->save.rip); + kvm_rax_write(vcpu, vmcb12->save.rax); + kvm_rsp_write(vcpu, vmcb12->save.rsp); + kvm_rip_write(vcpu, vmcb12->save.rip); =20 /* In case we don't even reach vcpu_run, the fields are not updated */ vmcb02->save.rax =3D vmcb12->save.rax; @@ -602,7 +603,8 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm = *svm, struct vmcb *vmcb12 vmcb_mark_dirty(vmcb02, VMCB_DR); } =20 - if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENA= BLE_MASK))) { + if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) && + 
(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) { /* * Reserved bits of DEBUGCTL are ignored. Be consistent with * svm_set_msr's definition of reserved bits. @@ -734,7 +736,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_s= vm *svm, =20 vmcb02->control.virt_ext =3D vmcb01->control.virt_ext & LBR_CTL_ENABLE_MASK; - if (svm->lbrv_enabled) + if (guest_can_use(vcpu, X86_FEATURE_LBRV)) vmcb02->control.virt_ext |=3D (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK); =20 @@ -1065,7 +1067,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm) if (!nested_exit_on_intr(svm)) kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); =20 - if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENA= BLE_MASK))) { + if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) && + (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) { svm_copy_lbrs(vmcb12, vmcb02); svm_update_lbrv(vcpu); } else if (unlikely(vmcb01->control.virt_ext & LBR_CTL_ENABLE_MASK)) { diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 7cecbb58c60f..de40745bc8a6 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1032,7 +1032,7 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu) struct vcpu_svm *svm =3D to_svm(vcpu); bool current_enable_lbrv =3D svm->vmcb->control.virt_ext & LBR_CTL_ENABLE= _MASK; bool enable_lbrv =3D (svm_get_lbr_vmcb(svm)->save.dbgctl & DEBUGCTLMSR_LB= R) || - (is_guest_mode(vcpu) && svm->lbrv_enabled && + (is_guest_mode(vcpu) && guest_can_use(vcpu, X86_FEATURE_LBRV) && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK)); =20 if (enable_lbrv =3D=3D current_enable_lbrv) @@ -4290,8 +4290,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) =20 kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS); kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR); - - svm->lbrv_enabled =3D lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LBRV); =20 /* * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index b3fdaab57363..45cbbdeac3a3 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -259,7 +259,6 @@ struct vcpu_svm { bool soft_int_injected; =20 /* optional nested SVM features that are enabled for this guest */ - bool lbrv_enabled : 1; bool pause_filter_enabled : 1; bool pause_threshold_enabled : 1; bool vgif_enabled : 1; --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88A20C10F00 for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239445AbjHOUii (ORCPT ); Tue, 15 Aug 2023 16:38:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239030AbjHOUiB (ORCPT ); Tue, 15 Aug 2023 16:38:01 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B8B352110 for ; Tue, 15 Aug 2023 13:37:39 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-58c8cbf0a0dso834007b3.1 for ; Tue, 15 Aug 2023 13:37:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; 
t=1692131838; x=1692736638; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=quIPS1pAkj8snq1ZBSsiom42DynaYCyzTMFv1WUiS2M=; b=RvRWqcWMpEvAk1DUNu/JSGL6I3nolm8Pj4MN/HkYUT+sPFvsYA3TSAh5vBr7kmMt/H J40DMugrpKOgkOr4zldFpm0Ft197XG7kKdrlSaAMlXuXwcE+1qJVh6lVQbcHk1gvCtxF IPdKxacSEA9rUJy+wn4nJqerjEyHVLI1o1SwKD36GQhcVKF2C51gd42peJyWsvi5J+Pk i1i32qqaCfDJm7juIktsbPASIFqkyH6DH3CZiFskIb0vX0BWbGaIubYLHqFxw95URUMp WBptsQKNO4eS6W9Z/tqU22fGyr1wAVg6glABsnRaMcp/YO1Hdt1p7sN3VTGZt6gFQczz faPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131838; x=1692736638; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=quIPS1pAkj8snq1ZBSsiom42DynaYCyzTMFv1WUiS2M=; b=EtJrAPj7shSc32cBSaop5U64F3FCRezGIKGZJbfQkyR3l+tQrU3iuazeA0fx1pqVxA MndEwXAGYU2BGXv70PLivzZQ/KtPoM39xSGswKQk15DsIsS5yApGY2iYFaACGqYI0Fz/ 4uZEEK3UoIGcdT4luZmtOFO94GXjPfubuMT8Yu5Nph4kxRh2CLiY99imWOAPXLOZvwbV ktPALVsfM2+DX2bzTjVm9lO1KwbUEDavXNt4W5le8eIEKUTbc84jcEHtu4LegkHeRslf kFIGYatr8sm+dEImLZZkzmaCmlkdD3hNzWckfMSynn22cvB6hcQf/wvljFpW/RndsIeB UwdA== X-Gm-Message-State: AOJu0Yy40fEthMDNzqh/Q01LLZ9xv/UeJjx1DSwtNcWV8nnfsaIyGX/l rpQywmgjnz8VHMWSHC88nM6gSsDDj+w= X-Google-Smtp-Source: AGHT+IGvFX/F2YB5U602ZWBWYYnMgRv9dvTBT04aawRwHtFIqhCUHsWEJDTV7yn9hv1vy8J/KeqCknhg/6k= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:dc87:0:b0:ca3:3341:6315 with SMTP id y129-20020a25dc87000000b00ca333416315mr1741ybe.0.1692131838149; Tue, 15 Aug 2023 13:37:18 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:50 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-13-seanjc@google.com> Subject: [PATCH v3 12/15] KVM: nSVM: Use KVM-governed feature framework to track "Pause Filter enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "Pause Filtering is exposed to L1" via governed feature flags instead of using dedicated bits/flags in vcpu_svm. No functional change intended. 
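A standalone sketch of how the nested pause filter values are selected in the nested.c hunk below: L1's pause_filter_count and pause_filter_thresh are honored only when the corresponding feature is exposed to L1, and collapse to zero otherwise (an illustrative model, not the kernel code):

/* Model of the pause_count12/pause_thresh12 selection; not kernel code. */
#include <assert.h>
#include <stdbool.h>

struct nested_ctl_model {
	unsigned short pause_filter_count;
	unsigned short pause_filter_thresh;
};

static void pick_pause_values(const struct nested_ctl_model *ctl,
			      bool can_use_pausefilter, bool can_use_pfthreshold,
			      unsigned short *count12, unsigned short *thresh12)
{
	*count12 = can_use_pausefilter ? ctl->pause_filter_count : 0;
	*thresh12 = can_use_pfthreshold ? ctl->pause_filter_thresh : 0;
}

int main(void)
{
	struct nested_ctl_model ctl = {
		.pause_filter_count = 3000,
		.pause_filter_thresh = 128,
	};
	unsigned short count, thresh;

	pick_pause_values(&ctl, true, false, &count, &thresh);
	assert(count == 3000 && thresh == 0);

	pick_pause_values(&ctl, false, false, &count, &thresh);
	assert(count == 0 && thresh == 0);
	return 0;
}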
Signed-off-by: Sean Christopherson Reviewed-by: Yuan Yao --- arch/x86/kvm/governed_features.h | 2 ++ arch/x86/kvm/svm/nested.c | 10 ++++++++-- arch/x86/kvm/svm/svm.c | 7 ++----- arch/x86/kvm/svm/svm.h | 2 -- 4 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 3a4c0e40e1e0..9afd34f30599 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -12,6 +12,8 @@ KVM_GOVERNED_X86_FEATURE(NRIPS) KVM_GOVERNED_X86_FEATURE(TSCRATEMSR) KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) KVM_GOVERNED_X86_FEATURE(LBRV) +KVM_GOVERNED_X86_FEATURE(PAUSEFILTER) +KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index f50f74b1a04e..ac03b2bc5b2c 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -743,8 +743,14 @@ static void nested_vmcb02_prepare_control(struct vcpu_= svm *svm, if (!nested_vmcb_needs_vls_intercept(svm)) vmcb02->control.virt_ext |=3D VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK; =20 - pause_count12 =3D svm->pause_filter_enabled ? svm->nested.ctl.pause_filte= r_count : 0; - pause_thresh12 =3D svm->pause_threshold_enabled ? svm->nested.ctl.pause_f= ilter_thresh : 0; + if (guest_can_use(vcpu, X86_FEATURE_PAUSEFILTER)) + pause_count12 =3D svm->nested.ctl.pause_filter_count; + else + pause_count12 =3D 0; + if (guest_can_use(vcpu, X86_FEATURE_PFTHRESHOLD)) + pause_thresh12 =3D svm->nested.ctl.pause_filter_thresh; + else + pause_thresh12 =3D 0; if (kvm_pause_in_guest(svm->vcpu.kvm)) { /* use guest values since host doesn't intercept PAUSE */ vmcb02->control.pause_filter_count =3D pause_count12; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index de40745bc8a6..9bfff65e8b7a 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4300,11 +4300,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu= *vcpu) if (!guest_cpuid_is_intel(vcpu)) kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD); =20 - svm->pause_filter_enabled =3D kvm_cpu_cap_has(X86_FEATURE_PAUSEFILTER) && - guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER); - - svm->pause_threshold_enabled =3D kvm_cpu_cap_has(X86_FEATURE_PFTHRESHOLD)= && - guest_cpuid_has(vcpu, X86_FEATURE_PFTHRESHOLD); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD); =20 svm->vgif_enabled =3D vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF); =20 diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 45cbbdeac3a3..d57a096e070a 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -259,8 +259,6 @@ struct vcpu_svm { bool soft_int_injected; =20 /* optional nested SVM features that are enabled for this guest */ - bool pause_filter_enabled : 1; - bool pause_threshold_enabled : 1; bool vgif_enabled : 1; bool vnmi_enabled : 1; =20 --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9766EC07E8F for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239640AbjHOUik (ORCPT ); Tue, 15 Aug 2023 16:38:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48148 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239107AbjHOUiD (ORCPT ); Tue, 15 Aug 2023 16:38:03 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B64A8211F for ; Tue, 15 Aug 2023 13:37:42 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id d9443c01a7336-1bdcb3fc6a4so38631225ad.3 for ; Tue, 15 Aug 2023 13:37:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131840; x=1692736640; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=T8HV7UgIa1dG2H5Sf/ED5TJcqQr5XJgF9Itu7yEIR4Q=; b=eRl6rED/Pj2iZDULtLilXVj59YR7XMUFqHnszIcxYB5GnQ+iWW9mAG9veo0hrSniC/ 9QNra63TT3K0jSUT+FxdTV6Ddex/szHO6sFqpBLz3S6BwCVN6bqODIWqR8dLiaFClgER J/1jxVz9oU0Wiau77mexr81UK3oI/uAVpyHD71+khbEqeJszPPlDzM1m+ZsgFF+g7U7d IkTJZo+Ug6ezuIT1dkaCx/ACl0e8DmJro07eeJgBXef5tdWaMxcBYf+1wUF3xS+Ji0PW iKu972u8veilFphQuTnxZwg9pHVo1kmQn2pMH1Z4m+eb/Cxopj2WSPsDT8FvW/LlU1hp NRtQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131840; x=1692736640; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=T8HV7UgIa1dG2H5Sf/ED5TJcqQr5XJgF9Itu7yEIR4Q=; b=beL6U6/cMCHZLkryd4r5noVzrt3EQCEAx8PD0qP/pu5cDjy5bccPOePWgJolpVBGm2 U5sDDQOJEezCPTEqOrhgCArS0XCLO5+ilCc3W5T23VBBEDiWUM6+Yx095xJSZLLJIsJK vdk061wmj/4MH6wpQuIdmkTAP8QV/ux/ciEWAxIrvAItE3fETEco34uevbaKF+3UIWGG aXC9REx+wFIBGe7lBEjT3z24WRkbndlnXEpDjmkFWtpQ8Ng2jtRUgsZJGmVbWdZHDfSI kV1xdnDeer+viA4P60/KYxPMc16twq8nXgyoLrROjLfcJsDF1CwQp93LZpd0g7PokDyf 5j1Q== X-Gm-Message-State: AOJu0YxWHYkslQes8ACriJno9CKvlzrZd+lqCUcPV4hd25M/UzZcybHc Ot2O6vHL8xe+ScFiRXfAlytpImX5s08= X-Google-Smtp-Source: AGHT+IFBBFSCSZHMid9QQ0CZPxzLqvzwEiYVZ9f1w0QfrdjyI6Rw65m/hlyGdz+4GDRzt0C902bFASlnFvY= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:f54e:b0:1b9:d335:1742 with SMTP id h14-20020a170902f54e00b001b9d3351742mr5767760plf.11.1692131840421; Tue, 15 Aug 2023 13:37:20 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:51 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-14-seanjc@google.com> Subject: [PATCH v3 13/15] KVM: nSVM: Use KVM-governed feature framework to track "vGIF enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "virtual GIF exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "vgif" param means that the code isn't strictly equivalent, as vgif_enabled could have been set if nested=3Dfalse where as that the governed feature cannot. 
But that's a glorified nop as the feature/flag is consumed only by paths that are gated by nSVM being enabled. Signed-off-by: Sean Christopherson --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/nested.c | 3 ++- arch/x86/kvm/svm/svm.c | 3 +-- arch/x86/kvm/svm/svm.h | 5 +++-- 4 files changed, 7 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 9afd34f30599..368696c2e96b 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -14,6 +14,7 @@ KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) KVM_GOVERNED_X86_FEATURE(LBRV) KVM_GOVERNED_X86_FEATURE(PAUSEFILTER) KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD) +KVM_GOVERNED_X86_FEATURE(VGIF) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index ac03b2bc5b2c..dd496c9e5f91 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -660,7 +660,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_s= vm *svm, * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes. */ =20 - if (svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK)) + if (guest_can_use(vcpu, X86_FEATURE_VGIF) && + (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK)) int_ctl_vmcb12_bits |=3D (V_GIF_MASK | V_GIF_ENABLE_MASK); else int_ctl_vmcb01_bits |=3D (V_GIF_MASK | V_GIF_ENABLE_MASK); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9bfff65e8b7a..9eac0ad3403e 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4302,8 +4302,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) =20 kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER); kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD); - - svm->vgif_enabled =3D vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF); =20 svm->vnmi_enabled =3D vnmi && guest_cpuid_has(vcpu, X86_FEATURE_VNMI); =20 diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index d57a096e070a..eaddaac6bf18 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -22,6 +22,7 @@ #include #include =20 +#include "cpuid.h" #include "kvm_cache_regs.h" =20 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) @@ -259,7 +260,6 @@ struct vcpu_svm { bool soft_int_injected; =20 /* optional nested SVM features that are enabled for this guest */ - bool vgif_enabled : 1; bool vnmi_enabled : 1; =20 u32 ldr_reg; u32 dfr_reg; struct page *avic_backing_page; @@ -443,7 +443,8 @@ static inline bool svm_is_intercept(struct vcpu_svm *sv= m, int bit) =20 static inline bool nested_vgif_enabled(struct vcpu_svm *svm) { - return svm->vgif_enabled && (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK); + return guest_can_use(&svm->vcpu, X86_FEATURE_VGIF) && + (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK); } =20 static inline struct vmcb *get_vgif_vmcb(struct vcpu_svm *svm) --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC3BAC10F1A for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239696AbjHOUin (ORCPT ); Tue, 15 Aug 2023 16:38:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
id S239152AbjHOUiE (ORCPT ); Tue, 15 Aug 2023 16:38:04 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5AF362123 for ; Tue, 15 Aug 2023 13:37:43 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-d6411f96b35so4729269276.1 for ; Tue, 15 Aug 2023 13:37:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131842; x=1692736642; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=VwtSsZcreifatu5kqX13Ny2lYN44xoh9lZLAjI9CDV8=; b=AmAafd7qRjGmX/mLXgqNgydSCc9aK6NcG30m1fuahWAHRHqvFlIbpCKPn+NV6I5UVF OFG34PdV66G+P3lRdo3c1y7tJzczc0T1F6szwbEAaFLi3/wpayPhLYeg25cUaa8PhtsI rcMZxEuEFj092/IFMueABKOaw6IGtwFU48abuvfsvlDZo2QNoSY+a9h6ywJAIhJj3JsS +Hkq4kJZLeYxqWAg+kNPeCFpOkK3axC8bfUC6JTGpBQX17ELYbMghqFqX6Xao0AWByP5 4nVc2Y2GWh9r2Vh1SEvdlERQ5o5yDm30+H+IDd6P4qCJNiaK+toyaviGrU7zakTQGJyL 56Kg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131842; x=1692736642; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=VwtSsZcreifatu5kqX13Ny2lYN44xoh9lZLAjI9CDV8=; b=Dhh/MkYrqAIbI9OVwXEFs++TyG7sixC7/CFAPesfaqLbxBOkV5h+S+EvFpLFgTUleb q/klBUeaCBTW0QLm9seYEkSJ+/56sokh3S+5nCdz5jnaCJLnHQTIMFjooye2A3zd+pgJ 6GPbZrV83v9ecU8SPEuiedOTEm10aUd6u6ZaRIvvZxRLmbVvD+6pjrbLNWaswxS1+4LI MROy6JtZJi5xH13GPqThvjCpbvrYezA8BsCDP4H/wk4AXO2heqd0m6vlIqmIFd8GrRZv P52W4nkJiFCmxh7wd5Y9KtCCSNLufUZTtrceFB2dCQfSeZX0oQVW/fLr5NfjBmmyAXGY WzvQ== X-Gm-Message-State: AOJu0YzZQ2sfENKwYndGIduzpcMNDyyZEEesfYIftgaQfBqAGNr/6hhE uP8Qd/NBUFM0PRg5ZQCs4678qyUbypY= X-Google-Smtp-Source: AGHT+IEg51nJ1FSI9AYLBI/nqW4Ys6jGGyoFtvJP9mNyIiW2JZRRrW9LZzRJC7GQGVWIqdNrI6plVPfgu5M= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:2683:0:b0:d62:7f3f:621d with SMTP id m125-20020a252683000000b00d627f3f621dmr188998ybm.11.1692131842554; Tue, 15 Aug 2023 13:37:22 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:52 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-15-seanjc@google.com> Subject: [PATCH v3 14/15] KVM: nSVM: Use KVM-governed feature framework to track "vNMI enabled" From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Track "virtual NMI exposed to L1" via a governed feature flag instead of using a dedicated bit/flag in vcpu_svm. Note, checking KVM's capabilities instead of the "vnmi" param means that the code isn't strictly equivalent, as vnmi_enabled could have been set if nested=3Dfalse where as that the governed feature cannot. But that's a glorified nop as the feature/flag is consumed only by paths that are gated by nSVM being enabled. 
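A standalone sketch of the predicate that the svm.h hunk below rewrites: nested vNMI is treated as enabled only if the feature is exposed to L1 and L1 actually set the enable bit in its int_ctl. The mask below is a placeholder bit chosen for the model, not AMD's architectural bit position:

/* Model of nested_vnmi_enabled(); not kernel code. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MODEL_V_NMI_ENABLE_MASK	(1u << 26)	/* placeholder, not the real bit */

static bool nested_vnmi_enabled(bool guest_can_use_vnmi, uint32_t vmcb12_int_ctl)
{
	return guest_can_use_vnmi && (vmcb12_int_ctl & MODEL_V_NMI_ENABLE_MASK);
}

int main(void)
{
	assert(!nested_vnmi_enabled(false, MODEL_V_NMI_ENABLE_MASK));
	assert(!nested_vnmi_enabled(true, 0));
	assert(nested_vnmi_enabled(true, MODEL_V_NMI_ENABLE_MASK));
	return 0;
}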
Signed-off-by: Sean Christopherson --- arch/x86/kvm/governed_features.h | 1 + arch/x86/kvm/svm/svm.c | 3 +-- arch/x86/kvm/svm/svm.h | 5 +---- 3 files changed, 3 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h index 368696c2e96b..423a73395c10 100644 --- a/arch/x86/kvm/governed_features.h +++ b/arch/x86/kvm/governed_features.h @@ -15,6 +15,7 @@ KVM_GOVERNED_X86_FEATURE(LBRV) KVM_GOVERNED_X86_FEATURE(PAUSEFILTER) KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD) KVM_GOVERNED_X86_FEATURE(VGIF) +KVM_GOVERNED_X86_FEATURE(VNMI) =20 #undef KVM_GOVERNED_X86_FEATURE #undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9eac0ad3403e..a139c626fa8b 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4303,8 +4303,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER); kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD); kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF); - - svm->vnmi_enabled =3D vnmi && guest_cpuid_has(vcpu, X86_FEATURE_VNMI); + kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VNMI); =20 svm_recalc_instruction_intercepts(vcpu, svm); =20 diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index eaddaac6bf18..2237230aad98 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -259,9 +259,6 @@ struct vcpu_svm { unsigned long soft_int_next_rip; bool soft_int_injected; =20 - /* optional nested SVM features that are enabled for this guest */ - bool vnmi_enabled : 1; - u32 ldr_reg; u32 dfr_reg; struct page *avic_backing_page; @@ -495,7 +492,7 @@ static inline bool nested_npt_enabled(struct vcpu_svm *= svm) =20 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm) { - return svm->vnmi_enabled && + return guest_can_use(&svm->vcpu, X86_FEATURE_VNMI) && (svm->nested.ctl.int_ctl & V_NMI_ENABLE_MASK); } =20 --=20 2.41.0.694.ge786442a9b-goog From nobody Thu Sep 11 14:00:53 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2B2FC10F19 for ; Tue, 15 Aug 2023 20:38:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239739AbjHOUip (ORCPT ); Tue, 15 Aug 2023 16:38:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239150AbjHOUiE (ORCPT ); Tue, 15 Aug 2023 16:38:04 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B67922122 for ; Tue, 15 Aug 2023 13:37:44 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-58c6564646aso6423167b3.2 for ; Tue, 15 Aug 2023 13:37:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692131844; x=1692736644; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=3dK+siegY/lcPBE26gvMNEY42khfxilV1mO9bjQlHqg=; b=OrZw6nNj8f0nsd/1qt+U0X1Xd3xCQvB44YKCUfK0QTrigF+dnhpT9lKet46Dffuves aYULYfbfoOYrRNYDyhuBdMYE2xQJqGJ9XTeJnsdvpg0n/TABpQc40EGXuyB8Ly4JxnhS r3QIzpAjBGxlouYEZIyurPp852JBt4V5T7yRbKK0NM3Fs0ov+U4qX4xd5YUpTRBNKe2k 
e/E5UfgslO1jzpZQDdb2d1qLhko/KtPuLD+E8maPcv6ajRkrQumIeb9DDOu4AxAXv5bb Yjss0thyeIARTiUj6Unqxd3DbEm2reCrdesIRgz+Z6xmWFQXBup4AJAt+bRelShLGCVl cE7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692131844; x=1692736644; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=3dK+siegY/lcPBE26gvMNEY42khfxilV1mO9bjQlHqg=; b=R6mR0oF20Xicf2KpqSYA8jArd0ERdIhrjhj2J6WGyxP0P1Gw3lSBEORvEtKUM1PAHO LBRDBs97lC70FF3W+CPRFVPhQwWY5Nd11JKMdFFyiYiCRe0c7R2bjmXliMBn4LfEJmH+ hP6jkNoyQsmKkqmm7nUIv7Vnngk6PXmiEWfUz4+Z+Ho4cQZd/OGI00Yxfy6fsWgMetdX NyJB3GG+Cj2i/QmKKOz82RcVRQDlyKjUX8Du3yC9jJDqUGiQnclwl3gmDEAbwuFKf0Uf Mh7H7Dx6qdLATgYaWormj71YxqVE0IBTaSkxoTBTOKfaDbU0HhSC4kREZN9ov+Yj6oNc 6WRg== X-Gm-Message-State: AOJu0YzCFHupjkL1MrcXR47aagEH35dldFqeQxvkISVCR+RdynZ4+j+j xs6uULFrkGSClomp2kSv8Bkz6CPiVjY= X-Google-Smtp-Source: AGHT+IGVmxZgnM/94vT7V9EdxdBjzJvfoob+eub2qtcm+uyZa4akRj6gZeoxuKQTUpT74MB8wuzHWXJ3x8c= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a81:b725:0:b0:579:e07c:2798 with SMTP id v37-20020a81b725000000b00579e07c2798mr183717ywh.2.1692131844699; Tue, 15 Aug 2023 13:37:24 -0700 (PDT) Reply-To: Sean Christopherson Date: Tue, 15 Aug 2023 13:36:53 -0700 In-Reply-To: <20230815203653.519297-1-seanjc@google.com> Mime-Version: 1.0 References: <20230815203653.519297-1-seanjc@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230815203653.519297-16-seanjc@google.com> Subject: [PATCH v3 15/15] KVM: x86: Disallow guest CPUID lookups when IRQs are disabled From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini , Vitaly Kuznetsov Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zeng Guang , Yuan Yao Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Now that KVM has a framework for caching guest CPUID feature flags, add a "rule" that IRQs must be enabled when doing guest CPUID lookups, and enforce the rule via a lockdep assertion. CPUID lookups are slow, and within KVM, IRQs are only ever disabled in hot paths, e.g. the core run loop, fast page fault handling, etc. I.e. querying guest CPUID with IRQs disabled, especially in the run loop, should be avoided. Signed-off-by: Sean Christopherson --- arch/x86/kvm/cpuid.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 67e9f79fe059..e961e9a05847 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -11,6 +11,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt =20 #include +#include "linux/lockdep.h" #include #include #include @@ -84,6 +85,18 @@ static inline struct kvm_cpuid_entry2 *cpuid_entry2_find( struct kvm_cpuid_entry2 *e; int i; =20 + /* + * KVM has a semi-arbitrary rule that querying the guest's CPUID model + * with IRQs disabled is disallowed. The CPUID model can legitimately + * have over one hundred entries, i.e. the lookup is slow, and IRQs are + * typically disabled in KVM only when KVM is in a performance critical + * path, e.g. the core VM-Enter/VM-Exit run loop. Nothing will break + * if this rule is violated, this assertion is purely to flag potential + * performance issues. If this fires, consider moving the lookup out + * of the hotpath, e.g. by caching information during CPUID updates. 
+ */ + lockdep_assert_irqs_enabled(); + for (i =3D 0; i < nent; i++) { e =3D &entries[i]; =20 --=20 2.41.0.694.ge786442a9b-goog
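The comment added above boils down to a caching pattern: do the linear scan over the CPUID entries once, when userspace sets CPUID, and let hot paths read a cached bit instead. A userspace-flavored sketch of that pattern follows; the names (model_after_set_cpuid, cached_has_feature) and the feature encoding are invented for the example and are not KVM's:

/* Model of "cache at CPUID update time, not in the hot path"; not kernel code. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct cpuid_entry_model {
	unsigned int function;
	unsigned int ecx;
};

struct vcpu_model {
	const struct cpuid_entry_model *entries;
	size_t nent;
	bool cached_has_feature;	/* filled in once, at CPUID-update time */
};

/* Slow path: linear scan, analogous to cpuid_entry2_find(). */
static bool slow_cpuid_has(const struct vcpu_model *vcpu, unsigned int fn,
			   unsigned int bit)
{
	for (size_t i = 0; i < vcpu->nent; i++) {
		if (vcpu->entries[i].function == fn)
			return vcpu->entries[i].ecx & (1u << bit);
	}
	return false;
}

/* Done when userspace sets CPUID, never from the run loop. */
static void model_after_set_cpuid(struct vcpu_model *vcpu)
{
	vcpu->cached_has_feature = slow_cpuid_has(vcpu, 0x1, 5);
}

int main(void)
{
	const struct cpuid_entry_model entries[] = {
		{ .function = 0x1, .ecx = 1u << 5 },
	};
	struct vcpu_model vcpu = { .entries = entries, .nent = 1 };

	model_after_set_cpuid(&vcpu);
	assert(vcpu.cached_has_feature);	/* hot paths read only this bit */
	return 0;
}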