From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:20 -0800
Subject: [PATCH 1/9] KVM: x86: Rename "governed features" helpers to use "guest_cpu_cap"
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-2-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

As the first step toward replacing KVM's so-called "governed features"
framework with a more comprehensive, less poorly named implementation,
replace the "kvm_governed_feature" function prefix with "guest_cpu_cap"
and rename guest_can_use() to guest_cpu_cap_has().

The "guest_cpu_cap" naming scheme mirrors that of "kvm_cpu_cap", and
provides a clearer distinction between guest capabilities, which are
KVM controlled (heh, or one might say "governed"), and guest CPUID,
which with few exceptions is fully userspace controlled.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
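For illustration, a minimal sketch of how the renamed helpers read at a
call site; this is not code from the series, and svm_lbrv_usable() is a
hypothetical helper invented for the example:

        static bool svm_lbrv_usable(struct kvm_vcpu *vcpu)
        {
                /* KVM-wide capability, computed once at setup. */
                if (!kvm_cpu_cap_has(X86_FEATURE_LBRV))
                        return false;

                /* Per-vCPU capability, i.e. what this guest may use. */
                return guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV);
        }

The symmetry with kvm_cpu_cap_has() is the point of the rename: both
scopes now read as "<scope>_cpu_cap_has(feature)".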
"governed features" framework with a more comprehensive, less poorly named implementation, replace the "kvm_governed_feature" function prefix with "guest_cpu_cap" and rename guest_can_use() to guest_cpu_cap_has(). The "guest_cpu_cap" naming scheme mirrors that of "kvm_cpu_cap", and provides a more clear distinction between guest capabilities, which are KVM controlled (heh, or one might say "governed"), and guest CPUID, which with few exceptions is fully userspace controlled. No functional change intended. Signed-off-by: Sean Christopherson Reviewed-by: Maxim Levitsky --- arch/x86/kvm/cpuid.c | 2 +- arch/x86/kvm/cpuid.h | 12 ++++++------ arch/x86/kvm/mmu/mmu.c | 4 ++-- arch/x86/kvm/svm/nested.c | 22 +++++++++++----------- arch/x86/kvm/svm/svm.c | 26 +++++++++++++------------- arch/x86/kvm/svm/svm.h | 4 ++-- arch/x86/kvm/vmx/nested.c | 6 +++--- arch/x86/kvm/vmx/vmx.c | 14 +++++++------- arch/x86/kvm/x86.c | 4 ++-- 9 files changed, 47 insertions(+), 47 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index dda6fc4cfae8..4f464187b063 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -345,7 +345,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *v= cpu) allow_gbpages =3D tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) : guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES); if (allow_gbpages) - kvm_governed_feature_set(vcpu, X86_FEATURE_GBPAGES); + guest_cpu_cap_set(vcpu, X86_FEATURE_GBPAGES); =20 best =3D kvm_find_cpuid_entry(vcpu, 1); if (best && apic) { diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 0b90532b6e26..245416ffa34c 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -254,7 +254,7 @@ static __always_inline bool kvm_is_governed_feature(uns= igned int x86_feature) return kvm_governed_feature_index(x86_feature) >=3D 0; } =20 -static __always_inline void kvm_governed_feature_set(struct kvm_vcpu *vcpu, +static __always_inline void guest_cpu_cap_set(struct kvm_vcpu *vcpu, unsigned int x86_feature) { BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature)); @@ -263,15 +263,15 @@ static __always_inline void kvm_governed_feature_set(= struct kvm_vcpu *vcpu, vcpu->arch.governed_features.enabled); } =20 -static __always_inline void kvm_governed_feature_check_and_set(struct kvm_= vcpu *vcpu, - unsigned int x86_feature) +static __always_inline void guest_cpu_cap_check_and_set(struct kvm_vcpu *v= cpu, + unsigned int x86_feature) { if (kvm_cpu_cap_has(x86_feature) && guest_cpuid_has(vcpu, x86_feature)) - kvm_governed_feature_set(vcpu, x86_feature); + guest_cpu_cap_set(vcpu, x86_feature); } =20 -static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu, - unsigned int x86_feature) +static __always_inline bool guest_cpu_cap_has(struct kvm_vcpu *vcpu, + unsigned int x86_feature) { BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature)); =20 diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b0f01d605617..cfed824587b9 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4801,7 +4801,7 @@ static void reset_guest_rsvds_bits_mask(struct kvm_vc= pu *vcpu, __reset_rsvds_bits_mask(&context->guest_rsvd_check, vcpu->arch.reserved_gpa_bits, context->cpu_role.base.level, is_efer_nx(context), - guest_can_use(vcpu, X86_FEATURE_GBPAGES), + guest_cpu_cap_has(vcpu, X86_FEATURE_GBPAGES), is_cr4_pse(context), guest_cpuid_is_amd_or_hygon(vcpu)); } @@ -4878,7 +4878,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vc= pu *vcpu, __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), context->root_role.level, 
 				context->root_role.efer_nx,
-				guest_can_use(vcpu, X86_FEATURE_GBPAGES),
+				guest_cpu_cap_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse, is_amd);
 
 	if (!shadow_me_mask)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3fea8c47679e..ea0895262b12 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -107,7 +107,7 @@ static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu)
 
 static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm)
 {
-	if (!guest_can_use(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD))
+	if (!guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_V_VMSAVE_VMLOAD))
 		return true;
 
 	if (!nested_npt_enabled(svm))
@@ -603,7 +603,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		vmcb_mark_dirty(vmcb02, VMCB_DR);
 	}
 
-	if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) &&
+	if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
 		     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
 		/*
 		 * Reserved bits of DEBUGCTL are ignored.  Be consistent with
@@ -660,7 +660,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	 * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.
 	 */
 
-	if (guest_can_use(vcpu, X86_FEATURE_VGIF) &&
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_VGIF) &&
 	    (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK))
 		int_ctl_vmcb12_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
 	else
@@ -698,7 +698,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 
 	vmcb02->control.tsc_offset = vcpu->arch.tsc_offset;
 
-	if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) &&
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_TSCRATEMSR) &&
 	    svm->tsc_ratio_msr != kvm_caps.default_tsc_scaling_ratio)
 		nested_svm_update_tsc_ratio_msr(vcpu);
 
@@ -719,7 +719,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	 * what a nrips=0 CPU would do (L1 is responsible for advancing RIP
 	 * prior to injecting the event).
 	 */
-	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS))
 		vmcb02->control.next_rip    = svm->nested.ctl.next_rip;
 	else if (boot_cpu_has(X86_FEATURE_NRIPS))
 		vmcb02->control.next_rip    = vmcb12_rip;
@@ -729,7 +729,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 		svm->soft_int_injected = true;
 		svm->soft_int_csbase = vmcb12_csbase;
 		svm->soft_int_old_rip = vmcb12_rip;
-		if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
+		if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS))
 			svm->soft_int_next_rip = svm->nested.ctl.next_rip;
 		else
 			svm->soft_int_next_rip = vmcb12_rip;
@@ -737,18 +737,18 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 
 	vmcb02->control.virt_ext            = vmcb01->control.virt_ext &
 					      LBR_CTL_ENABLE_MASK;
-	if (guest_can_use(vcpu, X86_FEATURE_LBRV))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV))
 		vmcb02->control.virt_ext  |= (svm->nested.ctl.virt_ext &
 					      LBR_CTL_ENABLE_MASK);
 
 	if (!nested_vmcb_needs_vls_intercept(svm))
 		vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 
-	if (guest_can_use(vcpu, X86_FEATURE_PAUSEFILTER))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_PAUSEFILTER))
 		pause_count12 = svm->nested.ctl.pause_filter_count;
 	else
 		pause_count12 = 0;
-	if (guest_can_use(vcpu, X86_FEATURE_PFTHRESHOLD))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_PFTHRESHOLD))
 		pause_thresh12 = svm->nested.ctl.pause_filter_thresh;
 	else
 		pause_thresh12 = 0;
@@ -1035,7 +1035,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (vmcb12->control.exit_code != SVM_EXIT_ERR)
 		nested_save_pending_event_to_vmcb12(svm, vmcb12);
 
-	if (guest_can_use(vcpu, X86_FEATURE_NRIPS))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS))
 		vmcb12->control.next_rip  = vmcb02->control.next_rip;
 
 	vmcb12->control.int_ctl           = svm->nested.ctl.int_ctl;
@@ -1074,7 +1074,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (!nested_exit_on_intr(svm))
 		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
-	if (unlikely(guest_can_use(vcpu, X86_FEATURE_LBRV) &&
+	if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
 		     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
 		svm_copy_lbrs(vmcb12, vmcb02);
 		svm_update_lbrv(vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1855a6d7c976..8a99a73b6ee5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1046,7 +1046,7 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	bool current_enable_lbrv = svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK;
 	bool enable_lbrv = (svm_get_lbr_vmcb(svm)->save.dbgctl & DEBUGCTLMSR_LBR) ||
-			    (is_guest_mode(vcpu) && guest_can_use(vcpu, X86_FEATURE_LBRV) &&
+			    (is_guest_mode(vcpu) && guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
 			    (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));
 
 	if (enable_lbrv == current_enable_lbrv)
@@ -2835,7 +2835,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	switch (msr_info->index) {
 	case MSR_AMD64_TSC_RATIO:
 		if (!msr_info->host_initiated &&
-		    !guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR))
+		    !guest_cpu_cap_has(vcpu, X86_FEATURE_TSCRATEMSR))
 			return 1;
 		msr_info->data = svm->tsc_ratio_msr;
 		break;
@@ -2985,7 +2985,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	switch (ecx) {
 	case MSR_AMD64_TSC_RATIO:
 
-		if (!guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR)) {
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_TSCRATEMSR)) {
 
 			if (!msr->host_initiated)
 				return 1;
@@ -3007,7 +3007,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 
 		svm->tsc_ratio_msr = data;
 
-		if (guest_can_use(vcpu, X86_FEATURE_TSCRATEMSR) &&
+		if (guest_cpu_cap_has(vcpu, X86_FEATURE_TSCRATEMSR) &&
 		    is_guest_mode(vcpu))
 			nested_svm_update_tsc_ratio_msr(vcpu);
 
@@ -4318,11 +4318,11 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
 	    boot_cpu_has(X86_FEATURE_XSAVES) &&
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
-		kvm_governed_feature_set(vcpu, X86_FEATURE_XSAVES);
+		guest_cpu_cap_set(vcpu, X86_FEATURE_XSAVES);
 
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_NRIPS);
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR);
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LBRV);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_NRIPS);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_LBRV);
 
 	/*
 	 * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that
@@ -4330,12 +4330,12 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * SVM on Intel is bonkers and extremely unlikely to work).
 	 */
 	if (!guest_cpuid_is_intel(vcpu))
-		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
+		guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
 
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VGIF);
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VNMI);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VGIF);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VNMI);
 
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index be67ab7fdd10..e49af42b4a33 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -443,7 +443,7 @@ static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
 
 static inline bool nested_vgif_enabled(struct vcpu_svm *svm)
 {
-	return guest_can_use(&svm->vcpu, X86_FEATURE_VGIF) &&
+	return guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_VGIF) &&
 	       (svm->nested.ctl.int_ctl & V_GIF_ENABLE_MASK);
 }
 
@@ -495,7 +495,7 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 
 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
 {
-	return guest_can_use(&svm->vcpu, X86_FEATURE_VNMI) &&
+	return guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_VNMI) &&
 	       (svm->nested.ctl.int_ctl & V_NMI_ENABLE_MASK);
 }
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c5ec0ef51ff7..4750d1696d58 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6426,7 +6426,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 	vmx = to_vmx(vcpu);
 	vmcs12 = get_vmcs12(vcpu);
 
-	if (guest_can_use(vcpu, X86_FEATURE_VMX) &&
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_VMX) &&
 	    (vmx->nested.vmxon || vmx->nested.smm.vmxon)) {
 		kvm_state.hdr.vmx.vmxon_pa = vmx->nested.vmxon_ptr;
 		kvm_state.hdr.vmx.vmcs12_pa = vmx->nested.current_vmptr;
@@ -6567,7 +6567,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 		if (kvm_state->flags & ~KVM_STATE_NESTED_EVMCS)
 			return -EINVAL;
 	} else {
-		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_VMX))
 			return -EINVAL;
 
 		if (!page_address_valid(vcpu, kvm_state->hdr.vmx.vmxon_pa))
@@ -6601,7 +6601,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	if ((kvm_state->flags & KVM_STATE_NESTED_EVMCS) &&
-	    (!guest_can_use(vcpu, X86_FEATURE_VMX) ||
+	    (!guest_cpu_cap_has(vcpu, X86_FEATURE_VMX) ||
 	     !vmx->nested.enlightened_vmcs_enabled))
 		return -EINVAL;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index be20a60047b1..6328f0d47c64 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2050,7 +2050,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			[msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0];
 		break;
 	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
-		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_VMX))
 			return 1;
 		if (vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				    &msr_info->data))
@@ -2358,7 +2358,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
 		if (!msr_info->host_initiated)
 			return 1; /* they are read-only */
-		if (!guest_can_use(vcpu, X86_FEATURE_VMX))
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_VMX))
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
 	case MSR_IA32_RTIT_CTL:
@@ -4567,7 +4567,7 @@ vmx_adjust_secondary_exec_control(struct vcpu_vmx *vmx, u32 *exec_control,
 									\
 	if (cpu_has_vmx_##name()) {					\
 		if (kvm_is_governed_feature(X86_FEATURE_##feat_name))	\
-			__enabled = guest_can_use(__vcpu, X86_FEATURE_##feat_name); \
+			__enabled = guest_cpu_cap_has(__vcpu, X86_FEATURE_##feat_name); \
 		else \
 			__enabled = guest_cpuid_has(__vcpu, X86_FEATURE_##feat_name); \
 		vmx_adjust_secondary_exec_control(vmx, exec_control, SECONDARY_EXEC_##ctrl_name,\
@@ -7757,9 +7757,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 */
 	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
-		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_XSAVES);
+		guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_XSAVES);
 
-	kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_VMX);
+	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VMX);
 
 	vmx_setup_uret_msrs(vmx);
 
@@ -7767,7 +7767,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vmcs_set_secondary_exec_control(vmx,
 						vmx_secondary_exec_control(vmx));
 
-	if (guest_can_use(vcpu, X86_FEATURE_VMX))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_VMX))
 		vmx->msr_ia32_feature_control_valid_bits |=
 			FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 			FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
@@ -7776,7 +7776,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 			~(FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 			  FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
 
-	if (guest_can_use(vcpu, X86_FEATURE_VMX))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_VMX))
 		nested_vmx_cr_fixed1_bits_update(vcpu);
 
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2c924075f6f1..04a77b764a36 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1025,7 +1025,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 
-		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
+		if (guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVES) &&
 		    vcpu->arch.ia32_xss != host_xss)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
@@ -1056,7 +1056,7 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
 
-		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
+		if (guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVES) &&
 		    vcpu->arch.ia32_xss != host_xss)
 			wrmsrl(MSR_IA32_XSS, host_xss);
 	}
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:21 -0800
Subject: [PATCH 2/9] KVM: x86: Replace guts of "governed" features with comprehensive cpu_caps
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-3-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

Replace the internals of the governed features framework with a more
comprehensive "guest CPU capabilities" implementation, i.e. with a guest
version of kvm_cpu_caps.  Keep the skeleton of governed features around
for now as vmx_adjust_sec_exec_control() relies on detecting governed
features to do the right thing for XSAVES, and switching all guest
feature queries to guest_cpu_cap_has() requires subtle and non-trivial
changes, i.e. is best done as a standalone change.

Tracking *all* guest capabilities that KVM cares about will allow
excising the poorly named "governed features" framework, and effectively
optimizes all KVM queries of guest capabilities, i.e. doesn't require
making a subjective decision as to whether or not a feature is worth
"governing", and doesn't require adding the code to do so.

The cost of tracking all features is currently 92 bytes per vCPU on
64-bit kernels: 100 bytes for cpu_caps versus 8 bytes for
governed_features.  That cost is well worth paying even if the only
benefit was eliminating the "governed features" terminology.  And
practically speaking, the real cost is zero unless those 92 bytes push
the size of vcpu_vmx or vcpu_svm into a new order-N allocation, and if
that happens there are better ways to reduce the footprint of
kvm_vcpu_arch, e.g. making the PMU and/or MTRR state separate
allocations.

Suggested-by: Maxim Levitsky
Signed-off-by: Sean Christopherson
Reviewed-by: Binbin Wu
---
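A sketch of the size math and the word/bit decomposition, for
illustration only; it assumes NCAPINTS == 21 in this kernel (everything
else comes from the changelog and the diff below):

        /*
         * NR_KVM_CPU_CAPS = NCAPINTS (21) + 4 KVM-only leafs = 25 words,
         * so sizeof(u32 cpu_caps[25]) = 100 bytes, versus the 8-byte
         * BITS_PER_LONG bitmap, i.e. the 92-byte delta cited above.
         *
         * An X86_FEATURE_* value encodes (word * 32 + bit);
         * __feature_leaf() recovers the word (remapping scattered
         * features via the reverse-CPUID table) and __feature_bit()
         * the mask within that word:
         */
        unsigned int word = __feature_leaf(X86_FEATURE_XSAVES);
        u32 mask = __feature_bit(X86_FEATURE_XSAVES);
        bool has = vcpu->arch.cpu_caps[word] & mask;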
 arch/x86/include/asm/kvm_host.h | 40 ++++++++++++++++++++-------------
 arch/x86/kvm/cpuid.c            |  4 +---
 arch/x86/kvm/cpuid.h            | 14 ++++++------
 arch/x86/kvm/reverse_cpuid.h    | 15 -------------
 4 files changed, 32 insertions(+), 41 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7036982332e..1d43dd5fdea7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -722,6 +722,22 @@ struct kvm_queued_exception {
 	bool has_payload;
 };
 
+/*
+ * Hardware-defined CPUID leafs that are either scattered by the kernel or are
+ * unknown to the kernel, but need to be directly used by KVM.  Note, these
+ * word values conflict with the kernel's "bug" caps, but KVM doesn't use those.
+ */
+enum kvm_only_cpuid_leafs {
+	CPUID_12_EAX	 = NCAPINTS,
+	CPUID_7_1_EDX,
+	CPUID_8000_0007_EDX,
+	CPUID_8000_0022_EAX,
+	NR_KVM_CPU_CAPS,
+
+	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+};
+
+
 struct kvm_vcpu_arch {
 	/*
 	 * rip and regs accesses must go through
@@ -840,23 +856,15 @@ struct kvm_vcpu_arch {
 	struct kvm_hypervisor_cpuid kvm_cpuid;
 
 	/*
-	 * FIXME: Drop this macro and use KVM_NR_GOVERNED_FEATURES directly
-	 * when "struct kvm_vcpu_arch" is no longer defined in an
-	 * arch/x86/include/asm header.  The max is mostly arbitrary, i.e.
-	 * can be increased as necessary.
+	 * Track the effective guest capabilities, i.e. the features the vCPU
+	 * is allowed to use.  Typically, but not always, features can be used
+	 * by the guest if and only if both KVM and userspace want to expose
+	 * the feature to the guest.  A common exception is for virtualization
+	 * holes, i.e. when KVM can't prevent the guest from using a feature,
+	 * in which case the vCPU "has" the feature regardless of what KVM or
+	 * userspace desires.
 	 */
-#define KVM_MAX_NR_GOVERNED_FEATURES BITS_PER_LONG
-
-	/*
-	 * Track whether or not the guest is allowed to use features that are
-	 * governed by KVM, where "governed" means KVM needs to manage state
-	 * and/or explicitly enable the feature in hardware.  Typically, but
-	 * not always, governed features can be used by the guest if and only
-	 * if both KVM and userspace want to expose the feature to the guest.
-	 */
-	struct {
-		DECLARE_BITMAP(enabled, KVM_MAX_NR_GOVERNED_FEATURES);
-	} governed_features;
+	u32 cpu_caps[NR_KVM_CPU_CAPS];
 
 	u64 reserved_gpa_bits;
 	int maxphyaddr;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 4f464187b063..4bf3c2d4dc7c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -327,9 +327,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	struct kvm_cpuid_entry2 *best;
 	bool allow_gbpages;
 
-	BUILD_BUG_ON(KVM_NR_GOVERNED_FEATURES > KVM_MAX_NR_GOVERNED_FEATURES);
-	bitmap_zero(vcpu->arch.governed_features.enabled,
-		    KVM_MAX_NR_GOVERNED_FEATURES);
+	memset(vcpu->arch.cpu_caps, 0, sizeof(vcpu->arch.cpu_caps));
 
 	/*
 	 * If TDP is enabled, let the guest use GBPAGES if they're supported in
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 245416ffa34c..9f18c4395b71 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -255,12 +255,12 @@ static __always_inline bool kvm_is_governed_feature(unsigned int x86_feature)
 }
 
 static __always_inline void guest_cpu_cap_set(struct kvm_vcpu *vcpu,
-					unsigned int x86_feature)
+					      unsigned int x86_feature)
 {
-	BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature));
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
-	__set_bit(kvm_governed_feature_index(x86_feature),
-		  vcpu->arch.governed_features.enabled);
+	reverse_cpuid_check(x86_leaf);
+	vcpu->arch.cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
 }
 
 static __always_inline void guest_cpu_cap_check_and_set(struct kvm_vcpu *vcpu,
@@ -273,10 +273,10 @@ static __always_inline void guest_cpu_cap_check_and_set(struct kvm_vcpu *vcpu,
 static __always_inline bool guest_cpu_cap_has(struct kvm_vcpu *vcpu,
 					      unsigned int x86_feature)
 {
-	BUILD_BUG_ON(!kvm_is_governed_feature(x86_feature));
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
-	return test_bit(kvm_governed_feature_index(x86_feature),
-			vcpu->arch.governed_features.enabled);
+	reverse_cpuid_check(x86_leaf);
+	return vcpu->arch.cpu_caps[x86_leaf] & __feature_bit(x86_feature);
 }
 
 #endif
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index b81650678375..4b658491e8f8 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -6,21 +6,6 @@
 #include
 #include
 
-/*
- * Hardware-defined CPUID leafs that are either scattered by the kernel or are
- * unknown to the kernel, but need to be directly used by KVM.  Note, these
- * word values conflict with the kernel's "bug" caps, but KVM doesn't use those.
- */
-enum kvm_only_cpuid_leafs {
-	CPUID_12_EAX	 = NCAPINTS,
-	CPUID_7_1_EDX,
-	CPUID_8000_0007_EDX,
-	CPUID_8000_0022_EAX,
-	NR_KVM_CPU_CAPS,
-
-	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
-};
-
 /*
  * Define a KVM-only feature flag.
  *
-- 
2.42.0.869.gea05f2083d-goog

From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:22 -0800
Subject: [PATCH 3/9] KVM: x86: Initialize guest cpu_caps based on guest CPUID
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-4-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

Initialize a vCPU's capabilities based on the guest CPUID provided by
userspace instead of simply zeroing the entire array.  This will allow
using cpu_caps to query *all* CPUID-based guest capabilities, i.e. will
allow converting all usage of guest_cpuid_has() to guest_cpu_cap_has().

Zeroing the array was the logical choice when using cpu_caps was opt-in,
e.g. "unsupported" was generally a safer default, and the whole point of
governed features is that KVM would need to check host and guest support,
i.e. making everything unsupported by default didn't require more code.
But requiring KVM to manually "enable" every CPUID-based feature in
cpu_caps would require an absurd amount of boilerplate code.

Follow existing CPUID/kvm_cpu_caps nomenclature where possible, e.g. for
the change() and clear() APIs.  Replace check_and_set() with restrict()
to try and capture that KVM is restricting userspace's desired guest
feature set based on KVM's capabilities.

This is intended to be a gigantic nop, i.e. it should not have any impact
on guest or KVM functionality.

Signed-off-by: Sean Christopherson
---
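The net semantics, written as an illustrative (hypothetical) assertion
rather than code from the patch: for a feature that goes through
guest_cpu_cap_restrict(), cpu_caps is seeded from userspace's CPUID and
then masked by KVM's own support, so the following should always hold:

        static bool restrict_semantics_hold(struct kvm_vcpu *vcpu)
        {
                /* e.g. X86_FEATURE_NRIPS, which is restrict()ed below. */
                return guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS) ==
                       (guest_cpuid_has(vcpu, X86_FEATURE_NRIPS) &&
                        kvm_cpu_cap_has(X86_FEATURE_NRIPS));
        }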
 arch/x86/kvm/cpuid.c   | 43 +++++++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/cpuid.h   | 25 +++++++++++++++++++++---
 arch/x86/kvm/svm/svm.c | 24 +++++++++++------------
 arch/x86/kvm/vmx/vmx.c |  6 ++++--
 4 files changed, 78 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 4bf3c2d4dc7c..5cf3d697ecb3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -321,13 +321,51 @@ static bool kvm_cpuid_has_hyperv(struct kvm_cpuid_entry2 *entries, int nent)
 	return entry && entry->eax == HYPERV_CPUID_SIGNATURE_EAX;
 }
 
+/*
+ * This isn't truly "unsafe", but all callers except kvm_cpu_after_set_cpuid()
+ * should use __cpuid_entry_get_reg(), which provides compile-time validation
+ * of the input.
+ */
+static u32 cpuid_get_reg_unsafe(struct kvm_cpuid_entry2 *entry, u32 reg)
+{
+	switch (reg) {
+	case CPUID_EAX:
+		return entry->eax;
+	case CPUID_EBX:
+		return entry->ebx;
+	case CPUID_ECX:
+		return entry->ecx;
+	case CPUID_EDX:
+		return entry->edx;
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+}
+
 static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
 	struct kvm_cpuid_entry2 *best;
 	bool allow_gbpages;
+	int i;
 
-	memset(vcpu->arch.cpu_caps, 0, sizeof(vcpu->arch.cpu_caps));
+	BUILD_BUG_ON(ARRAY_SIZE(reverse_cpuid) != NR_KVM_CPU_CAPS);
+
+	/*
+	 * Reset guest capabilities to userspace's guest CPUID definition, i.e.
+	 * honor userspace's definition for features that don't require KVM or
+	 * hardware management/support (or that KVM simply doesn't care about).
+	 */
+	for (i = 0; i < NR_KVM_CPU_CAPS; i++) {
+		const struct cpuid_reg cpuid = reverse_cpuid[i];
+
+		best = kvm_find_cpuid_entry_index(vcpu, cpuid.function, cpuid.index);
+		if (best)
+			vcpu->arch.cpu_caps[i] = cpuid_get_reg_unsafe(best, cpuid.reg);
+		else
+			vcpu->arch.cpu_caps[i] = 0;
+	}
 
 	/*
 	 * If TDP is enabled, let the guest use GBPAGES if they're supported in
@@ -342,8 +380,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 */
 	allow_gbpages = tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) :
 				      guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES);
-	if (allow_gbpages)
-		guest_cpu_cap_set(vcpu, X86_FEATURE_GBPAGES);
+	guest_cpu_cap_change(vcpu, X86_FEATURE_GBPAGES, allow_gbpages);
 
 	best = kvm_find_cpuid_entry(vcpu, 1);
 	if (best && apic) {
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 9f18c4395b71..1707ef10b269 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -263,11 +263,30 @@ static __always_inline void guest_cpu_cap_set(struct kvm_vcpu *vcpu,
 	vcpu->arch.cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
 }
 
-static __always_inline void guest_cpu_cap_check_and_set(struct kvm_vcpu *vcpu,
-							unsigned int x86_feature)
+static __always_inline void guest_cpu_cap_clear(struct kvm_vcpu *vcpu,
+						unsigned int x86_feature)
 {
-	if (kvm_cpu_cap_has(x86_feature) && guest_cpuid_has(vcpu, x86_feature))
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
+
+	reverse_cpuid_check(x86_leaf);
+	vcpu->arch.cpu_caps[x86_leaf] &= ~__feature_bit(x86_feature);
+}
+
+static __always_inline void guest_cpu_cap_change(struct kvm_vcpu *vcpu,
+						 unsigned int x86_feature,
+						 bool guest_has_cap)
+{
+	if (guest_has_cap)
 		guest_cpu_cap_set(vcpu, x86_feature);
+	else
+		guest_cpu_cap_clear(vcpu, x86_feature);
+}
+
+static __always_inline void guest_cpu_cap_restrict(struct kvm_vcpu *vcpu,
+						   unsigned int x86_feature)
+{
+	if (!kvm_cpu_cap_has(x86_feature))
+		guest_cpu_cap_clear(vcpu, x86_feature);
 }
 
 static __always_inline bool guest_cpu_cap_has(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8a99a73b6ee5..5827328e30f1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4315,14 +4315,14 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * XSS on VM-Enter/VM-Exit.  Failure to do so would effectively give
 	 * the guest read/write access to the host's XSS.
 	 */
-	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
-	    boot_cpu_has(X86_FEATURE_XSAVES) &&
-	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
-		guest_cpu_cap_set(vcpu, X86_FEATURE_XSAVES);
+	guest_cpu_cap_change(vcpu, X86_FEATURE_XSAVES,
+			     boot_cpu_has(X86_FEATURE_XSAVE) &&
+			     boot_cpu_has(X86_FEATURE_XSAVES) &&
+			     guest_cpuid_has(vcpu, X86_FEATURE_XSAVE));
 
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_NRIPS);
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_TSCRATEMSR);
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_LBRV);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_NRIPS);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_TSCRATEMSR);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_LBRV);
 
 	/*
 	 * Intercept VMLOAD if the vCPU mode is Intel in order to emulate that
@@ -4330,12 +4330,12 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * SVM on Intel is bonkers and extremely unlikely to work).
 	 */
 	if (!guest_cpuid_is_intel(vcpu))
-		guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
+		guest_cpu_cap_restrict(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
 
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_PAUSEFILTER);
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_PFTHRESHOLD);
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VGIF);
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VNMI);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_PAUSEFILTER);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_PFTHRESHOLD);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_VGIF);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_VNMI);
 
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6328f0d47c64..5a056ad1ae55 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7757,9 +7757,11 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 */
 	if (boot_cpu_has(X86_FEATURE_XSAVE) &&
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVE))
-		guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_XSAVES);
+		guest_cpu_cap_restrict(vcpu, X86_FEATURE_XSAVES);
+	else
+		guest_cpu_cap_clear(vcpu, X86_FEATURE_XSAVES);
 
-	guest_cpu_cap_check_and_set(vcpu, X86_FEATURE_VMX);
+	guest_cpu_cap_restrict(vcpu, X86_FEATURE_VMX);
 
 	vmx_setup_uret_msrs(vmx);
 
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:23 -0800
Subject: [PATCH 4/9] KVM: x86: Avoid double CPUID lookup when updating MWAIT at runtime
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-5-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

Move the handling of X86_FEATURE_MWAIT during CPUID runtime updates to
utilize the lookup done for other CPUID.0x1 features.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
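Condensed sketch of the resulting flow (abridged from the diff below,
not compilable as-is): the CPUID.0x1 entry is looked up once and reused
for the OSXSAVE, APIC, and MWAIT runtime updates:

        best = cpuid_entry2_find(entries, nent, 1, KVM_CPUID_INDEX_NOT_SIGNIFICANT);
        if (best) {
                /* ... OSXSAVE and APIC updates use the same entry ... */
                if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT))
                        cpuid_entry_change(best, X86_FEATURE_MWAIT,
                                           vcpu->arch.ia32_misc_enable_msr &
                                           MSR_IA32_MISC_ENABLE_MWAIT);
        }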
 arch/x86/kvm/cpuid.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 5cf3d697ecb3..6777780be6ae 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -276,6 +276,11 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
 
 		cpuid_entry_change(best, X86_FEATURE_APIC,
 			   vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
+
+		if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT))
+			cpuid_entry_change(best, X86_FEATURE_MWAIT,
+					   vcpu->arch.ia32_misc_enable_msr &
+					   MSR_IA32_MISC_ENABLE_MWAIT);
 	}
 
 	best = cpuid_entry2_find(entries, nent, 7, 0);
@@ -296,14 +301,6 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
 	if (kvm_hlt_in_guest(vcpu->kvm) && best &&
 		(best->eax & (1 << KVM_FEATURE_PV_UNHALT)))
 		best->eax &= ~(1 << KVM_FEATURE_PV_UNHALT);
-
-	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT)) {
-		best = cpuid_entry2_find(entries, nent, 0x1, KVM_CPUID_INDEX_NOT_SIGNIFICANT);
-		if (best)
-			cpuid_entry_change(best, X86_FEATURE_MWAIT,
-					   vcpu->arch.ia32_misc_enable_msr &
-					   MSR_IA32_MISC_ENABLE_MWAIT);
-	}
 }
 
 void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
-- 
2.42.0.869.gea05f2083d-goog

From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:24 -0800
Subject: [PATCH 5/9] KVM: x86: Drop unnecessary check that cpuid_entry2_find() returns right leaf
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-6-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

Drop an unnecessary check that cpuid_entry2_find() returns the correct
leaf when getting CPUID.0x7.0x0 to update X86_FEATURE_OSPKE, as
cpuid_entry2_find() never returns an entry for the wrong function.  And
not that it matters, but cpuid_entry2_find() will always return a precise
match for CPUID.0x7.0x0 since the index is significant.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
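The invariant being relied on, written as a hypothetical assertion that
is not in the patch: cpuid_entry2_find() matches on the requested
function, and on the index too when the index is significant, so a
non-NULL result is never for the wrong leaf:

        best = cpuid_entry2_find(entries, nent, 7, 0);
        if (best)
                WARN_ON_ONCE(best->function != 7 || best->index != 0);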
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6777780be6ae..36bd04030989 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -284,7 +284,7 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
 	}
 
 	best = cpuid_entry2_find(entries, nent, 7, 0);
-	if (best && boot_cpu_has(X86_FEATURE_PKU) && best->function == 0x7)
+	if (best && boot_cpu_has(X86_FEATURE_PKU))
 		cpuid_entry_change(best, X86_FEATURE_OSPKE,
 				   kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE));
 
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson
Date: Fri, 10 Nov 2023 15:55:25 -0800
Subject: [PATCH 6/9] KVM: x86: Update guest cpu_caps at runtime for dynamic CPUID-based features
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky
Message-ID: <20231110235528.1561679-7-seanjc@google.com>
In-Reply-To: <20231110235528.1561679-1-seanjc@google.com>
References: <20231110235528.1561679-1-seanjc@google.com>

When updating guest CPUID entries to emulate runtime behavior, e.g. when
the guest enables a CR4-based feature that is tied to a CPUID flag, also
update the vCPU's cpu_caps accordingly.  This will allow replacing all
usage of guest_cpuid_has() with guest_cpu_cap_has().

Take care not to update guest capabilities when KVM is updating CPUID
entries that *may* become the vCPU's CPUID, e.g. if userspace tries to
set bogus CPUID information.  No extra call to update cpu_caps is needed,
as the cpu_caps are initialized from the incoming guest CPUID, i.e. will
automatically get the updated values.

Note, none of the features in question use guest_cpu_cap_has() at this
time, i.e. aside from setting bits in cpu_caps, this is a glorified nop.

Signed-off-by: Sean Christopherson
Reviewed-by: Robert Hoo
Reviewed-by: Yang Weijiang
---
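A sketch of the "caps" guard added below, as a hypothetical standalone
helper rather than patch code: runtime toggles always update the CPUID
entry, but touch cpu_caps only when KVM is operating on the vCPU's live
CPUID; for userspace-supplied entries, cpu_caps is instead rebuilt from
those entries in kvm_vcpu_after_set_cpuid():

        static void example_runtime_update(struct kvm_vcpu *vcpu,
                                           struct kvm_cpuid_entry2 *entries,
                                           int nent)
        {
                /* NULL "caps" => only the CPUID entry is modified. */
                struct kvm_vcpu *caps = entries == vcpu->arch.cpuid_entries ?
                                        vcpu : NULL;
                struct kvm_cpuid_entry2 *best;

                best = cpuid_entry2_find(entries, nent, 1,
                                         KVM_CPUID_INDEX_NOT_SIGNIFICANT);
                kvm_update_feature_runtime(caps, best, X86_FEATURE_APIC,
                                           vcpu->arch.apic_base &
                                           MSR_IA32_APICBASE_ENABLE);
        }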
 arch/x86/kvm/cpuid.c | 48 +++++++++++++++++++++++++++++++-------------
 1 file changed, 34 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 36bd04030989..37a991439fe6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -262,31 +262,48 @@ static u64 cpuid_get_supported_xcr0(struct kvm_cpuid_entry2 *entries, int nent)
 	return (best->eax | ((u64)best->edx << 32)) & kvm_caps.supported_xcr0;
 }
 
+static __always_inline void kvm_update_feature_runtime(struct kvm_vcpu *vcpu,
+						       struct kvm_cpuid_entry2 *entry,
+						       unsigned int x86_feature,
+						       bool has_feature)
+{
+	if (entry)
+		cpuid_entry_change(entry, x86_feature, has_feature);
+
+	if (vcpu)
+		guest_cpu_cap_change(vcpu, x86_feature, has_feature);
+}
+
 static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *entries,
 				       int nent)
 {
 	struct kvm_cpuid_entry2 *best;
+	struct kvm_vcpu *caps = vcpu;
+
+	/*
+	 * Don't update vCPU capabilities if KVM is updating CPUID entries that
+	 * are coming in from userspace!
+	 */
+	if (entries != vcpu->arch.cpuid_entries)
+		caps = NULL;
 
 	best = cpuid_entry2_find(entries, nent, 1, KVM_CPUID_INDEX_NOT_SIGNIFICANT);
-	if (best) {
-		/* Update OSXSAVE bit */
-		if (boot_cpu_has(X86_FEATURE_XSAVE))
-			cpuid_entry_change(best, X86_FEATURE_OSXSAVE,
+
+	if (boot_cpu_has(X86_FEATURE_XSAVE))
+		kvm_update_feature_runtime(caps, best, X86_FEATURE_OSXSAVE,
 				   kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE));
 
-		cpuid_entry_change(best, X86_FEATURE_APIC,
-			   vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
+	kvm_update_feature_runtime(caps, best, X86_FEATURE_APIC,
+				   vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
 
-		if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT))
-			cpuid_entry_change(best, X86_FEATURE_MWAIT,
-					   vcpu->arch.ia32_misc_enable_msr &
-					   MSR_IA32_MISC_ENABLE_MWAIT);
-	}
+	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT))
+		kvm_update_feature_runtime(caps, best, X86_FEATURE_MWAIT,
+					   vcpu->arch.ia32_misc_enable_msr & MSR_IA32_MISC_ENABLE_MWAIT);
 
 	best = cpuid_entry2_find(entries, nent, 7, 0);
-	if (best && boot_cpu_has(X86_FEATURE_PKU))
-		cpuid_entry_change(best, X86_FEATURE_OSPKE,
-				   kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE));
+	if (boot_cpu_has(X86_FEATURE_PKU))
+		kvm_update_feature_runtime(caps, best, X86_FEATURE_OSPKE,
+					   kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE));
 
 	best = cpuid_entry2_find(entries, nent, 0xD, 0);
 	if (best)
@@ -353,6 +370,9 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	 * Reset guest capabilities to userspace's guest CPUID definition, i.e.
 	 * honor userspace's definition for features that don't require KVM or
 	 * hardware management/support (or that KVM simply doesn't care about).
+	 *
+	 * Note, KVM has already done runtime updates on guest CPUID, i.e. this
+	 * will also correctly set runtime features in guest CPU capabilities.
*/ for (i =3D 0; i < NR_KVM_CPU_CAPS; i++) { const struct cpuid_reg cpuid =3D reverse_cpuid[i]; --=20 2.42.0.869.gea05f2083d-goog From nobody Wed Dec 31 02:33:52 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A4EAC4332F for ; Fri, 10 Nov 2023 23:56:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344959AbjKJX4H (ORCPT ); Fri, 10 Nov 2023 18:56:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47578 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234838AbjKJXz7 (ORCPT ); Fri, 10 Nov 2023 18:55:59 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 583F346A8 for ; Fri, 10 Nov 2023 15:55:48 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-daee86e2d70so2875964276.0 for ; Fri, 10 Nov 2023 15:55:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699660547; x=1700265347; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=UULTl/CyX49jqhiU5nSfvUmNnB7m6080sqG/jitSL4w=; b=HkEaraVMU8Q1Xv6qlJMKTXr3R1QZdSZDUhjMTeXdlphPZBIvOABKFefoexphcPV207 9zudU26QRzrrBbtmlGKdI+JM6D06F7Y+bvKH5yqhuE7vBYzfBtuMxgj0w2mxL9R/0crL cc6OKPxeYSOb4Z/BC22uLZWRNMRrDYTDxMsO+BPAoJcKdlZnxRP+wRFofkpg0PAyom1U rC43d7XjW/6He/+pdX+J9CrJeev5+3YerO020Ywg7q3l9VVbFStsvnGv/xf+lQhd0jE3 J3RNw+FtKfEDxhudsIsTpiNzCI5zDYqtza1WJRUvzDcZTIJn4CikRmWCKlHAgv5TLGZW Ht7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699660547; x=1700265347; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=UULTl/CyX49jqhiU5nSfvUmNnB7m6080sqG/jitSL4w=; b=Vpv2F2zaDT8PhNl0dG5J4bqCZEt+K8ya20oQZLqj5UTmo9HkYpnCT53H5hsL7VLwCz hyY+I54jVviy8ZxbdYfgRKGAVeH5dorXI2Do+RYljBquD+ZTHk0/a6ElN6rrMsnpcJWw XuZGpQx8Qd27gbt8t3A2Ydufmay9X79wgOtlMKZEq0g719w6SYPJ642U8w/6HQZkJvMu EX/39wl037ieOxIjiHu06UfHzEbQhji/YclElO/rGrM3oNcqFiPX5BWqNCmC8mtcCwLE /5PXf+LYdryGy1omUTqXCJyYGajofQUiaaWRpIufkgoKhkq54y62k80RVEs+85I2u0Qh XAHg== X-Gm-Message-State: AOJu0YzvkRvwCz2VJ0E31kcr8HZMkdKdnYJysUTG0N8+KCwqD7iPCQTM ka2HzOx6izaSn7Q3XnUOfQkj08PKdcU= X-Google-Smtp-Source: AGHT+IEmMZEklqzEl46skV0jCyEKXRJqMj1H5jtFG7Z2uM2pVcO1WJh448Dutk6lRZZB//FZxGGR0+c/Kxk= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:aba7:0:b0:dae:292e:68de with SMTP id v36-20020a25aba7000000b00dae292e68demr16244ybi.6.1699660547065; Fri, 10 Nov 2023 15:55:47 -0800 (PST) Reply-To: Sean Christopherson Date: Fri, 10 Nov 2023 15:55:26 -0800 In-Reply-To: <20231110235528.1561679-1-seanjc@google.com> Mime-Version: 1.0 References: <20231110235528.1561679-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog Message-ID: <20231110235528.1561679-8-seanjc@google.com> Subject: [PATCH 7/9] KVM: x86: Shuffle code to prepare for dropping guest_cpuid_has() From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 
Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Move the implementations of guest_has_{spec_ctrl,pred_cmd}_msr() down below guest_cpu_cap_has() so that their use of guest_cpuid_has() can be replaced with calls to guest_cpu_cap_has(). No functional change intended. Signed-off-by: Sean Christopherson Reviewed-by: Maxim Levitsky --- arch/x86/kvm/cpuid.h | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 1707ef10b269..bebf94a69630 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -163,21 +163,6 @@ static inline int guest_cpuid_stepping(struct kvm_vcpu= *vcpu) return x86_stepping(best->eax); } =20 -static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu) -{ - return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD)); -} - -static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu) -{ - return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) || - guest_cpuid_has(vcpu, X86_FEATURE_SBPB)); -} - static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu) { return vcpu->arch.msr_platform_info & MSR_PLATFORM_INFO_CPUID_FAULT; @@ -298,4 +283,19 @@ static __always_inline bool guest_cpu_cap_has(struct k= vm_vcpu *vcpu, return vcpu->arch.cpu_caps[x86_leaf] & __feature_bit(x86_feature); } =20 +static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu) +{ + return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || + guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) || + guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) || + guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD)); +} + +static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu) +{ + return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || + guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) || + guest_cpuid_has(vcpu, X86_FEATURE_SBPB)); +} + #endif --=20 2.42.0.869.gea05f2083d-goog From nobody Wed Dec 31 02:33:52 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6D394C4332F for ; Fri, 10 Nov 2023 23:56:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344841AbjKJX42 (ORCPT ); Fri, 10 Nov 2023 18:56:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47756 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235114AbjKJX4D (ORCPT ); Fri, 10 Nov 2023 18:56:03 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EDB6F46BA for ; Fri, 10 Nov 2023 15:55:49 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-da3b6438170so3184515276.1 for ; Fri, 10 Nov 2023 15:55:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699660549; x=1700265349; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=j17vU8KgGm8WRGzkEgSoBujhtusUeBTSFdUxzK6wwuY=; b=MpR78KHfK7yxKUvlTS7vhDzUdVxxrirVaMSpzteT9mo8fdJ8QiK5/8Oip7rtBnPGB6 ivHCi7t+5AY9cm6bNMUjh9HpgWlS6bJoEoadb8xnhUrP0QfTRmJFRBmCJFQg49B1ijht 
lzhIb3dPPF3+s5sGZt9hNH/n4sBAR5Cav/sIT9y7ltpByDCQdp4rxcW58vobNFiz+evk ed+PaZcrXED1xUXBLa9LYK602yGlV94yJnqW13UBx3qpXNL+OlOEREVNN7nFhkiH9MoT gVN3ivxYUaj5NrVYldcEN6vOOhBnvAqtitM+3dNgn6pNPRinZ7+KjKMJGzlVvXr1Z5Pb m9LA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699660549; x=1700265349; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=j17vU8KgGm8WRGzkEgSoBujhtusUeBTSFdUxzK6wwuY=; b=ghfYWqy98ZUkDveVO8lsvwVDfio7gtan8vZqeUkodvX7wsv8zX4fcGijqatG5JdZTv ZixRaFwGHlN9yTBmeCJOBQlkKttBoxuCHG3cv2RlrFsQ83K5+O/bfpEw3CvuuIONYAlj AxQrNqzpcpV74Lz9NJpCU1VMiwMq+UTQ70n3058MhYCDSv/ph7MRG1wXFoHhIZG1HkFW n+suuVzDZGRRf2w9ViWFD/vN1XStEg+hQswVC5UrN7RGCoj/3OeAumrxhMTHSA6HwCSQ JR6BP7J92u3ir4Wel57sfbadf/BBsLLb9+sLsTVtY7zcxz+2DtiF3tz4sYmQfuUNme83 7DFw== X-Gm-Message-State: AOJu0YyQ59amsBnqIB/KqRypSSlAS6YpWavWXNX1qwhLmmrhJQX0m5Oj Gsdlkh625Ch8SSYMNvkJI8h+9Bs6ZTc= X-Google-Smtp-Source: AGHT+IFyVUBu5XPhvd3PDcenJn1xtZxfr3L2NmKc81k79CyE5d9X18QR9aHjGrW3eMVBmOf5ipKBHq+EQhE= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:aad4:0:b0:d9a:6007:223a with SMTP id t78-20020a25aad4000000b00d9a6007223amr13342ybi.8.1699660549241; Fri, 10 Nov 2023 15:55:49 -0800 (PST) Reply-To: Sean Christopherson Date: Fri, 10 Nov 2023 15:55:27 -0800 In-Reply-To: <20231110235528.1561679-1-seanjc@google.com> Mime-Version: 1.0 References: <20231110235528.1561679-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog Message-ID: <20231110235528.1561679-9-seanjc@google.com> Subject: [PATCH 8/9] KVM: x86: Replace all guest CPUID feature queries with cpu_caps check From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Switch all queries of guest features from guest CPUID to guest capabilities, i.e. replace all calls to guest_cpuid_has() with calls to guest_cpu_cap_has(), and drop guest_cpuid_has() and its helper guest_cpuid_get_register(). Opportunistically drop the unused guest_cpuid_clear(), as there should be no circumstance in which KVM needs to _clear_ a guest CPUID feature now that everything is tracked via cpu_caps. E.g. KVM may need to _change_ a feature to emulate dynamic CPUID flags, but KVM should never need to clear a feature in guest CPUID to prevent it from being used by the guest. Delete the last remnants of the governed features framework, as the lone holdout was vmx_adjust_secondary_exec_control()'s divergent behavior for governed vs. ungoverned features. 
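For illustration only (this side-by-side is not part of the diff; guest_cpuid_get_register()'s body is condensed, and __feature_leaf() is assumed from KVM's reverse_cpuid machinery), the conversion swaps a per-query walk of the vCPU's CPUID entries for a single test of the precomputed cpu_caps word:

	/* Before: every feature query searches the guest's CPUID entries. */
	static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu,
						    unsigned int x86_feature)
	{
		u32 *reg = guest_cpuid_get_register(vcpu, x86_feature);

		return reg && (*reg & __feature_bit(x86_feature));
	}

	/* After: one read of the KVM-adjusted, per-vCPU capability word. */
	static __always_inline bool guest_cpu_cap_has(struct kvm_vcpu *vcpu,
						      unsigned int x86_feature)
	{
		unsigned int x86_leaf = __feature_leaf(x86_feature);

		return vcpu->arch.cpu_caps[x86_leaf] & __feature_bit(x86_feature);
	}

Because cpu_caps is seeded from userspace's CPUID and then adjusted by KVM, callers now see what the guest can actually use, not merely what userspace advertised.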
Signed-off-by: Sean Christopherson --- arch/x86/kvm/cpuid.c | 4 +- arch/x86/kvm/cpuid.h | 70 ++++---------------------------- arch/x86/kvm/governed_features.h | 21 ---------- arch/x86/kvm/lapic.c | 2 +- arch/x86/kvm/mtrr.c | 2 +- arch/x86/kvm/smm.c | 10 ++--- arch/x86/kvm/svm/pmu.c | 8 ++-- arch/x86/kvm/svm/sev.c | 4 +- arch/x86/kvm/svm/svm.c | 20 ++++----- arch/x86/kvm/vmx/nested.c | 12 +++--- arch/x86/kvm/vmx/pmu_intel.c | 4 +- arch/x86/kvm/vmx/sgx.c | 14 +++---- arch/x86/kvm/vmx/vmx.c | 47 ++++++++++----------- arch/x86/kvm/vmx/vmx.h | 2 +- arch/x86/kvm/x86.c | 68 +++++++++++++++---------------- 15 files changed, 104 insertions(+), 184 deletions(-) delete mode 100644 arch/x86/kvm/governed_features.h diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 37a991439fe6..6407e5c45f20 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -396,7 +396,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *v= cpu) * and can install smaller shadow pages if the host lacks 1GiB support. */ allow_gbpages =3D tdp_enabled ? boot_cpu_has(X86_FEATURE_GBPAGES) : - guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES); + guest_cpu_cap_has(vcpu, X86_FEATURE_GBPAGES); guest_cpu_cap_change(vcpu, X86_FEATURE_GBPAGES, allow_gbpages); =20 best =3D kvm_find_cpuid_entry(vcpu, 1); @@ -419,7 +419,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *v= cpu) =20 kvm_pmu_refresh(vcpu); vcpu->arch.cr4_guest_rsvd_bits =3D - __cr4_reserved_bits(guest_cpuid_has, vcpu); + __cr4_reserved_bits(guest_cpu_cap_has, vcpu); =20 kvm_hv_set_cpuid(vcpu, kvm_cpuid_has_hyperv(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent)); diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index bebf94a69630..98694dfe062e 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -72,41 +72,6 @@ static __always_inline void cpuid_entry_override(struct = kvm_cpuid_entry2 *entry, *reg =3D kvm_cpu_caps[leaf]; } =20 -static __always_inline u32 *guest_cpuid_get_register(struct kvm_vcpu *vcpu, - unsigned int x86_feature) -{ - const struct cpuid_reg cpuid =3D x86_feature_cpuid(x86_feature); - struct kvm_cpuid_entry2 *entry; - - entry =3D kvm_find_cpuid_entry_index(vcpu, cpuid.function, cpuid.index); - if (!entry) - return NULL; - - return __cpuid_entry_get_reg(entry, cpuid.reg); -} - -static __always_inline bool guest_cpuid_has(struct kvm_vcpu *vcpu, - unsigned int x86_feature) -{ - u32 *reg; - - reg =3D guest_cpuid_get_register(vcpu, x86_feature); - if (!reg) - return false; - - return *reg & __feature_bit(x86_feature); -} - -static __always_inline void guest_cpuid_clear(struct kvm_vcpu *vcpu, - unsigned int x86_feature) -{ - u32 *reg; - - reg =3D guest_cpuid_get_register(vcpu, x86_feature); - if (reg) - *reg &=3D ~__feature_bit(x86_feature); -} - static inline bool guest_cpuid_is_amd_or_hygon(struct kvm_vcpu *vcpu) { struct kvm_cpuid_entry2 *best; @@ -218,27 +183,6 @@ static __always_inline bool guest_pv_has(struct kvm_vc= pu *vcpu, return vcpu->arch.pv_cpuid.features & (1u << kvm_feature); } =20 -enum kvm_governed_features { -#define KVM_GOVERNED_FEATURE(x) KVM_GOVERNED_##x, -#include "governed_features.h" - KVM_NR_GOVERNED_FEATURES -}; - -static __always_inline int kvm_governed_feature_index(unsigned int x86_fea= ture) -{ - switch (x86_feature) { -#define KVM_GOVERNED_FEATURE(x) case x: return KVM_GOVERNED_##x; -#include "governed_features.h" - default: - return -1; - } -} - -static __always_inline bool kvm_is_governed_feature(unsigned int x86_featu= re) -{ - return kvm_governed_feature_index(x86_feature) >=3D 0; -} - 
static __always_inline void guest_cpu_cap_set(struct kvm_vcpu *vcpu, unsigned int x86_feature) { @@ -285,17 +229,17 @@ static __always_inline bool guest_cpu_cap_has(struct = kvm_vcpu *vcpu, =20 static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu) { - return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_STIBP) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD)); + return (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) || + guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_STIBP) || + guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_IBRS) || + guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_SSBD)); } =20 static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu) { - return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) || - guest_cpuid_has(vcpu, X86_FEATURE_SBPB)); + return (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) || + guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_IBPB) || + guest_cpu_cap_has(vcpu, X86_FEATURE_SBPB)); } =20 #endif diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_featu= res.h deleted file mode 100644 index 423a73395c10..000000000000 --- a/arch/x86/kvm/governed_features.h +++ /dev/null @@ -1,21 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#if !defined(KVM_GOVERNED_FEATURE) || defined(KVM_GOVERNED_X86_FEATURE) -BUILD_BUG() -#endif - -#define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x) - -KVM_GOVERNED_X86_FEATURE(GBPAGES) -KVM_GOVERNED_X86_FEATURE(XSAVES) -KVM_GOVERNED_X86_FEATURE(VMX) -KVM_GOVERNED_X86_FEATURE(NRIPS) -KVM_GOVERNED_X86_FEATURE(TSCRATEMSR) -KVM_GOVERNED_X86_FEATURE(V_VMSAVE_VMLOAD) -KVM_GOVERNED_X86_FEATURE(LBRV) -KVM_GOVERNED_X86_FEATURE(PAUSEFILTER) -KVM_GOVERNED_X86_FEATURE(PFTHRESHOLD) -KVM_GOVERNED_X86_FEATURE(VGIF) -KVM_GOVERNED_X86_FEATURE(VNMI) - -#undef KVM_GOVERNED_X86_FEATURE -#undef KVM_GOVERNED_FEATURE diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 245b20973cae..f5fab29c827f 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -584,7 +584,7 @@ void kvm_apic_set_version(struct kvm_vcpu *vcpu) * version first and level-triggered interrupts never get EOIed in * IOAPIC. */ - if (guest_cpuid_has(vcpu, X86_FEATURE_X2APIC) && + if (guest_cpu_cap_has(vcpu, X86_FEATURE_X2APIC) && !ioapic_in_kernel(vcpu->kvm)) v |=3D APIC_LVR_DIRECTED_EOI; kvm_lapic_set_reg(apic, APIC_LVR, v); diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index a67c28a56417..9e8cb38ae1db 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -128,7 +128,7 @@ static u8 mtrr_disabled_type(struct kvm_vcpu *vcpu) * enable MTRRs and it is obviously undesirable to run the * guest entirely with UC memory and we use WB. 
*/ - if (guest_cpuid_has(vcpu, X86_FEATURE_MTRR)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_MTRR)) return MTRR_TYPE_UNCACHABLE; else return MTRR_TYPE_WRBACK; diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index dc3d95fdca7d..3ca4154d9fa0 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -290,7 +290,7 @@ void enter_smm(struct kvm_vcpu *vcpu) memset(smram.bytes, 0, sizeof(smram.bytes)); =20 #ifdef CONFIG_X86_64 - if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) enter_smm_save_state_64(vcpu, &smram.smram64); else #endif @@ -360,7 +360,7 @@ void enter_smm(struct kvm_vcpu *vcpu) kvm_set_segment(vcpu, &ds, VCPU_SREG_SS); =20 #ifdef CONFIG_X86_64 - if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) if (static_call(kvm_x86_set_efer)(vcpu, 0)) goto error; #endif @@ -593,7 +593,7 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) * supports long mode. */ #ifdef CONFIG_X86_64 - if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) { + if (guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) { struct kvm_segment cs_desc; unsigned long cr4; =20 @@ -616,7 +616,7 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) kvm_set_cr0(vcpu, cr0 & ~(X86_CR0_PG | X86_CR0_PE)); =20 #ifdef CONFIG_X86_64 - if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) { + if (guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) { unsigned long cr4, efer; =20 /* Clear CR4.PAE before clearing EFER.LME. */ @@ -639,7 +639,7 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt) return X86EMUL_UNHANDLEABLE; =20 #ifdef CONFIG_X86_64 - if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) return rsm_load_state_64(ctxt, &smram.smram64); else #endif diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index 373ff6a6687b..16d396a31c16 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -46,7 +46,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_p= mu *pmu, u32 msr, =20 switch (msr) { case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5: - if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_PERFCTR_CORE)) return NULL; /* * Each PMU counter has a pair of CTL and CTR MSRs. CTLn @@ -113,7 +113,7 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32= msr) case MSR_K7_EVNTSEL0 ... MSR_K7_PERFCTR3: return pmu->version > 0; case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5: - return guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE); + return guest_cpu_cap_has(vcpu, X86_FEATURE_PERFCTR_CORE); case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS: case MSR_AMD64_PERF_CNTR_GLOBAL_CTL: case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR: @@ -184,7 +184,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu) union cpuid_0x80000022_ebx ebx; =20 pmu->version =3D 1; - if (guest_cpuid_has(vcpu, X86_FEATURE_PERFMON_V2)) { + if (guest_cpu_cap_has(vcpu, X86_FEATURE_PERFMON_V2)) { pmu->version =3D 2; /* * Note, PERFMON_V2 is also in 0x80000022.0x0, i.e. 
the guest @@ -194,7 +194,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu) x86_feature_cpuid(X86_FEATURE_PERFMON_V2).index); ebx.full =3D kvm_find_cpuid_entry_index(vcpu, 0x80000022, 0)->ebx; pmu->nr_arch_gp_counters =3D ebx.split.num_core_pmc; - } else if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) { + } else if (guest_cpu_cap_has(vcpu, X86_FEATURE_PERFCTR_CORE)) { pmu->nr_arch_gp_counters =3D AMD64_NUM_COUNTERS_CORE; } else { pmu->nr_arch_gp_counters =3D AMD64_NUM_COUNTERS; diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 4900c078045a..05008d33ae63 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -2967,8 +2967,8 @@ static void sev_es_vcpu_after_set_cpuid(struct vcpu_s= vm *svm) struct kvm_vcpu *vcpu =3D &svm->vcpu; =20 if (boot_cpu_has(X86_FEATURE_V_TSC_AUX)) { - bool v_tsc_aux =3D guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) || - guest_cpuid_has(vcpu, X86_FEATURE_RDPID); + bool v_tsc_aux =3D guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP) || + guest_cpu_cap_has(vcpu, X86_FEATURE_RDPID); =20 set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, v_tsc_aux, v_tsc_aux= ); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 5827328e30f1..9e3a9191dac1 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1185,14 +1185,14 @@ static void svm_recalc_instruction_intercepts(struc= t kvm_vcpu *vcpu, */ if (kvm_cpu_cap_has(X86_FEATURE_INVPCID)) { if (!npt_enabled || - !guest_cpuid_has(&svm->vcpu, X86_FEATURE_INVPCID)) + !guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_INVPCID)) svm_set_intercept(svm, INTERCEPT_INVPCID); else svm_clr_intercept(svm, INTERCEPT_INVPCID); } =20 if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) { - if (guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP)) svm_clr_intercept(svm, INTERCEPT_RDTSCP); else svm_set_intercept(svm, INTERCEPT_RDTSCP); @@ -2905,7 +2905,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) break; case MSR_AMD64_VIRT_SPEC_CTRL: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_VIRT_SSBD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_VIRT_SSBD)) return 1; =20 msr_info->data =3D svm->virt_spec_ctrl; @@ -3052,7 +3052,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr) break; case MSR_AMD64_VIRT_SPEC_CTRL: if (!msr->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_VIRT_SSBD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_VIRT_SSBD)) return 1; =20 if (data & ~SPEC_CTRL_SSBD) @@ -3224,7 +3224,7 @@ static int invpcid_interception(struct kvm_vcpu *vcpu) unsigned long type; gva_t gva; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_INVPCID)) { + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_INVPCID)) { kvm_queue_exception(vcpu, UD_VECTOR); return 1; } @@ -4318,7 +4318,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) guest_cpu_cap_change(vcpu, X86_FEATURE_XSAVES, boot_cpu_has(X86_FEATURE_XSAVE) && boot_cpu_has(X86_FEATURE_XSAVES) && - guest_cpuid_has(vcpu, X86_FEATURE_XSAVE)); + guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE)); =20 guest_cpu_cap_restrict(vcpu, X86_FEATURE_NRIPS); guest_cpu_cap_restrict(vcpu, X86_FEATURE_TSCRATEMSR); @@ -4345,7 +4345,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) =20 if (boot_cpu_has(X86_FEATURE_FLUSH_L1D)) set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0, - !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D)); + !!guest_cpu_cap_has(vcpu, X86_FEATURE_FLUSH_L1D)); =20 if (sev_guest(vcpu->kvm)) sev_vcpu_after_set_cpuid(svm); @@ -4602,7 
+4602,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union= kvm_smram *smram) * responsible for ensuring nested SVM and SMIs are mutually exclusive. */ =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) return 1; =20 smram->smram64.svm_guest_flag =3D 1; @@ -4649,14 +4649,14 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, con= st union kvm_smram *smram) =20 const struct kvm_smram_state_64 *smram64 =3D &smram->smram64; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) return 0; =20 /* Non-zero if SMI arrived while vCPU was in guest mode. */ if (!smram64->svm_guest_flag) return 0; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SVM)) return 1; =20 if (!(smram64->efer & EFER_SVME)) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 4750d1696d58..f046813e34c1 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2005,7 +2005,7 @@ static enum nested_evmptrld_status nested_vmx_handle_= enlightened_vmptrld( bool evmcs_gpa_changed =3D false; u64 evmcs_gpa; =20 - if (likely(!guest_cpuid_has_evmcs(vcpu))) + if (likely(!guest_cpu_cap_has_evmcs(vcpu))) return EVMPTRLD_DISABLED; =20 evmcs_gpa =3D nested_get_evmptr(vcpu); @@ -2888,7 +2888,7 @@ static int nested_vmx_check_controls(struct kvm_vcpu = *vcpu, nested_check_vm_entry_controls(vcpu, vmcs12)) return -EINVAL; =20 - if (guest_cpuid_has_evmcs(vcpu)) + if (guest_cpu_cap_has_evmcs(vcpu)) return nested_evmcs_check_controls(vmcs12); =20 return 0; @@ -3170,7 +3170,7 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vc= pu) * L2 was running), map it here to make sure vmcs12 changes are * properly reflected. */ - if (guest_cpuid_has_evmcs(vcpu) && + if (guest_cpu_cap_has_evmcs(vcpu) && vmx->nested.hv_evmcs_vmptr =3D=3D EVMPTR_MAP_PENDING) { enum nested_evmptrld_status evmptrld_status =3D nested_vmx_handle_enlightened_vmptrld(vcpu, false); @@ -4814,7 +4814,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_= exit_reason, * doesn't isolate different VMCSs, i.e. in this case, doesn't provide * separate modes for L2 vs L1. */ - if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL)) indirect_branch_prediction_barrier(); =20 /* Update any VMCS fields that might have changed while L2 ran */ @@ -5302,7 +5302,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu) * state. It is possible that the area will stay mapped as * vmx->nested.hv_evmcs but this shouldn't be a problem. 
*/ - if (likely(!guest_cpuid_has_evmcs(vcpu) || + if (likely(!guest_cpu_cap_has_evmcs(vcpu) || !evmptr_is_valid(nested_get_evmptr(vcpu)))) { if (vmptr =3D=3D vmx->nested.current_vmptr) nested_release_vmcs12(vcpu); @@ -6092,7 +6092,7 @@ static bool nested_vmx_exit_handled_encls(struct kvm_= vcpu *vcpu, { u32 encls_leaf; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_SGX) || + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SGX) || !nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING)) return false; =20 diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 820d3e1f6b4f..98d579c0ce28 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -160,7 +160,7 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kv= m_vcpu *vcpu, =20 static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu) { - if (!guest_cpuid_has(vcpu, X86_FEATURE_PDCM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_PDCM)) return 0; =20 return vcpu->arch.perf_capabilities; @@ -210,7 +210,7 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u= 32 msr) ret =3D vcpu_get_perf_capabilities(vcpu) & PERF_CAP_PEBS_FORMAT; break; case MSR_IA32_DS_AREA: - ret =3D guest_cpuid_has(vcpu, X86_FEATURE_DS); + ret =3D guest_cpu_cap_has(vcpu, X86_FEATURE_DS); break; case MSR_PEBS_DATA_CFG: perf_capabilities =3D vcpu_get_perf_capabilities(vcpu); diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c index 3e822e582497..9616b4ac0662 100644 --- a/arch/x86/kvm/vmx/sgx.c +++ b/arch/x86/kvm/vmx/sgx.c @@ -122,7 +122,7 @@ static int sgx_inject_fault(struct kvm_vcpu *vcpu, gva_= t gva, int trapnr) * likely than a bad userspace address. */ if ((trapnr =3D=3D PF_VECTOR || !boot_cpu_has(X86_FEATURE_SGX2)) && - guest_cpuid_has(vcpu, X86_FEATURE_SGX2)) { + guest_cpu_cap_has(vcpu, X86_FEATURE_SGX2)) { memset(&ex, 0, sizeof(ex)); ex.vector =3D PF_VECTOR; ex.error_code =3D PFERR_PRESENT_MASK | PFERR_WRITE_MASK | @@ -365,7 +365,7 @@ static inline bool encls_leaf_enabled_in_guest(struct k= vm_vcpu *vcpu, u32 leaf) return true; =20 if (leaf >=3D EAUG && leaf <=3D EMODT) - return guest_cpuid_has(vcpu, X86_FEATURE_SGX2); + return guest_cpu_cap_has(vcpu, X86_FEATURE_SGX2); =20 return false; } @@ -381,8 +381,8 @@ int handle_encls(struct kvm_vcpu *vcpu) { u32 leaf =3D (u32)kvm_rax_read(vcpu); =20 - if (!enable_sgx || !guest_cpuid_has(vcpu, X86_FEATURE_SGX) || - !guest_cpuid_has(vcpu, X86_FEATURE_SGX1)) { + if (!enable_sgx || !guest_cpu_cap_has(vcpu, X86_FEATURE_SGX) || + !guest_cpu_cap_has(vcpu, X86_FEATURE_SGX1)) { kvm_queue_exception(vcpu, UD_VECTOR); } else if (!encls_leaf_enabled_in_guest(vcpu, leaf) || !sgx_enabled_in_guest_bios(vcpu) || !is_paging(vcpu)) { @@ -479,15 +479,15 @@ void vmx_write_encls_bitmap(struct kvm_vcpu *vcpu, st= ruct vmcs12 *vmcs12) if (!cpu_has_vmx_encls_vmexit()) return; =20 - if (guest_cpuid_has(vcpu, X86_FEATURE_SGX) && + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SGX) && sgx_enabled_in_guest_bios(vcpu)) { - if (guest_cpuid_has(vcpu, X86_FEATURE_SGX1)) { + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SGX1)) { bitmap &=3D ~GENMASK_ULL(ETRACK, ECREATE); if (sgx_intercept_encls_ecreate(vcpu)) bitmap |=3D (1 << ECREATE); } =20 - if (guest_cpuid_has(vcpu, X86_FEATURE_SGX2)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SGX2)) bitmap &=3D ~GENMASK_ULL(EMODT, EAUG); =20 /* diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 5a056ad1ae55..815692dc0aff 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1874,8 +1874,8 @@ static void vmx_setup_uret_msrs(struct vcpu_vmx 
*vmx) vmx_setup_uret_msr(vmx, MSR_EFER, update_transition_efer(vmx)); =20 vmx_setup_uret_msr(vmx, MSR_TSC_AUX, - guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP) || - guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDPID)); + guest_cpu_cap_has(&vmx->vcpu, X86_FEATURE_RDTSCP) || + guest_cpu_cap_has(&vmx->vcpu, X86_FEATURE_RDPID)); =20 /* * hle=3D0, rtm=3D0, tsx_ctrl=3D1 can be found with some combinations of = new @@ -2028,7 +2028,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) case MSR_IA32_BNDCFGS: if (!kvm_mpx_supported() || (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_MPX))) + !guest_cpu_cap_has(vcpu, X86_FEATURE_MPX))) return 1; msr_info->data =3D vmcs_read64(GUEST_BNDCFGS); break; @@ -2044,7 +2044,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) break; case MSR_IA32_SGXLEPUBKEYHASH0 ... MSR_IA32_SGXLEPUBKEYHASH3: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_SGX_LC)) return 1; msr_info->data =3D to_vmx(vcpu)->msr_ia32_sgxlepubkeyhash [msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0]; @@ -2062,7 +2062,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) * sanity checking and refuse to boot. Filter all unsupported * features out. */ - if (!msr_info->host_initiated && guest_cpuid_has_evmcs(vcpu)) + if (!msr_info->host_initiated && guest_cpu_cap_has_evmcs(vcpu)) nested_evmcs_filter_control_msr(vcpu, msr_info->index, &msr_info->data); break; @@ -2131,7 +2131,7 @@ static u64 nested_vmx_truncate_sysenter_addr(struct k= vm_vcpu *vcpu, u64 data) { #ifdef CONFIG_X86_64 - if (!guest_cpuid_has(vcpu, X86_FEATURE_LM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) return (u32)data; #endif return (unsigned long)data; @@ -2142,7 +2142,7 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu= *vcpu, bool host_initiated u64 debugctl =3D 0; =20 if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) && - (host_initiated || guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)= )) + (host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_BUS_LOCK_DETEC= T))) debugctl |=3D DEBUGCTLMSR_BUS_LOCK_DETECT; =20 if ((kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT) && @@ -2246,7 +2246,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) case MSR_IA32_BNDCFGS: if (!kvm_mpx_supported() || (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_MPX))) + !guest_cpu_cap_has(vcpu, X86_FEATURE_MPX))) return 1; if (is_noncanonical_address(data & PAGE_MASK, vcpu) || (data & MSR_IA32_BNDCFGS_RSVD)) @@ -2348,7 +2348,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) * behavior, but it's close enough. 
*/ if (!msr_info->host_initiated && - (!guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC) || + (!guest_cpu_cap_has(vcpu, X86_FEATURE_SGX_LC) || ((vmx->msr_ia32_feature_control & FEAT_CTL_LOCKED) && !(vmx->msr_ia32_feature_control & FEAT_CTL_SGX_LC_ENABLED)))) return 1; @@ -2434,9 +2434,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) if ((data & PERF_CAP_PEBS_MASK) !=3D (kvm_caps.supported_perf_cap & PERF_CAP_PEBS_MASK)) return 1; - if (!guest_cpuid_has(vcpu, X86_FEATURE_DS)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_DS)) return 1; - if (!guest_cpuid_has(vcpu, X86_FEATURE_DTES64)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_DTES64)) return 1; if (!cpuid_model_is_consistent(vcpu)) return 1; @@ -4566,10 +4566,7 @@ vmx_adjust_secondary_exec_control(struct vcpu_vmx *v= mx, u32 *exec_control, bool __enabled; \ \ if (cpu_has_vmx_##name()) { \ - if (kvm_is_governed_feature(X86_FEATURE_##feat_name)) \ - __enabled =3D guest_cpu_cap_has(__vcpu, X86_FEATURE_##feat_name); \ - else \ - __enabled =3D guest_cpuid_has(__vcpu, X86_FEATURE_##feat_name); \ + __enabled =3D guest_cpu_cap_has(__vcpu, X86_FEATURE_##feat_name); \ vmx_adjust_secondary_exec_control(vmx, exec_control, SECONDARY_EXEC_##ct= rl_name,\ __enabled, exiting); \ } \ @@ -4644,8 +4641,8 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx= *vmx) */ if (cpu_has_vmx_rdtscp()) { bool rdpid_or_rdtscp_enabled =3D - guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) || - guest_cpuid_has(vcpu, X86_FEATURE_RDPID); + guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP) || + guest_cpu_cap_has(vcpu, X86_FEATURE_RDPID); =20 vmx_adjust_secondary_exec_control(vmx, &exec_control, SECONDARY_EXEC_ENABLE_RDTSCP, @@ -5947,7 +5944,7 @@ static int handle_invpcid(struct kvm_vcpu *vcpu) } operand; int gpr_index; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_INVPCID)) { + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_INVPCID)) { kvm_queue_exception(vcpu, UD_VECTOR); return 1; } @@ -7756,7 +7753,7 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) * set if and only if XSAVE is supported. */ if (boot_cpu_has(X86_FEATURE_XSAVE) && - guest_cpuid_has(vcpu, X86_FEATURE_XSAVE)) + guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE)) guest_cpu_cap_restrict(vcpu, X86_FEATURE_XSAVES); else guest_cpu_cap_clear(vcpu, X86_FEATURE_XSAVES); @@ -7782,21 +7779,21 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcp= u *vcpu) nested_vmx_cr_fixed1_bits_update(vcpu); =20 if (boot_cpu_has(X86_FEATURE_INTEL_PT) && - guest_cpuid_has(vcpu, X86_FEATURE_INTEL_PT)) + guest_cpu_cap_has(vcpu, X86_FEATURE_INTEL_PT)) update_intel_pt_cfg(vcpu); =20 if (boot_cpu_has(X86_FEATURE_RTM)) { struct vmx_uret_msr *msr; msr =3D vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL); if (msr) { - bool enabled =3D guest_cpuid_has(vcpu, X86_FEATURE_RTM); + bool enabled =3D guest_cpu_cap_has(vcpu, X86_FEATURE_RTM); vmx_set_guest_uret_msr(vmx, msr, enabled ? 
0 : TSX_CTRL_RTM_DISABLE); } } =20 if (kvm_cpu_cap_has(X86_FEATURE_XFD)) vmx_set_intercept_for_msr(vcpu, MSR_IA32_XFD_ERR, MSR_TYPE_R, - !guest_cpuid_has(vcpu, X86_FEATURE_XFD)); + !guest_cpu_cap_has(vcpu, X86_FEATURE_XFD)); =20 if (boot_cpu_has(X86_FEATURE_IBPB)) vmx_set_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W, @@ -7804,17 +7801,17 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcp= u *vcpu) =20 if (boot_cpu_has(X86_FEATURE_FLUSH_L1D)) vmx_set_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W, - !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D)); + !guest_cpu_cap_has(vcpu, X86_FEATURE_FLUSH_L1D)); =20 set_cr4_guest_host_mask(vmx); =20 vmx_write_encls_bitmap(vcpu, NULL); - if (guest_cpuid_has(vcpu, X86_FEATURE_SGX)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SGX)) vmx->msr_ia32_feature_control_valid_bits |=3D FEAT_CTL_SGX_ENABLED; else vmx->msr_ia32_feature_control_valid_bits &=3D ~FEAT_CTL_SGX_ENABLED; =20 - if (guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC)) + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SGX_LC)) vmx->msr_ia32_feature_control_valid_bits |=3D FEAT_CTL_SGX_LC_ENABLED; else diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index c2130d2c8e24..edca0a4276fb 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -745,7 +745,7 @@ static inline bool vmx_can_use_ipiv(struct kvm_vcpu *vc= pu) return lapic_in_kernel(vcpu) && enable_ipiv; } =20 -static inline bool guest_cpuid_has_evmcs(struct kvm_vcpu *vcpu) +static inline bool guest_cpu_cap_has_evmcs(struct kvm_vcpu *vcpu) { /* * eVMCS is exposed to the guest if Hyper-V is enabled in CPUID and diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 04a77b764a36..a6b8f844a5bc 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -487,7 +487,7 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr= _data *msr_info) enum lapic_mode old_mode =3D kvm_get_apic_mode(vcpu); enum lapic_mode new_mode =3D kvm_apic_mode(msr_info->data); u64 reserved_bits =3D kvm_vcpu_reserved_gpa_bits_raw(vcpu) | 0x2ff | - (guest_cpuid_has(vcpu, X86_FEATURE_X2APIC) ? 0 : X2APIC_ENABLE); + (guest_cpu_cap_has(vcpu, X86_FEATURE_X2APIC) ? 
0 : X2APIC_ENABLE); =20 if ((msr_info->data & reserved_bits) !=3D 0 || new_mode =3D=3D LAPIC_MODE= _INVALID) return 1; @@ -1362,10 +1362,10 @@ static u64 kvm_dr6_fixed(struct kvm_vcpu *vcpu) { u64 fixed =3D DR6_FIXED_1; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_RTM)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_RTM)) fixed |=3D DR6_RTM; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)) fixed |=3D DR6_BUS_LOCK; return fixed; } @@ -1721,20 +1721,20 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu= , unsigned index, u64 *data) =20 static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer) { - if (efer & EFER_AUTOIBRS && !guest_cpuid_has(vcpu, X86_FEATURE_AUTOIBRS)) + if (efer & EFER_AUTOIBRS && !guest_cpu_cap_has(vcpu, X86_FEATURE_AUTOIBRS= )) return false; =20 - if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT)) + if (efer & EFER_FFXSR && !guest_cpu_cap_has(vcpu, X86_FEATURE_FXSR_OPT)) return false; =20 - if (efer & EFER_SVME && !guest_cpuid_has(vcpu, X86_FEATURE_SVM)) + if (efer & EFER_SVME && !guest_cpu_cap_has(vcpu, X86_FEATURE_SVM)) return false; =20 if (efer & (EFER_LME | EFER_LMA) && - !guest_cpuid_has(vcpu, X86_FEATURE_LM)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_LM)) return false; =20 - if (efer & EFER_NX && !guest_cpuid_has(vcpu, X86_FEATURE_NX)) + if (efer & EFER_NX && !guest_cpu_cap_has(vcpu, X86_FEATURE_NX)) return false; =20 return true; @@ -1872,8 +1872,8 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 i= ndex, u64 data, return 1; =20 if (!host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && - !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP) && + !guest_cpu_cap_has(vcpu, X86_FEATURE_RDPID)) return 1; =20 /* @@ -1929,8 +1929,8 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u= 64 *data, return 1; =20 if (!host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && - !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP) && + !guest_cpu_cap_has(vcpu, X86_FEATURE_RDPID)) return 1; break; } @@ -2122,7 +2122,7 @@ EXPORT_SYMBOL_GPL(kvm_handle_invalid_op); static int kvm_emulate_monitor_mwait(struct kvm_vcpu *vcpu, const char *in= sn) { if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS) = && - !guest_cpuid_has(vcpu, X86_FEATURE_MWAIT)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_MWAIT)) return kvm_handle_invalid_op(vcpu); =20 pr_warn_once("%s instruction emulated as NOP!\n", insn); @@ -3761,11 +3761,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struc= t msr_data *msr_info) if ((!guest_has_pred_cmd_msr(vcpu))) return 1; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) && - !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) && + !guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_IBPB)) reserved_bits |=3D PRED_CMD_IBPB; =20 - if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SBPB)) reserved_bits |=3D PRED_CMD_SBPB; } =20 @@ -3786,7 +3786,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) } case MSR_IA32_FLUSH_CMD: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_FLUSH_L1D)) return 1; =20 if (!boot_cpu_has(X86_FEATURE_FLUSH_L1D) || (data & ~L1D_FLUSH)) @@ -3837,7 +3837,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) 
kvm_set_lapic_tscdeadline_msr(vcpu, data); break; case MSR_IA32_TSC_ADJUST: - if (guest_cpuid_has(vcpu, X86_FEATURE_TSC_ADJUST)) { + if (guest_cpu_cap_has(vcpu, X86_FEATURE_TSC_ADJUST)) { if (!msr_info->host_initiated) { s64 adj =3D data - vcpu->arch.ia32_tsc_adjust_msr; adjust_tsc_offset_guest(vcpu, adj); @@ -3864,7 +3864,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) =20 if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT) = && ((old_val ^ data) & MSR_IA32_MISC_ENABLE_MWAIT)) { - if (!guest_cpuid_has(vcpu, X86_FEATURE_XMM3)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_XMM3)) return 1; vcpu->arch.ia32_misc_enable_msr =3D data; kvm_update_cpuid_runtime(vcpu); @@ -3892,7 +3892,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) break; case MSR_IA32_XSS: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVES)) return 1; /* * KVM supports exposing PT to the guest, but does not support @@ -4039,12 +4039,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struc= t msr_data *msr_info) kvm_pr_unimpl_wrmsr(vcpu, msr, data); break; case MSR_AMD64_OSVW_ID_LENGTH: - if (!guest_cpuid_has(vcpu, X86_FEATURE_OSVW)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_OSVW)) return 1; vcpu->arch.osvw.length =3D data; break; case MSR_AMD64_OSVW_STATUS: - if (!guest_cpuid_has(vcpu, X86_FEATURE_OSVW)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_OSVW)) return 1; vcpu->arch.osvw.status =3D data; break; @@ -4065,7 +4065,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) #ifdef CONFIG_X86_64 case MSR_IA32_XFD: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XFD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XFD)) return 1; =20 if (data & ~kvm_guest_supported_xfd(vcpu)) @@ -4075,7 +4075,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) break; case MSR_IA32_XFD_ERR: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XFD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XFD)) return 1; =20 if (data & ~kvm_guest_supported_xfd(vcpu)) @@ -4199,13 +4199,13 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struc= t msr_data *msr_info) break; case MSR_IA32_ARCH_CAPABILITIES: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES)) return 1; msr_info->data =3D vcpu->arch.arch_capabilities; break; case MSR_IA32_PERF_CAPABILITIES: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_PDCM)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_PDCM)) return 1; msr_info->data =3D vcpu->arch.perf_capabilities; break; @@ -4361,7 +4361,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct = msr_data *msr_info) msr_info->host_initiated); case MSR_IA32_XSS: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVES)) return 1; msr_info->data =3D vcpu->arch.ia32_xss; break; @@ -4404,12 +4404,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struc= t msr_data *msr_info) msr_info->data =3D 0xbe702111; break; case MSR_AMD64_OSVW_ID_LENGTH: - if (!guest_cpuid_has(vcpu, X86_FEATURE_OSVW)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_OSVW)) return 1; msr_info->data =3D vcpu->arch.osvw.length; break; case MSR_AMD64_OSVW_STATUS: - if (!guest_cpuid_has(vcpu, X86_FEATURE_OSVW)) + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_OSVW)) return 1; msr_info->data =3D 
vcpu->arch.osvw.status; break; @@ -4428,14 +4428,14 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struc= t msr_data *msr_info) #ifdef CONFIG_X86_64 case MSR_IA32_XFD: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XFD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XFD)) return 1; =20 msr_info->data =3D vcpu->arch.guest_fpu.fpstate->xfd; break; case MSR_IA32_XFD_ERR: if (!msr_info->host_initiated && - !guest_cpuid_has(vcpu, X86_FEATURE_XFD)) + !guest_cpu_cap_has(vcpu, X86_FEATURE_XFD)) return 1; =20 msr_info->data =3D vcpu->arch.guest_fpu.xfd_err; @@ -8368,17 +8368,17 @@ static bool emulator_get_cpuid(struct x86_emulate_c= txt *ctxt, =20 static bool emulator_guest_has_movbe(struct x86_emulate_ctxt *ctxt) { - return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_MOVBE); + return guest_cpu_cap_has(emul_to_vcpu(ctxt), X86_FEATURE_MOVBE); } =20 static bool emulator_guest_has_fxsr(struct x86_emulate_ctxt *ctxt) { - return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_FXSR); + return guest_cpu_cap_has(emul_to_vcpu(ctxt), X86_FEATURE_FXSR); } =20 static bool emulator_guest_has_rdpid(struct x86_emulate_ctxt *ctxt) { - return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_RDPID); + return guest_cpu_cap_has(emul_to_vcpu(ctxt), X86_FEATURE_RDPID); } =20 static ulong emulator_read_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg) --=20 2.42.0.869.gea05f2083d-goog From nobody Wed Dec 31 02:33:52 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70EC4C4332F for ; Fri, 10 Nov 2023 23:56:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345212AbjKJX4N (ORCPT ); Fri, 10 Nov 2023 18:56:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35086 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345124AbjKJX4B (ORCPT ); Fri, 10 Nov 2023 18:56:01 -0500 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 01D37478A for ; Fri, 10 Nov 2023 15:55:52 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5af592fed43so34315317b3.2 for ; Fri, 10 Nov 2023 15:55:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699660551; x=1700265351; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=qmj7GZ0SAOjD/VDeZDIJ7y/K+3O78AzZdQ/afS++F9w=; b=pM6pVi7nJiH1VW2fSQVC2iEpCLiB96dBV6OwYdYlfpC2N1QxzswHcd8WlRbLywOM35 ZU0cKFLTyFglKyY7QcsuxCqFncMlIlq5asWvP2HUd8dd5h84I3/LwsfLExd84Wno9ifx /Blllg4y8tlAFu+xRkz98CaCTVKwiVRyDDKYALTv7/8jrhNr5VljzjscK4BXghdqH9J+ oPFovF+7+tk7V+J+BLwfVoU2hZkikHpOQ0gvgS+s2ADC+a3pXkriT4HbVv3FnAzJboAt 86CprFHucIdFj9otwpL9vtfHniVtlbBZRP0K+ee98Pgxw4lrFai5q5lzuz/O1dGqn6QZ 9kSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699660551; x=1700265351; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=qmj7GZ0SAOjD/VDeZDIJ7y/K+3O78AzZdQ/afS++F9w=; b=NiIWleVGmSEMGdb0x2a6ESxsk2Xbgi23ZROy7jjRNdjcuZ3HyKNXaykxkaNczcaGon qDi4SO7b1e4K+QW2iV6B9jh0AJqllHTVlRKllKRf8T/qYgJjFqgSAXDwPwZDL74Gu3kg 
g08UXV1/o3WHCl5fj3P2wsDLkRNFgiAAjCV4VI2NBPYqIZ6OWbRxwFv0AQqFYSPbEf9a dNziKevsddkDxOKUXDcKVN2AgdyVAu/T2Uy+NXP0Zs8RQdQWXeIn6PyZrUmx7+40A5h2 RcnKu6m1z0+kJN/YataoxoA0eajb2oHgGzOsYaj0zbUxdHyQzXBRXMuRM7qfO9w5Hr6b koCg== X-Gm-Message-State: AOJu0Yy2vxphkLH5YwQskQhsjB8kuRAJKEfKstwTkRiv23dVhJaNS8x8 W5+t8025dgH6+5VDKyrnJZiGC2Yn5xk= X-Google-Smtp-Source: AGHT+IEjA1FyHv2eFtTBRxlVBGlSko6kQgtbPOhwvosPV9tjPhS4aG+DEOi3c1T1lz38zGocB/oAF1i7h7w= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a81:914e:0:b0:5a7:db29:40e3 with SMTP id i75-20020a81914e000000b005a7db2940e3mr21020ywg.7.1699660551073; Fri, 10 Nov 2023 15:55:51 -0800 (PST) Reply-To: Sean Christopherson Date: Fri, 10 Nov 2023 15:55:28 -0800 In-Reply-To: <20231110235528.1561679-1-seanjc@google.com> Mime-Version: 1.0 References: <20231110235528.1561679-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog Message-ID: <20231110235528.1561679-10-seanjc@google.com> Subject: [PATCH 9/9] KVM: x86: Restrict XSAVE in cpu_caps based on KVM capabilities From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Restrict XSAVE in guest cpu_caps so that XSAVES dependencies on XSAVE are automatically handled instead of manually checking for host and guest XSAVE support. Aside from modifying XSAVE in cpu_caps, this should be a glorified nop as KVM doesn't query guest XSAVE support (which is also why it wasn't/isn't a bug to leave XSAVE set in guest CPUID). Signed-off-by: Sean Christopherson Reviewed-by: Maxim Levitsky --- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/vmx/vmx.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9e3a9191dac1..6fe2d7bf4959 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4315,8 +4315,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) * XSS on VM-Enter/VM-Exit. Failure to do so would effectively give * the guest read/write access to the host's XSS. */ + guest_cpu_cap_restrict(vcpu, X86_FEATURE_XSAVE); guest_cpu_cap_change(vcpu, X86_FEATURE_XSAVES, - boot_cpu_has(X86_FEATURE_XSAVE) && boot_cpu_has(X86_FEATURE_XSAVES) && guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE)); =20 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 815692dc0aff..7645945af5c5 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7752,8 +7752,8 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu = *vcpu) * to the guest. XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be * set if and only if XSAVE is supported. */ - if (boot_cpu_has(X86_FEATURE_XSAVE) && - guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE)) + guest_cpu_cap_restrict(vcpu, X86_FEATURE_XSAVE); + if (guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE)) guest_cpu_cap_restrict(vcpu, X86_FEATURE_XSAVES); else guest_cpu_cap_clear(vcpu, X86_FEATURE_XSAVES); --=20 2.42.0.869.gea05f2083d-goog
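As a closing, illustrative note (not part of the series): guest_cpu_cap_restrict() itself is not shown in these hunks, so the sketch below infers its semantics from how the patches use it, i.e. a guest capability survives only if KVM and hardware also support the feature. Restricting XSAVE first therefore lets the XSAVES logic key off an already-sanitized XSAVE bit:

	/*
	 * Hypothetical helper body, inferred from usage in this series; the
	 * real implementation lives in cpuid.h and may differ in detail.
	 */
	static __always_inline void guest_cpu_cap_restrict(struct kvm_vcpu *vcpu,
							   unsigned int x86_feature)
	{
		/* Drop the guest capability unless KVM itself supports it. */
		if (!kvm_cpu_cap_has(x86_feature))
			guest_cpu_cap_clear(vcpu, x86_feature);
	}

With that in place, the VMX and SVM hunks above read naturally: restrict XSAVE against KVM's capabilities, then allow XSAVES only if the restricted XSAVE capability survived, clearing it otherwise.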