From: Yang Weijiang <weijiang.yang@intel.com>
To: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com,
	like.xu.linux@gmail.com, vkuznets@redhat.com, wei.w.wang@intel.com,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yang Weijiang, Like Xu
Subject: [PATCH v9 09/17] KVM: x86/pmu: Refactor code to support guest Arch LBR
Date: Tue, 15 Feb 2022 16:25:36 -0500
Message-Id: <20220215212544.51666-10-weijiang.yang@intel.com>
In-Reply-To: <20220215212544.51666-1-weijiang.yang@intel.com>
References: <20220215212544.51666-1-weijiang.yang@intel.com>

Take account of Arch LBR when doing the sanity checks before programming
the vPMU for the guest. Pass the Arch LBR recording MSRs through to the
guest to gain better performance. Note, Arch LBR and legacy LBR support
are mutually exclusive, i.e., they are never both available on the same
platform.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Yang Weijiang
---
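Note (illustration only, not part of the patch): Arch LBR is enumerated
architecturally through CPUID, while legacy LBR has no feature bit and is
tied to the CPU model. That is why intel_pmu_lbr_is_compatible() below can
simply check the guest's CPUID bit when the host has Arch LBR, and
otherwise falls back to comparing the host and guest models. A minimal
guest-side sketch of the detection, assuming GCC/Clang's <cpuid.h> (the
helper name cpu_has_arch_lbr is made up for this sketch):

	#include <stdbool.h>
	#include <cpuid.h>	/* GCC/Clang wrapper for the CPUID instruction */

	/*
	 * Sketch only: report whether the (virtual) CPU enumerates
	 * Architectural LBR. Legacy, model-specific LBR has no such
	 * feature bit, which is why KVM must compare CPU models for it.
	 */
	static bool cpu_has_arch_lbr(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Arch LBR is CPUID.(EAX=07H,ECX=0):EDX[19]. */
		if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
			return false;

		return edx & (1u << 19);
	}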
 arch/x86/kvm/vmx/pmu_intel.c | 37 +++++++++++++++++++++++++++++-------
 arch/x86/kvm/vmx/vmx.c       |  3 +++
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index e419a8c1ad0d..3f1ffc928e36 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -170,12 +170,16 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 
 bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu)
 {
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		return guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
+
 	/*
 	 * As a first step, a guest could only enable LBR feature if its
 	 * cpu model is the same as the host because the LBR registers
 	 * would be pass-through to the guest and they're model specific.
 	 */
-	return boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
+	return !boot_cpu_has(X86_FEATURE_ARCH_LBR) &&
+	       boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
 }
 
 bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
@@ -193,12 +197,19 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 	if (!intel_pmu_lbr_is_enabled(vcpu))
 		return ret;
 
-	ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS) ||
-		(index >= records->from && index < records->from + records->nr) ||
-		(index >= records->to && index < records->to + records->nr);
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS);
+
+	if (!ret) {
+		ret = (index >= records->from &&
+		       index < records->from + records->nr) ||
+		      (index >= records->to &&
+		       index < records->to + records->nr);
+	}
 
 	if (!ret && records->info)
-		ret = (index >= records->info && index < records->info + records->nr);
+		ret = (index >= records->info &&
+		       index < records->info + records->nr);
 
 	return ret;
 }
@@ -738,6 +749,9 @@ static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
 			vmx_set_intercept_for_msr(vcpu, lbr->info + i, MSR_TYPE_RW, set);
 	}
 
+	if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		return;
+
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_SELECT, MSR_TYPE_RW, set);
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_TOS, MSR_TYPE_RW, set);
 }
@@ -778,10 +792,13 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_desc->event) {
 		vmx_disable_lbr_msrs_passthrough(vcpu);
-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
+		if (lbr_enable)
 			goto warn;
 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
 			goto warn;
@@ -798,13 +815,19 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 	return;
 
 warn:
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
 	pr_warn_ratelimited("kvm: vcpu-%d: fail to passthrough LBR.\n",
 		vcpu->vcpu_id);
 }
 
 static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
+
+	if (!lbr_enable)
 		intel_pmu_release_guest_lbr_event(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7b6eb87ff6ad..62188f2ab2d4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -578,6 +578,9 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 		return true;
 	}
-- 
2.27.0
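
For reference, as an illustration rather than part of the patch: unlike
the model-specific legacy LBR ranges, the Arch LBR MSR blocks whitelisted
above live at fixed architectural addresses. The values below are as
defined in the kernel's arch/x86/include/asm/msr-index.h:

	/* Architectural LBR MSRs (arch/x86/include/asm/msr-index.h) */
	#define MSR_ARCH_LBR_CTL	0x000014ce	/* bit 0 (LBREN) enables recording */
	#define MSR_ARCH_LBR_DEPTH	0x000014cf	/* number of LBR entries in use */
	#define MSR_ARCH_LBR_INFO_0	0x00001200	/* per-entry branch metadata */
	#define MSR_ARCH_LBR_FROM_0	0x00001500	/* per-entry branch source IP */
	#define MSR_ARCH_LBR_TO_0	0x00001600	/* per-entry branch target IP */

The "+ 31" bounds in is_valid_passthrough_msr() thus cover up to 32
entries per block, mirroring the 32-deep legacy NHM ranges above them.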