From nobody Mon Apr 6 17:05:04 2026
From: Like Xu
X-Google-Original-From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] KVM: x86/svm/pmu: Limit the maximum number of supported GP counters
Date: Mon, 5 Sep 2022 20:39:41 +0800
Message-Id: <20220905123946.95223-2-likexu@tencent.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Like Xu

The AMD PerfMonV2 specification allows for a maximum of 16 GP counters,
which is clearly more than the current KVM supports without additional
code effort. A local macro (named after INTEL_PMC_MAX_GENERIC) is
introduced to take back control of this virt capability, which also
makes it easier to statically partition all available counters between
the host and guests.
Signed-off-by: Like Xu
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/pmu.h     | 2 ++
 arch/x86/kvm/svm/pmu.c | 7 ++++---
 arch/x86/kvm/x86.c     | 2 ++
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 847e7112a5d3..e3a3813b6a38 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -18,6 +18,8 @@
 #define VMWARE_BACKDOOR_PMC_REAL_TIME		0x10001
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME	0x10002
 
+#define KVM_AMD_PMC_MAX_GENERIC	AMD64_NUM_COUNTERS_CORE
+
 struct kvm_event_hw_type_mapping {
 	u8 eventsel;
 	u8 unit_mask;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 2ec420b85d6a..f99f2c869664 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -192,9 +192,10 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	BUILD_BUG_ON(AMD64_NUM_COUNTERS_CORE > INTEL_PMC_MAX_GENERIC);
+	BUILD_BUG_ON(AMD64_NUM_COUNTERS_CORE > KVM_AMD_PMC_MAX_GENERIC);
+	BUILD_BUG_ON(KVM_AMD_PMC_MAX_GENERIC > INTEL_PMC_MAX_GENERIC);
 
-	for (i = 0; i < AMD64_NUM_COUNTERS_CORE ; i++) {
+	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC ; i++) {
 		pmu->gp_counters[i].type = KVM_PMC_GP;
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
@@ -207,7 +208,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS_CORE; i++) {
+	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC; i++) {
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 43a6a7efc6ec..b9738efd8425 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1444,12 +1444,14 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
 
+	/* This part of MSRs should match KVM_AMD_PMC_MAX_GENERIC. */
 	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
 	MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
 	MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
 	MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
 	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
 	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
+
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
 };
 
-- 
2.37.3

From nobody Mon Apr 6 17:05:04 2026
From: Like Xu
X-Google-Original-From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [kvm-unit-tests PATCH 1/2] x86/pmu: Update rdpmc testcase to cover #GP and emulation path
Date: Mon, 5 Sep 2022 20:39:45 +0800
Message-Id: <20220905123946.95223-6-likexu@tencent.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
MIME-Version: 1.0
Content-Type:
 text/plain; charset="utf-8"

From: Like Xu

Specifying an unsupported PMC encoding will cause a #GP(0).
All testcases should also pass when the KVM_FEP prefix is added.

Signed-off-by: Like Xu
---
 lib/x86/processor.h |  5 ++++-
 x86/pmu.c           | 13 +++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 10bca27..9c490d9 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -441,7 +441,10 @@ static inline int wrmsr_safe(u32 index, u64 val)
 static inline uint64_t rdpmc(uint32_t index)
 {
 	uint32_t a, d;
-	asm volatile ("rdpmc" : "=a"(a), "=d"(d) : "c"(index));
+	if (is_fep_available())
+		asm volatile (KVM_FEP "rdpmc" : "=a"(a), "=d"(d) : "c"(index));
+	else
+		asm volatile ("rdpmc" : "=a"(a), "=d"(d) : "c"(index));
 	return a | ((uint64_t)d << 32);
 }
 
diff --git a/x86/pmu.c b/x86/pmu.c
index 203a9d4..11607c0 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -758,12 +758,25 @@ static bool pmu_is_detected(void)
 	return detect_intel_pmu();
 }
 
+static void rdpmc_unsupported_counter(void *data)
+{
+	rdpmc(64);
+}
+
+static void check_rdpmc_cause_gp(void)
+{
+	report(test_for_exception(GP_VECTOR, rdpmc_unsupported_counter, NULL),
+	       "rdpmc with invalid PMC index raises #GP");
+}
+
 int main(int ac, char **av)
 {
 	setup_vm();
 	handle_irq(PC_VECTOR, cnt_overflow);
 	buf = malloc(N*64);
 
+	check_rdpmc_cause_gp();
+
 	if (!pmu_is_detected())
 		return report_summary();
 
-- 
2.37.3

From nobody Mon Apr 6 17:05:04 2026
From: Like Xu
X-Google-Original-From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [kvm-unit-tests PATCH 2/2] x86/pmu: Add AMD Guest PerfMonV2 testcases
Date: Mon, 5 Sep 2022 20:39:46 +0800
Message-Id: <20220905123946.95223-7-likexu@tencent.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Like Xu

Update the test cases to cover the KVM enabling code for AMD Guest
PerfMonV2. The Intel-specific PMU helpers are extended to check for the
AMD CPUID, and the same MSR semantics are assigned during the
initialization phase, so that the vast majority of the pmu test cases
are reused seamlessly.

On some x86 machines (AMD only), even with retired events, repeatedly
measuring the same workload yields an erratic number of collected
events. This essentially reflects details of the hardware
implementation; from a software perspective, such an event is imprecise.
A tolerance check is therefore added to the counter overflow testcases.
Signed-off-by: Like Xu
---
 lib/x86/msr.h       |  5 +++++
 lib/x86/processor.h |  9 ++++++-
 x86/pmu.c           | 61 ++++++++++++++++++++++++++++++++-------------
 3 files changed, 56 insertions(+), 19 deletions(-)

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 5f16a58..6f31155 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -419,6 +419,11 @@
 #define MSR_CORE_PERF_GLOBAL_CTRL	0x0000038f
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL	0x00000390
 
+/* AMD Performance Counter Global Status and Control MSRs */
+#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS	0xc0000300
+#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL		0xc0000301
+#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR	0xc0000302
+
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 9c490d9..b9592c4 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -796,8 +796,12 @@ static inline void flush_tlb(void)
 
 static inline u8 pmu_version(void)
 {
-	if (!is_intel())
+	if (!is_intel()) {
+		/* Performance Monitoring Version 2 Supported */
+		if (cpuid(0x80000022).a & 0x1)
+			return 2;
 		return 0;
+	}
 
 	return cpuid(10).a & 0xff;
 }
@@ -824,6 +828,9 @@ static inline u8 pmu_nr_gp_counters(void)
 {
 	if (is_intel()) {
 		return (cpuid(10).a >> 8) & 0xff;
+	} else if (this_cpu_has_perf_global_ctrl()) {
+		/* Number of Core Performance Counters. */
+		return cpuid(0x80000022).b & 0xf;
 	} else if (!has_amd_perfctr_core()) {
 		return AMD64_NUM_COUNTERS;
 	}
diff --git a/x86/pmu.c b/x86/pmu.c
index 11607c0..6d5363b 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -72,6 +72,9 @@ struct pmu_event {
 #define PMU_CAP_FW_WRITES	(1ULL << 13)
 static u32 gp_counter_base;
 static u32 gp_select_base;
+static u32 global_status_msr;
+static u32 global_ctl_msr;
+static u32 global_status_clr_msr;
 static unsigned int gp_events_size;
 static unsigned int nr_gp_counters;
 
@@ -150,8 +153,7 @@ static void global_enable(pmu_counter_t *cnt)
 		return;
 
 	cnt->idx = event_to_global_idx(cnt);
-	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_CTRL) |
-			(1ull << cnt->idx));
+	wrmsr(global_ctl_msr, rdmsr(global_ctl_msr) | (1ull << cnt->idx));
 }
 
 static void global_disable(pmu_counter_t *cnt)
@@ -159,8 +161,7 @@ static void global_disable(pmu_counter_t *cnt)
 	if (pmu_version() < 2)
 		return;
 
-	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_CTRL) &
-			~(1ull << cnt->idx));
+	wrmsr(global_ctl_msr, rdmsr(global_ctl_msr) & ~(1ull << cnt->idx));
 }
 
 static inline uint32_t get_gp_counter_msr(unsigned int i)
@@ -326,6 +327,23 @@ static void check_counters_many(void)
 	report(i == n, "all counters");
 }
 
+static bool is_the_count_reproducible(pmu_counter_t *cnt)
+{
+	unsigned int i;
+	uint64_t count;
+
+	__measure(cnt, 0);
+	count = cnt->count;
+
+	for (i = 0; i < 10; i++) {
+		__measure(cnt, 0);
+		if (count != cnt->count)
+			return false;
+	}
+
+	return true;
+}
+
 static void check_counter_overflow(void)
 {
 	uint64_t count;
@@ -334,13 +352,14 @@ static void check_counter_overflow(void)
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | (*gp_events)[1].unit_sel /* instructions */,
 	};
+	bool precise_event = is_the_count_reproducible(&cnt);
+
 	__measure(&cnt, 0);
 	count = cnt.count;
 
 	/* clear status before test */
 	if (pmu_version() > 1) {
-		wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL,
-		      rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
+		wrmsr(global_status_clr_msr, rdmsr(global_status_msr));
 	}
 
 	report_prefix_push("overflow");
@@ -373,7 +392,7 @@ static void check_counter_overflow(void)
 		__measure(&cnt, cnt.count);
 
 		report(check_irq() == (i % 2), "irq-%d", i);
-		if (pmu_version() > 1)
+		if (precise_event)
 			report(cnt.count == 1, "cntr-%d", i);
 		else
 			report(cnt.count < 4, "cntr-%d", i);
@@ -381,10 +400,10 @@ static void check_counter_overflow(void)
 		if (pmu_version() < 2)
 			continue;
 
-		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
+		status = rdmsr(global_status_msr);
 		report(status & (1ull << idx), "status-%d", i);
-		wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, status);
-		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
+		wrmsr(global_status_clr_msr, status);
+		status = rdmsr(global_status_msr);
 		report(!(status & (1ull << idx)), "status clear-%d", i);
 	}
 
@@ -492,8 +511,7 @@ static void check_running_counter_wrmsr(void)
 
 	/* clear status before overflow test */
 	if (pmu_version() > 1) {
-		wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL,
-		      rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
+		wrmsr(global_status_clr_msr, rdmsr(global_status_msr));
 	}
 
 	start_event(&evt);
@@ -508,7 +526,7 @@ static void check_running_counter_wrmsr(void)
 	stop_event(&evt);
 
 	if (pmu_version() > 1) {
-		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
+		status = rdmsr(global_status_msr);
 		report(status & 1, "status");
 	}
 
@@ -532,8 +550,7 @@ static void check_emulated_instr(void)
 	report_prefix_push("emulated instruction");
 
 	if (pmu_version() > 1) {
-		wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL,
-		      rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
+		wrmsr(global_status_clr_msr, rdmsr(global_status_msr));
 	}
 
 	start_event(&brnch_cnt);
@@ -576,7 +593,7 @@ static void check_emulated_instr(void)
 	    : "eax", "ebx", "ecx", "edx");
 
 	if (pmu_version() > 1)
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(global_ctl_msr, 0);
 
 	stop_event(&brnch_cnt);
 	stop_event(&instr_cnt);
@@ -590,7 +607,7 @@ static void check_emulated_instr(void)
 
 	if (pmu_version() > 1) {
 		// Additionally check that those counters overflowed properly.
-		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
+		status = rdmsr(global_status_msr);
 		report(status & 1, "instruction counter overflow");
 		report(status & 2, "branch counter overflow");
 	}
@@ -679,7 +696,7 @@ static void set_ref_cycle_expectations(void)
 	if (!nr_gp_counters || !pmu_gp_counter_is_available(2))
 		return;
 
-	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+	wrmsr(global_ctl_msr, 0);
 
 	t0 = fenced_rdtsc();
 	start_event(&cnt);
@@ -722,6 +739,10 @@ static bool detect_intel_pmu(void)
 	gp_counter_base = MSR_IA32_PERFCTR0;
 	gp_select_base = MSR_P6_EVNTSEL0;
 
+	global_status_msr = MSR_CORE_PERF_GLOBAL_STATUS;
+	global_ctl_msr = MSR_CORE_PERF_GLOBAL_CTRL;
+	global_status_clr_msr = MSR_CORE_PERF_GLOBAL_OVF_CTRL;
+
 	report_prefix_push("Intel");
 	return true;
 }
@@ -746,6 +767,10 @@ static bool detect_amd_pmu(void)
 	gp_counter_base = MSR_F15H_PERF_CTR0;
 	gp_select_base = MSR_F15H_PERF_CTL0;
 
+	global_status_msr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
+	global_ctl_msr = MSR_AMD64_PERF_CNTR_GLOBAL_CTL;
+	global_status_clr_msr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR;
+
 	report_prefix_push("AMD");
 	return true;
 }
-- 
2.37.3

From nobody Mon Apr 6 17:05:04 2026
From: Like Xu
X-Google-Original-From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] KVM: x86/pmu: Make part of the Intel v2 PMU MSRs handling x86 generic
Date: Mon, 5 Sep 2022 20:39:42 +0800
Message-Id: <20220905123946.95223-3-likexu@tencent.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Like Xu

The AMD PerfMonV2 defines three registers similar to part of the Intel
v2 PMU registers: the GLOBAL_CTRL, GLOBAL_STATUS and GLOBAL_OVF_CTRL
MSRs. For better code reuse, this specific part of the handling is
extracted to make it generic for x86.

The new non-prefixed pmc_is_enabled() works well, as the legacy AMD
vPMU version is indexed as 1. Note that the vendor-specific
*_is_valid_msr callbacks will continue to be used to avoid cross-vendor
MSR access.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 -
 arch/x86/kvm/pmu.c                     | 55 +++++++++++++++++++++---
 arch/x86/kvm/pmu.h                     | 30 ++++++++++++-
 arch/x86/kvm/svm/pmu.c                 |  9 ----
 arch/x86/kvm/vmx/pmu_intel.c           | 58 +-------------------------
 5 files changed, 80 insertions(+), 73 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index c17e3e96fc1d..6c98f4bb4228 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -13,7 +13,6 @@ BUILD_BUG_ON(1)
  * at the call sites.
  */
 KVM_X86_PMU_OP(hw_event_available)
-KVM_X86_PMU_OP(pmc_is_enabled)
 KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3c42df3a55ff..7002e1b74108 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -83,11 +83,6 @@ void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
 #undef __KVM_X86_PMU_OP
 }
 
-static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	return static_call(kvm_x86_pmu_pmc_is_enabled)(pmc);
-}
-
 static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 {
 	struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work);
@@ -455,11 +450,61 @@ static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr)
 
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u32 msr = msr_info->index;
+
+	switch (msr) {
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+		msr_info->data = pmu->global_status;
+		return 0;
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+		msr_info->data = pmu->global_ctrl;
+		return 0;
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		msr_info->data = 0;
+		return 0;
+	default:
+		break;
+	}
+
 	return static_call(kvm_x86_pmu_get_msr)(vcpu, msr_info);
 }
 
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u32 msr = msr_info->index;
+	u64 data = msr_info->data;
+	u64 diff;
+
+	switch (msr) {
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+		if (msr_info->host_initiated) {
+			pmu->global_status = data;
+			return 0;
+		}
+		break; /* RO MSR */
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+		if (pmu->global_ctrl == data)
+			return 0;
+		if (kvm_valid_perf_global_ctrl(pmu, data)) {
+			diff = pmu->global_ctrl ^ data;
+			pmu->global_ctrl = data;
+			reprogram_counters(pmu, diff);
+			return 0;
+		}
+		break;
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		if (!(data & pmu->global_ovf_ctrl_mask)) {
+			if (!msr_info->host_initiated)
+				pmu->global_status &= ~data;
+			return 0;
+		}
+		break;
+	default:
+		break;
+	}
+
 	kvm_pmu_mark_pmc_in_use(vcpu, msr_info->index);
 	return static_call(kvm_x86_pmu_set_msr)(vcpu, msr_info);
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index e3a3813b6a38..3f9823b503fb 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -28,7 +28,6 @@ struct kvm_event_hw_type_mapping {
 
 struct kvm_pmu_ops {
 	bool (*hw_event_available)(struct kvm_pmc *pmc);
-	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
 		unsigned int idx, u64 *mask);
@@ -191,6 +190,35 @@ static inline void kvm_pmu_request_counter_reprogam(struct kvm_pmc *pmc)
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
 
+/*
+ * Check if a PMC is enabled by comparing it against global_ctrl bits.
+ *
+ * If the current version of vPMU doesn't have global_ctrl MSR,
+ * all vPMCs are enabled (return TRUE).
+ */
+static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmu->version < 2)
+		return true;
+
+	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
+}
+
+static inline void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
+{
+	int bit;
+
+	if (!diff)
+		return;
+
+	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
+		__set_bit(bit, pmu->reprogram_pmi);
+
+	kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index f99f2c869664..3a20972e9f1a 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -76,14 +76,6 @@ static bool amd_hw_event_available(struct kvm_pmc *pmc)
 	return true;
 }
 
-/* check if a PMC is enabled by comparing it against global_ctrl bits. Because
- * AMD CPU doesn't have global_ctrl MSR, all PMCs are enabled (return TRUE).
- */
-static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	return true;
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -218,7 +210,6 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.hw_event_available = amd_hw_event_available,
-	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1d3d0bd3e0e7..cfc6de706bf4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,18 +68,6 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }
 
-static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
-{
-	int bit;
-	struct kvm_pmc *pmc;
-
-	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
-		pmc = intel_pmc_idx_to_pmc(pmu, bit);
-		if (pmc)
-			kvm_pmu_request_counter_reprogam(pmc);
-	}
-}
-
 static bool intel_hw_event_available(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -102,17 +90,6 @@ static bool intel_hw_event_available(struct kvm_pmc *pmc)
 	return true;
 }
 
-/* check if a PMC is enabled by comparing it with globl_ctrl bits. */
-static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (!intel_pmu_has_perf_global_ctrl(pmu))
-		return true;
-
-	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
-}
-
 static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -347,15 +324,6 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
 		msr_info->data = pmu->fixed_ctr_ctrl;
 		return 0;
-	case MSR_CORE_PERF_GLOBAL_STATUS:
-		msr_info->data = pmu->global_status;
-		return 0;
-	case MSR_CORE_PERF_GLOBAL_CTRL:
-		msr_info->data = pmu->global_ctrl;
-		return 0;
-	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-		msr_info->data = 0;
-		return 0;
 	case MSR_IA32_PEBS_ENABLE:
 		msr_info->data = pmu->pebs_enable;
 		return 0;
@@ -404,29 +372,6 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 0;
 		}
 		break;
-	case MSR_CORE_PERF_GLOBAL_STATUS:
-		if (msr_info->host_initiated) {
-			pmu->global_status = data;
-			return 0;
-		}
-		break; /* RO MSR */
-	case MSR_CORE_PERF_GLOBAL_CTRL:
-		if (pmu->global_ctrl == data)
-			return 0;
-		if (kvm_valid_perf_global_ctrl(pmu, data)) {
-			diff = pmu->global_ctrl ^ data;
-			pmu->global_ctrl = data;
-			reprogram_counters(pmu, diff);
-			return 0;
-		}
-		break;
-	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-		if (!(data & pmu->global_ovf_ctrl_mask)) {
-			if (!msr_info->host_initiated)
-				pmu->global_status &= ~data;
-			return 0;
-		}
-		break;
 	case MSR_IA32_PEBS_ENABLE:
 		if (pmu->pebs_enable == data)
 			return 0;
@@ -783,7 +728,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 
 		if (!pmc || !pmc_speculative_in_use(pmc) ||
-		    !intel_pmc_is_enabled(pmc) || !pmc->perf_event)
+		    !pmc_is_enabled(pmc) || !pmc->perf_event)
 			continue;
 
 		hw_idx = pmc->perf_event->hw.idx;
@@ -795,7 +740,6 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.hw_event_available = intel_hw_event_available,
-	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
-- 
2.37.3

From nobody Mon Apr 6 17:05:04 2026
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] KVM: x86/svm/pmu: Add AMD PerfMonV2 support
Date: Mon, 5 Sep 2022 20:39:43 +0800
Message-Id: <20220905123946.95223-4-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>

From: Like Xu

If AMD Performance Monitoring Version 2 (PerfMonV2) is detected by
the guest, it can use a new scheme to manage the Core PMCs using
the new global control and status registers.

In addition to benefiting from the PerfMonV2 functionality in the same
way as the host (higher precision), the guest can also reduce the number
of VM-exits by lowering the total number of MSR accesses.

In terms of implementation details, amd_is_valid_msr() is resurrected
since the three newly added MSRs could not be mapped to one vPMC.
The possibility of emulating PerfMonV2 on the mainframe has also
been eliminated for reasons of precision.

Co-developed-by: Sandipan Das
Signed-off-by: Sandipan Das
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c     |  6 +++++
 arch/x86/kvm/svm/pmu.c | 50 +++++++++++++++++++++++++++++++++---------
 arch/x86/kvm/x86.c     | 11 ++++++++++
 3 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 7002e1b74108..56b4f898a246 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -455,12 +455,15 @@ int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr) {
 	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
 		msr_info->data = pmu->global_status;
 		return 0;
 	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
 		msr_info->data = pmu->global_ctrl;
 		return 0;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		msr_info->data = 0;
 		return 0;
 	default:
@@ -479,12 +482,14 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr) {
 	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
 		if (msr_info->host_initiated) {
 			pmu->global_status = data;
 			return 0;
 		}
 		break; /* RO MSR */
 	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
 		if (pmu->global_ctrl == data)
 			return 0;
 		if (kvm_valid_perf_global_ctrl(pmu, data)) {
@@ -495,6 +500,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		}
 		break;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (!(data & pmu->global_ovf_ctrl_mask)) {
 			if (!msr_info->host_initiated)
 				pmu->global_status &= ~data;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 3a20972e9f1a..4c7d408e3caa 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -92,12 +92,6 @@ static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
 }
 
-static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
-{
-	/* All MSRs refer to exactly one PMC, so msr_idx_to_pmc is enough. */
-	return false;
-}
-
 static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -109,6 +103,29 @@ static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
 	return pmc;
 }
 
+static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	switch (msr) {
+	case MSR_K7_EVNTSEL0 ... MSR_K7_PERFCTR3:
+		return pmu->version > 0;
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+		return guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE);
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
+		return pmu->version > 1;
+	default:
+		if (msr > MSR_F15H_PERF_CTR5 &&
+		    msr < MSR_F15H_PERF_CTL0 + 2 * KVM_AMD_PMC_MAX_GENERIC)
+			return pmu->version > 1;
+		break;
+	}
+
+	return amd_msr_idx_to_pmc(vcpu, msr);
+}
+
 static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -162,20 +179,31 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_cpuid_entry2 *entry;
+	union cpuid_0x80000022_ebx ebx;
 
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
+	pmu->version = 1;
+	entry = kvm_find_cpuid_entry_index(vcpu, 0x80000022, 0);
+	if (kvm_pmu_cap.version > 1 && entry && (entry->eax & BIT(0))) {
+		pmu->version = 2;
+		ebx.full = entry->ebx;
+		pmu->nr_arch_gp_counters = min3((unsigned int)ebx.split.num_core_pmc,
+						(unsigned int)kvm_pmu_cap.num_counters_gp,
+						(unsigned int)KVM_AMD_PMC_MAX_GENERIC);
+		pmu->global_ctrl_mask = ~((1ull << pmu->nr_arch_gp_counters) - 1);
+		pmu->global_ovf_ctrl_mask = pmu->global_ctrl_mask;
+	} else if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS_CORE;
-	else
+	} else {
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+	}
 
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
 	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
-	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->nr_arch_fixed_counters = 0;
-	pmu->global_status = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
 }
 
@@ -206,6 +234,8 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 		pmc_stop_counter(pmc);
 		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
 	}
+
+	pmu->global_ctrl = pmu->global_status = 0;
 }
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9738efd8425..96bb01c5eab8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1424,6 +1424,11 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_CORE_PERF_FIXED_CTR_CTRL,
 	MSR_CORE_PERF_GLOBAL_STATUS, MSR_CORE_PERF_GLOBAL_CTRL,
 	MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+
+	MSR_AMD64_PERF_CNTR_GLOBAL_CTL,
+	MSR_AMD64_PERF_CNTR_GLOBAL_STATUS,
+	MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR,
+
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
 	MSR_ARCH_PERFMON_PERFCTR0 + 2, MSR_ARCH_PERFMON_PERFCTR0 + 3,
 	MSR_ARCH_PERFMON_PERFCTR0 + 4, MSR_ARCH_PERFMON_PERFCTR0 + 5,
@@ -3856,6 +3861,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (kvm_pmu_is_valid_msr(vcpu, msr))
 			return kvm_pmu_set_msr(vcpu, msr_info);
 		/*
@@ -3959,6 +3967,9 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (kvm_pmu_is_valid_msr(vcpu, msr_info->index))
 			return kvm_pmu_get_msr(vcpu, msr_info);
 		/*
-- 
2.37.3
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] KVM: x86/cpuid: Add AMD CPUID ExtPerfMonAndDbg leaf 0x80000022
Date: Mon, 5 Sep 2022 20:39:44 +0800
Message-Id: <20220905123946.95223-5-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>

From: Sandipan Das

CPUID leaf 0x80000022 i.e. ExtPerfMonAndDbg advertises some new
performance monitoring features for AMD processors.

Bit 0 of EAX indicates support for Performance Monitoring Version 2
(PerfMonV2) features. If found to be set during PMU initialization,
the EBX bits of the same CPUID function can be used to determine
the number of available PMCs for different PMU types.

Expose the relevant bits via KVM_GET_SUPPORTED_CPUID so that the
guests can make use of the PerfMonV2 features.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Sandipan Das
---
 arch/x86/include/asm/perf_event.h |  8 ++++++++
 arch/x86/kvm/cpuid.c              | 21 ++++++++++++++++++++-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index f6fc8dd51ef4..c848f504e467 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -214,6 +214,14 @@ union cpuid_0x80000022_ebx {
 	unsigned int full;
 };
 
+union cpuid_0x80000022_eax {
+	struct {
+		/* Performance Monitoring Version 2 Supported */
+		unsigned int perfmon_v2:1;
+	} split;
+	unsigned int full;
+};
+
 struct x86_pmu_capability {
 	int version;
 	int num_counters_gp;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 75dcf7a72605..08a29ab096d2 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -1094,7 +1094,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->edx = 0;
 		break;
 	case 0x80000000:
-		entry->eax = min(entry->eax, 0x80000021);
+		entry->eax = min(entry->eax, 0x80000022);
 		/*
 		 * Serializing LFENCE is reported in a multitude of ways, and
 		 * NullSegClearsBase is not reported in CPUID on Zen2; help
@@ -1203,6 +1203,25 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
 			entry->eax |= BIT(6);
 		break;
+	/* AMD Extended Performance Monitoring and Debug */
+	case 0x80000022: {
+		union cpuid_0x80000022_eax eax;
+		union cpuid_0x80000022_ebx ebx;
+
+		entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+		if (!enable_pmu)
+			break;
+
+		if (kvm_pmu_cap.version > 1) {
+			/* AMD PerfMon is only supported up to V2 in the KVM. */
+			eax.split.perfmon_v2 = 1;
+			ebx.split.num_core_pmc = min(kvm_pmu_cap.num_counters_gp,
+						     KVM_AMD_PMC_MAX_GENERIC);
+		}
+		entry->eax = eax.full;
+		entry->ebx = ebx.full;
+		break;
+	}
 	/*Add support for Centaur's CPUID instruction*/
 	case 0xC0000000:
 		/*Just support up to 0xC0000004 now*/
-- 
2.37.3