From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:48 -0800
Message-ID: <20231110022857.1273836-2-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 01/10] KVM: x86/pmu: Zero out PMU metadata on AMD if PMU is disabled

Move the purging of common PMU metadata from intel_pmu_refresh() to
kvm_pmu_refresh(), and invoke the vendor refresh() hook if and only if
the VM is supposed to have a vPMU.

KVM already denies access to the PMU based on kvm->arch.enable_pmu, as
get_gp_pmc_amd() returns NULL for all PMCs in that case, i.e. KVM
already violates AMD's architecture by not virtualizing a PMU (kernels
have long since learned to not panic when the PMU is unavailable).  But
configuring the PMU as if it were enabled causes unwanted side effects,
e.g. calls to kvm_pmu_trigger_event() waste an absurd number of cycles
due to the all_valid_pmc_idx bitmap being non-zero.

Fixes: b1d66dad65dc ("KVM: x86/svm: Add module param to control PMU virtualization")
Reported-by: Konstantin Khorenko <khorenko@virtuozzo.com>
Closes: https://lore.kernel.org/all/20231109180646.2963718-2-khorenko@virtuozzo.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/pmu.c           | 20 ++++++++++++++++++--
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++--------------
 2 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 1b74a29ed250..b52bab7dc422 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -739,6 +739,8 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
+        struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
         if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
                 return;
 
@@ -748,8 +750,22 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
          */
         kvm_pmu_reset(vcpu);
 
-        bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
-        static_call(kvm_x86_pmu_refresh)(vcpu);
+        pmu->version = 0;
+        pmu->nr_arch_gp_counters = 0;
+        pmu->nr_arch_fixed_counters = 0;
+        pmu->counter_bitmask[KVM_PMC_GP] = 0;
+        pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+        pmu->reserved_bits = 0xffffffff00200000ull;
+        pmu->raw_event_mask = X86_RAW_EVENT_MASK;
+        pmu->global_ctrl_mask = ~0ull;
+        pmu->global_status_mask = ~0ull;
+        pmu->fixed_ctr_ctrl_mask = ~0ull;
+        pmu->pebs_enable_mask = ~0ull;
+        pmu->pebs_data_cfg_mask = ~0ull;
+        bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+
+        if (vcpu->kvm->arch.enable_pmu)
+                static_call(kvm_x86_pmu_refresh)(vcpu);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c3a841d8df27..0d2fd9fdcf4b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -463,19 +463,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         u64 counter_mask;
         int i;
 
-        pmu->nr_arch_gp_counters = 0;
-        pmu->nr_arch_fixed_counters = 0;
-        pmu->counter_bitmask[KVM_PMC_GP] = 0;
-        pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
-        pmu->version = 0;
-        pmu->reserved_bits = 0xffffffff00200000ull;
-        pmu->raw_event_mask = X86_RAW_EVENT_MASK;
-        pmu->global_ctrl_mask = ~0ull;
-        pmu->global_status_mask = ~0ull;
-        pmu->fixed_ctr_ctrl_mask = ~0ull;
-        pmu->pebs_enable_mask = ~0ull;
-        pmu->pebs_data_cfg_mask = ~0ull;
-
         memset(&lbr_desc->records, 0, sizeof(lbr_desc->records));
 
         /*
@@ -487,8 +474,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                 return;
 
         entry = kvm_find_cpuid_entry(vcpu, 0xa);
-        if (!entry || !vcpu->kvm->arch.enable_pmu)
+        if (!entry)
                 return;
+
         eax.full = entry->eax;
         edx.full = entry->edx;
 
-- 
2.42.0.869.gea05f2083d-goog
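The shape of the fix above is "reset common state unconditionally, configure
vendor state conditionally".  A standalone userspace sketch of that pattern
(toy types and values, not KVM's actual structures):

/*
 * Standalone sketch of the refresh split (toy fields, not KVM code):
 * common code zeroes everything so a disabled PMU is fully inert, and
 * the vendor hook only runs when the VM is supposed to have a vPMU.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct pmu {
        int nr_gp_counters;
        unsigned long long all_valid_pmc_idx;
};

static void vendor_refresh(struct pmu *pmu)
{
        pmu->nr_gp_counters = 6;
        pmu->all_valid_pmc_idx = (1ull << 6) - 1;
}

static void pmu_refresh(struct pmu *pmu, bool enable_pmu)
{
        memset(pmu, 0, sizeof(*pmu));        /* architecturally "no PMU" */

        if (enable_pmu)
                vendor_refresh(pmu);
}

int main(void)
{
        struct pmu pmu;

        pmu_refresh(&pmu, false);
        /* Empty bitmap => trigger_event-style loops cost ~nothing. */
        printf("valid bitmap when disabled: 0x%llx\n", pmu.all_valid_pmc_idx);
        return 0;
}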
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:49 -0800
Message-ID: <20231110022857.1273836-3-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 02/10] KVM: x86/pmu: Add common define to capture fixed counters offset

Add a common define to "officially" solidify KVM's split of counters,
i.e. to commit to using bits 31:0 to track general purpose counters and
bits 63:32 to track fixed counters (which only Intel supports).  KVM
already bleeds this behavior all over common PMU code, and adding a
KVM-defined macro allows clarifying that the value is a _base_, as
opposed to the _flag_ that is used to access fixed PMCs via RDPMC
(which perf confusingly calls INTEL_PMC_FIXED_RDPMC_BASE).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
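A standalone sketch of the base-versus-flag distinction (the two constants
mirror the kernel's definitions; nothing below is KVM code):

/*
 * Contrast KVM's internal PMC indexing, where fixed counter N lives at
 * index 32 + N, with the RDPMC ECX encoding, where fixed counter N is
 * selected by setting the "fixed" *flag* (bit 30) plus the counter
 * number in the low bits.
 */
#include <stdio.h>

#define KVM_FIXED_PMC_BASE_IDX          32              /* a base: add it */
#define INTEL_PMC_FIXED_RDPMC_BASE      (1u << 30)      /* a flag: OR it in */

int main(void)
{
        unsigned int fixed = 1;        /* fixed counter 1 */

        /* Internal bitmap index used by common KVM PMU code. */
        printf("internal idx = %u\n", KVM_FIXED_PMC_BASE_IDX + fixed);

        /* What a guest puts in ECX to read the same counter via RDPMC. */
        printf("rdpmc ecx    = 0x%x\n", INTEL_PMC_FIXED_RDPMC_BASE | fixed);
        return 0;
}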
 arch/x86/kvm/pmu.c           |  8 ++++----
 arch/x86/kvm/pmu.h           |  4 +++-
 arch/x86/kvm/vmx/pmu_intel.c | 12 ++++++------
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b52bab7dc422..714fa6dd912e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -67,7 +67,7 @@ static const struct x86_cpu_id vmx_pebs_pdist_cpu[] = {
  * all perf counters (both gp and fixed). The mapping relationship
  * between pmc and perf counters is as the following:
  * * Intel: [0 .. KVM_INTEL_PMC_MAX_GENERIC-1] <=> gp counters
- *          [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2] <=> fixed
+ *          [KVM_FIXED_PMC_BASE_IDX .. KVM_FIXED_PMC_BASE_IDX + 2] <=> fixed
  * * AMD:   [0 .. AMD64_NUM_COUNTERS-1] and, for families 15H
  *          and later, [0 .. AMD64_NUM_COUNTERS_CORE-1] <=> gp counters
  */
@@ -411,7 +411,7 @@ static bool is_gp_event_allowed(struct kvm_x86_pmu_event_filter *f,
 static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter,
                                    int idx)
 {
-        int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
+        int fixed_idx = idx - KVM_FIXED_PMC_BASE_IDX;
 
         if (filter->action == KVM_PMU_EVENT_DENY &&
             test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
@@ -465,7 +465,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
         if (pmc_is_fixed(pmc)) {
                 fixed_ctr_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-                                                  pmc->idx - INTEL_PMC_IDX_FIXED);
+                                                  pmc->idx - KVM_FIXED_PMC_BASE_IDX);
                 if (fixed_ctr_ctrl & 0x1)
                         eventsel |= ARCH_PERFMON_EVENTSEL_OS;
                 if (fixed_ctr_ctrl & 0x2)
@@ -831,7 +831,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
                 select_user = config & ARCH_PERFMON_EVENTSEL_USR;
         } else {
                 config = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
-                                          pmc->idx - INTEL_PMC_IDX_FIXED);
+                                          pmc->idx - KVM_FIXED_PMC_BASE_IDX);
                 select_os = config & 0x1;
                 select_user = config & 0x2;
         }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 87ecf22f5b25..7ffa4f1dedb0 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -18,6 +18,8 @@
 #define VMWARE_BACKDOOR_PMC_REAL_TIME        0x10001
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME    0x10002
 
+#define KVM_FIXED_PMC_BASE_IDX               INTEL_PMC_IDX_FIXED
+
 struct kvm_pmu_ops {
         struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
@@ -130,7 +132,7 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
 
         if (pmc_is_fixed(pmc))
                 return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-                                        pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+                                        pmc->idx - KVM_FIXED_PMC_BASE_IDX) & 0x3;
 
         return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
 }
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 0d2fd9fdcf4b..61252bb733c4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -42,18 +42,18 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 
                 pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
 
-                __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
+                __set_bit(KVM_FIXED_PMC_BASE_IDX + i, pmu->pmc_in_use);
                 kvm_pmu_request_counter_reprogram(pmc);
         }
 }
 
 static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-        if (pmc_idx < INTEL_PMC_IDX_FIXED) {
+        if (pmc_idx < KVM_FIXED_PMC_BASE_IDX) {
                 return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
                                   MSR_P6_EVNTSEL0);
         } else {
-                u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
+                u32 idx = pmc_idx - KVM_FIXED_PMC_BASE_IDX;
 
                 return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
         }
@@ -508,7 +508,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
                 pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));
         counter_mask = ~(((1ull << pmu->nr_arch_gp_counters) - 1) |
-                (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
+                (((1ull << pmu->nr_arch_fixed_counters) - 1) << KVM_FIXED_PMC_BASE_IDX));
         pmu->global_ctrl_mask = counter_mask;
 
         /*
@@ -552,7 +552,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                         pmu->reserved_bits &= ~ICL_EVENTSEL_ADAPTIVE;
                         for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
                                 pmu->fixed_ctr_ctrl_mask &=
-                                        ~(1ULL << (INTEL_PMC_IDX_FIXED + i * 4));
+                                        ~(1ULL << (KVM_FIXED_PMC_BASE_IDX + i * 4));
                         }
                         pmu->pebs_data_cfg_mask = ~0xff00000full;
                 } else {
@@ -578,7 +578,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
         for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
                 pmu->fixed_counters[i].type = KVM_PMC_FIXED;
                 pmu->fixed_counters[i].vcpu = vcpu;
-                pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
+                pmu->fixed_counters[i].idx = i + KVM_FIXED_PMC_BASE_IDX;
                 pmu->fixed_counters[i].current_config = 0;
                 pmu->fixed_counters[i].eventsel = intel_get_fixed_pmc_eventsel(i);
         }
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:50 -0800
Message-ID: <20231110022857.1273836-4-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 03/10] KVM: x86/pmu: Move pmc_idx => pmc translation helper to common code

Add a common helper for *internal* PMC lookups, and delete the ops hook
and Intel's implementation.  Keep AMD's implementation, but rename it to
amd_pmu_get_pmc() to make it somewhat more obvious that it's suited for
both KVM-internal and guest-initiated lookups.

Because KVM tracks all counters in a single bitmap, getting a counter
when iterating over a bitmap, e.g. of all valid PMCs, requires a small
amount of math that, while simple, isn't super obvious and doesn't use
the same semantics as PMC lookups from RDPMC!  Although AMD doesn't
support fixed counters, the common PMU code still behaves as if there is
a split, the high half of which just happens to always be empty.

Opportunistically add a comment to explain both what is going on, and
why KVM uses a single bitmap, e.g. the boilerplate for iterating over
separate bitmaps could be done via macros, so it's not (just) about
deduplicating code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
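A standalone sketch of the index math the new helper performs (toy types,
not the kernel's): with 8 GP and 3 fixed counters, index 5 resolves to a GP
counter, index 33 to fixed counter 1, and anything in the hole to NULL.

#include <stdio.h>

#define FIXED_BASE_IDX 32

static const char *idx_to_pmc(int idx, int nr_gp, int nr_fixed)
{
        static char buf[32];

        if (idx < nr_gp) {
                snprintf(buf, sizeof(buf), "gp_counters[%d]", idx);
                return buf;
        }
        idx -= FIXED_BASE_IDX;
        if (idx >= 0 && idx < nr_fixed) {
                snprintf(buf, sizeof(buf), "fixed_counters[%d]", idx);
                return buf;
        }
        return "NULL";
}

int main(void)
{
        printf("idx  5 -> %s\n", idx_to_pmc(5, 8, 3));   /* gp_counters[5] */
        printf("idx 20 -> %s\n", idx_to_pmc(20, 8, 3));  /* NULL (the hole) */
        printf("idx 33 -> %s\n", idx_to_pmc(33, 8, 3));  /* fixed_counters[1] */
        return 0;
}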
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 -
 arch/x86/kvm/pmu.c                     |  8 +++----
 arch/x86/kvm/pmu.h                     | 29 +++++++++++++++++++++++++-
 arch/x86/kvm/svm/pmu.c                 |  7 +++----
 arch/x86/kvm/vmx/pmu_intel.c           | 15 +-------------
 5 files changed, 36 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index d7eebee4450c..e5e7f036587f 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -12,7 +12,6 @@ BUILD_BUG_ON(1)
  * a NULL definition, for example if "static_call_cond()" will be used
  * at the call sites.
  */
-KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)
 KVM_X86_PMU_OP(is_valid_rdpmc_ecx)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 714fa6dd912e..6ee05ad35f55 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -505,7 +505,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
         int bit;
 
         for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
-                struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
+                struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
                 if (unlikely(!pmc)) {
                         clear_bit(bit, pmu->reprogram_pmi);
@@ -715,7 +715,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
         bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
         for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
                 if (!pmc)
                         continue;
 
@@ -791,7 +791,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
                       pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
         for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
 
                 if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
                         pmc_stop_counter(pmc);
@@ -846,7 +846,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
         int i;
 
         for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
 
                 if (!pmc || !pmc_event_is_allowed(pmc))
                         continue;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7ffa4f1dedb0..2235772a495b 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -4,6 +4,8 @@
 
 #include 
 
+#include 
+
 #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
 #define pmu_to_vcpu(pmu)  (container_of((pmu), struct kvm_vcpu, arch.pmu))
 #define pmc_to_pmu(pmc)   (&(pmc)->vcpu->arch.pmu)
@@ -21,7 +23,6 @@
 #define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
 
 struct kvm_pmu_ops {
-        struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
                                             unsigned int idx, u64 *mask);
         struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, u32 msr);
@@ -56,6 +57,32 @@ static inline bool kvm_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
         return pmu->version > 1;
 }
 
+/*
+ * KVM tracks all counters in 64-bit bitmaps, with general purpose counters
+ * mapped to bits 31:0 and fixed counters mapped to 63:32, e.g. fixed counter 0
+ * is tracked internally via index 32.  On Intel (AMD doesn't support fixed
+ * counters), this mirrors how fixed counters are mapped to PERF_GLOBAL_CTRL
+ * and similar MSRs, i.e. tracking fixed counters at base index 32 reduces the
+ * amount of boilerplate needed to iterate over PMCs *and* simplifies common
+ * enabling/disable/reset operations.
+ *
+ * WARNING!  This helper is only for lookups that are initiated by KVM, it is
+ * NOT safe for guest lookups, e.g. will do the wrong thing if passed a raw
+ * ECX value from RDPMC (fixed counters are accessed by setting bit 30 in ECX
+ * for RDPMC, not by adding 32 to the fixed counter index).
+ */
+static inline struct kvm_pmc *kvm_pmc_idx_to_pmc(struct kvm_pmu *pmu, int idx)
+{
+        if (idx < pmu->nr_arch_gp_counters)
+                return &pmu->gp_counters[idx];
+
+        idx -= KVM_FIXED_PMC_BASE_IDX;
+        if (idx >= 0 && idx < pmu->nr_arch_fixed_counters)
+                return &pmu->fixed_counters[idx];
+
+        return NULL;
+}
+
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 1fafc46f61c9..b6c1d1c3f204 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -25,7 +25,7 @@ enum pmu_type {
         PMU_TYPE_EVNTSEL,
 };
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+static struct kvm_pmc *amd_pmu_get_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
         unsigned int num_counters = pmu->nr_arch_gp_counters;
 
@@ -70,7 +70,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
                 return NULL;
         }
 
-        return amd_pmc_idx_to_pmc(pmu, idx);
+        return amd_pmu_get_pmc(pmu, idx);
 }
 
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
@@ -84,7 +84,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
         unsigned int idx, u64 *mask)
 {
-        return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx);
+        return amd_pmu_get_pmc(vcpu_to_pmu(vcpu), idx);
 }
 
 static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
@@ -226,7 +226,6 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
-        .pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
         .rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
         .msr_idx_to_pmc = amd_msr_idx_to_pmc,
         .is_valid_rdpmc_ecx = amd_is_valid_rdpmc_ecx,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 61252bb733c4..4254411be467 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -47,18 +47,6 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
         }
 }
 
-static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-        if (pmc_idx < KVM_FIXED_PMC_BASE_IDX) {
-                return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
-                                  MSR_P6_EVNTSEL0);
-        } else {
-                u32 idx = pmc_idx - KVM_FIXED_PMC_BASE_IDX;
-
-                return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
-        }
-}
-
 static u32 intel_rdpmc_get_masked_idx(struct kvm_pmu *pmu, u32 idx)
 {
         /*
@@ -710,7 +698,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 
         for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
                          X86_PMC_IDX_MAX) {
-                pmc = intel_pmc_idx_to_pmc(pmu, bit);
+                pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
                 if (!pmc || !pmc_speculative_in_use(pmc) ||
                     !pmc_is_globally_enabled(pmc) || !pmc->perf_event)
@@ -727,7 +715,6 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
-        .pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
         .rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
         .msr_idx_to_pmc = intel_msr_idx_to_pmc,
         .is_valid_rdpmc_ecx = intel_is_valid_rdpmc_ecx,
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:51 -0800
Message-ID: <20231110022857.1273836-5-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 04/10] KVM: x86/pmu: Snapshot and clear reprogramming bitmap before reprogramming

Refactor the handling of the reprogramming bitmap to snapshot and clear
to-be-processed bits before doing the reprogramming, and then explicitly
set bits for PMCs that need to be reprogrammed (again).

This will allow adding a macro to iterate over all valid PMCs without
having to add special handling for the reprogramming bit, which (a) can
have bits set for non-existent PMCs and (b) needs to clear such bits to
avoid wasting cycles in perpetuity.

Note, the existing behavior of clearing bits after reprogramming does
NOT have a race with kvm_vm_ioctl_set_pmu_event_filter().  Setting a new
PMU filter synchronizes SRCU _before_ setting the bitmap, i.e.
guarantees that the vCPU isn't in the middle of reprogramming with a
stale filter prior to setting the bitmap.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
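A standalone sketch of the snapshot-and-clear idiom this patch adopts
(C11 atomics instead of the kernel's atomic64_* helpers; not KVM code):

/*
 * Atomically fetch-and-clear only the bits we are about to process, so
 * bits set concurrently by other writers are never lost, then re-set a
 * bit if processing it fails and must be retried on a later pass.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned long long pending;

/* Hypothetical per-bit work; returns false on transient failure. */
static bool process(int bit)
{
        return bit != 3;        /* pretend bit 3 hits contention */
}

static void handle_pending(void)
{
        /* Snapshot, then clear exactly the snapshotted bits. */
        unsigned long long snap = atomic_load(&pending);

        atomic_fetch_and(&pending, ~snap);

        for (int bit = 0; bit < 64; bit++) {
                if (!(snap & (1ull << bit)))
                        continue;
                if (!process(bit))        /* retry on the next pass */
                        atomic_fetch_or(&pending, 1ull << bit);
        }
}

int main(void)
{
        atomic_fetch_or(&pending, 1ull << 3 | 1ull << 7);
        handle_pending();
        printf("still pending: 0x%llx\n", atomic_load(&pending)); /* 0x8 */
        return 0;
}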
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              | 52 ++++++++++++++++++---------------
 2 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d8bc9ba88cfc..22ba24d0fd4f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -535,6 +535,7 @@ struct kvm_pmc {
 #define KVM_PMC_MAX_FIXED        3
 #define MSR_ARCH_PERFMON_FIXED_CTR_MAX   (MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC  6
+
 struct kvm_pmu {
         u8 version;
         unsigned nr_arch_gp_counters;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 6ee05ad35f55..ee921b24d9e4 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -444,7 +444,7 @@ static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
                check_pmu_event_filter(pmc);
 }
 
-static void reprogram_counter(struct kvm_pmc *pmc)
+static int reprogram_counter(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
         u64 eventsel = pmc->eventsel;
@@ -455,7 +455,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
         emulate_overflow = pmc_pause_counter(pmc);
 
         if (!pmc_event_is_allowed(pmc))
-                goto reprogram_complete;
+                return 0;
 
         if (emulate_overflow)
                 __kvm_perf_overflow(pmc, false);
@@ -476,43 +476,49 @@ static void reprogram_counter(struct kvm_pmc *pmc)
         }
 
         if (pmc->current_config == new_config && pmc_resume_counter(pmc))
-                goto reprogram_complete;
+                return 0;
 
         pmc_release_perf_event(pmc);
 
         pmc->current_config = new_config;
 
-        /*
-         * If reprogramming fails, e.g. due to contention, leave the counter's
-         * reprogram bit set, i.e. opportunistically try again on the next PMU
-         * refresh.  Don't make a new request as doing so can stall the guest
-         * if reprogramming repeatedly fails.
-         */
-        if (pmc_reprogram_counter(pmc, PERF_TYPE_RAW,
-                                  (eventsel & pmu->raw_event_mask),
-                                  !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
-                                  !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
-                                  eventsel & ARCH_PERFMON_EVENTSEL_INT))
-                return;
-
-reprogram_complete:
-        clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
+        return pmc_reprogram_counter(pmc, PERF_TYPE_RAW,
+                                     (eventsel & pmu->raw_event_mask),
+                                     !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
+                                     !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
+                                     eventsel & ARCH_PERFMON_EVENTSEL_INT);
 }
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
+        DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
         int bit;
 
-        for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
+        bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);
+
+        /*
+         * The reprogramming bitmap can be written asynchronously by something
+         * other than the task that holds vcpu->mutex, take care to clear only
+         * the bits that will actually be processed.
+         */
+        BUILD_BUG_ON(sizeof(bitmap) != sizeof(atomic64_t));
+        atomic64_andnot(*(s64 *)bitmap, &pmu->__reprogram_pmi);
+
+        for_each_set_bit(bit, bitmap, X86_PMC_IDX_MAX) {
                 struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
-                if (unlikely(!pmc)) {
-                        clear_bit(bit, pmu->reprogram_pmi);
+                if (unlikely(!pmc))
                         continue;
-                }
 
-                reprogram_counter(pmc);
+                /*
+                 * If reprogramming fails, e.g. due to contention, re-set the
+                 * reprogram bit, i.e. opportunistically try again on the
+                 * next PMU refresh.  Don't make a new request as doing so can
+                 * stall the guest if reprogramming repeatedly fails.
+                 */
+                if (reprogram_counter(pmc))
+                        set_bit(pmc->idx, pmu->reprogram_pmi);
         }
 
         /*
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:52 -0800
Message-ID: <20231110022857.1273836-6-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 05/10] KVM: x86/pmu: Add macros to iterate over all PMCs given a bitmap

Add and use kvm_for_each_pmc() to dedup a variety of open coded for-loops
that iterate over valid PMCs given a bitmap (and because seeing checkpatch
whine about bad macro style is always amusing).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
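The "if (!...) continue; else" shape is what lets the macro both skip NULL
entries and bind the statement that follows as the loop body.  A standalone
sketch of the same trick (toy data, not KVM's types):

#include <stdio.h>

static const char *items[8] = { "a", NULL, "c", NULL, NULL, "f" };

static const char *get(int i)
{
        return items[i];
}

/* Filtering iterator: the trailing "else" binds the caller's statement. */
#define for_each_valid_item(p, i)                \
        for (i = 0; i < 8; i++)                  \
                if (!((p) = get(i)))             \
                        continue;                \
                else

int main(void)
{
        const char *p;
        int i;

        for_each_valid_item(p, i)
                printf("item %d = %s\n", i, p);        /* prints a, c, f */
        return 0;
}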
 arch/x86/kvm/pmu.c           | 26 +++++++-------------------
 arch/x86/kvm/pmu.h           |  6 ++++++
 arch/x86/kvm/vmx/pmu_intel.c |  7 ++-----
 3 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index ee921b24d9e4..0e2175170038 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -493,6 +493,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
         DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+        struct kvm_pmc *pmc;
         int bit;
 
         bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);
@@ -505,12 +506,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
         BUILD_BUG_ON(sizeof(bitmap) != sizeof(atomic64_t));
         atomic64_andnot(*(s64 *)bitmap, &pmu->__reprogram_pmi);
 
-        for_each_set_bit(bit, bitmap, X86_PMC_IDX_MAX) {
-                struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
-
-                if (unlikely(!pmc))
-                        continue;
-
+        kvm_for_each_pmc(pmu, pmc, bit, bitmap) {
                 /*
                  * If reprogramming fails, e.g. due to contention, re-set the
                  * reprogram bit, i.e. opportunistically try again on the
@@ -720,11 +716,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 
         bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
-        for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-                if (!pmc)
-                        continue;
-
+        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
                 pmc_stop_counter(pmc);
                 pmc->counter = 0;
                 pmc->emulated_counter = 0;
@@ -796,10 +788,8 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
         bitmap_andnot(bitmask, pmu->all_valid_pmc_idx,
                       pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
-        for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-
-                if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
+        kvm_for_each_pmc(pmu, pmc, i, bitmask) {
+                if (pmc->perf_event && !pmc_speculative_in_use(pmc))
                         pmc_stop_counter(pmc);
         }
 
@@ -851,10 +841,8 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
         struct kvm_pmc *pmc;
         int i;
 
-        for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-
-                if (!pmc || !pmc_event_is_allowed(pmc))
+        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
+                if (!pmc_event_is_allowed(pmc))
                         continue;
 
                 /* Ignore checks for edge detect, pin control, invert and CMASK bits */
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 2235772a495b..cb62a4e44849 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -83,6 +83,12 @@ static inline struct kvm_pmc *kvm_pmc_idx_to_pmc(struct kvm_pmu *pmu, int idx)
         return NULL;
 }
 
+#define kvm_for_each_pmc(pmu, pmc, i, bitmap)                   \
+        for_each_set_bit(i, bitmap, X86_PMC_IDX_MAX)            \
+                if (!(pmc = kvm_pmc_idx_to_pmc(pmu, i)))        \
+                        continue;                               \
+                else                                            \
+
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 4254411be467..ee3e122d3617 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -696,11 +696,8 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
         struct kvm_pmc *pmc = NULL;
         int bit, hw_idx;
 
-        for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
-                         X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, bit);
-
-                if (!pmc || !pmc_speculative_in_use(pmc) ||
+        kvm_for_each_pmc(pmu, pmc, bit, (unsigned long *)&pmu->global_ctrl) {
+                if (!pmc_speculative_in_use(pmc) ||
                     !pmc_is_globally_enabled(pmc) || !pmc->perf_event)
                         continue;
 
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:53 -0800
Message-ID: <20231110022857.1273836-7-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 06/10] KVM: x86/pmu: Process only enabled PMCs when emulating events in software

Mask off disabled counters based on PERF_GLOBAL_CTRL *before* iterating
over PMCs to emulate (branch) instructions retired events in software.
In the common case where the guest isn't utilizing the PMU, pre-checking
for enabled counters turns a relatively expensive search into a few AND
uops and a Jcc.

Sadly, PMUs without PERF_GLOBAL_CTRL, e.g. most existing AMD CPUs, are
out of luck as there is no way to check that a PMC isn't being used
without checking the PMC's event selector.

Cc: Konstantin Khorenko
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
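A standalone sketch of the pre-masking (toy bitmaps, not KVM code): when no
counter is both valid and globally enabled, the AND yields zero and the
per-counter search is skipped outright.

#include <stdint.h>
#include <stdio.h>

static void trigger_event(uint64_t all_valid, uint64_t global_ctrl)
{
        uint64_t candidates = all_valid & global_ctrl;

        if (!candidates)        /* common case: guest PMU idle */
                return;

        for (int i = 0; i < 64; i++)
                if (candidates & (1ull << i))
                        printf("emulate event on PMC %d\n", i);
}

int main(void)
{
        /* GP counters 0-3 (bits 3:0) plus fixed counters 0-2 (bits 34:32). */
        uint64_t all_valid = 0xfull | (0x7ull << 32);

        trigger_event(all_valid, 0);                    /* skipped outright */
        trigger_event(all_valid, 1ull << 33 | 1);       /* PMCs 0 and 33 */
        return 0;
}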
 arch/x86/kvm/pmu.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0e2175170038..488d21024a92 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -837,11 +837,20 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 {
+        DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
         struct kvm_pmc *pmc;
         int i;
 
-        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
+        BUILD_BUG_ON(sizeof(pmu->global_ctrl) * BITS_PER_BYTE != X86_PMC_IDX_MAX);
+
+        if (!kvm_pmu_has_perf_global_ctrl(pmu))
+                bitmap_copy(bitmap, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+        else if (!bitmap_and(bitmap, pmu->all_valid_pmc_idx,
+                             (unsigned long *)&pmu->global_ctrl, X86_PMC_IDX_MAX))
+                return;
+
+        kvm_for_each_pmc(pmu, pmc, i, bitmap) {
                 if (!pmc_event_is_allowed(pmc))
                         continue;
 
-- 
2.42.0.869.gea05f2083d-goog
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson
Date: Thu, 9 Nov 2023 18:28:54 -0800
Message-ID: <20231110022857.1273836-8-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
Subject: [PATCH 07/10] KVM: x86/pmu: Snapshot event selectors that KVM emulates in software

Snapshot the event selectors for the events that KVM emulates in
software, which are currently instructions retired and branch
instructions retired.  The event selectors are tied to the underlying
CPU, i.e. are constant for a given platform even though perf doesn't
manage the mappings as such.

Getting the event selectors from perf isn't exactly cheap, especially if
mitigations are enabled, as at least one indirect call is involved.

Snapshot the values in KVM instead of optimizing perf as working with
the raw event selectors will be required if KVM ever wants to emulate
events that aren't part of perf's uABI, i.e. that don't have an
"enum perf_hw_id" entry.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
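A standalone sketch of the XOR-and-mask comparison KVM ends up doing with
the snapshotted selectors (simplified constants that follow the x86
PERFEVTSEL layout; not KVM code):

/*
 * Two event selectors encode the same hardware event if they agree in
 * the event select and unit mask fields; flag bits such as EN/USR/OS
 * are deliberately ignored by the mask.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Event select bits 7:0 and 35:32, unit mask bits 15:8. */
#define RAW_EVENT_MASK        (0xFull << 32 | 0xFFFFull)

static bool same_hw_event(uint64_t eventsel, uint64_t snapshot)
{
        return !((eventsel ^ snapshot) & RAW_EVENT_MASK);
}

int main(void)
{
        uint64_t insns_retired = 0xc0;        /* arch "instructions retired" */

        /* Same event, different enable flag (bit 22): still a match. */
        printf("%d\n", same_hw_event(insns_retired | 1ull << 22, insns_retired));
        /* Different event select: not a match. */
        printf("%d\n", same_hw_event(0xc4, insns_retired));
        return 0;
}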
 arch/x86/kvm/pmu.c        | 17 ++++++++---------
 arch/x86/kvm/pmu.h        | 13 ++++++++++++-
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/x86.c        |  6 +++---
 4 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 488d21024a92..45cb8b2a024b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -29,6 +29,9 @@
 struct x86_pmu_capability __read_mostly kvm_pmu_cap;
 EXPORT_SYMBOL_GPL(kvm_pmu_cap);
 
+struct kvm_pmu_emulated_event_selectors __read_mostly kvm_pmu_eventsel;
+EXPORT_SYMBOL_GPL(kvm_pmu_eventsel);
+
 /* Precise Distribution of Instructions Retired (PDIR) */
 static const struct x86_cpu_id vmx_pebs_pdir_cpu[] = {
         X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, NULL),
@@ -809,13 +812,6 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
         kvm_pmu_request_counter_reprogram(pmc);
 }
 
-static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
-                                             unsigned int perf_hw_id)
-{
-        return !((pmc->eventsel ^ perf_get_hw_event_config(perf_hw_id)) &
-                AMD64_RAW_EVENT_MASK_NB);
-}
-
 static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 {
         bool select_os, select_user;
@@ -835,7 +831,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
         return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
 }
 
-void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 {
         DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -855,7 +851,10 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
                         continue;
 
                 /* Ignore checks for edge detect, pin control, invert and CMASK bits */
-                if (eventsel_match_perf_hw_id(pmc, perf_hw_id) && cpl_is_matched(pmc))
+                if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
+                        continue;
+
+                if (cpl_is_matched(pmc))
                         kvm_pmu_incr_counter(pmc);
         }
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cb62a4e44849..9dc5f549c98c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -22,6 +22,11 @@
 
 #define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
 
+struct kvm_pmu_emulated_event_selectors {
+        u64 INSTRUCTIONS_RETIRED;
+        u64 BRANCH_INSTRUCTIONS_RETIRED;
+};
+
 struct kvm_pmu_ops {
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
                                             unsigned int idx, u64 *mask);
@@ -171,6 +176,7 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
 }
 
 extern struct x86_pmu_capability kvm_pmu_cap;
+extern struct kvm_pmu_emulated_event_selectors kvm_pmu_eventsel;
 
 static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 {
@@ -212,6 +218,11 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
                                           pmu_ops->MAX_NR_GP_COUNTERS);
         kvm_pmu_cap.num_counters_fixed = min(kvm_pmu_cap.num_counters_fixed,
                                              KVM_PMC_MAX_FIXED);
+
+        kvm_pmu_eventsel.INSTRUCTIONS_RETIRED =
+                perf_get_hw_event_config(PERF_COUNT_HW_INSTRUCTIONS);
+        kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED =
+                perf_get_hw_event_config(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
 }
 
 static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
@@ -259,7 +270,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
-void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id);
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c5ec0ef51ff7..cf985085467b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3564,7 +3564,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
                 return 1;
         }
 
-        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED);
 
         if (CC(evmptrld_status == EVMPTRLD_VMFAIL))
                 return nested_vmx_failInvalid(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index efbf52a9dc83..9d9b5f9e4b28 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8839,7 +8839,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
         if (unlikely(!r))
                 return 0;
 
-        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED);
 
         /*
          * rflags is the old, "raw" value of the flags.  The new value has
@@ -9152,9 +9152,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
          */
         if (!ctxt->have_exception ||
             exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
-                kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+                kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED);
                 if (ctxt->is_branch)
-                        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+                        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED);
                 kvm_rip_write(vcpu, ctxt->eip);
                 if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
                         r = kvm_vcpu_do_singlestep(vcpu);
-- 
2.42.0.869.gea05f2083d-goog
From nobody Tue Dec 16 09:01:03 2025
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:55 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-9-seanjc@google.com>
Subject: [PATCH 08/10] KVM: x86/pmu: Expand the comment about what bits are checked when emulating events
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson

Expand the comment about which bits are and aren't checked when emulating
PMC events in software.  As pointed out by Jim, AMD's mask includes bits
35:32, which on Intel overlap with the IN_TX (bit 32) and IN_TXCP (bit 33)
flags as well as two reserved bits (34 and 35).

Checking the IN_TX* bits is actually correct, as it's safe to assert that
the vCPU can't be in an HLE/RTM transaction if KVM is emulating an
instruction, i.e. KVM *shouldn't* count if either of those bits is set.

For the reserved bits, KVM has equal odds of being right if Intel adds new
behavior, i.e. ignoring them is just as likely to be correct as checking
them.

Opportunistically explain *why* the other flags aren't checked.

Suggested-by: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 45cb8b2a024b..ba561849fd3e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -850,7 +850,20 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		if (!pmc_event_is_allowed(pmc))
 			continue;
 
-		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
+		/*
+		 * Ignore checks for edge detect (all events currently emulated
+		 * by KVM are always rising edges), pin control (unsupported
+		 * by modern CPUs), and counter mask and its invert flag (KVM
+		 * doesn't emulate multiple events in a single clock cycle).
+		 *
+		 * Note, the uppermost nibble of AMD's mask overlaps Intel's
+		 * IN_TX (bit 32) and IN_TXCP (bit 33), as well as two reserved
+		 * bits (bits 35:34).  Checking the "in HLE/RTM transaction"
+		 * flags is correct as the vCPU can't be in a transaction if
+		 * KVM is emulating an instruction.  Checking the reserved bits
+		 * might be wrong if they are defined in the future, but so
+		 * could ignoring them, so do the simple thing for now.
+		 */
 		if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
 			continue;
 
-- 
2.42.0.869.gea05f2083d-goog
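As a standalone illustration of the comparison the new comment documents: XOR the two selectors and mask the result with AMD64_RAW_EVENT_MASK_NB, so only the event select and unit mask fields are compared while the flag bits called out above (edge detect, pin control, invert, CMASK) are ignored. The 0xf0000ffff constant below matches the movabs visible in the next patch's disassembly; everything else is illustrative userspace C rather than the kernel's code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Event select bits 7:0 plus 11:8 (held in selector bits 35:32 on AMD)
 * and the unit mask in bits 15:8; edge detect, pin control, invert and
 * CMASK all fall outside this mask.
 */
#define RAW_EVENT_MASK_NB	0xf0000ffffULL

static bool eventsel_matches(uint64_t pmc_eventsel, uint64_t eventsel)
{
	/* XOR leaves only the differing bits; the mask drops ignored flags. */
	return !((pmc_eventsel ^ eventsel) & RAW_EVENT_MASK_NB);
}

int main(void)
{
	/* Same event and unit mask, differing invert flag (bit 23): match. */
	printf("flags differ:     %d\n",
	       eventsel_matches(0x1000000c0ULL | (1ULL << 23), 0x1000000c0ULL));

	/* Different unit mask: no match. */
	printf("unit mask differs: %d\n", eventsel_matches(0x01c0, 0x00c0));
	return 0;
}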
From nobody Tue Dec 16 09:01:03 2025
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:56 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-10-seanjc@google.com>
Subject: [PATCH 09/10] KVM: x86/pmu: Check eventsel first when emulating (branch) insns retired
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson

When triggering events, i.e. emulating PMC events in software, check for
a matching event selector before checking whether the event is allowed.
The "is allowed" check *might* be cheap, but it could also be very costly,
e.g. if userspace has defined a large PMU event filter.  The event
selector check, on the other hand, is all but guaranteed to be <10 uops,
e.g. looks something like:

   0xffffffff8105e615 <+5>:	movabs $0xf0000ffff,%rax
   0xffffffff8105e61f <+15>:	xor    %rdi,%rsi
   0xffffffff8105e622 <+18>:	test   %rax,%rsi
   0xffffffff8105e625 <+21>:	sete   %al

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index ba561849fd3e..a5ea729b16f2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -847,9 +847,6 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		return;
 
 	kvm_for_each_pmc(pmu, pmc, i, bitmap) {
-		if (!pmc_event_is_allowed(pmc))
-			continue;
-
 		/*
 		 * Ignore checks for edge detect (all events currently emulated
 		 * by KVM are always rising edges), pin control (unsupported
@@ -864,11 +861,11 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		 * might be wrong if they are defined in the future, but so
 		 * could ignoring them, so do the simple thing for now.
 		 */
-		if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
+		if (((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB) ||
+		    !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
 			continue;
 
-		if (cpl_is_matched(pmc))
-			kvm_pmu_incr_counter(pmc);
+		kvm_pmu_incr_counter(pmc);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
-- 
2.42.0.869.gea05f2083d-goog
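To see the effect of the reordering, here is a minimal userspace sketch that runs the cheap XOR+mask test before an instrumented stand-in for pmc_event_is_allowed(), so the potentially expensive check only runs for PMCs whose selector already matches. The selector values and the cost model are hypothetical assumptions for illustration, not KVM's actual data.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define RAW_EVENT_MASK_NB	0xf0000ffffULL

static unsigned long expensive_calls;

/* Stand-in for pmc_event_is_allowed(); imagine a large filter walk. */
static bool event_is_allowed(uint64_t eventsel)
{
	(void)eventsel;
	expensive_calls++;
	return true;
}

int main(void)
{
	const uint64_t target = 0x00c4;	/* hypothetical emulated event */
	const uint64_t pmcs[] = { 0x00c0, 0x00c4, 0x02c0, 0x00c4 };
	unsigned long hits = 0;
	size_t i;

	for (i = 0; i < sizeof(pmcs) / sizeof(pmcs[0]); i++) {
		/* Cheap test first: most PMCs are rejected right here... */
		if ((pmcs[i] ^ target) & RAW_EVENT_MASK_NB)
			continue;

		/* ...so the costly check runs only for matching selectors. */
		if (!event_is_allowed(pmcs[i]))
			continue;

		hits++;
	}
	/* Prints hits=2 expensive_calls=2: two PMCs never hit the filter. */
	printf("hits=%lu expensive_calls=%lu\n", hits, expensive_calls);
	return 0;
}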
From nobody Tue Dec 16 09:01:03 2025
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:57 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-11-seanjc@google.com>
Subject: [PATCH 10/10] KVM: x86/pmu: Avoid CPL lookup if PMC enabling for USER and KERNEL is the same
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko, Jim Mattson

Don't bother querying the CPL if a PMC is (not) counting for both USER
and KERNEL, i.e. if the end result is guaranteed to be the same regardless
of the CPL.  Querying the CPL on Intel requires a VMREAD, i.e. isn't free,
whereas a single CMP+Jcc is cheap.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a5ea729b16f2..231604d6dc86 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -828,6 +828,13 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 		select_user = config & 0x2;
 	}
 
+	/*
+	 * Skip the CPL lookup, which isn't free on Intel, if the result will
+	 * be the same regardless of the CPL.
+	 */
+	if (select_os == select_user)
+		return select_os;
+
 	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
 }
 
-- 
2.42.0.869.gea05f2083d-goog
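Finally, a minimal userspace sketch of the short-circuit this patch adds: when a PMC counts in both kernel and user mode (or in neither), the answer is known without querying the CPL, so the instrumented stand-in for the VMREAD-backed CPL lookup below is never invoked for those PMCs. The lookup counter and the fixed return value of get_cpl() are illustrative assumptions, not KVM behavior.

#include <stdbool.h>
#include <stdio.h>

static unsigned long cpl_lookups;

/* Stand-in for the VMREAD-backed CPL query; counts invocations. */
static int get_cpl(void)
{
	cpl_lookups++;
	return 3;	/* pretend the guest is running in user mode */
}

static bool cpl_is_matched(bool select_os, bool select_user)
{
	/*
	 * If the PMC counts in both modes, or in neither, the result is
	 * the same regardless of the CPL, so skip the expensive lookup.
	 */
	if (select_os == select_user)
		return select_os;

	return get_cpl() == 0 ? select_os : select_user;
}

int main(void)
{
	printf("os+user:  %d\n", cpl_is_matched(true, true));	/* no lookup */
	printf("os only:  %d\n", cpl_is_matched(true, false));	/* lookup */
	printf("lookups:  %lu\n", cpl_lookups);			/* prints 1 */
	return 0;
}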