From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 1/6] KVM: x86/pmu: Move PMU reset logic to common x86 code
Date: Mon, 23 Oct 2023 16:39:55 -0700
Message-ID: <20231023234000.2499267-2-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Move the common (or at least "ignored") aspects of resetting the vPMU to
common x86 code, along with the stop/release helpers that are now used
only by the common pmu.c.

There is no need to manually handle fixed counters as all_valid_pmc_idx
tracks both fixed and general purpose counters, and resetting the vPMU
is far from a hot path, i.e. the extra bit of overhead to get the PMC
from the index is a non-issue.

Zero fixed_ctr_ctrl in common code even though it's Intel specific.
Ensuring it's zero doesn't harm AMD/SVM in any way, and stopping the
fixed counters via all_valid_pmc_idx, but not clearing the associated
control bits, would be odd/confusing.

Make the .reset() hook optional as SVM no longer needs vendor specific
handling.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  2 +-
 arch/x86/kvm/pmu.c                     | 40 +++++++++++++++++++++++++-
 arch/x86/kvm/pmu.h                     | 18 ------------
 arch/x86/kvm/svm/pmu.c                 | 16 -----------
 arch/x86/kvm/vmx/pmu_intel.c           | 20 -------------
 5 files changed, 40 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 6c98f4bb4228..058bc636356a 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -22,7 +22,7 @@ KVM_X86_PMU_OP(get_msr)
 KVM_X86_PMU_OP(set_msr)
 KVM_X86_PMU_OP(refresh)
 KVM_X86_PMU_OP(init)
-KVM_X86_PMU_OP(reset)
+KVM_X86_PMU_OP_OPTIONAL(reset)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)
 
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 9ae07db6f0f6..027e9c3c2b93 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -250,6 +250,24 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
+static void pmc_release_perf_event(struct kvm_pmc *pmc)
+{
+	if (pmc->perf_event) {
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+		pmc->current_config = 0;
+		pmc_to_pmu(pmc)->event_count--;
+	}
+}
+
+static void pmc_stop_counter(struct kvm_pmc *pmc)
+{
+	if (pmc->perf_event) {
+		pmc->counter = pmc_read_counter(pmc);
+		pmc_release_perf_event(pmc);
+	}
+}
+
 static int filter_cmp(const void *pa, const void *pb, u64 mask)
 {
 	u64 a = *(u64 *)pa & mask;
@@ -654,7 +672,27 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 
 void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 {
-	static_call(kvm_x86_pmu_reset)(vcpu);
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int i;
+
+	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
+
+	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
+		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+		if (!pmc)
+			continue;
+
+		pmc_stop_counter(pmc);
+		pmc->counter = 0;
+
+		if (pmc_is_gp(pmc))
+			pmc->eventsel = 0;
+	}
+
+	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
+
+	static_call_cond(kvm_x86_pmu_reset)(vcpu);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 1d64113de488..a46aa9b25150 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -80,24 +80,6 @@ static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 	pmc->counter &= pmc_bitmask(pmc);
 }
 
-static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
-{
-	if (pmc->perf_event) {
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-		pmc->current_config = 0;
-		pmc_to_pmu(pmc)->event_count--;
-	}
-}
-
-static inline void pmc_stop_counter(struct kvm_pmc *pmc)
-{
-	if (pmc->perf_event) {
-		pmc->counter = pmc_read_counter(pmc);
-		pmc_release_perf_event(pmc);
-	}
-}
-
 static inline bool pmc_is_gp(struct kvm_pmc *pmc)
 {
 	return pmc->type == KVM_PMC_GP;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 373ff6a6687b..3fd47de14b38 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -233,21 +233,6 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void amd_pmu_reset(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	int i;
-
-	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC; i++) {
-		struct kvm_pmc *pmc = &pmu->gp_counters[i];
-
-		pmc_stop_counter(pmc);
-		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
-	}
-
-	pmu->global_ctrl = pmu->global_status = 0;
-}
-
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.hw_event_available = amd_hw_event_available,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
@@ -259,7 +244,6 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.set_msr = amd_pmu_set_msr,
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
-	.reset = amd_pmu_reset,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 820d3e1f6b4f..90c1f7f07e53 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -632,26 +632,6 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 
 static void intel_pmu_reset(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	struct kvm_pmc *pmc = NULL;
-	int i;
-
-	for (i = 0; i < KVM_INTEL_PMC_MAX_GENERIC; i++) {
-		pmc = &pmu->gp_counters[i];
-
-		pmc_stop_counter(pmc);
-		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
-	}
-
-	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
-		pmc = &pmu->fixed_counters[i];
-
-		pmc_stop_counter(pmc);
-		pmc->counter = pmc->prev_counter = 0;
-	}
-
-	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
-
 	intel_pmu_release_guest_lbr_event(vcpu);
 }
 
-- 
2.42.0.758.gaed0368e0e-goog
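
A quick illustration of the all_valid_pmc_idx claim above: general purpose
counters occupy the low bits of the bitmap and fixed counters start at a
fixed offset above them (INTEL_PMC_IDX_FIXED, bit 32, in the kernel), so one
walk of the set bits visits every counter of either type. A minimal
userspace C sketch of the idea; the counter counts and the PMC_IDX_FIXED
mirror are illustrative assumptions, not kernel code:

#include <stdint.h>
#include <stdio.h>

#define PMC_IDX_FIXED 32	/* mirrors the kernel's INTEL_PMC_IDX_FIXED */

int main(void)
{
	uint64_t all_valid = 0;
	int nr_gp = 8, nr_fixed = 3, i;	/* made-up vPMU sizes */

	for (i = 0; i < nr_gp; i++)	/* GP counters: low bits */
		all_valid |= 1ull << i;
	for (i = 0; i < nr_fixed; i++)	/* fixed counters: bit 32 onward */
		all_valid |= 1ull << (PMC_IDX_FIXED + i);

	/* one loop covers both kinds, like for_each_set_bit() in the patch */
	for (i = 0; i < 64; i++) {
		if (!(all_valid & (1ull << i)))
			continue;
		printf("reset %s counter, idx %d\n",
		       i < PMC_IDX_FIXED ? "GP" : "fixed", i);
	}
	return 0;
}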
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 2/6] KVM: x86/pmu: Reset the PMU, i.e. stop counters, before refreshing
Date: Mon, 23 Oct 2023 16:39:56 -0700
Message-ID: <20231023234000.2499267-3-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Stop all counters and release all perf events before refreshing the vPMU,
i.e. before reconfiguring the vPMU to respond to changes in the vCPU
model.

Clear need_cleanup in kvm_pmu_reset() as well so that KVM doesn't
prematurely stop counters, e.g. if KVM enters the guest and enables
counters before the vCPU is scheduled out.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 027e9c3c2b93..dc8e8e907cfb 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -657,25 +657,14 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 0;
 }
 
-/* refresh PMU settings. This function generally is called when underlying
- * settings are changed (such as changes of PMU CPUID by guest VMs), which
- * should rarely happen.
- */
-void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
-{
-	if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
-		return;
-
-	bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
-	static_call(kvm_x86_pmu_refresh)(vcpu);
-}
-
 void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
 	int i;
 
+	pmu->need_cleanup = false;
+
 	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
@@ -695,6 +684,26 @@ void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	static_call_cond(kvm_x86_pmu_reset)(vcpu);
 }
 
+
+/*
+ * Refresh the PMU configuration for the vCPU, e.g. if userspace changes CPUID
+ * and/or PERF_CAPABILITIES.
+ */
+void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
+{
+	if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
+		return;
+
+	/*
+	 * Stop/release all existing counters/events before realizing the new
+	 * vPMU model.
+	 */
+	kvm_pmu_reset(vcpu);
+
+	bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+	static_call(kvm_x86_pmu_refresh)(vcpu);
+}
+
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-- 
2.42.0.758.gaed0368e0e-goog
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 3/6] KVM: x86/pmu: Stop calling kvm_pmu_reset() at RESET (it's redundant)
Date: Mon, 23 Oct 2023 16:39:57 -0700
Message-ID: <20231023234000.2499267-4-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Drop kvm_vcpu_reset()'s call to kvm_pmu_reset(); the call is performed
only for RESET, which is really just the same thing as vCPU creation,
and kvm_arch_vcpu_create() *just* called kvm_pmu_init(), i.e. there
can't possibly be any work to do.

Unlike Intel, AMD's amd_pmu_refresh() does fill all_valid_pmc_idx even
if guest CPUID is empty, but everything that is at all dynamic is
guaranteed to be '0'/NULL, e.g. it should be impossible for KVM to have
already created a perf event.

Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 2 +-
 arch/x86/kvm/pmu.h | 1 -
 arch/x86/kvm/x86.c | 1 -
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index dc8e8e907cfb..458e836c6efe 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -657,7 +657,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 0;
 }
 
-void kvm_pmu_reset(struct kvm_vcpu *vcpu)
+static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index a46aa9b25150..db9a12c0a2ef 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -243,7 +243,6 @@ bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr);
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu);
-void kvm_pmu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0c9686207996..c8ac83a7764e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12207,7 +12207,6 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	}
 
 	if (!init_event) {
-		kvm_pmu_reset(vcpu);
 		vcpu->arch.smbase = 0x30000;
 
 		vcpu->arch.msr_misc_features_enables = 0;
-- 
2.42.0.758.gaed0368e0e-goog
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 4/6] KVM: x86/pmu: Remove manual clearing of fields in kvm_pmu_init()
Date: Mon, 23 Oct 2023 16:39:58 -0700
Message-ID: <20231023234000.2499267-5-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Remove code that unnecessarily clears event_count and need_cleanup in
kvm_pmu_init(), as the entire kvm_pmu is zeroed just a few lines
earlier. Vendor code doesn't set event_count or need_cleanup during
.init(), and if either VMX or SVM did set those fields it would be a
flagrant bug.

Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 458e836c6efe..c06090196b00 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -710,8 +710,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 
 	memset(pmu, 0, sizeof(*pmu));
 	static_call(kvm_x86_pmu_init)(vcpu);
-	pmu->event_count = 0;
-	pmu->need_cleanup = false;
 	kvm_pmu_refresh(vcpu);
 }
 
-- 
2.42.0.758.gaed0368e0e-goog
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 5/6] KVM: x86/pmu: Update sample period in pmc_write_counter()
Date: Mon, 23 Oct 2023 16:39:59 -0700
Message-ID: <20231023234000.2499267-6-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Update a PMC's sample period in pmc_write_counter() to deduplicate code
across all callers of pmc_write_counter().

Opportunistically move pmc_write_counter() into pmu.c now that it's
doing more work. WRMSR isn't such a hot path that an extra CALL+RET
pair will be problematic, and the order of function definitions needs
to be changed anyway, i.e. now is a convenient time to eat the churn.

Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c           | 27 +++++++++++++++++++++++++++
 arch/x86/kvm/pmu.h           | 25 +------------------------
 arch/x86/kvm/svm/pmu.c       |  1 -
 arch/x86/kvm/vmx/pmu_intel.c |  2 --
 4 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index c06090196b00..3725d001239d 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -161,6 +161,15 @@ static u64 pmc_get_pebs_precise_level(struct kvm_pmc *pmc)
 	return 1;
 }
 
+static u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
+{
+	u64 sample_period = (-counter_value) & pmc_bitmask(pmc);
+
+	if (!sample_period)
+		sample_period = pmc_bitmask(pmc) + 1;
+	return sample_period;
+}
+
 static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
@@ -268,6 +277,24 @@ static void pmc_stop_counter(struct kvm_pmc *pmc)
 	}
 }
 
+static void pmc_update_sample_period(struct kvm_pmc *pmc)
+{
+	if (!pmc->perf_event || pmc->is_paused ||
+	    !is_sampling_event(pmc->perf_event))
+		return;
+
+	perf_event_period(pmc->perf_event,
+			  get_sample_period(pmc, pmc->counter));
+}
+
+void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
+{
+	pmc->counter += val - pmc_read_counter(pmc);
+	pmc->counter &= pmc_bitmask(pmc);
+	pmc_update_sample_period(pmc);
+}
+EXPORT_SYMBOL_GPL(pmc_write_counter);
+
 static int filter_cmp(const void *pa, const void *pb, u64 mask)
 {
 	u64 a = *(u64 *)pa & mask;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index db9a12c0a2ef..cae85e550f60 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -74,11 +74,7 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	return counter & pmc_bitmask(pmc);
 }
 
-static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
-{
-	pmc->counter += val - pmc_read_counter(pmc);
-	pmc->counter &= pmc_bitmask(pmc);
-}
+void pmc_write_counter(struct kvm_pmc *pmc, u64 val);
 
 static inline bool pmc_is_gp(struct kvm_pmc *pmc)
 {
@@ -128,25 +124,6 @@ static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu, u32 msr)
 	return NULL;
 }
 
-static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
-{
-	u64 sample_period = (-counter_value) & pmc_bitmask(pmc);
-
-	if (!sample_period)
-		sample_period = pmc_bitmask(pmc) + 1;
-	return sample_period;
-}
-
-static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
-{
-	if (!pmc->perf_event || pmc->is_paused ||
-	    !is_sampling_event(pmc->perf_event))
-		return;
-
-	perf_event_period(pmc->perf_event,
-			  get_sample_period(pmc, pmc->counter));
-}
-
 static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 3fd47de14b38..b6a7ad4d6914 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -161,7 +161,6 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		pmc_write_counter(pmc, data);
-		pmc_update_sample_period(pmc);
 		return 0;
 	}
 	/* MSR_EVNTSELn */
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 90c1f7f07e53..a6216c874729 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -437,11 +437,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			    !(msr & MSR_PMC_FULL_WIDTH_BIT))
 				data = (s64)(s32)data;
 			pmc_write_counter(pmc, data);
-			pmc_update_sample_period(pmc);
 			break;
 		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
 			pmc_write_counter(pmc, data);
-			pmc_update_sample_period(pmc);
 			break;
 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
 			reserved_bits = pmu->reserved_bits;
-- 
2.42.0.758.gaed0368e0e-goog
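
The sample period math consolidated above is worth spelling out: perf must
fire after the counter walks from its current value up to overflow, i.e.
after (2^width - counter) increments, and a counter of zero maps to a full
2^width period, which is what the !sample_period fixup yields. A standalone
userspace C sketch of get_sample_period(), assuming a hypothetical 48-bit
counter width for illustration:

#include <stdint.h>
#include <stdio.h>

static uint64_t bitmask(unsigned int width)
{
	return (1ull << width) - 1;	/* e.g. 48-bit counter mask */
}

static uint64_t get_sample_period(uint64_t counter, unsigned int width)
{
	/* events remaining until the counter wraps */
	uint64_t sample_period = (-counter) & bitmask(width);

	if (!sample_period)
		sample_period = bitmask(width) + 1;	/* counter == 0 */
	return sample_period;
}

int main(void)
{
	/* 10 events away from overflow => period of 10 */
	printf("%llu\n",
	       (unsigned long long)get_sample_period(bitmask(48) - 9, 48));
	/* fresh counter of 0 => full 2^48 period */
	printf("%llu\n", (unsigned long long)get_sample_period(0, 48));
	return 0;
}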
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
    Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Subject: [PATCH 6/6] KVM: x86/pmu: Track emulated counter events instead of previous counter
Date: Mon, 23 Oct 2023 16:40:00 -0700
Message-ID: <20231023234000.2499267-7-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>

Explicitly track emulated counter events instead of using the common
counter value that's shared with the hardware counter owned by perf.
Bumping the common counter requires snapshotting the pre-increment value
in order to detect overflow from emulation, and the snapshot approach is
inherently flawed.

Snapshotting the previous counter at every increment assumes that there
is at most one emulated counter event per emulated instruction (or
rather, between checks for KVM_REQ_PMU). That mostly holds true today
because KVM only emulates (branch) instructions retired, but the
approach will fall apart if KVM ever supports event types that don't
have a 1:1 relationship with instructions. And KVM already has a
relevant bug, as handle_invalid_guest_state() emulates multiple
instructions without checking KVM_REQ_PMU, i.e. could miss an overflow
event due to clobbering pmc->prev_counter.

Not checking KVM_REQ_PMU is problematic in both cases, but at least with
the emulated counter approach, the resulting behavior is delayed
overflow detection, as opposed to completely lost detection.

Cc: Mingwei Zhang
Cc: Roman Kagan
Cc: Jim Mattson
Cc: Dapeng Mi
Cc: Like Xu
Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h | 17 +++++++++++++++-
 arch/x86/kvm/pmu.c              | 36 +++++++++++++++++++++++----------
 arch/x86/kvm/pmu.h              |  3 ++-
 3 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7036982332e..d8bc9ba88cfc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -500,8 +500,23 @@ struct kvm_pmc {
 	u8 idx;
 	bool is_paused;
 	bool intr;
+	/*
+	 * Base value of the PMC counter, relative to the *consumed* count in
+	 * the associated perf_event.  This value includes counter updates from
+	 * the perf_event and emulated_counter since the last time the counter
+	 * was reprogrammed, but it is *not* the current value as seen by the
+	 * guest or userspace.
+	 *
+	 * The count is relative to the associated perf_event so that KVM
+	 * doesn't need to reprogram the perf_event every time the guest writes
+	 * to the counter.
+	 */
 	u64 counter;
-	u64 prev_counter;
+	/*
+	 * PMC events triggered by KVM emulation that haven't been fully
+	 * processed, i.e. haven't undergone overflow detection.
+	 */
+	u64 emulated_counter;
 	u64 eventsel;
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3725d001239d..f02cee222e9a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -127,9 +127,9 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
 
 	/*
-	 * Ignore overflow events for counters that are scheduled to be
-	 * reprogrammed, e.g. if a PMI for the previous event races with KVM's
-	 * handling of a related guest WRMSR.
+	 * Ignore asynchronous overflow events for counters that are scheduled
+	 * to be reprogrammed, e.g. if a PMI for the previous event races with
+	 * KVM's handling of a related guest WRMSR.
 	 */
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;
@@ -226,13 +226,19 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 
 static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
-	u64 counter = pmc->counter;
+	/*
+	 * Accumulate emulated events, even if the PMC was already paused, e.g.
+	 * if KVM emulated an event after a WRMSR, but before reprogramming, or
+	 * if KVM couldn't create a perf event.
+	 */
+	u64 counter = pmc->counter + pmc->emulated_counter;
 
-	if (!pmc->perf_event || pmc->is_paused)
-		return;
+	pmc->emulated_counter = 0;
 
 	/* update counter, reset event value to avoid redundant accumulation */
-	counter += perf_event_pause(pmc->perf_event, true);
+	if (pmc->perf_event && !pmc->is_paused)
+		counter += perf_event_pause(pmc->perf_event, true);
+
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }
@@ -289,6 +295,14 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)
 
 void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
+	/*
+	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
+	 * read-modify-write.  Adjust the counter value so that its value is
+	 * relative to the current perf_event (if there is one), as reading the
+	 * current count is faster than pausing and reprogramming the event in
+	 * order to reset it to '0'.
+	 */
+	pmc->emulated_counter = 0;
 	pmc->counter += val - pmc_read_counter(pmc);
 	pmc->counter &= pmc_bitmask(pmc);
 	pmc_update_sample_period(pmc);
@@ -426,6 +440,7 @@ static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
 static void reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	u64 prev_counter = pmc->counter;
 	u64 eventsel = pmc->eventsel;
 	u64 new_config = eventsel;
 	u8 fixed_ctr_ctrl;
@@ -435,7 +450,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (!pmc_event_is_allowed(pmc))
 		goto reprogram_complete;
 
-	if (pmc->counter < pmc->prev_counter)
+	if (pmc->counter < prev_counter)
 		__kvm_perf_overflow(pmc, false);
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
@@ -475,7 +490,6 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 reprogram_complete:
 	clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
-	pmc->prev_counter = 0;
 }
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
@@ -701,6 +715,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 
 		pmc_stop_counter(pmc);
 		pmc->counter = 0;
+		pmc->emulated_counter = 0;
 
 		if (pmc_is_gp(pmc))
 			pmc->eventsel = 0;
@@ -772,8 +787,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
-	pmc->prev_counter = pmc->counter;
-	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
+	pmc->emulated_counter++;
 	kvm_pmu_request_counter_reprogram(pmc);
 }
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cae85e550f60..7caeb3d8d4fd 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -66,7 +66,8 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;
 
-	counter = pmc->counter;
+	counter = pmc->counter + pmc->emulated_counter;
+
 	if (pmc->perf_event && !pmc->is_paused)
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
-- 
2.42.0.758.gaed0368e0e-goog
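
The emulated counter flow can be modeled in isolation: emulated events
accumulate in emulated_counter, are folded into counter when the PMC is
paused, and overflow is detected by the fold wrapping the counter past its
bitmask. The minimal userspace C sketch below mirrors the patch's field
names but models only the arithmetic (no perf events, and the pause and
overflow check are collapsed into one helper); it shows how several
emulated events between KVM_REQ_PMU checks are all accounted for, unlike
the prev_counter snapshot, which could only ever observe the last one:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_pmc {
	uint64_t counter;		/* relative to consumed perf count */
	uint64_t emulated_counter;	/* not yet checked for overflow */
	uint64_t bitmask;		/* e.g. (1ull << 48) - 1 */
};

static void toy_incr_counter(struct toy_pmc *pmc)
{
	pmc->emulated_counter++;	/* like kvm_pmu_incr_counter() */
}

/* models pmc_pause_counter() plus reprogram_counter()'s overflow check */
static bool toy_pause_and_check_overflow(struct toy_pmc *pmc)
{
	uint64_t prev = pmc->counter;

	pmc->counter = (pmc->counter + pmc->emulated_counter) & pmc->bitmask;
	pmc->emulated_counter = 0;
	return pmc->counter < prev;	/* wrapped => overflow (PMI) */
}

int main(void)
{
	struct toy_pmc pmc = {
		.counter = (1ull << 48) - 2,	/* two events from overflow */
		.bitmask = (1ull << 48) - 1,
	};

	toy_incr_counter(&pmc);
	toy_incr_counter(&pmc);
	toy_incr_counter(&pmc);	/* would clobber a prev_counter snapshot */
	printf("overflow: %d, counter now %llu\n",
	       toy_pause_and_check_overflow(&pmc),
	       (unsigned long long)pmc.counter);
	return 0;
}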