From: Jim Mattson <jmattson@google.com>
Date: Wed, 21 Jan 2026 14:54:02 -0800
Subject: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions
Message-ID: <20260121225438.3908422-5-jmattson@google.com>
In-Reply-To: <20260121225438.3908422-1-jmattson@google.com>
References: <20260121225438.3908422-1-jmattson@google.com>
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
 Peter Zijlstra, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
 Shuah Khan, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Jim Mattson <jmattson@google.com>

Add a new function, kvm_pmu_set_pmc_eventsel_hw_enable(), to set or
clear the enable bit in eventsel_hw for PMCs identified by a bitmap.

Use this function to update Host-Only and Guest-Only counters at the
following transitions:

- svm_set_efer(): When SVME changes, enable Guest-Only counters if SVME
  is being cleared (the HG_ONLY bits become ignored), or disable them if
  SVME is being set (L1 is active).

- nested_svm_vmrun(): Disable Host-Only counters and enable Guest-Only
  counters.

- nested_svm_vmexit(): Disable Guest-Only counters and enable Host-Only
  counters.
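To make the intended call pattern concrete, here is a minimal sketch of
the nested transitions (it simply mirrors the nested_svm_vmrun() and
nested_svm_vmexit() hunks below; pmc_hostonly and pmc_guestonly are the
per-PMU bitmaps of Host-Only and Guest-Only counters, presumably
populated earlier in this series):

	/* L1 -> L2 (VMRUN): Host-Only counters pause, Guest-Only counters run. */
	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu, vcpu_to_pmu(vcpu)->pmc_hostonly, false);
	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu, vcpu_to_pmu(vcpu)->pmc_guestonly, true);

	/* L2 -> L1 (#VMEXIT): the reverse. */
	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu, vcpu_to_pmu(vcpu)->pmc_hostonly, true);
	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu, vcpu_to_pmu(vcpu)->pmc_guestonly, false);

Under the hood, the vendor callback only toggles
ARCH_PERFMON_EVENTSEL_ENABLE in each selected counter's eventsel_hw
image; the new values presumably reach hardware the next time the
mediated PMU state is loaded.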
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 +
 arch/x86/kvm/pmu.c                     |  7 +++++++
 arch/x86/kvm/pmu.h                     |  4 ++++
 arch/x86/kvm/svm/nested.c              | 10 ++++++++++
 arch/x86/kvm/svm/pmu.c                 | 17 +++++++++++++++++
 arch/x86/kvm/svm/svm.c                 |  3 +++
 6 files changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index f0aa6996811f..7b32796213a0 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
 KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
 KVM_X86_PMU_OP(mediated_load)
 KVM_X86_PMU_OP(mediated_put)
+KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
 
 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 833ee2ecd43f..1541c201285b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
 
+void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+					unsigned long *bitmap, bool enable)
+{
+	kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
+
 static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)
 {
 	u64 mask = kvm_pmu_ops.EVENTSEL_EVENT |
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0925246731cb..b8be8b6e40d8 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -41,6 +41,8 @@ struct kvm_pmu_ops {
 	void (*mediated_load)(struct kvm_vcpu *vcpu);
 	void (*mediated_put)(struct kvm_vcpu *vcpu);
 	void (*write_global_ctrl)(u64 global_ctrl);
+	void (*set_pmc_eventsel_hw_enable)(struct kvm_vcpu *vcpu,
+					   unsigned long *bitmap, bool enable);
 
 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
@@ -258,6 +260,8 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu);
 void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu);
+void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+					unsigned long *bitmap, bool enable);
 void kvm_mediated_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_mediated_pmu_put(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index de90b104a0dd..edaa76e38417 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -28,6 +28,7 @@
 #include "smm.h"
 #include "cpuid.h"
 #include "lapic.h"
+#include "pmu.h"
 #include "svm.h"
 #include "hyperv.h"
 
@@ -1054,6 +1055,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;
 
+	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+			vcpu_to_pmu(vcpu)->pmc_hostonly, false);
+	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+			vcpu_to_pmu(vcpu)->pmc_guestonly, true);
+
 	if (nested_svm_merge_msrpm(vcpu))
 		goto out;
 
@@ -1137,6 +1143,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	/* Exit Guest-Mode */
 	leave_guest_mode(vcpu);
+	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+			vcpu_to_pmu(vcpu)->pmc_hostonly, true);
+	kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+			vcpu_to_pmu(vcpu)->pmc_guestonly, false);
 	svm->nested.vmcb12_gpa = 0;
 	WARN_ON_ONCE(svm->nested.nested_run_pending);
 
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index c06013e2b4b1..85155d65fa38 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -316,6 +316,22 @@ static void amd_mediated_pmu_put(struct kvm_vcpu *vcpu)
 	wrmsrq(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
 }
 
+static void amd_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+					       unsigned long *bitmap,
+					       bool enable)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int i;
+
+	kvm_for_each_pmc(pmu, pmc, i, bitmap) {
+		if (enable)
+			pmc->eventsel_hw |= ARCH_PERFMON_EVENTSEL_ENABLE;
+		else
+			pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
+	}
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -329,6 +345,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.is_mediated_pmu_supported = amd_pmu_is_mediated_pmu_supported,
 	.mediated_load = amd_mediated_pmu_load,
 	.mediated_put = amd_mediated_pmu_put,
+	.set_pmc_eventsel_hw_enable = amd_pmu_set_pmc_eventsel_hw_enable,
 
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_AMD_GP_COUNTERS,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7803d2781144..953089b38921 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -244,6 +244,9 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 			if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
 				set_exception_intercept(svm, GP_VECTOR);
 		}
+
+		kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+				vcpu_to_pmu(vcpu)->pmc_guestonly, !(efer & EFER_SVME));
 	}
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
-- 
2.52.0.457.g6b5491de43-goog