From nobody Mon Feb  9 05:40:03 2026
Date: Fri, 6 Feb 2026 17:23:29 -0800
Message-ID: <20260207012339.2646196-4-jmattson@google.com>
In-Reply-To: <20260207012339.2646196-1-jmattson@google.com>
References: <20260207012339.2646196-1-jmattson@google.com>
X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/5] KVM: x86/pmu: Refresh Host-Only/Guest-Only eventsel
 at nested transitions
From: Jim Mattson <jmattson@google.com>
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
 Peter Zijlstra, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
 Shuah Khan, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Yosry Ahmed, Mingwei Zhang, Sandipan Das
Cc: Jim Mattson <jmattson@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add amd_pmu_refresh_host_guest_eventsel_hw() to recalculate eventsel_hw
for all PMCs based on the current vCPU state. This is needed because
Host-Only and Guest-Only counters must be enabled/disabled at:

- SVME changes: When EFER.SVME is modified, counters with Guest-Only
  bits need their hardware enable state updated.

- Nested transitions: When entering or leaving guest mode, Host-Only
  counters should be disabled/enabled and Guest-Only counters should
  be enabled/disabled accordingly.

Add a nested_transition() callback to kvm_x86_ops and call it from
enter_guest_mode() and leave_guest_mode() to ensure the PMU state stays
synchronized with guest mode transitions.
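[Editor's illustration, not part of the patch: the refresh boils down to
re-evaluating, per counter, whether the hardware enable bit should be set
given the counter's Host-Only/Guest-Only bits and whether the vCPU is
currently in guest mode. A minimal model of that decision, using the bit
positions from perf_event.h; the helper name and logic are hypothetical
simplifications, not KVM's actual amd_pmu_set_eventsel_hw():]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions as defined in arch/x86/include/asm/perf_event.h. */
#define ARCH_PERFMON_EVENTSEL_ENABLE	(1ULL << 22)
#define AMD64_EVENTSEL_GUESTONLY	(1ULL << 40)
#define AMD64_EVENTSEL_HOSTONLY		(1ULL << 41)

/*
 * Hypothetical model: should the hardware counter backing a guest's
 * eventsel be counting, given whether the vCPU is currently running
 * its own (nested) guest?
 */
static bool counter_should_count(uint64_t eventsel, bool in_guest_mode)
{
	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
		return false;
	/* Host-Only: stop counting while the nested guest runs. */
	if ((eventsel & AMD64_EVENTSEL_HOSTONLY) && in_guest_mode)
		return false;
	/* Guest-Only: count only while the nested guest runs. */
	if ((eventsel & AMD64_EVENTSEL_GUESTONLY) && !in_guest_mode)
		return false;
	return true;
}
```

[A nested VMRUN flips in_guest_mode from false to true and a nested
VM-exit flips it back, which is why enter_guest_mode() and
leave_guest_mode() must trigger a refresh of each counter's enable
state.]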
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/kvm_cache_regs.h      |  2 ++
 arch/x86/kvm/svm/pmu.c             | 12 ++++++++++++
 arch/x86/kvm/svm/svm.c             |  3 +++
 arch/x86/kvm/svm/svm.h             |  5 +++++
 arch/x86/kvm/x86.c                 |  1 +
 7 files changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index de709fb5bd76..62ac8ecd26e9 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -108,6 +108,7 @@ KVM_X86_OP(get_entry_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
 KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
+KVM_X86_OP_OPTIONAL(nested_transition)
 KVM_X86_OP_OPTIONAL(vcpu_blocking)
 KVM_X86_OP_OPTIONAL(vcpu_unblocking)
 KVM_X86_OP_OPTIONAL(pi_update_irte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ff07c45e3c73..8dbc5c731859 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1901,6 +1901,8 @@ struct kvm_x86_ops {
 
 	void (*update_cpu_dirty_logging)(struct kvm_vcpu *vcpu);
 
+	void (*nested_transition)(struct kvm_vcpu *vcpu);
+
 	const struct kvm_x86_nested_ops *nested_ops;
 
 	void (*vcpu_blocking)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..14e2cbab8312 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -227,6 +227,7 @@ static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hflags |= HF_GUEST_MASK;
 	vcpu->stat.guest_mode = 1;
+	kvm_x86_call(nested_transition)(vcpu);
 }
 
 static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
@@ -239,6 +240,7 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
 	}
 
 	vcpu->stat.guest_mode = 0;
+	kvm_x86_call(nested_transition)(vcpu);
 }
 
 static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 8d451110a94d..e2a849fc7daa 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -171,6 +171,18 @@ static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
 		pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 }
 
+void amd_pmu_refresh_host_guest_eventsel_hw(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	int i;
+
+	if (pmu->reserved_bits & AMD64_EVENTSEL_HOST_GUEST_MASK)
+		return;
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++)
+		amd_pmu_set_eventsel_hw(&pmu->gp_counters[i]);
+}
+
 static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5f0136dbdde6..5753388542cf 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -244,6 +244,8 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 			if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
 				set_exception_intercept(svm, GP_VECTOR);
 		}
+
+		amd_pmu_refresh_host_guest_eventsel_hw(vcpu);
 	}
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
@@ -5222,6 +5224,7 @@ struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
+	.nested_transition = amd_pmu_refresh_host_guest_eventsel_hw,
 
 	.nested_ops = &svm_nested_ops,
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ebd7b36b1ceb..c31ef7c46d58 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -864,6 +864,11 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa);
 void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
+
+/* pmu.c */
+void amd_pmu_refresh_host_guest_eventsel_hw(struct kvm_vcpu *vcpu);
+
+
 #ifdef CONFIG_KVM_AMD_SEV
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
 int sev_mem_enc_register_region(struct kvm *kvm,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index db3f393192d9..01ccbaa5b2e6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -150,6 +150,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
+EXPORT_STATIC_CALL_GPL(kvm_x86_nested_transition);
 
 static bool __read_mostly ignore_msrs = 0;
 module_param(ignore_msrs, bool, 0644);
-- 
2.53.0.rc2.204.g2597b5adb4-goog