From nobody Tue Dec 16 14:51:26 2025
Reply-To: Sean Christopherson
Date: Fri, 5 Dec 2025 16:16:56 -0800
In-Reply-To: <20251206001720.468579-1-seanjc@google.com>
References: <20251206001720.468579-1-seanjc@google.com>
Message-ID: <20251206001720.468579-21-seanjc@google.com>
Subject: [PATCH v6 20/44] KVM: x86/pmu: Disable RDPMC interception for compatible mediated vPMU
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson,
	Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Mingwei Zhang, Xudong Hao, Sandipan Das, Dapeng Mi, Xiong Zhang,
	Manali Shukla, Jim Mattson

From: Dapeng Mi

Disable RDPMC interception for vCPUs with a mediated vPMU that is
compatible with the host PMU, i.e. that doesn't require KVM emulation of
RDPMC to honor the guest's vCPU model. With a mediated vPMU, all guest
state accessible via RDPMC is loaded into hardware while the guest is
running.

Adjust RDPMC interception only for non-TDX guests, as the TDX module is
responsible for managing RDPMC intercepts based on the TD configuration.

Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
Co-developed-by: Sandipan Das
Signed-off-by: Sandipan Das
Signed-off-by: Dapeng Mi
Tested-by: Xudong Hao
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c     | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/pmu.h     |  1 +
 arch/x86/kvm/svm/svm.c |  5 +++++
 arch/x86/kvm/vmx/vmx.c |  7 +++++++
 arch/x86/kvm/x86.c     |  1 +
 5 files changed, 40 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b3dde9a836ea..182ff2d8d119 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -716,6 +716,32 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	return 0;
 }
 
+bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (!kvm_vcpu_has_mediated_pmu(vcpu))
+		return true;
+
+	/*
+	 * VMware allows access to these Pseudo-PMCs even when read via RDPMC
+	 * in Ring3 when CR4.PCE=0.
+	 */
+	if (enable_vmware_backdoor)
+		return true;
+
+	/*
+	 * Note! Check *host* PMU capabilities, not KVM's PMU capabilities, as
+	 * KVM's capabilities are constrained based on KVM support, i.e. KVM's
+	 * capabilities themselves may be a subset of hardware capabilities.
+	 */
+	return pmu->nr_arch_gp_counters != kvm_host_pmu.num_counters_gp ||
+	       pmu->nr_arch_fixed_counters != kvm_host_pmu.num_counters_fixed ||
+	       pmu->counter_bitmask[KVM_PMC_GP] != (BIT_ULL(kvm_host_pmu.bit_width_gp) - 1) ||
+	       pmu->counter_bitmask[KVM_PMC_FIXED] != (BIT_ULL(kvm_host_pmu.bit_width_fixed) - 1);
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_need_rdpmc_intercept);
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 {
 	if (lapic_in_kernel(vcpu)) {
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 9849c2bb720d..506c203587ea 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -238,6 +238,7 @@ void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu);
 void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
+bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu);
 
 extern struct kvm_pmu_ops intel_pmu_ops;
 extern struct kvm_pmu_ops amd_pmu_ops;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 24d59ccfa40d..11913574de88 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1011,6 +1011,11 @@ static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu)
 			svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 		}
 	}
+
+	if (kvm_need_rdpmc_intercept(vcpu))
+		svm_set_intercept(svm, INTERCEPT_RDPMC);
+	else
+		svm_clr_intercept(svm, INTERCEPT_RDPMC);
 }
 
 static void svm_recalc_intercepts(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fdd18ad1ede3..9f71ba99cf70 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4300,8 +4300,15 @@ static void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
 	 */
 }
 
+static void vmx_recalc_instruction_intercepts(struct kvm_vcpu *vcpu)
+{
+	exec_controls_changebit(to_vmx(vcpu), CPU_BASED_RDPMC_EXITING,
+				kvm_need_rdpmc_intercept(vcpu));
+}
+
 void vmx_recalc_intercepts(struct kvm_vcpu *vcpu)
 {
+	vmx_recalc_instruction_intercepts(vcpu);
 	vmx_recalc_msr_intercepts(vcpu);
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1623afddff3b..76e86eb358df 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3945,6 +3945,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 		vcpu->arch.perf_capabilities = data;
 		kvm_pmu_refresh(vcpu);
+		kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
 		break;
 	case MSR_IA32_PRED_CMD: {
 		u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
-- 
2.52.0.223.gf5cc29aaa4-goog
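
[Editorial illustration, not part of the patch: a minimal standalone sketch
of the decision implemented by kvm_need_rdpmc_intercept() above. The struct
and function names below are hypothetical stand-ins, not the real KVM/perf
types; the point is only that RDPMC runs unintercepted solely when the
guest's RDPMC-visible PMU configuration exactly matches the host PMU and no
emulated backdoor is in play.]

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flattened view of the RDPMC-visible PMU parameters. */
struct pmu_caps {
	uint8_t  num_gp;		/* general-purpose counters */
	uint8_t  num_fixed;		/* fixed-function counters */
	uint64_t gp_mask;		/* BIT_ULL(gp_width) - 1 */
	uint64_t fixed_mask;		/* BIT_ULL(fixed_width) - 1 */
};

static bool need_rdpmc_intercept(const struct pmu_caps *guest,
				 const struct pmu_caps *host,
				 bool mediated_pmu, bool vmware_backdoor)
{
	/* No mediated vPMU, or emulated VMware pseudo-PMCs: must intercept. */
	if (!mediated_pmu || vmware_backdoor)
		return true;

	/* Any mismatch with the *host* PMU requires KVM to emulate RDPMC. */
	return guest->num_gp     != host->num_gp ||
	       guest->num_fixed  != host->num_fixed ||
	       guest->gp_mask    != host->gp_mask ||
	       guest->fixed_mask != host->fixed_mask;
}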