From: Like Xu <likexu@tencent.com>
Avoid misleading pmu initialisation when vPMU is globally disabled
by performing a read/write consistency test on one of AMD's counters.
Without this check, the pmu test will fail rather than be skipped.
For historical reasons, the expectation that writing an AMD counter
raises #GP when vPMU is globally disabled does not match the
architectural specification (even with vPMU enabled, writing AMD
reserved bits typically results in a filtered write rather than #GP).
Like many mature guests, KUT can instead compare the written and
read-back values of a counter MSR that must be supported, to determine
whether the PMU is available before continuing the tests.
Signed-off-by: Like Xu <likexu@tencent.com>
---
Nit:
- A generic "durable" MSRs check could be applied later if needed;
- Two more tests: 20221226075412.61167-1-likexu@tencent.com;
lib/x86/pmu.c | 16 +++++++++++++++-
x86/pmu.c | 3 +++
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 0f2afd6..1e23a17 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -2,6 +2,20 @@
struct pmu_caps pmu;
+/*
+ * All AMD CPUs that support vPMU have MSR_K7_PERFCTRx, but the
+ * values written to it are discarded when vPMU is globally disabled.
+ */
+static inline bool amd_k7_counter_is_durable(void)
+{
+ u64 after, before = (1ull << PMC_DEFAULT_WIDTH) - 1;
+
+ wrmsr_safe(MSR_K7_PERFCTR0, before);
+ rdmsr_safe(MSR_K7_PERFCTR0, &after);
+
+ return before == after;
+}
+
void pmu_init(void)
{
pmu.is_intel = is_intel();
@@ -38,7 +52,7 @@ void pmu_init(void)
pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL;
pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL;
}
- } else {
+ } else if (amd_k7_counter_is_durable()) {
if (this_cpu_has(X86_FEATURE_PERFCTR_CORE)) {
/* Performance Monitoring Version 2 Supported */
if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
diff --git a/x86/pmu.c b/x86/pmu.c
index 72c2c9c..1f041e6 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -684,6 +684,9 @@ int main(int ac, char **av)
gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]);
report_prefix_push("Intel");
set_ref_cycle_expectations();
+ } else if (!pmu.nr_gp_counters) {
+ report_skip("No AMD PMU is detected!");
+ return report_summary();
} else {
gp_events_size = sizeof(amd_gp_events)/sizeof(amd_gp_events[0]);
gp_events = (struct pmu_event *)amd_gp_events;
base-commit: e3c5c3ef2524c58023073c0fadde2e8ae3c04ec6
--
2.39.2
On Tue, Feb 21, 2023, Like Xu wrote:
> From: Like Xu <likexu@tencent.com>
>
> Avoid misleading pmu initialisation when vPMU is globally disabled
> by performing a read/write consistency test on one of AMD's counters.
> Without this check, the pmu test will fail rather than be skipped.

This is unnecessary, the testcases can instead check KVM's enable_pmu
module param.  Of course, runtime.bash has a bug and doesn't play nice
with multiple requirements, but that's easy enough to fix.  Patches
incoming.

@@ -206,7 +206,7 @@ extra_params = -cpu max,vendor=GenuineIntel
 
 [pmu]
 file = pmu.flat
 extra_params = -cpu max
-check = /proc/sys/kernel/nmi_watchdog=0
+check = /sys/module/kvm/parameters/enable_pmu=Y /proc/sys/kernel/nmi_watchdog=0
 accel = kvm
 groups = pmu