On 9/20/2025 5:46 AM, Sean Christopherson wrote:
> Track the mask of "unavailable" PMU events as a 32-bit value. While bits
> 31:9 are currently reserved, silently truncating those bits is unnecessary
> and invites missed coverage. To avoid running afoul of the sanity check in
> vcpu_set_cpuid_property(), explicitly clamp the mask to the non-reserved
> bits as reported by KVM's supported CPUID.
>
> Opportunistically update the "all ones" testcase to pass -1u instead of
> 0xff.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> tools/testing/selftests/kvm/x86/pmu_counters_test.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
> index 8aaaf25b6111..1ef038c4c73f 100644
> --- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
> +++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
> @@ -311,7 +311,7 @@ static void guest_test_arch_events(void)
>  }
>  
>  static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
> -			     uint8_t length, uint8_t unavailable_mask)
> +			     uint8_t length, uint32_t unavailable_mask)
>  {
>  	struct kvm_vcpu *vcpu;
>  	struct kvm_vm *vm;
> @@ -320,6 +320,9 @@ static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
>  	if (!pmu_version)
>  		return;
>  
> +	unavailable_mask &= GENMASK(X86_PROPERTY_PMU_EVENTS_MASK.hi_bit,
> +				    X86_PROPERTY_PMU_EVENTS_MASK.lo_bit);
> +
>  	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_arch_events,
>  					 pmu_version, perf_capabilities);
>  
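The clamp looks good to me. For anyone following along, below is a minimal
standalone sketch of what it does to the new -1u input. The bit range (8:0)
is assumed from the changelog's statement that bits 31:9 are currently
reserved, and GENMASK() is open-coded here rather than taken from the
kernel headers; the selftest itself reads the real range from
X86_PROPERTY_PMU_EVENTS_MASK.

#include <stdint.h>
#include <stdio.h>

/* Standalone stand-in for the kernel's GENMASK(), 32-bit flavor. */
#define GENMASK(h, l)	((~0u >> (31 - (h))) & (~0u << (l)))

int main(void)
{
	/*
	 * Assumed range: bits 8:0, per the changelog's "bits 31:9 are
	 * currently reserved".  The selftest derives this from KVM's
	 * supported CPUID instead of hardcoding it.
	 */
	const unsigned int hi_bit = 8, lo_bit = 0;
	uint32_t unavailable_mask = -1u;	/* the new "all ones" testcase */

	unavailable_mask &= GENMASK(hi_bit, lo_bit);

	/* Prints 0x1ff: every currently-defined event marked unavailable. */
	printf("clamped mask = 0x%x\n", unavailable_mask);
	return 0;
}

With the clamp in place, -1u keeps exercising every defined event bit even
if the architectural event list grows, which is the point of switching the
"all ones" testcase away from 0xff.
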
> @@ -630,7 +633,7 @@ static void test_intel_counters(void)
>  	 */
>  	for (j = 0; j <= NR_INTEL_ARCH_EVENTS + 1; j++) {
>  		test_arch_events(v, perf_caps[i], j, 0);
> -		test_arch_events(v, perf_caps[i], j, 0xff);
> +		test_arch_events(v, perf_caps[i], j, -1u);
>  
>  		for (k = 0; k < NR_INTEL_ARCH_EVENTS; k++)
>  			test_arch_events(v, perf_caps[i], j, BIT(k));
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>