Subject: [PATCH v5 09/13] KVM: selftests: Test Intel PMU architectural events on fixed counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:29 -0700
Message-ID: <20231024002633.2540714-10-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
References: <20231024002633.2540714-1-seanjc@google.com>

From: Jinrong Liang

Update the test to cover Intel PMU architectural events on fixed counters.
Per the Intel SDM, PMU users can also count architectural performance events
on fixed counters (specifically, FIXED_CTR0 for the instructions-retired
event and FIXED_CTR1 for the CPU core cycles event).  Therefore, if the
guest's CPUID indicates that an architectural event is not available, the
corresponding fixed counter will also not count that event.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
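For reference, the guest-side flow added below boils down to programming a
fixed counter through its control MSRs and reading it back via RDPMC.  A
rough standalone sketch of that sequence (illustrative only, not part of the
diff; it reuses the selftest's wrmsr()/_rdpmc() helpers and the MSR/index
macros that appear in the hunks, and the counter index and workload are
placeholders):

	/* Illustrative sketch, assuming fixed counter 0 = instructions retired. */
	unsigned int i = 0;

	/* Zero the counter, then enable it for CPL0 in IA32_FIXED_CTR_CTRL. */
	wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));

	/* Globally enable the fixed counter, run the measured loop, then stop. */
	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(PMC_IDX_FIXED + i));
	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);

	/* Read the count back using RDPMC's fixed-counter encoding. */
	GUEST_ASSERT_NE(_rdpmc(PMC_FIXED_RDPMC_BASE | i), 0);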
 .../selftests/kvm/x86_64/pmu_counters_test.c  | 54 ++++++++++++++++---
 1 file changed, 46 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 2a6336b994d5..410d09f788ef 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -85,23 +85,44 @@ static void guest_measure_pmu_v1(struct kvm_x86_pmu_feature event,
 	GUEST_DONE();
 }
 
+#define X86_PMU_FEATURE_NULL						\
+({									\
+	struct kvm_x86_pmu_feature feature = {};			\
+									\
+	feature;							\
+})
+
+static bool pmu_is_null_feature(struct kvm_x86_pmu_feature event)
+{
+	return !(*(u64 *)&event);
+}
+
 static void guest_measure_loop(uint8_t idx)
 {
 	const struct {
 		struct kvm_x86_pmu_feature gp_event;
+		struct kvm_x86_pmu_feature fixed_event;
 	} intel_event_to_feature[] = {
-		[INTEL_ARCH_CPU_CYCLES] = { X86_PMU_FEATURE_CPU_CYCLES },
-		[INTEL_ARCH_INSTRUCTIONS_RETIRED] = { X86_PMU_FEATURE_INSNS_RETIRED },
-		[INTEL_ARCH_REFERENCE_CYCLES] = { X86_PMU_FEATURE_REFERENCE_CYCLES },
-		[INTEL_ARCH_LLC_REFERENCES] = { X86_PMU_FEATURE_LLC_REFERENCES },
-		[INTEL_ARCH_LLC_MISSES] = { X86_PMU_FEATURE_LLC_MISSES },
-		[INTEL_ARCH_BRANCHES_RETIRED] = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED },
-		[INTEL_ARCH_BRANCHES_MISPREDICTED] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED },
+		[INTEL_ARCH_CPU_CYCLES]            = { X86_PMU_FEATURE_CPU_CYCLES, X86_PMU_FEATURE_CPU_CYCLES_FIXED },
+		[INTEL_ARCH_INSTRUCTIONS_RETIRED]  = { X86_PMU_FEATURE_INSNS_RETIRED, X86_PMU_FEATURE_INSNS_RETIRED_FIXED },
+		/*
+		 * Note, the fixed counter for reference cycles is NOT the same
+		 * as the general purpose architectural event (because the GP
+		 * event is garbage).  The fixed counter explicitly counts at
+		 * the same frequency as the TSC, whereas the GP event counts
+		 * at a fixed, but uarch specific, frequency.  Bundle them here
+		 * for simplicity.
+		 */
+		[INTEL_ARCH_REFERENCE_CYCLES]      = { X86_PMU_FEATURE_REFERENCE_CYCLES, X86_PMU_FEATURE_REFERENCE_CYCLES_FIXED },
+		[INTEL_ARCH_LLC_REFERENCES]        = { X86_PMU_FEATURE_LLC_REFERENCES, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_LLC_MISSES]            = { X86_PMU_FEATURE_LLC_MISSES, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_BRANCHES_RETIRED]      = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_BRANCHES_MISPREDICTED] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED, X86_PMU_FEATURE_NULL },
 	};
 
 	uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
 	uint32_t pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
-	struct kvm_x86_pmu_feature gp_event;
+	struct kvm_x86_pmu_feature gp_event, fixed_event;
 	uint32_t counter_msr;
 	unsigned int i;
 
@@ -132,6 +153,23 @@ static void guest_measure_loop(uint8_t idx)
 		GUEST_ASSERT_EQ(this_pmu_has(gp_event), !!_rdpmc(i));
 	}
 
+	fixed_event = intel_event_to_feature[idx].fixed_event;
+	if (pmu_is_null_feature(fixed_event) || !this_pmu_has(fixed_event))
+		goto done;
+
+	i = fixed_event.f.bit;
+
+	wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
+	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));
+
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(PMC_IDX_FIXED + i));
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+	if (pmu_is_intel_event_stable(idx))
+		GUEST_ASSERT_NE(_rdpmc(PMC_FIXED_RDPMC_BASE | i), 0);
+
+done:
 	GUEST_DONE();
 }
 
-- 
2.42.0.758.gaed0368e0e-goog
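
A note on the MSR encoding used in the second hunk: IA32_FIXED_CTR_CTRL
carries a 4-bit control field per fixed counter, where bit 0 of the field
enables counting at CPL0, bit 1 enables counting at CPL > 0, and bit 3 arms
the overflow PMI.  Enabling OS-only counting for fixed counter i therefore
reduces to BIT_ULL(4 * i), as the test does.  A minimal illustration (the
helper name is hypothetical, not from the patch):

	/* Hypothetical helper mirroring the BIT_ULL(4 * i) usage above. */
	static inline uint64_t fixed_ctr_ctrl_os_only(unsigned int i)
	{
		/* Bit 0 of fixed counter i's 4-bit field = count at CPL0 only. */
		return 1ULL << (4 * i);
	}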