From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:21 -0700
Message-ID: <20231024002633.2540714-2-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 01/13] KVM: x86/pmu: Don't allow exposing unsupported architectural events

Hide architectural events that are unsupported according to guest CPUID
*or* hardware, i.e. don't let userspace advertise and potentially program
unsupported architectural events.  Note, KVM already limits the length of
the reverse polarity field, only the mask itself is missing.

Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 820d3e1f6b4f..1b13a472e3f2 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -533,7 +533,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
     pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
     eax.split.mask_length = min_t(int, eax.split.mask_length,
                       kvm_pmu_cap.events_mask_len);
-    pmu->available_event_types = ~entry->ebx &
+    pmu->available_event_types = ~(entry->ebx | kvm_pmu_cap.events_mask) &
         ((1ull << eax.split.mask_length) - 1);

     if (pmu->version == 1) {
--
2.42.0.758.gaed0368e0e-goog
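
To make the masking in the hunk above concrete, here is a small standalone C sketch (not KVM code): an architectural event is advertised only if it is clear in the guest's CPUID.0xA.EBX "unavailable" bitmap and clear in the host-derived events_mask, truncated to the enumerated mask length. The variable names mirror the patch; the concrete values are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the post-patch computation in intel_pmu_refresh(): combine the
 * guest's "unavailable" bitmap with the hardware mask before inverting.
 */
static uint32_t available_event_types(uint32_t guest_ebx, uint32_t hw_events_mask,
                                      uint8_t mask_length)
{
    return ~(guest_ebx | hw_events_mask) & ((1u << mask_length) - 1);
}

int main(void)
{
    /* Hypothetical example: guest CPUID hides event 4, hardware lacks event 2. */
    uint32_t avail = available_event_types(1u << 4, 1u << 2, 8);

    printf("available_event_types = 0x%x\n", avail); /* 0xeb */
    return 0;
}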
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:22 -0700
Message-ID: <20231024002633.2540714-3-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 02/13] KVM: x86/pmu: Don't enumerate support for fixed counters KVM can't virtualize

Hide fixed counters for which perf is incapable of creating the associated
architectural event.  Except for the so-called pseudo-architectural event
for counting TSC reference cycles, KVM virtualizes fixed counters by
creating a perf event for the associated general purpose architectural
event.  If the associated event isn't supported in hardware, KVM can't
actually virtualize the fixed counter because perf will likely not program
up the correct event.

Note, this issue is almost certainly limited to running KVM on a funky
virtual CPU model; no known real hardware has an asymmetric PMU where a
fixed counter is supported but the associated architectural event is not.

Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.h           |  4 ++++
 arch/x86/kvm/vmx/pmu_intel.c | 31 +++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 1d64113de488..5341e8f69a22 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -19,6 +19,7 @@
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME 0x10002

 struct kvm_pmu_ops {
+    void (*init_pmu_capability)(void);
     bool (*hw_event_available)(struct kvm_pmc *pmc);
     struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
     struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
@@ -218,6 +219,9 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
                          pmu_ops->MAX_NR_GP_COUNTERS);
     kvm_pmu_cap.num_counters_fixed = min(kvm_pmu_cap.num_counters_fixed,
                          KVM_PMC_MAX_FIXED);
+
+    if (pmu_ops->init_pmu_capability)
+        pmu_ops->init_pmu_capability();
 }

 static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1b13a472e3f2..3316fdea212a 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,6 +68,36 @@ static int fixed_pmc_events[] = {
     [2] = PSEUDO_ARCH_REFERENCE_CYCLES,
 };

+static void intel_init_pmu_capability(void)
+{
+    int i;
+
+    /*
+     * Perf may (sadly) back a guest fixed counter with a general purpose
+     * counter, and so KVM must hide fixed counters whose associated
+     * architectural event are unsupported.  On real hardware, this should
+     * never happen, but if KVM is running on a funky virtual CPU model...
+     *
+     * TODO: Drop this horror if/when KVM stops using perf events for
+     * guest fixed counters, or can explicitly request fixed counters.
+     */
+    for (i = 0; i < kvm_pmu_cap.num_counters_fixed; i++) {
+        int event = fixed_pmc_events[i];
+
+        /*
+         * Ignore pseudo-architectural events, they're a bizarre way of
+         * requesting events from perf that _can't_ be backed with a
+         * general purpose architectural event, i.e. they're guaranteed
+         * to be backed by the real fixed counter.
+         */
+        if (event < NR_REAL_INTEL_ARCH_EVENTS &&
+            (kvm_pmu_cap.events_mask & BIT(event)))
+            break;
+    }
+
+    kvm_pmu_cap.num_counters_fixed = i;
+}
+
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 {
     struct kvm_pmc *pmc;
@@ -789,6 +819,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 }

 struct kvm_pmu_ops intel_pmu_ops __initdata = {
+    .init_pmu_capability = intel_init_pmu_capability,
     .hw_event_available = intel_hw_event_available,
     .pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
     .rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
--
2.42.0.758.gaed0368e0e-goog
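
For a quick feel of the truncation loop above, the standalone sketch below (not KVM code) walks a fixed-counter-to-architectural-event table in order and stops at the first counter whose event is masked off in hardware. The enum values, table contents and events_mask are invented for illustration and only loosely mirror KVM's fixed_pmc_events[].

#include <stdint.h>
#include <stdio.h>

/* Hypothetical event indices; only the ordering logic matters here. */
enum { CPU_CYCLES, INSTRUCTIONS_RETIRED, REFERENCE_CYCLES, NR_REAL_EVENTS, PSEUDO_REF_CYCLES };

static const int fixed_pmc_events[] = { INSTRUCTIONS_RETIRED, CPU_CYCLES, PSEUDO_REF_CYCLES };

int main(void)
{
    uint32_t events_mask = 1u << CPU_CYCLES;    /* pretend hardware lacks cycles */
    int i;

    /* Stop at the first fixed counter whose real arch event is unsupported. */
    for (i = 0; i < 3; i++) {
        int event = fixed_pmc_events[i];

        if (event < NR_REAL_EVENTS && (events_mask & (1u << event)))
            break;
    }

    printf("enumerated fixed counters: %d\n", i);    /* 1, i.e. only counter 0 */
    return 0;
}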
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:23 -0700
Message-ID: <20231024002633.2540714-4-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 03/13] KVM: x86/pmu: Always treat Fixed counters as available when supported

Now that KVM hides fixed counters that can't be virtualized, treat fixed
counters as available when they are supported, i.e. don't silently ignore
an enabled fixed counter just because guest CPUID says the associated
general purpose architectural event is unavailable.

KVM originally treated fixed counters as always available, but that got
changed as part of a fix to avoid confusing REF_CPU_CYCLES, which does NOT
map to an architectural event, with the actual architectural event
associated with bit 7, TOPDOWN_SLOTS.  The commit justified the change
with:

    If the event is marked as unavailable in the Intel guest CPUID
    0AH.EBX leaf, we need to avoid any perf_event creation, whether
    it's a gp or fixed counter.

but that justification doesn't mesh with reality.  The Intel SDM uses
"architectural events" to refer to both general purpose events (the ones
with the reverse polarity mask in CPUID.0xA.EBX) and the events for fixed
counters, e.g. the SDM makes statements like:

    Each of the fixed-function PMC can count only one architectural
    performance event.

but the fact that fixed counter 2 (TSC reference cycles) doesn't have an
associated general purpose architectural event makes trying to apply the
mask from CPUID.0xA.EBX impossible.  Furthermore, the SDM never explicitly
says that an architectural event that's marked unavailable in EBX affects
the fixed counters.

Note, at the time of the change, KVM didn't enforce hardware support, i.e.
didn't prevent userspace from enumerating support in guest CPUID.0xA.EBX
for architectural events that aren't supported in hardware.  I.e. silently
dropping the fixed counter didn't somehow protect against counting the
wrong event, it just enforced guest CPUID.

Arguably, userspace is creating a bogus vCPU model by advertising a fixed
counter but saying the associated general purpose architectural event is
unavailable.  But regardless of the validity of the vCPU model, letting
the guest enable a fixed counter and then not actually having it count
anything is completely nonsensical.  I.e. even if all of the above is
wrong and it's illegal for a fixed counter to exist when the architectural
event is unavailable, silently doing nothing is still the wrong behavior
and KVM should instead disallow enabling the fixed counter in the first
place.

Fixes: a21864486f7e ("KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 3316fdea212a..1c0a17661781 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -138,11 +138,24 @@ static bool intel_hw_event_available(struct kvm_pmc *pmc)
     u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
     int i;

+    /*
+     * Fixed counters are always available if KVM reaches this point.  If a
+     * fixed counter is unsupported in hardware or guest CPUID, KVM doesn't
+     * allow the counter's corresponding MSR to be written.  KVM does use
+     * architectural events to program fixed counters, as the interface to
+     * perf doesn't allow requesting a specific fixed counter, e.g. perf
+     * may (sadly) back a guest fixed PMC with a general purposed counter.
+     * But if _hardware_ doesn't support the associated event, KVM simply
+     * doesn't enumerate support for the fixed counter.
+     */
+    if (pmc_is_fixed(pmc))
+        return true;
+
     BUILD_BUG_ON(ARRAY_SIZE(intel_arch_events) != NR_INTEL_ARCH_EVENTS);

     /*
      * Disallow events reported as unavailable in guest CPUID.  Note, this
-     * doesn't apply to pseudo-architectural events.
+     * doesn't apply to pseudo-architectural events (see above).
      */
     for (i = 0; i < NR_REAL_INTEL_ARCH_EVENTS; i++) {
         if (intel_arch_events[i].eventsel != event_select ||
--
2.42.0.758.gaed0368e0e-goog
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:24 -0700
Message-ID: <20231024002633.2540714-5-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 04/13] KVM: selftests: Add vcpu_set_cpuid_property() to set properties

From: Jinrong Liang

Add vcpu_set_cpuid_property() helper function for setting properties, and
use it instead of open coding an equivalent for MAX_PHY_ADDR.  Future vPMU
testcases will also need to stuff various CPUID properties.

Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../testing/selftests/kvm/include/x86_64/processor.h |  4 +++-
 tools/testing/selftests/kvm/lib/x86_64/processor.c   | 12 +++++++++---
 .../kvm/x86_64/smaller_maxphyaddr_emulation_test.c   |  2 +-
 3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 25bc61dac5fb..a01931f7d954 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -994,7 +994,9 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu)
     vcpu_ioctl(vcpu, KVM_GET_CPUID2, vcpu->cpuid);
 }

-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr);
+void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
+                 struct kvm_x86_cpu_property property,
+                 uint32_t value);

 void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function);
 void vcpu_set_or_clear_cpuid_feature(struct kvm_vcpu *vcpu,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d8288374078e..9e717bc6bd6d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -752,11 +752,17 @@ void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid)
     vcpu_set_cpuid(vcpu);
 }

-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr)
+void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
+                 struct kvm_x86_cpu_property property,
+                 uint32_t value)
 {
-    struct kvm_cpuid_entry2 *entry = vcpu_get_cpuid_entry(vcpu, 0x80000008);
+    struct kvm_cpuid_entry2 *entry;
+
+    entry = __vcpu_get_cpuid_entry(vcpu, property.function, property.index);
+
+    (&entry->eax)[property.reg] &= ~GENMASK(property.hi_bit, property.lo_bit);
+    (&entry->eax)[property.reg] |= value << (property.lo_bit);

-    entry->eax = (entry->eax & ~0xff) | maxphyaddr;
     vcpu_set_cpuid(vcpu);
 }

diff --git a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
index 06edf00a97d6..9b89440dff19 100644
--- a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
@@ -63,7 +63,7 @@ int main(int argc, char *argv[])
     vm_init_descriptor_tables(vm);
     vcpu_init_descriptor_tables(vcpu);

-    vcpu_set_cpuid_maxphyaddr(vcpu, MAXPHYADDR);
+    vcpu_set_cpuid_property(vcpu, X86_PROPERTY_MAX_PHY_ADDR, MAXPHYADDR);

     rc = kvm_check_cap(KVM_CAP_EXIT_ON_EMULATION_FAILURE);
     TEST_ASSERT(rc, "KVM_CAP_EXIT_ON_EMULATION_FAILURE is unavailable");
--
2.42.0.758.gaed0368e0e-goog
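
The helper's read-modify-write can be pictured with the standalone sketch below (not selftests code): treat EAX..EDX as an array of four registers, clear the property's bit range, then OR in the new value. The struct, function name and starting CPUID value are simplified stand-ins; only the EAX[31:24] placement of the PMU bit-vector length is taken from the series.

#include <stdint.h>
#include <stdio.h>

struct cpuid_regs { uint32_t eax, ebx, ecx, edx; };

/* Clear reg[hi:lo], then write the new value, like vcpu_set_cpuid_property(). */
static void set_property(struct cpuid_regs *e, int reg, int hi, int lo, uint32_t val)
{
    uint32_t mask = (~0u >> (31 - hi)) & ~((1u << lo) - 1);

    (&e->eax)[reg] &= ~mask;
    (&e->eax)[reg] |= val << lo;
}

int main(void)
{
    struct cpuid_regs leaf_0xa = { .eax = 0x07300403 };    /* made-up leaf 0xA value */

    set_property(&leaf_0xa, 0, 31, 24, 1);    /* EAX[31:24] = bit-vector length of 1 */
    printf("EAX = 0x%08x\n", leaf_0xa.eax);   /* 0x01300403 */
    return 0;
}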
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:25 -0700
Message-ID: <20231024002633.2540714-6-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 05/13] KVM: selftests: Drop the "name" param from KVM_X86_PMU_FEATURE()

Drop the "name" parameter from KVM_X86_PMU_FEATURE(), it's unused and the
name is redundant with the macro, i.e. it's truly useless.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index a01931f7d954..2d9771151dd9 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -289,7 +289,7 @@ struct kvm_x86_cpu_property {
 struct kvm_x86_pmu_feature {
     struct kvm_x86_cpu_feature anti_feature;
 };
-#define KVM_X86_PMU_FEATURE(name, __bit)                \
+#define KVM_X86_PMU_FEATURE(__bit)                      \
 ({                                                      \
     struct kvm_x86_pmu_feature feature = {              \
         .anti_feature = KVM_X86_CPU_FEATURE(0xa, 0, EBX, __bit),    \
@@ -298,7 +298,7 @@ struct kvm_x86_pmu_feature {
     feature;                                            \
 })

-#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED    KVM_X86_PMU_FEATURE(BRANCH_INSNS_RETIRED, 5)
+#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED    KVM_X86_PMU_FEATURE(5)

 static inline unsigned int x86_family(unsigned int eax)
 {
--
2.42.0.758.gaed0368e0e-goog
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:26 -0700
Message-ID: <20231024002633.2540714-7-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 06/13] KVM: selftests: Extend {kvm,this}_pmu_has() to support fixed counters

Extend the kvm_x86_pmu_feature framework to allow querying for fixed
counters via {kvm,this}_pmu_has().  Like architectural events, checking
for a fixed counter annoyingly requires checking multiple CPUID fields, as
a fixed counter exists if:

    FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i);

Note, KVM currently doesn't actually support exposing fixed counters via
the bitmask, but that will hopefully change sooner than later, and Intel's
SDM explicitly "recommends" checking both the number of counters and the
mask.

Rename the intermediate "anti_feature" field to simply 'f' since the fixed
counter bitmask (thankfully) doesn't have reversed polarity like the
architectural events bitmask.

Note, ideally the helpers would use BUILD_BUG_ON() to assert on the
incoming register, but the expected usage in PMU tests can't guarantee the
inputs are compile-time constants.

Opportunistically define macros for all of the architectural events and
fixed counters that KVM currently supports.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h | 63 +++++++++++++------
 1 file changed, 45 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 2d9771151dd9..b103c462701b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -281,24 +281,39 @@ struct kvm_x86_cpu_property {
  * that indicates the feature is _not_ supported, and a property that states
  * the length of the bit mask of unsupported features.  A feature is supported
  * if the size of the bit mask is larger than the "unavailable" bit, and said
- * bit is not set.
+ * bit is not set.  Fixed counters also bizarre enumeration, but inverted from
+ * arch events for general purpose counters.  Fixed counters are supported if a
+ * feature flag is set **OR** the total number of fixed counters is greater
+ * than index of the counter.
  *
- * Wrap the "unavailable" feature to simplify checking whether or not a given
- * architectural event is supported.
+ * Wrap the events for general purpose and fixed counters to simplify checking
+ * whether or not a given architectural event is supported.
  */
 struct kvm_x86_pmu_feature {
-    struct kvm_x86_cpu_feature anti_feature;
+    struct kvm_x86_cpu_feature f;
 };
-#define KVM_X86_PMU_FEATURE(__bit)                      \
-({                                                      \
-    struct kvm_x86_pmu_feature feature = {              \
-        .anti_feature = KVM_X86_CPU_FEATURE(0xa, 0, EBX, __bit),    \
-                                                        \
-    feature;                                            \
+#define KVM_X86_PMU_FEATURE(__reg, __bit)               \
+({                                                      \
+    struct kvm_x86_pmu_feature feature = {              \
+        .f = KVM_X86_CPU_FEATURE(0xa, 0, __reg, __bit), \
+                                                        \
+    kvm_static_assert(KVM_CPUID_##__reg == KVM_CPUID_EBX ||    \
+                      KVM_CPUID_##__reg == KVM_CPUID_ECX);     \
+    feature;                                            \
 })

-#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED    KVM_X86_PMU_FEATURE(5)
+#define X86_PMU_FEATURE_CPU_CYCLES              KVM_X86_PMU_FEATURE(EBX, 0)
+#define X86_PMU_FEATURE_INSNS_RETIRED           KVM_X86_PMU_FEATURE(EBX, 1)
+#define X86_PMU_FEATURE_REFERENCE_CYCLES        KVM_X86_PMU_FEATURE(EBX, 2)
+#define X86_PMU_FEATURE_LLC_REFERENCES          KVM_X86_PMU_FEATURE(EBX, 3)
+#define X86_PMU_FEATURE_LLC_MISSES              KVM_X86_PMU_FEATURE(EBX, 4)
+#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED    KVM_X86_PMU_FEATURE(EBX, 5)
+#define X86_PMU_FEATURE_BRANCHES_MISPREDICTED   KVM_X86_PMU_FEATURE(EBX, 6)
+
+#define X86_PMU_FEATURE_INSNS_RETIRED_FIXED     KVM_X86_PMU_FEATURE(ECX, 0)
+#define X86_PMU_FEATURE_CPU_CYCLES_FIXED        KVM_X86_PMU_FEATURE(ECX, 1)
+#define X86_PMU_FEATURE_REFERENCE_CYCLES_FIXED  KVM_X86_PMU_FEATURE(ECX, 2)

 static inline unsigned int x86_family(unsigned int eax)
 {
@@ -697,10 +712,16 @@ static __always_inline bool this_cpu_has_p(struct kvm_x86_cpu_property property)

 static inline bool this_pmu_has(struct kvm_x86_pmu_feature feature)
 {
-    uint32_t nr_bits = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+    uint32_t nr_bits;

-    return nr_bits > feature.anti_feature.bit &&
-           !this_cpu_has(feature.anti_feature);
+    if (feature.f.reg == KVM_CPUID_EBX) {
+        nr_bits = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+        return nr_bits > feature.f.bit && !this_cpu_has(feature.f);
+    }
+
+    GUEST_ASSERT(feature.f.reg == KVM_CPUID_ECX);
+    nr_bits = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+    return nr_bits > feature.f.bit || this_cpu_has(feature.f);
 }

 static __always_inline uint64_t this_cpu_supported_xcr0(void)
@@ -916,10 +937,16 @@ static __always_inline bool kvm_cpu_has_p(struct kvm_x86_cpu_property property)

 static inline bool kvm_pmu_has(struct kvm_x86_pmu_feature feature)
 {
-    uint32_t nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+    uint32_t nr_bits;

-    return nr_bits > feature.anti_feature.bit &&
-           !kvm_cpu_has(feature.anti_feature);
+    if (feature.f.reg == KVM_CPUID_EBX) {
+        nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+        return nr_bits > feature.f.bit && !kvm_cpu_has(feature.f);
+    }
+
+    TEST_ASSERT_EQ(feature.f.reg, KVM_CPUID_ECX);
+    nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+    return nr_bits > feature.f.bit || kvm_cpu_has(feature.f);
 }

 static __always_inline uint64_t kvm_cpu_supported_xcr0(void)
--
2.42.0.758.gaed0368e0e-goog
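
The enumeration rule quoted in the changelog, FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i), can be checked with the standalone sketch below (not selftests code); the CPUID register values are made up for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Fixed counter i exists if its ECX bit is set OR EDX[4:0] (count) > i. */
static bool fixed_counter_supported(uint32_t ecx, uint32_t edx, unsigned int i)
{
    return (ecx & (1u << i)) || ((edx & 0x1f) > i);
}

int main(void)
{
    uint32_t ecx = 0x0;    /* no counters enumerated via the bitmask... */
    uint32_t edx = 0x3;    /* ...but EDX[4:0] says there are three */

    for (unsigned int i = 0; i < 4; i++)
        printf("fixed counter %u: %s\n", i,
               fixed_counter_supported(ecx, edx, i) ? "supported" : "not supported");
    return 0;
}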
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:27 -0700
Message-ID: <20231024002633.2540714-8-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 07/13] KVM: selftests: Add pmu.h and lib/pmu.c for common PMU assets

From: Jinrong Liang

By defining the PMU performance events and masks relevant for x86 in the
new pmu.h and pmu.c, it becomes easier to reference them, minimizing
potential errors in code that handles these values.

Clean up pmu_event_filter_test.c by including pmu.h and removing
unnecessary macros.

Suggested-by: Sean Christopherson
Signed-off-by: Jinrong Liang
[sean: drop PSEUDO_ARCH_REFERENCE_CYCLES]
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/Makefile          |  1 +
 tools/testing/selftests/kvm/include/pmu.h     | 84 +++++++++++++++++++
 tools/testing/selftests/kvm/lib/pmu.c         | 28 +++++++
 .../kvm/x86_64/pmu_event_filter_test.c        | 32 ++-----
 4 files changed, 122 insertions(+), 23 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/pmu.h
 create mode 100644 tools/testing/selftests/kvm/lib/pmu.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index fb01c3f8d3da..ed1c17cabc07 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -23,6 +23,7 @@ LIBKVM += lib/guest_modes.c
 LIBKVM += lib/io.c
 LIBKVM += lib/kvm_util.c
 LIBKVM += lib/memstress.c
+LIBKVM += lib/pmu.c
 LIBKVM += lib/guest_sprintf.c
 LIBKVM += lib/rbtree.c
 LIBKVM += lib/sparsebit.c
diff --git a/tools/testing/selftests/kvm/include/pmu.h b/tools/testing/selftests/kvm/include/pmu.h
new file mode 100644
index 000000000000..987602c62b51
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/pmu.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2023, Tencent, Inc.
+ */
+#ifndef SELFTEST_KVM_PMU_H
+#define SELFTEST_KVM_PMU_H
+
+#include
+
+#define X86_PMC_IDX_MAX                         64
+#define INTEL_PMC_MAX_GENERIC                   32
+#define KVM_PMU_EVENT_FILTER_MAX_EVENTS         300
+
+#define GP_COUNTER_NR_OFS_BIT                   8
+#define EVENT_LENGTH_OFS_BIT                    24
+
+#define PMU_VERSION_MASK                        GENMASK_ULL(7, 0)
+#define EVENT_LENGTH_MASK                       GENMASK_ULL(31, EVENT_LENGTH_OFS_BIT)
+#define GP_COUNTER_NR_MASK                      GENMASK_ULL(15, GP_COUNTER_NR_OFS_BIT)
+#define FIXED_COUNTER_NR_MASK                   GENMASK_ULL(4, 0)
+
+#define ARCH_PERFMON_EVENTSEL_EVENT             GENMASK_ULL(7, 0)
+#define ARCH_PERFMON_EVENTSEL_UMASK             GENMASK_ULL(15, 8)
+#define ARCH_PERFMON_EVENTSEL_USR               BIT_ULL(16)
+#define ARCH_PERFMON_EVENTSEL_OS                BIT_ULL(17)
+#define ARCH_PERFMON_EVENTSEL_EDGE              BIT_ULL(18)
+#define ARCH_PERFMON_EVENTSEL_PIN_CONTROL       BIT_ULL(19)
+#define ARCH_PERFMON_EVENTSEL_INT               BIT_ULL(20)
+#define ARCH_PERFMON_EVENTSEL_ANY               BIT_ULL(21)
+#define ARCH_PERFMON_EVENTSEL_ENABLE            BIT_ULL(22)
+#define ARCH_PERFMON_EVENTSEL_INV               BIT_ULL(23)
+#define ARCH_PERFMON_EVENTSEL_CMASK             GENMASK_ULL(31, 24)
+
+#define PMC_MAX_FIXED                           16
+#define PMC_IDX_FIXED                           32
+
+/* RDPMC offset for Fixed PMCs */
+#define PMC_FIXED_RDPMC_BASE                    BIT_ULL(30)
+#define PMC_FIXED_RDPMC_METRICS                 BIT_ULL(29)
+
+#define FIXED_BITS_MASK                         0xFULL
+#define FIXED_BITS_STRIDE                       4
+#define FIXED_0_KERNEL                          BIT_ULL(0)
+#define FIXED_0_USER                            BIT_ULL(1)
+#define FIXED_0_ANYTHREAD                       BIT_ULL(2)
+#define FIXED_0_ENABLE_PMI                      BIT_ULL(3)
+
+#define fixed_bits_by_idx(_idx, _bits)          \
+    ((_bits) << ((_idx) * FIXED_BITS_STRIDE))
+
+#define AMD64_NR_COUNTERS                       4
+#define AMD64_NR_COUNTERS_CORE                  6
+
+#define PMU_CAP_FW_WRITES                       BIT_ULL(13)
+#define PMU_CAP_LBR_FMT                         0x3f
+
+enum intel_pmu_architectural_events {
+    /*
+     * The order of the architectural events matters as support for each
+     * event is enumerated via CPUID using the index of the event.
+     */
+    INTEL_ARCH_CPU_CYCLES,
+    INTEL_ARCH_INSTRUCTIONS_RETIRED,
+    INTEL_ARCH_REFERENCE_CYCLES,
+    INTEL_ARCH_LLC_REFERENCES,
+    INTEL_ARCH_LLC_MISSES,
+    INTEL_ARCH_BRANCHES_RETIRED,
+    INTEL_ARCH_BRANCHES_MISPREDICTED,
+    NR_INTEL_ARCH_EVENTS,
+};
+
+enum amd_pmu_k7_events {
+    AMD_ZEN_CORE_CYCLES,
+    AMD_ZEN_INSTRUCTIONS,
+    AMD_ZEN_BRANCHES,
+    AMD_ZEN_BRANCH_MISSES,
+    NR_AMD_ARCH_EVENTS,
+};
+
+extern const uint64_t intel_pmu_arch_events[];
+extern const uint64_t amd_pmu_arch_events[];
+extern const int intel_pmu_fixed_pmc_events[];
+
+#endif /* SELFTEST_KVM_PMU_H */
diff --git a/tools/testing/selftests/kvm/lib/pmu.c b/tools/testing/selftests/kvm/lib/pmu.c
new file mode 100644
index 000000000000..27a6c35f98a1
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/pmu.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023, Tencent, Inc.
+ */
+
+#include
+
+#include "pmu.h"
+
+/* Definitions for Architectural Performance Events */
+#define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8)
+
+const uint64_t intel_pmu_arch_events[] = {
+    [INTEL_ARCH_CPU_CYCLES]             = ARCH_EVENT(0x3c, 0x0),
+    [INTEL_ARCH_INSTRUCTIONS_RETIRED]   = ARCH_EVENT(0xc0, 0x0),
+    [INTEL_ARCH_REFERENCE_CYCLES]       = ARCH_EVENT(0x3c, 0x1),
+    [INTEL_ARCH_LLC_REFERENCES]         = ARCH_EVENT(0x2e, 0x4f),
+    [INTEL_ARCH_LLC_MISSES]             = ARCH_EVENT(0x2e, 0x41),
+    [INTEL_ARCH_BRANCHES_RETIRED]       = ARCH_EVENT(0xc4, 0x0),
+    [INTEL_ARCH_BRANCHES_MISPREDICTED]  = ARCH_EVENT(0xc5, 0x0),
+};
+
+const uint64_t amd_pmu_arch_events[] = {
+    [AMD_ZEN_CORE_CYCLES]               = ARCH_EVENT(0x76, 0x00),
+    [AMD_ZEN_INSTRUCTIONS]              = ARCH_EVENT(0xc0, 0x00),
+    [AMD_ZEN_BRANCHES]                  = ARCH_EVENT(0xc2, 0x00),
+    [AMD_ZEN_BRANCH_MISSES]             = ARCH_EVENT(0xc3, 0x00),
+};
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 283cc55597a4..b6e4f57a8651 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -11,31 +11,18 @@
  */

 #define _GNU_SOURCE /* for program_invocation_short_name */
-#include "test_util.h"
+
 #include "kvm_util.h"
+#include "pmu.h"
 #include "processor.h"
-
-/*
- * In lieu of copying perf_event.h into tools...
- */
-#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
-#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
-
-/* End of stuff taken from perf_event.h. */
-
-/* Oddly, this isn't in perf_event.h. */
-#define ARCH_PERFMON_BRANCHES_RETIRED 5
+#include "test_util.h"

 #define NUM_BRANCHES 42
-#define INTEL_PMC_IDX_FIXED 32
-
-/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
-#define MAX_FILTER_EVENTS 300
 #define MAX_TEST_EVENTS 10

 #define PMU_EVENT_FILTER_INVALID_ACTION     (KVM_PMU_EVENT_DENY + 1)
 #define PMU_EVENT_FILTER_INVALID_FLAGS      (KVM_PMU_EVENT_FLAGS_VALID_MASK << 1)
-#define PMU_EVENT_FILTER_INVALID_NEVENTS    (MAX_FILTER_EVENTS + 1)
+#define PMU_EVENT_FILTER_INVALID_NEVENTS    (KVM_PMU_EVENT_FILTER_MAX_EVENTS + 1)

 /*
  * This is how the event selector and unit mask are stored in an AMD
@@ -63,7 +50,6 @@

 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)

-
 /*
  * "Retired instructions", from Processor Programming Reference
  * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
@@ -84,7 +70,7 @@ struct __kvm_pmu_event_filter {
     __u32 fixed_counter_bitmap;
     __u32 flags;
     __u32 pad[4];
-    __u64 events[MAX_FILTER_EVENTS];
+    __u64 events[KVM_PMU_EVENT_FILTER_MAX_EVENTS];
 };

 /*
@@ -729,14 +715,14 @@ static void add_dummy_events(uint64_t *events, int nevents)

 static void test_masked_events(struct kvm_vcpu *vcpu)
 {
-    int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
-    uint64_t events[MAX_FILTER_EVENTS];
+    int nevents = KVM_PMU_EVENT_FILTER_MAX_EVENTS - MAX_TEST_EVENTS;
+    uint64_t events[KVM_PMU_EVENT_FILTER_MAX_EVENTS];

     /* Run the test cases against a sparse PMU event filter. */
     run_masked_events_tests(vcpu, events, 0);

     /* Run the test cases against a dense PMU event filter. */
-    add_dummy_events(events, MAX_FILTER_EVENTS);
+    add_dummy_events(events, KVM_PMU_EVENT_FILTER_MAX_EVENTS);
     run_masked_events_tests(vcpu, events, nevents);
 }

@@ -818,7 +804,7 @@ static void intel_run_fixed_counter_guest_code(uint8_t fixed_ctr_idx)
     /* Only OS_EN bit is enabled for fixed counter[idx]. */
     wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
     wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
-          BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
+          BIT_ULL(PMC_IDX_FIXED + fixed_ctr_idx));
     __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
     wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);

--
2.42.0.758.gaed0368e0e-goog
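
As a quick illustration of how the ARCH_EVENT() encodings in lib/pmu.c are meant to be consumed, the standalone sketch below (not selftests code) builds an IA32_PERFEVTSELx value for the branches-retired event by OR'ing the encoding with the control bits. The bit positions mirror the pmu.h defines added above; the local constant names and the printed output are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Same packing as pmu.h: low byte = event select, next byte = unit mask. */
#define ARCH_EVENT(select, umask)   (((select) & 0xff) | ((umask) & 0xff) << 8)

#define EVENTSEL_USR                (1ull << 16)
#define EVENTSEL_OS                 (1ull << 17)
#define EVENTSEL_ENABLE             (1ull << 22)

int main(void)
{
    /* Branch instructions retired: event select 0xc4, unit mask 0x0. */
    uint64_t eventsel = ARCH_EVENT(0xc4, 0x0) | EVENTSEL_OS | EVENTSEL_USR |
                        EVENTSEL_ENABLE;

    printf("IA32_PERFEVTSEL value: 0x%llx\n", (unsigned long long)eventsel);
    return 0;
}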
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang, Like Xu
Date: Mon, 23 Oct 2023 17:26:28 -0700
Message-ID: <20231024002633.2540714-9-seanjc@google.com>
In-Reply-To: <20231024002633.2540714-1-seanjc@google.com>
Subject: [PATCH v5 08/13] KVM: selftests: Test Intel PMU architectural events on gp counters

From: Jinrong Liang

Add test cases to check if different architectural events are available
after they're marked as unavailable via CPUID.  It covers vPMU event
filtering logic based on Intel CPUID, which is a complement to
pmu_event_filter.

According to the Intel SDM, the number of architectural events is reported
through CPUID.0AH:EAX[31:24] and the architectural event x is supported if
EBX[x]=0 && EAX[31:24]>x.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/pmu_counters_test.c  | 189 ++++++++++++++++++
 2 files changed, 190 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_counters_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ed1c17cabc07..4c024fb845b4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -82,6 +82,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_counters_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
new file mode 100644
index 000000000000..2a6336b994d5
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -0,0 +1,189 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023, Tencent, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+
+#include "pmu.h"
+#include "processor.h"
+
+/* Guest payload for any performance counter counting */
+#define NUM_BRANCHES 10
+
+static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+                                                  void *guest_code)
+{
+    struct kvm_vm *vm;
+
+    vm = vm_create_with_one_vcpu(vcpu, guest_code);
+    vm_init_descriptor_tables(vm);
+    vcpu_init_descriptor_tables(*vcpu);
+
+    return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu)
+{
+    struct ucall uc;
+
+    do {
+        vcpu_run(vcpu);
+        switch (get_ucall(vcpu, &uc)) {
+        case UCALL_SYNC:
+            break;
+        case UCALL_ABORT:
+            REPORT_GUEST_ASSERT(uc);
+            break;
+        case UCALL_DONE:
+            break;
+        default:
+            TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+        }
+    } while (uc.cmd != UCALL_DONE);
+}
+
+static bool pmu_is_intel_event_stable(uint8_t idx)
+{
+    switch (idx) {
+    case INTEL_ARCH_CPU_CYCLES:
+    case INTEL_ARCH_INSTRUCTIONS_RETIRED:
+    case INTEL_ARCH_REFERENCE_CYCLES:
+    case INTEL_ARCH_BRANCHES_RETIRED:
+        return true;
+    default:
+        return false;
+    }
+}
+
+static void guest_measure_pmu_v1(struct kvm_x86_pmu_feature event,
+                                 uint32_t counter_msr, uint32_t nr_gp_counters)
+{
+    uint8_t idx = event.f.bit;
+    unsigned int i;
+
+    for (i = 0; i < nr_gp_counters; i++) {
+        wrmsr(counter_msr + i, 0);
+        wrmsr(MSR_P6_EVNTSEL0 + i, ARCH_PERFMON_EVENTSEL_OS |
+              ARCH_PERFMON_EVENTSEL_ENABLE | intel_pmu_arch_events[idx]);
+        __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+        if (pmu_is_intel_event_stable(idx))
+            GUEST_ASSERT_EQ(this_pmu_has(event), !!_rdpmc(i));
+
+        wrmsr(MSR_P6_EVNTSEL0 + i, ARCH_PERFMON_EVENTSEL_OS |
+              !ARCH_PERFMON_EVENTSEL_ENABLE |
+              intel_pmu_arch_events[idx]);
+        wrmsr(counter_msr + i, 0);
+        __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+        if (pmu_is_intel_event_stable(idx))
+            GUEST_ASSERT(!_rdpmc(i));
+    }
+
+    GUEST_DONE();
+}
+
+static void guest_measure_loop(uint8_t idx)
+{
+    const struct {
+        struct kvm_x86_pmu_feature gp_event;
+    } intel_event_to_feature[] = {
+        [INTEL_ARCH_CPU_CYCLES]            = { X86_PMU_FEATURE_CPU_CYCLES },
+        [INTEL_ARCH_INSTRUCTIONS_RETIRED]  = { X86_PMU_FEATURE_INSNS_RETIRED },
+        [INTEL_ARCH_REFERENCE_CYCLES]      = { X86_PMU_FEATURE_REFERENCE_CYCLES },
+        [INTEL_ARCH_LLC_REFERENCES]        = { X86_PMU_FEATURE_LLC_REFERENCES },
+        [INTEL_ARCH_LLC_MISSES]            = { X86_PMU_FEATURE_LLC_MISSES },
+        [INTEL_ARCH_BRANCHES_RETIRED]      = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED },
+        [INTEL_ARCH_BRANCHES_MISPREDICTED] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED },
+    };
+
+    uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+    uint32_t pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
+    struct kvm_x86_pmu_feature gp_event;
+    uint32_t counter_msr;
+    unsigned int i;
+
+    if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+        counter_msr = MSR_IA32_PMC0;
+    else
+        counter_msr = MSR_IA32_PERFCTR0;
+
+    gp_event = intel_event_to_feature[idx].gp_event;
+    TEST_ASSERT_EQ(idx, gp_event.f.bit);
+
+    if (pmu_version < 2) {
+        guest_measure_pmu_v1(gp_event, counter_msr, nr_gp_counters);
+        return;
+    }
+
+    for (i = 0; i < nr_gp_counters; i++) {
+        wrmsr(counter_msr + i, 0);
+        wrmsr(MSR_P6_EVNTSEL0 + i, ARCH_PERFMON_EVENTSEL_OS |
+              ARCH_PERFMON_EVENTSEL_ENABLE |
+              intel_pmu_arch_events[idx]);
+
+        wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(i));
+        __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+        wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        if (pmu_is_intel_event_stable(idx))
+            GUEST_ASSERT_EQ(this_pmu_has(gp_event), !!_rdpmc(i));
+    }
+
+    GUEST_DONE();
+}
+
+static void test_arch_events_cpuid(uint8_t i, uint8_t j, uint8_t idx)
+{
+    uint8_t arch_events_unavailable_mask = BIT_ULL(j);
+    uint8_t arch_events_bitmap_size = BIT_ULL(i);
+    struct kvm_vcpu *vcpu;
+    struct kvm_vm *vm;
+
+    vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_measure_loop);
+
+    vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH,
+                            arch_events_bitmap_size);
+    vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EVENTS_MASK,
+                            arch_events_unavailable_mask);
+
+    vcpu_args_set(vcpu, 1, idx);
+
+    run_vcpu(vcpu);
+
+    kvm_vm_free(vm);
+}
+
+static void test_intel_arch_events(void)
+{
+    uint8_t idx, i, j;
+
+    for (idx = 0; idx < NR_INTEL_ARCH_EVENTS; idx++) {
+        /*
+         * A brute force iteration of all combinations of values is
+         * likely to exhaust the limit of the single-threaded thread
+         * fd nums, so it's test by iterating through all valid
+         * single-bit values.
+         */
+        for (i = 0; i < NR_INTEL_ARCH_EVENTS; i++) {
+            for (j = 0; j < NR_INTEL_ARCH_EVENTS; j++)
+                test_arch_events_cpuid(i, j, idx);
+        }
+    }
+}
+
+int main(int argc, char *argv[])
+{
+    TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+
+    TEST_REQUIRE(host_cpu_is_intel);
+    TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
+    TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0);
+    TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
+
+    test_intel_arch_events();
+
+    return 0;
+}
--
2.42.0.758.gaed0368e0e-goog
+ */ + for (i =3D 0; i < NR_INTEL_ARCH_EVENTS; i++) { + for (j =3D 0; j < NR_INTEL_ARCH_EVENTS; j++) + test_arch_events_cpuid(i, j, idx); + } + } +} + +int main(int argc, char *argv[]) +{ + TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); + + TEST_REQUIRE(host_cpu_is_intel); + TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION)); + TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0); + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); + + test_intel_arch_events(); + + return 0; +} --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 09:12:21 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51196C25B67 for ; Tue, 24 Oct 2023 00:27:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231934AbjJXA1T (ORCPT ); Mon, 23 Oct 2023 20:27:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231736AbjJXA1F (ORCPT ); Mon, 23 Oct 2023 20:27:05 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DFFC71709 for ; Mon, 23 Oct 2023 17:26:53 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id d9443c01a7336-1c9e0b9b96cso28687515ad.2 for ; Mon, 23 Oct 2023 17:26:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698107213; x=1698712013; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=9PRm2qPI6Hb9eDZmaLCMIjwG1XEncicGFs+ftKdyMw0=; b=GeXm48usbyMoKFGdJEwQhQDL9/O9HJeRxfbvPhBBb1X52Twog2iuZNPGRwm7iSyWdM sO0ZTWxrEGzzaAGMEEChkCfv8CDlpo82tpYElzaooxFKkFy26Y72SQHlvFLnj1WtxraZ 6taImWgIfq8Q8pqOpHD1n73cqwikfXNzSUe1JKbkN2MV25TVCwBHUWlrl9v+ysJnWuy+ YYJJY3Q5AAAGPuQDVqWyqGj1apgDtP3gKItyKway2+y2mcduhcSL4AMK+jio3ve2NZ+E PxuND/AqDrvQ+bjLYzTEGOad+1UJ89KBBhOws3ACJJ0DaBIbO49g9h81GQDCBDusyf/d RZwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698107213; x=1698712013; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=9PRm2qPI6Hb9eDZmaLCMIjwG1XEncicGFs+ftKdyMw0=; b=WT77Qgle/RnrqWUzjPKLDCGIRFO5m32JLoafdIWfNptH4RyDmC+u/5RHg9cktX++x1 fgJmM89nw81q10vYRPfDxsl4PRedZpjX2CEUwG+NT1MBajT54GiepNQCwUiIcLo6iesx BQ/e5lzlDMiFSABYV2AlfOZSUUJ+khIj6i5/6hp1DALpknYHL1qMV8Xrv2vgSs8Y9Kkm fllFUox6w6wEDqO1W7sjb73ZkNaAN/bB/x6boW5x/zTWiOygm8Ye/u4CBqWeCUXGYzLv B8yIbKnOLllvqap93GWjTOggroC0WPMnKq0oqNkX5C+YXivB+8beOrJOq5lmIRO1weEf hzWg== X-Gm-Message-State: AOJu0Yz2AWjN1V8RINQEF19KDlpxV+yuzhmBLMGzr8VSjj7QgKsZBKnR 2G5cTyDTTKjRLLqTj1+9VnpvEeq/6js= X-Google-Smtp-Source: AGHT+IGr/Ltfu/OlTtvWVK+MrFF2S+e43S2CPrXujZtJlpcEbSKWXqMzh5K491VXBI+wesKAWcCTfWmzOsE= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:8c:b0:1ca:8629:82a3 with SMTP id o12-20020a170903008c00b001ca862982a3mr170488pld.6.1698107213086; Mon, 23 Oct 2023 17:26:53 -0700 (PDT) Reply-To: Sean Christopherson Date: Mon, 23 Oct 2023 17:26:29 -0700 In-Reply-To: <20231024002633.2540714-1-seanjc@google.com> Mime-Version: 1.0 References: <20231024002633.2540714-1-seanjc@google.com> X-Mailer: 
git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024002633.2540714-10-seanjc@google.com> Subject: [PATCH v5 09/13] KVM: selftests: Test Intel PMU architectural events on fixed counters From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang , Like Xu Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jinrong Liang Update test to cover Intel PMU architectural events on fixed counters. Per Intel SDM, PMU users can also count architecture performance events on fixed counters (specifically, FIXED_CTR0 for the retired instructions and FIXED_CTR1 for cpu core cycles event). Therefore, if guest's CPUID indicates that an architecture event is not available, the corresponding fixed counter will also not count that event. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/pmu_counters_test.c | 54 ++++++++++++++++--- 1 file changed, 46 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools= /testing/selftests/kvm/x86_64/pmu_counters_test.c index 2a6336b994d5..410d09f788ef 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c @@ -85,23 +85,44 @@ static void guest_measure_pmu_v1(struct kvm_x86_pmu_fea= ture event, GUEST_DONE(); } =20 +#define X86_PMU_FEATURE_NULL \ +({ \ + struct kvm_x86_pmu_feature feature =3D {}; \ + \ + feature; \ +}) + +static bool pmu_is_null_feature(struct kvm_x86_pmu_feature event) +{ + return !(*(u64 *)&event); +} + static void guest_measure_loop(uint8_t idx) { const struct { struct kvm_x86_pmu_feature gp_event; + struct kvm_x86_pmu_feature fixed_event; } intel_event_to_feature[] =3D { - [INTEL_ARCH_CPU_CYCLES] =3D { X86_PMU_FEATURE_CPU_CYCLES }, - [INTEL_ARCH_INSTRUCTIONS_RETIRED] =3D { X86_PMU_FEATURE_INSNS_RETIRED }, - [INTEL_ARCH_REFERENCE_CYCLES] =3D { X86_PMU_FEATURE_REFERENCE_CYCLES = }, - [INTEL_ARCH_LLC_REFERENCES] =3D { X86_PMU_FEATURE_LLC_REFERENCES }, - [INTEL_ARCH_LLC_MISSES] =3D { X86_PMU_FEATURE_LLC_MISSES }, - [INTEL_ARCH_BRANCHES_RETIRED] =3D { X86_PMU_FEATURE_BRANCH_INSNS_RETI= RED }, - [INTEL_ARCH_BRANCHES_MISPREDICTED] =3D { X86_PMU_FEATURE_BRANCHES_MISPRE= DICTED }, + [INTEL_ARCH_CPU_CYCLES] =3D { X86_PMU_FEATURE_CPU_CYCLES, X86_PMU_FE= ATURE_CPU_CYCLES_FIXED }, + [INTEL_ARCH_INSTRUCTIONS_RETIRED] =3D { X86_PMU_FEATURE_INSNS_RETIRED, = X86_PMU_FEATURE_INSNS_RETIRED_FIXED }, + /* + * Note, the fixed counter for reference cycles is NOT the same + * as the general purpose architectural event (because the GP + * event is garbage). The fixed counter explicitly counts at + * the same frequency as the TSC, whereas the GP event counts + * at a fixed, but uarch specific, frequency. Bundle them here + * for simplicity. 
+ */ + [INTEL_ARCH_REFERENCE_CYCLES] =3D { X86_PMU_FEATURE_REFERENCE_CYCLES,= X86_PMU_FEATURE_REFERENCE_CYCLES_FIXED }, + [INTEL_ARCH_LLC_REFERENCES] =3D { X86_PMU_FEATURE_LLC_REFERENCES, X86= _PMU_FEATURE_NULL }, + [INTEL_ARCH_LLC_MISSES] =3D { X86_PMU_FEATURE_LLC_MISSES, X86_PMU_FE= ATURE_NULL }, + [INTEL_ARCH_BRANCHES_RETIRED] =3D { X86_PMU_FEATURE_BRANCH_INSNS_RETI= RED, X86_PMU_FEATURE_NULL }, + [INTEL_ARCH_BRANCHES_MISPREDICTED] =3D { X86_PMU_FEATURE_BRANCHES_MISPRE= DICTED, X86_PMU_FEATURE_NULL }, }; =20 uint32_t nr_gp_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUN= TERS); uint32_t pmu_version =3D this_cpu_property(X86_PROPERTY_PMU_VERSION); - struct kvm_x86_pmu_feature gp_event; + struct kvm_x86_pmu_feature gp_event, fixed_event; uint32_t counter_msr; unsigned int i; =20 @@ -132,6 +153,23 @@ static void guest_measure_loop(uint8_t idx) GUEST_ASSERT_EQ(this_pmu_has(gp_event), !!_rdpmc(i)); } =20 + fixed_event =3D intel_event_to_feature[idx].fixed_event; + if (pmu_is_null_feature(fixed_event) || !this_pmu_has(fixed_event)) + goto done; + + i =3D fixed_event.f.bit; + + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(PMC_IDX_FIXED + i)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + + if (pmu_is_intel_event_stable(idx)) + GUEST_ASSERT_NE(_rdpmc(PMC_FIXED_RDPMC_BASE | i), 0); + +done: GUEST_DONE(); } =20 --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 09:12:21 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84FCDC00A8F for ; Tue, 24 Oct 2023 00:27:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231772AbjJXA1g (ORCPT ); Mon, 23 Oct 2023 20:27:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52676 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232058AbjJXA1I (ORCPT ); Mon, 23 Oct 2023 20:27:08 -0400 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F968172D for ; Mon, 23 Oct 2023 17:26:57 -0700 (PDT) Received: by mail-pg1-x549.google.com with SMTP id 41be03b00d2f7-5b7f3f47547so2400575a12.3 for ; Mon, 23 Oct 2023 17:26:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698107215; x=1698712015; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=iY/ybdzlTO573Vp4ngFbRoGUgka/mobzIHiA6krmWYs=; b=aEKVFk3N/H5E+o4007jXxqL/1Cp5rVStABXX7rWvMw1ngAaIJpJYDUZ5esW7Obisdw GoOAKZ1Yb/o/rOY4apwP2opyU51Z74G03IQ1cnaXmgqsj3PFS38DW4Oq8T9KZbgj8UHh ZZlwR9bUFwkjhBQx+yMNpAfsop7ZPyZsyAKedLF1Bc+YCRGokjQP5gIB5ERM2jBbD19K bdvzrNCsLucA06MhGyRjMZtwngNGu3Jj9AtSf7iC5l6ncHHJzqQfiLB3hbdSSVAkgVvP wv0AquTx+tnvDfgBCsEn3uxmrGEeF3W27hZSlBN0vBgMJUwngKs9qMkXxOhwsr/9SvRA dGlg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698107215; x=1698712015; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=iY/ybdzlTO573Vp4ngFbRoGUgka/mobzIHiA6krmWYs=; 
b=cbao5lhqJaoavZCAzt8e3QVFzMlQxpyq+RkCwG3q0KuGuJa+Sn70H0BPLudLoinjZ0 pDCULThHA5aN4eBLXE5B9Oh7Un1I/kamJV6yOjsrwn5ydQ/VxQK71RR8wAEYnRI9WQae TDCcWdzm3/QnXzkHDZhFyf5mCA9Pfi8yoYUdbjSTbqcW+fBj2Hx7pQ9z4dhPs5VOOCU+ bDav45PO6SvbLv07tXPknTjCXbxsJUND4xxAkuCd2hmYWxHaOnymwvy83d9chonc8e2F qKPvSyZaEpLvcRrRV2fWFUGCDpdXU0Vtr4/z3MlJVR64F5m/0TnvNIYJOaZuZJgrbYBl 9opw== X-Gm-Message-State: AOJu0YyRnfiz+9sk0o5Oxm5ZfrX4qla2SthHJqVF4NJepqe55LdtAyRx GDEUuzEGBDqqNZxWKv43W3fZy27jFVE= X-Google-Smtp-Source: AGHT+IFMCR8B4hXbRfR7AOaKKIgWLCz070mAba3562Hx7aepoJ96sEmIvcJlV+rnz1jQdIAqRyS+XuWppsg= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:ee53:b0:1ca:8c48:736e with SMTP id 19-20020a170902ee5300b001ca8c48736emr174043plo.9.1698107215022; Mon, 23 Oct 2023 17:26:55 -0700 (PDT) Reply-To: Sean Christopherson Date: Mon, 23 Oct 2023 17:26:30 -0700 In-Reply-To: <20231024002633.2540714-1-seanjc@google.com> Mime-Version: 1.0 References: <20231024002633.2540714-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024002633.2540714-11-seanjc@google.com> Subject: [PATCH v5 10/13] KVM: selftests: Test consistency of CPUID with num of gp counters From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang , Like Xu Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jinrong Liang Add a test to verify that KVM correctly emulates MSR-based accesses to general purpose counters based on guest CPUID, e.g. that accesses to non-existent counters #GP and accesses to existent counters succeed. Note, for compatibility reasons, KVM does not emulate #GP when MSR_P6_PERFCTR[0|1] is not present (writes should be dropped). Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/pmu_counters_test.c | 98 +++++++++++++++++++ 1 file changed, 98 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools= /testing/selftests/kvm/x86_64/pmu_counters_test.c index 410d09f788ef..274b7f4d4b53 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c @@ -212,6 +212,103 @@ static void test_intel_arch_events(void) } } =20 +/* + * Limit testing to MSRs that are actually defined by Intel (in the SDM). = MSRs + * that aren't defined counter MSRs *probably* don't exist, but there's no + * guarantee that currently undefined MSR indices won't be used for someth= ing + * other than PMCs in the future. + */ +#define MAX_NR_GP_COUNTERS 8 +#define MAX_NR_FIXED_COUNTERS 3 + +#define GUEST_ASSERT_PMC_MSR_ACCESS(insn, msr, expect_gp, vector) \ +__GUEST_ASSERT(expect_gp ? vector =3D=3D GP_VECTOR : !vector, \ + "Expected %s on " #insn "(0x%x), got vector %u", \ + expect_gp ? "#GP" : "no fault", msr, vector) \ + +static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_co= unters, + uint8_t nr_counters) +{ + uint8_t i; + + for (i =3D 0; i < nr_possible_counters; i++) { + const uint32_t msr =3D base_msr + i; + const bool expect_success =3D i < nr_counters; + + /* + * KVM drops writes to MSR_P6_PERFCTR[0|1] if the counters are + * unsupported, i.e. doesn't #GP and reads back '0'. + */ + const uint64_t expected_val =3D expect_success ? 
0xffff : 0; + const bool expect_gp =3D !expect_success && msr !=3D MSR_P6_PERFCTR0 && + msr !=3D MSR_P6_PERFCTR1; + uint8_t vector; + uint64_t val; + + vector =3D wrmsr_safe(msr, 0xffff); + GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector); + + vector =3D rdmsr_safe(msr, &val); + GUEST_ASSERT_PMC_MSR_ACCESS(RDMSR, msr, expect_gp, vector); + + /* On #GP, the result of RDMSR is undefined. */ + if (!expect_gp) + __GUEST_ASSERT(val =3D=3D expected_val, + "Expected RDMSR(0x%x) to yield 0x%lx, got 0x%lx", + msr, expected_val, val); + + vector =3D wrmsr_safe(msr, 0); + GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector); + } + GUEST_DONE(); +} + +static void guest_test_gp_counters(void) +{ + uint8_t nr_gp_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNT= ERS); + uint32_t base_msr; + + if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) + base_msr =3D MSR_IA32_PMC0; + else + base_msr =3D MSR_IA32_PERFCTR0; + + guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters); +} + +static void test_gp_counters(uint8_t nr_gp_counters, uint64_t perf_cap) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm =3D pmu_vm_create_with_one_vcpu(&vcpu, guest_test_gp_counters); + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_GP_COUNTERS, + nr_gp_counters); + vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap); + + run_vcpu(vcpu); + + kvm_vm_free(vm); +} + +static void test_intel_counters(void) +{ + uint8_t nr_gp_counters =3D kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTE= RS); + unsigned int i; + uint8_t j; + + const uint64_t perf_caps[] =3D { + 0, + PMU_CAP_FW_WRITES, + }; + + for (i =3D 0; i < ARRAY_SIZE(perf_caps); i++) { + for (j =3D 0; j <=3D nr_gp_counters; j++) + test_gp_counters(j, perf_caps[i]); + } +} + int main(int argc, char *argv[]) { TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); @@ -222,6 +319,7 @@ int main(int argc, char *argv[]) TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); =20 test_intel_arch_events(); + test_intel_counters(); =20 return 0; } --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 09:12:21 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5E02C25B6B for ; Tue, 24 Oct 2023 00:27:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232043AbjJXA1p (ORCPT ); Mon, 23 Oct 2023 20:27:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49794 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232017AbjJXA1c (ORCPT ); Mon, 23 Oct 2023 20:27:32 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BE80F10C for ; Mon, 23 Oct 2023 17:27:00 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-da03ef6fc30so120437276.0 for ; Mon, 23 Oct 2023 17:27:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698107217; x=1698712017; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=+RoABScHftr6hc85optkCJ7nhZ5E1OiZawbS1myLt1k=; b=eMGZGAZkRO4INJO88zyNeJLn+DRIDaKftvYeybrSWKHr04FRGR3jYgdBGqt5xkFHAD ZJjgaEpzztwtuFxLc8DHNz/fQKIDWajon8LfcNUtDN2cGQHb8tKsW2w5VnAzKty5YLve 
67yAnoTlwuDLkRiqxHaQdcKZHeSFty1ErgaxUDLNS0FHnuDnblobFoNXqR4ujyALWXBP Jf42k1TVteAhtajErQva3jngeJY6IriDmnmKnQtTXZrcOPp8KQN0YfYeUQqA/yR3bien IUzPyo8f3PYVNBCTm2QnM4IiI5Z07YFtvy5D0XKvEKJFESxELcxqM/iIf6ZmCqPcn9LY Wrow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698107217; x=1698712017; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=+RoABScHftr6hc85optkCJ7nhZ5E1OiZawbS1myLt1k=; b=A6qS3YU9+qsxaH+imzFYqtyR1taiX7qVFS1F56IoJgL/BP5ga3sFwaqoKtWdjgT5ky 4QgphfWGy0r3XUiFSvdtyPCDljREB/+jqRYtHJZWVSaM1mP0X+xsrASaIOR2TbSj2ZnB Hsk6/Gs5t3lxcZ1MxsoI/Bb+RRbwMv1iWiqMUDoC3Mnxc7Y8gL8Tktqfi9mjJS7mFk8o DI8UeJcqbyAF1Qs9PzC/h5ztB2GSJwjTVINT1MY7G5Jdgu7gCNKh0NHWfLK7zLvo0yf8 iD6z+ObkD+dmSnoXtd9oR00e3fiQY4ntZBTklg19JEVMrMAt5cEWCpfnYkhMSyC/mGU5 1Dww== X-Gm-Message-State: AOJu0YwseaiFJ1WsHZaMkLagTwegzHSpMAOyW1vBjk03zxCianLr/h0D qn77Dg0F5AhlU6LB3gReDnctNovGe/c= X-Google-Smtp-Source: AGHT+IHnZB9VUP/AjDxRMoLdyognF9OR6mpTMbnZtDIBdpMUAztz6mgWts0jTKuU6zplFlh/CTfPSmJAMx8= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6902:1083:b0:d9a:47ea:69a5 with SMTP id v3-20020a056902108300b00d9a47ea69a5mr291702ybu.1.1698107216905; Mon, 23 Oct 2023 17:26:56 -0700 (PDT) Reply-To: Sean Christopherson Date: Mon, 23 Oct 2023 17:26:31 -0700 In-Reply-To: <20231024002633.2540714-1-seanjc@google.com> Mime-Version: 1.0 References: <20231024002633.2540714-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024002633.2540714-12-seanjc@google.com> Subject: [PATCH v5 11/13] KVM: selftests: Test consistency of CPUID with num of fixed counters From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang , Like Xu Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jinrong Liang Extend the PMU counters test to verify KVM emulation of fixed counters in addition to general purpose counters. Fixed counters add an extra wrinkle in the form of an extra supported bitmask. Thus quoth the SDM: fixed-function performance counter 'i' is supported if ECX[i] || (EDX[4:0= ] > i) Test that KVM handles a counter being available through either method. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/pmu_counters_test.c | 58 ++++++++++++++++++- 1 file changed, 55 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools= /testing/selftests/kvm/x86_64/pmu_counters_test.c index 274b7f4d4b53..f1d9cdd69a17 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c @@ -227,13 +227,19 @@ __GUEST_ASSERT(expect_gp ? vector =3D=3D GP_VECTOR : = !vector, \ expect_gp ? 
"#GP" : "no fault", msr, vector) \ =20 static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_co= unters, - uint8_t nr_counters) + uint8_t nr_counters, uint32_t or_mask) { uint8_t i; =20 for (i =3D 0; i < nr_possible_counters; i++) { const uint32_t msr =3D base_msr + i; - const bool expect_success =3D i < nr_counters; + + /* + * Fixed counters are supported if the counter is less than the + * number of enumerated contiguous counters *or* the counter is + * explicitly enumerated in the supported counters mask. + */ + const bool expect_success =3D i < nr_counters || (or_mask & BIT(i)); =20 /* * KVM drops writes to MSR_P6_PERFCTR[0|1] if the counters are @@ -273,7 +279,7 @@ static void guest_test_gp_counters(void) else base_msr =3D MSR_IA32_PERFCTR0; =20 - guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters); + guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters, 0); } =20 static void test_gp_counters(uint8_t nr_gp_counters, uint64_t perf_cap) @@ -292,10 +298,51 @@ static void test_gp_counters(uint8_t nr_gp_counters, = uint64_t perf_cap) kvm_vm_free(vm); } =20 +static void guest_test_fixed_counters(void) +{ + uint64_t supported_bitmask =3D 0; + uint8_t nr_fixed_counters =3D 0; + + /* KVM provides fixed counters iff the vPMU version is 2+. */ + if (this_cpu_property(X86_PROPERTY_PMU_VERSION) >=3D 2) + nr_fixed_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTE= RS); + + /* + * The supported bitmask for fixed counters was introduced in PMU + * version 5. + */ + if (this_cpu_property(X86_PROPERTY_PMU_VERSION) >=3D 5) + supported_bitmask =3D this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_= BITMASK); + + guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS, + nr_fixed_counters, supported_bitmask); +} + +static void test_fixed_counters(uint8_t nr_fixed_counters, + uint32_t supported_bitmask, uint64_t perf_cap) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm =3D pmu_vm_create_with_one_vcpu(&vcpu, guest_test_fixed_counters); + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK, + supported_bitmask); + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_FIXED_COUNTERS, + nr_fixed_counters); + vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap); + + run_vcpu(vcpu); + + kvm_vm_free(vm); +} + static void test_intel_counters(void) { + uint8_t nr_fixed_counters =3D kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_= COUNTERS); uint8_t nr_gp_counters =3D kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTE= RS); unsigned int i; + uint32_t k; uint8_t j; =20 const uint64_t perf_caps[] =3D { @@ -306,6 +353,11 @@ static void test_intel_counters(void) for (i =3D 0; i < ARRAY_SIZE(perf_caps); i++) { for (j =3D 0; j <=3D nr_gp_counters; j++) test_gp_counters(j, perf_caps[i]); + + for (j =3D 0; j <=3D nr_fixed_counters; j++) { + for (k =3D 0; k <=3D (BIT(nr_fixed_counters) - 1); k++) + test_fixed_counters(j, k, perf_caps[i]); + } } } =20 --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 09:12:21 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B85E5C00A8F for ; Tue, 24 Oct 2023 00:27:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231922AbjJXA1t (ORCPT ); Mon, 23 Oct 2023 20:27:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48610 "EHLO lindbergh.monkeyblade.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231817AbjJXA1c (ORCPT ); Mon, 23 Oct 2023 20:27:32 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE2D210DB for ; Mon, 23 Oct 2023 17:27:01 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a81cd8d267so75335057b3.1 for ; Mon, 23 Oct 2023 17:27:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698107219; x=1698712019; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=14wArEVnBNhNrbZ4s3cIhEfuNUwYxvC64ZUFQ1mIX8Y=; b=nTIWCNrVOi4oWDJcxrd7dX3VpfEMFgMw8vyCBc5tB6ggMhlwXYvOVBzyw+AvzF5wob gIDwhJIR7mXT/3UhsK4+N4PepcSIAcF6ceRhc7vg27nO09eSrVvpNF2Yr7fiAwSpMUBs fNuITbfgNemreJ7n4sHVWJHGr9uuhAOZLOrPjG9yQoR5MX6kqxBAGHRErUjdg7N+Btpw 7VBnYJ2jAVz2PKA9Ul013LZogOOTXE+Lpd6t3n5/KSGcNXWh3ig4b24fcdTWd8rwa5+0 nCtvEEqRCFPKJazlIL7DcnQbqA/N0saxzc71dxG7pCZhJCcME+RrsL//Xc3aB1BEbXpR f2Jg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698107219; x=1698712019; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=14wArEVnBNhNrbZ4s3cIhEfuNUwYxvC64ZUFQ1mIX8Y=; b=RBx80dlB7ea3NJj27dzbMImaJcbMrO8bM/01JRFq5uucQ5y67loEkIbmmRF1sP1Bcq ylL26q5bvtzFo+AN5xHfETNYLsUognKq1eWa1Ry/w9FWOYPVfRUPCkZdn/1riy5p/XJn s1nBApTIIr6lv+rgZLnMgPeq/fdVDLifvEEcNUW4QkyMZeOfsXlIgObVIZ3jmUNQZCTI otB/5nG1cOF+f2n6bNhUn0qV6OeA1QzxQg86pRMkSq6EfLsXlUlrqZ85uPN8eJ8s4lDT 29M+wPTX6iQidjpQ4z3YTha6IrSgSMSNE2XyTineE6bnLG+EpkuQuP6jRZhomDSAYVvS 6JAQ== X-Gm-Message-State: AOJu0YzQrel3JlLiqF2oVY4XzoGGJPIFrCaGRH1s3E75mmho/BgO7X66 uDH21B5CA5rjapD4dFZAhW7G3RT/mgE= X-Google-Smtp-Source: AGHT+IG2LW9PGB9cXobco7ikpUCA0s3PI34T5IQvsTCHhMRo8sFby8GDriPs96/3MflKnfZN+sOTJMSSRpM= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a0d:ccd4:0:b0:58c:b45f:3e94 with SMTP id o203-20020a0dccd4000000b0058cb45f3e94mr209290ywd.8.1698107218929; Mon, 23 Oct 2023 17:26:58 -0700 (PDT) Reply-To: Sean Christopherson Date: Mon, 23 Oct 2023 17:26:32 -0700 In-Reply-To: <20231024002633.2540714-1-seanjc@google.com> Mime-Version: 1.0 References: <20231024002633.2540714-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024002633.2540714-13-seanjc@google.com> Subject: [PATCH v5 12/13] KVM: selftests: Add functional test for Intel's fixed PMU counters From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang , Like Xu Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Jinrong Liang Extend the fixed counters test to verify that supported counters can actually be enabled in the control MSRs, that unsupported counters cannot, and that enabled counters actually count. 
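As a quick reference for the control bits this test programs: fixed counter i is enabled for ring-0 counting by bit 4*i of MSR_CORE_PERF_FIXED_CTR_CTRL, and globally enabled by bit PMC_IDX_FIXED + i of MSR_CORE_PERF_GLOBAL_CTRL. The stand-alone sketch below only restates that layout; it is an illustration rather than code from this patch, and it assumes the conventional Intel value of 32 for PMC_IDX_FIXED and simply prints the masks instead of touching any MSRs.

#include <stdint.h>
#include <stdio.h>

/* Assumed: fixed counters start at bit 32 of IA32_PERF_GLOBAL_CTRL. */
#define PMC_IDX_FIXED	32

int main(void)
{
	unsigned int i;

	for (i = 0; i < 3; i++) {
		/* Bit 4*i of IA32_FIXED_CTR_CTRL enables counting at CPL0 only. */
		uint64_t fixed_ctrl = 1ull << (4 * i);
		/* Bit 32+i of IA32_PERF_GLOBAL_CTRL globally enables fixed counter i. */
		uint64_t global_ctrl = 1ull << (PMC_IDX_FIXED + i);

		printf("fixed counter %u: FIXED_CTR_CTRL |= 0x%llx, GLOBAL_CTRL |= 0x%llx\n",
		       i, (unsigned long long)fixed_ctrl,
		       (unsigned long long)global_ctrl);
	}
	return 0;
}

Zeroing the counter, setting those two masks, running the measured loop, clearing MSR_CORE_PERF_GLOBAL_CTRL, and asserting that the counter moved is exactly the sequence the hunk below adds.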
Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang [sean: fold into the rd/wr access test, massage changelog] Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/pmu_counters_test.c | 29 ++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools= /testing/selftests/kvm/x86_64/pmu_counters_test.c index f1d9cdd69a17..1c392ad156f4 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c @@ -266,7 +266,6 @@ static void guest_rd_wr_counters(uint32_t base_msr, uin= t8_t nr_possible_counters vector =3D wrmsr_safe(msr, 0); GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector); } - GUEST_DONE(); } =20 static void guest_test_gp_counters(void) @@ -280,6 +279,7 @@ static void guest_test_gp_counters(void) base_msr =3D MSR_IA32_PERFCTR0; =20 guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters, 0); + GUEST_DONE(); } =20 static void test_gp_counters(uint8_t nr_gp_counters, uint64_t perf_cap) @@ -302,6 +302,7 @@ static void guest_test_fixed_counters(void) { uint64_t supported_bitmask =3D 0; uint8_t nr_fixed_counters =3D 0; + uint8_t i; =20 /* KVM provides fixed counters iff the vPMU version is 2+. */ if (this_cpu_property(X86_PROPERTY_PMU_VERSION) >=3D 2) @@ -316,6 +317,32 @@ static void guest_test_fixed_counters(void) =20 guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS, nr_fixed_counters, supported_bitmask); + + for (i =3D 0; i < MAX_NR_FIXED_COUNTERS; i++) { + uint8_t vector; + uint64_t val; + + if (i >=3D nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) { + vector =3D wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + __GUEST_ASSERT(vector =3D=3D GP_VECTOR, + "Expected #GP for counter %u in FIXED_CTRL_CTRL", i); + + vector =3D wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(PMC_IDX_FIXED = + i)); + __GUEST_ASSERT(vector =3D=3D GP_VECTOR, + "Expected #GP for counter %u in PERF_GLOBAL_CTRL", i); + continue; + } + + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(PMC_IDX_FIXED + i)); + __asm__ __volatile__("loop ." 
: "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + val =3D rdmsr(MSR_CORE_PERF_FIXED_CTR0 + i); + + GUEST_ASSERT_NE(val, 0); + } + GUEST_DONE(); } =20 static void test_fixed_counters(uint8_t nr_fixed_counters, --=20 2.42.0.758.gaed0368e0e-goog From nobody Thu Jan 1 09:12:21 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D156C00A8F for ; Tue, 24 Oct 2023 00:28:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232563AbjJXA2B (ORCPT ); Mon, 23 Oct 2023 20:28:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49918 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232289AbjJXA1g (ORCPT ); Mon, 23 Oct 2023 20:27:36 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5D3EC19B1 for ; Mon, 23 Oct 2023 17:27:05 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5a818c1d2c7so55846427b3.0 for ; Mon, 23 Oct 2023 17:27:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1698107220; x=1698712020; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=LVgGvnFW9q/mijlkpvY+O1Zix6lIKT3NeTV0IdzcrJI=; b=ZlsezbuSc+49AlP1bQ2PbpiZz+SvGIjUjkpTcqbur2L82OEZHm/v2oH9FbQMBa+nD8 z9vwRJcPw9HE83O5eH8uKM3UKVoq8lIyZAUBZHtsROj5AV4Wia46Ii931l2owGabLBwO a3xJAhU2ROlfPTSvG6aNWeO80IVZjbde0stEmzzEd9ZJ9rzRP6tmBvX5A9hlE95QTQN/ hgRxXwcsWQHWeeeaOHu9wj9ejFaCtAcmnBxYZb9CrlZEuoi5EP70JtQaCRcrVtK23xRQ 6MYlBqp5+TEk+s0nNHJIWj3r/oW9Seik3JaNzL72Is4ch3OrpPifsTI7NbijX2G0Ce5W K4TQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1698107220; x=1698712020; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=LVgGvnFW9q/mijlkpvY+O1Zix6lIKT3NeTV0IdzcrJI=; b=hnPQGPvyNG+wixx0jAUhlwRD/zvWG+0/oT7wFjHpu/EaB2akO2bXR3fnQem7WhTpbE 29VPDqIUTPyUdbgT2MnADQZtlNopsNg8aD+lFtldWQXa9NsgYfFp0PgTG9tTa/ev/IxP AQdjA94bYjZwt5HKnkZCZOfH/Jlv5E5B3KlXqOx9ekHdv6lhiRtjcjNPYKcGLsAIgAxK lqmAlzTi0zvCHISM6PoUX3acX2qMl2PU6PAcNBBMQ3CfhjapMT5cJIc7HVFgNlJ5FdTj DzDzrqt8XDIMGgyxpS0muAGJ5kT+u1x3JkXqq8HtdfRMfMXy1gkaSyP/hVw2h7QvLHNT Uehw== X-Gm-Message-State: AOJu0Yz++unoh7OFdwEVRj0T223QjGBIAF+vKaRZsj3Ftv8YB0f0EFMZ muiOJWTxPGb8kTD66J7ZjjsitnBx6Qc= X-Google-Smtp-Source: AGHT+IGDUdJgo+MJ6+2aZRfH3/YowIGxQsEvFIcUSx2iogNdLSlS53ij+RVDbB0oTfE5uPotkkKXADFTBn0= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6902:105:b0:da0:3da9:ce08 with SMTP id o5-20020a056902010500b00da03da9ce08mr8158ybh.10.1698107220804; Mon, 23 Oct 2023 17:27:00 -0700 (PDT) Reply-To: Sean Christopherson Date: Mon, 23 Oct 2023 17:26:33 -0700 In-Reply-To: <20231024002633.2540714-1-seanjc@google.com> Mime-Version: 1.0 References: <20231024002633.2540714-1-seanjc@google.com> X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog Message-ID: <20231024002633.2540714-14-seanjc@google.com> Subject: [PATCH v5 13/13] KVM: selftests: Extend PMU counters test to permute on vPMU version From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: 
kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jinrong Liang , Like Xu Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Extend the PMU counters test to verify that KVM emulates the vPMU (or not) according to the vPMU version exposed to the guest. KVM's ABI (which does NOT reflect Intel's architectural behavior) is that GP counters are available if the PMU version is >0, and that fixed counters and PERF_GLOBAL_CTRL are available if the PMU version is >1. Test up to vPMU version 5, i.e. the current architectural max. KVM only officially supports up to version 2, but the behavior of the counters is backwards compatible, i.e. KVM shouldn't do something completely different for a higher, architecturally-defined vPMU version. Verify KVM behavior against the effective vPMU version, e.g. advertising vPMU 5 when KVM only supports vPMU 2 shouldn't magically unlock vPMU 5 features. Suggested-by: Like Xu Suggested-by: Jinrong Liang Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/pmu_counters_test.c | 60 +++++++++++++++---- 1 file changed, 47 insertions(+), 13 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools= /testing/selftests/kvm/x86_64/pmu_counters_test.c index 1c392ad156f4..85b01dd5b2cd 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c @@ -12,6 +12,8 @@ /* Guest payload for any performance counter counting */ #define NUM_BRANCHES 10 =20 +static uint8_t kvm_pmu_version; + static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu, void *guest_code) { @@ -21,6 +23,8 @@ static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct = kvm_vcpu **vcpu, vm_init_descriptor_tables(vm); vcpu_init_descriptor_tables(*vcpu); =20 + sync_global_to_guest(vm, kvm_pmu_version); + return vm; } =20 @@ -97,6 +101,19 @@ static bool pmu_is_null_feature(struct kvm_x86_pmu_feat= ure event) return !(*(u64 *)&event); } =20 +static uint8_t guest_get_pmu_version(void) +{ + /* + * Return the effective PMU version, i.e. the minimum between what KVM + * supports and what is enumerated to the guest. The counters test + * deliberately advertises a PMU version to the guest beyond what is + * actually supported by KVM to verify KVM doesn't freak out and do + * something bizarre with an architecturally valid, but unsupported, + * version.
+ */ + return min_t(uint8_t, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU= _VERSION)); +} + static void guest_measure_loop(uint8_t idx) { const struct { @@ -121,7 +138,7 @@ static void guest_measure_loop(uint8_t idx) }; =20 uint32_t nr_gp_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUN= TERS); - uint32_t pmu_version =3D this_cpu_property(X86_PROPERTY_PMU_VERSION); + uint32_t pmu_version =3D guest_get_pmu_version(); struct kvm_x86_pmu_feature gp_event, fixed_event; uint32_t counter_msr; unsigned int i; @@ -270,9 +287,12 @@ static void guest_rd_wr_counters(uint32_t base_msr, ui= nt8_t nr_possible_counters =20 static void guest_test_gp_counters(void) { - uint8_t nr_gp_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNT= ERS); + uint8_t nr_gp_counters =3D 0; uint32_t base_msr; =20 + if (guest_get_pmu_version()) + nr_gp_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); + if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) base_msr =3D MSR_IA32_PMC0; else @@ -282,7 +302,8 @@ static void guest_test_gp_counters(void) GUEST_DONE(); } =20 -static void test_gp_counters(uint8_t nr_gp_counters, uint64_t perf_cap) +static void test_gp_counters(uint8_t pmu_version, uint8_t nr_gp_counters, + uint64_t perf_cap) { struct kvm_vcpu *vcpu; struct kvm_vm *vm; @@ -305,16 +326,17 @@ static void guest_test_fixed_counters(void) uint8_t i; =20 /* KVM provides fixed counters iff the vPMU version is 2+. */ - if (this_cpu_property(X86_PROPERTY_PMU_VERSION) >=3D 2) + if (guest_get_pmu_version() >=3D 2) nr_fixed_counters =3D this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTE= RS); =20 /* * The supported bitmask for fixed counters was introduced in PMU * version 5. */ - if (this_cpu_property(X86_PROPERTY_PMU_VERSION) >=3D 5) + if (guest_get_pmu_version() >=3D 5) supported_bitmask =3D this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_= BITMASK); =20 + guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS, nr_fixed_counters, supported_bitmask); =20 @@ -345,7 +367,7 @@ static void guest_test_fixed_counters(void) GUEST_DONE(); } =20 -static void test_fixed_counters(uint8_t nr_fixed_counters, +static void test_fixed_counters(uint8_t pmu_version, uint8_t nr_fixed_coun= ters, uint32_t supported_bitmask, uint64_t perf_cap) { struct kvm_vcpu *vcpu; @@ -368,22 +390,32 @@ static void test_intel_counters(void) { uint8_t nr_fixed_counters =3D kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_= COUNTERS); uint8_t nr_gp_counters =3D kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTE= RS); + uint8_t max_pmu_version =3D kvm_cpu_property(X86_PROPERTY_PMU_VERSION); unsigned int i; + uint8_t j, v; uint32_t k; - uint8_t j; =20 const uint64_t perf_caps[] =3D { 0, PMU_CAP_FW_WRITES, }; =20 - for (i =3D 0; i < ARRAY_SIZE(perf_caps); i++) { - for (j =3D 0; j <=3D nr_gp_counters; j++) - test_gp_counters(j, perf_caps[i]); + /* + * Test up to PMU v5, which is the current maximum version defined by + * Intel, i.e. is the last version that is guaranteed to be backwards + * compatible with KVM's existing behavior. 
+ */ + max_pmu_version =3D max_t(typeof(max_pmu_version), max_pmu_version, 5); =20 - for (j =3D 0; j <=3D nr_fixed_counters; j++) { - for (k =3D 0; k <=3D (BIT(nr_fixed_counters) - 1); k++) - test_fixed_counters(j, k, perf_caps[i]); + for (v =3D 0; v <=3D max_pmu_version; v++) { + for (i =3D 0; i < ARRAY_SIZE(perf_caps) + 1; i++) { + for (j =3D 0; j <=3D nr_gp_counters; j++) + test_gp_counters(v, j, perf_caps[i]); + + for (j =3D 0; j <=3D nr_fixed_counters; j++) { + for (k =3D 0; k <=3D (BIT(nr_fixed_counters) - 1); k++) + test_fixed_counters(v, j, k, perf_caps[i]); + } } } } @@ -397,6 +429,8 @@ int main(int argc, char *argv[]) TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0); TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); =20 + kvm_pmu_version =3D kvm_cpu_property(X86_PROPERTY_PMU_VERSION); + test_intel_arch_events(); test_intel_counters(); =20 --=20 2.42.0.758.gaed0368e0e-goog
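A closing illustration of the CPUID.0AH rules this series exercises: patch 08 relies on architectural event idx being available when EBX bit idx is clear and idx is below the bit-vector length in EAX[31:24], and patch 11 relies on fixed counter i being available when ECX bit i is set or i is below EDX[4:0]. The stand-alone sketch below restates those two predicates outside the selftest framework, using the compiler's <cpuid.h> helper, and merely reports what the host CPU enumerates; it is illustrative only and is not code from the series.

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* CPUID.0AH field layout as quoted in the changelogs above. */
static bool arch_event_available(unsigned int eax, unsigned int ebx, unsigned int idx)
{
	unsigned int bitvec_len = (eax >> 24) & 0xff;	/* EAX[31:24] */

	return idx < bitvec_len && !(ebx & (1u << idx));
}

static bool fixed_counter_available(unsigned int ecx, unsigned int edx, unsigned int i)
{
	unsigned int nr_contiguous = edx & 0x1f;	/* EDX[4:0] */

	return (ecx & (1u << i)) || i < nr_contiguous;
}

int main(void)
{
	unsigned int eax, ebx, ecx, edx, idx;

	if (!__get_cpuid_count(0xa, 0, &eax, &ebx, &ecx, &edx) || !(eax & 0xff)) {
		printf("No architectural PMU enumerated\n");
		return 0;
	}

	for (idx = 0; idx < 8; idx++)
		printf("arch event %u: %s\n", idx,
		       arch_event_available(eax, ebx, idx) ? "available" : "unavailable");

	for (idx = 0; idx < 3; idx++)
		printf("fixed counter %u: %s\n", idx,
		       fixed_counter_available(ecx, edx, idx) ? "available" : "unavailable");

	return 0;
}

On hosts whose PMU version is below 5 the ECX bitmask reads as zero, so the fixed-counter predicate reduces to i < EDX[4:0], which is the behavior the tests expect for older vPMU versions.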