Date: Tue, 9 Dec 2025 20:51:21 +0000
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Colton Lewis
Subject: [PATCH v5 24/24] KVM: arm64: selftests: Add test case for partitioned PMU
Message-ID: <20251209205121.1871534-25-coltonlewis@google.com>
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>

Rerun all tests in vpmu_counter_access for a partitioned PMU. Create an
enum specifying whether the emulated or partitioned PMU is under test,
and modify all the test functions to take the implementation as an
argument and adjust their setup accordingly.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 tools/include/uapi/linux/kvm.h                |  1 +
 .../selftests/kvm/arm64/vpmu_counter_access.c | 77 ++++++++++++++-----
 2 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 52f6000ab0208..2bb2f234df0e6 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -963,6 +963,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
 #define KVM_CAP_GUEST_MEMFD_FLAGS 244
+#define KVM_CAP_ARM_PARTITION_PMU 245
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index ae36325c022fb..e68072e3e1326 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -25,9 +25,20 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31
 
+enum pmu_impl {
+	EMULATED,
+	PARTITIONED
+};
+
+const char *pmu_impl_str[] = {
+	"Emulated",
+	"Partitioned"
+};
+
 struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
+	bool pmu_partitioned;
 };
 
 static struct vpmu_vm vpmu_vm;
@@ -399,7 +410,7 @@ static void guest_code(uint64_t expected_pmcr_n)
 }
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
+static void create_vpmu_vm(void *guest_code, enum pmu_impl impl)
 {
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
@@ -409,6 +420,11 @@ static void create_vpmu_vm(void *guest_code)
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
 		.addr = (uint64_t)&irq,
 	};
+	bool partition = (impl == PARTITIONED);
+	struct kvm_enable_cap partition_cap = {
+		.cap = KVM_CAP_ARM_PARTITION_PMU,
+		.args[0] = partition,
+	};
 
 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
@@ -420,6 +436,12 @@ static void create_vpmu_vm(void *guest_code)
 					guest_sync_handler);
 	}
 
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU)) {
+		vm_ioctl(vpmu_vm.vm, KVM_ENABLE_CAP, &partition_cap);
+		vpmu_vm.pmu_partitioned = partition;
+		pr_debug("Set PMU partitioning: %d\n", partition);
+	}
+
 	/* Create vCPU with PMUv3 */
 	kvm_get_default_vcpu_target(vpmu_vm.vm, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
@@ -461,13 +483,14 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }
 
-static void test_create_vpmu_vm_with_nr_counters(unsigned int nr_counters, bool expect_fail)
+static void test_create_vpmu_vm_with_nr_counters(
+	unsigned int nr_counters, enum pmu_impl impl, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	unsigned int prev;
 	int ret;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	vcpu = vpmu_vm.vcpu;
 
 	prev = get_pmcr_n(vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0)));
@@ -489,7 +512,7 @@ static void test_create_vpmu_vm_with_nr_counters(unsigned int nr_counters, bool
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	uint64_t sp;
 	struct kvm_vcpu *vcpu;
@@ -497,7 +520,7 @@ static void run_access_test(uint64_t pmcr_n)
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, false);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -531,14 +554,14 @@ static struct pmreg_sets validity_check_reg_sets[] = {
  * Create a VM, and check if KVM handles the userspace accesses of
  * the PMU register sets in @validity_check_reg_sets[] correctly.
  */
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 	uint64_t set_reg_id, clr_reg_id, reg_val;
 	uint64_t valid_counters_mask, max_counters_mask;
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, false);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	valid_counters_mask = get_counters_mask(pmcr_n);
@@ -588,11 +611,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, true);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, true);
 	destroy_vpmu_vm();
 }
 
@@ -600,11 +623,11 @@ static void run_error_test(uint64_t pmcr_n)
  * Return the default number of implemented PMU event counters excluding
 * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
 */
-static uint64_t get_pmcr_n_limit(void)
+static uint64_t get_pmcr_n_limit(enum pmu_impl impl)
 {
 	uint64_t pmcr;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
 	destroy_vpmu_vm();
 	return get_pmcr_n(pmcr);
@@ -614,7 +637,7 @@ static bool kvm_supports_nr_counters_attr(void)
 {
 	bool supported;
 
-	create_vpmu_vm(NULL);
+	create_vpmu_vm(NULL, EMULATED);
 	supported = !__vcpu_has_device_attr(vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_CTRL,
 					    KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS);
 	destroy_vpmu_vm();
@@ -622,22 +645,34 @@ static bool kvm_supports_nr_counters_attr(void)
 	return supported;
 }
 
-int main(void)
+void test_pmu(enum pmu_impl impl)
 {
 	uint64_t i, pmcr_n;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
-	TEST_REQUIRE(kvm_supports_vgic_v3());
-	TEST_REQUIRE(kvm_supports_nr_counters_attr());
+	pr_info("Testing PMU: Implementation = %s\n", pmu_impl_str[impl]);
+
+	pmcr_n = get_pmcr_n_limit(impl);
+	pr_debug("PMCR_EL0.N: Limit = %lu\n", pmcr_n);
 
-	pmcr_n = get_pmcr_n_limit();
 	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
-		run_pmregs_validity_test(i);
+		run_access_test(i, impl);
+		run_pmregs_validity_test(i, impl);
 	}
 
 	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+		run_error_test(i, impl);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	TEST_REQUIRE(kvm_supports_vgic_v3());
+	TEST_REQUIRE(kvm_supports_nr_counters_attr());
+
+	test_pmu(EMULATED);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_pmu(PARTITIONED);
 
 	return 0;
 }
-- 
2.52.0.239.gd5f0c6e74e-goog
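
For readers without the selftest harness at hand, below is a minimal standalone
sketch of the userspace flow create_vpmu_vm() exercises in this patch, written
with raw KVM ioctls instead of the selftest helpers (kvm_has_cap()/vm_ioctl()):
probe KVM_CAP_ARM_PARTITION_PMU and enable it on the VM fd before creating any
vCPU. It assumes a kernel with this series applied (otherwise the capability
probe simply fails and nothing is enabled) and omits the PMUv3 vCPU setup that
the selftest performs afterwards.

/* Sketch only, not part of the patch: enable the partitioned PMU on a
 * fresh VM, mirroring what create_vpmu_vm() does above. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = kvm < 0 ? -1 : ioctl(kvm, KVM_CREATE_VM, 0);

	if (kvm < 0 || vm < 0) {
		perror("open/KVM_CREATE_VM");
		return 1;
	}

	/* Only attempt to enable the cap if this kernel advertises it. */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PARTITION_PMU) > 0) {
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_ARM_PARTITION_PMU,
			.args[0] = 1,	/* 1 = partitioned, 0 = emulated */
		};

		if (ioctl(vm, KVM_ENABLE_CAP, &cap))
			perror("KVM_ENABLE_CAP(KVM_CAP_ARM_PARTITION_PMU)");
	}

	/* vCPU creation with the KVM_ARM_VCPU_PMU_V3 feature would follow. */
	return 0;
}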