From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:22 +0000
Message-ID: <20230213180234.2885032-2-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The upcoming patches will add more vPMU-related tests to this file. Hence, rename it to be more generic.
Signed-off-by: Raghavendra Rao Ananta
---
 tools/testing/selftests/kvm/Makefile                   | 2 +-
 .../kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
 rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (99%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b27fea0ce5918..a4d262e139b18 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
-TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 453f0dd240f44..581be0c463ad1 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * vpmu_counter_access - Test vPMU event counter access
+ * vpmu_test - Test the vPMU
  *
  * Copyright (c) 2022 Google LLC.
 *
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:23 +0000
Message-ID: <20230213180234.2885032-3-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Refactor the existing counter access tests into their own independent functions, and make the test-running flow generic, to make way for the upcoming tests.

No functional change intended.
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 140 ++++++++++++------
 1 file changed, 98 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 581be0c463ad1..d72c3c9b9c39f 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,11 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t get_pmcr_n(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+}
+
 /*
  * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
  * accessors that test cases will use. Each of the accessors will
@@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = {
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
 
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+enum test_stage {
+	TEST_STAGE_COUNTER_ACCESS = 1,
+};
+
+struct guest_data {
+	enum test_stage test_stage;
+	uint64_t expected_pmcr_n;
+};
+
+static struct guest_data guest_data;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
 		write_sysreg(test_bit, pmovsset_el0);
 
 		/* The bit will be set only if the counter is implemented */
-		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		pmcr_n = get_pmcr_n();
 		set_expected = (pmc_idx < pmcr_n) ? true : false;
 	} else {
 		write_sysreg(test_bit, pmcntenclr_el0);
@@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
  * if reading/writing PMU registers for implemented or unimplemented
  * counters can work as expected.
  */
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_counter_access_test(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n, unimp_mask;
+	uint64_t pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
-	pmcr = read_sysreg(pmcr_el0);
-	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+	pmcr_n = get_pmcr_n();
 
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
@@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n)
 	for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
 		test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
 	}
+}
+
+static void guest_code(void)
+{
+	switch (guest_data.test_stage) {
+	case TEST_STAGE_COUNTER_ACCESS:
+		guest_counter_access_test(guest_data.expected_pmcr_n);
+		break;
+	default:
+		GUEST_ASSERT_1(0, guest_data.test_stage);
+	}
+
+	GUEST_DONE();
 }
 
@@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
 /* Create a VM that has one vCPU with PMUv3 configured.
  */
-static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-				     int *gic_fd)
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
@@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
 
-	vm = vm_create(1);
+	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
@@ -498,9 +534,9 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
 	vcpu_init_descriptor_tables(vcpu);
-	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
@@ -513,15 +549,21 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
-	*vcpup = vcpu;
-	return vm;
+	return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+	close(vpmu_vm->gic_fd);
+	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm);
 }
 
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
 
-	vcpu_args_set(vcpu, 1, pmcr_n);
+	sync_global_to_guest(vcpu->vm, guest_data);
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_test(uint64_t pmcr_n)
+static void run_counter_access_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd;
 	uint64_t sp, pmcr, pmcr_orig;
 	struct kvm_vcpu_init init;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n)
 	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
 	/*
 	 * Reset and re-initialize the vCPU, and run the guest code again to
 	 * check if PMCR_EL0.N is preserved.
 	 */
-	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -583,15 +626,18 @@ static void run_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_counter_access_error_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd, ret;
+	int ret;
 	uint64_t pmcr, pmcr_orig;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -603,8 +649,25 @@ static void run_error_test(uint64_t pmcr_n)
 	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
 		    pmcr, pmcr_orig);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
+static void run_counter_access_tests(uint64_t pmcr_n)
+{
+	uint64_t i;
+
+	guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
+
+	for (i = 0; i <= pmcr_n; i++)
+		run_counter_access_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_counter_access_error_test(i);
+}
+
+static void run_tests(uint64_t pmcr_n)
+{
+	run_counter_access_tests(pmcr_n);
 }
 
 /*
@@ -613,30 +676,23 @@ static void run_error_test(uint64_t pmcr_n)
  */
 static uint64_t get_pmcr_n_limit(void)
 {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	int gic_fd;
+	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-	close(gic_fd);
-	kvm_vm_free(vm);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm(vpmu_vm);
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
 int main(void)
 {
-	uint64_t i, pmcr_n;
+	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
 	pmcr_n = get_pmcr_n_limit();
-	for (i = 0; i <= pmcr_n; i++)
-		run_test(i);
-
-	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+	run_tests(pmcr_n);
 
 	return 0;
 }
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:24 +0000
Message-ID: <20230213180234.2885032-4-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add the definitions of ARMV8_PMU_CNTOVS_C (the cycle counter overflow bit) for the overflow status registers and ARMV8_PMU_CNTENSET_C (the cycle counter enable bit) for the PMCNTENSET_EL0 register.
Signed-off-by: Raghavendra Rao Ananta
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C	(1 << 31) /* Cycle counter overflow bit */
 #define ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
 #define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C	(1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
 
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:25 +0000
Message-ID: <20230213180234.2885032-5-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add basic helpers for the test to access the cycle counter registers. The helpers will be used in the upcoming patches to run tests related to the cycle counter.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 581be0c463ad1..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+	return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcr_el0);
+
+	write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+	isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+	isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+	write_sysreg(val, pmccfiltr_el0);
+	isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+	return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
 	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
 }
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:26 +0000
Message-ID: <20230213180234.2885032-6-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Accept a list of KVM PMU event filters as an argument while creating a VM via create_vpmu_vm().
Upcoming patches will leverage this to test the event filters' functionality.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include
 #include
 #include
+#include
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM 10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up of the bitmap is similar to what KVM does.
+	 * If the first filter denys an event, default all the others to allow,
+	 * and vice-versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+			 pmu_event_filter->base_event,
+			 pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		"Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-- 
2.39.1.581.gbfd45094c4-goog
s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=b4AB06q1cmw2xfepDKSJYP3pCnth/5v7kXmR5lfk+6U=; b=fXK9ReETXPS2j77ugkcwz+WisHJJl30nYulVnC5wPiVkGYG2emzbPoAyLynH7ZaAsI 1c3LfWRVQ6iewmxzt1Wg3dcqrIbv7kvBuFrhDdqtN0ITzFnMcZNSuWr+0SY6Kw1+o/FR szzw1tiwapnZbgthXSx/Q5fTtv96fLq+VqsfNBqsKWqMyUmVfxhQ7HqAbJNxSJzNGvs+ CqH2iEfuDdAD+suKAproIohuUCyyo1DeIRtwQxE1WZ193pQLMzCrkXd/oJGU2pJpvoMD KS3jlG86kbYxGFvTrXi5TTvOLQarjIK+MQliKVXkQg6GClJdDqpSaaeT1GDu/N1b4QVv jK6w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=b4AB06q1cmw2xfepDKSJYP3pCnth/5v7kXmR5lfk+6U=; b=tbhnClbnhWde/fefN4oMycJEquMomg9eua3A709rnEpxr9VyXrFmC/X4V77Bb/FSJn vBLu1iVmYrNC6D3cZldzff+WgKA4GULsww4hU9t4ZWJFkClTptMpa/iUXOBmGRRjby+w VMY3ZPLZVy0/fmrPKHxruCp8e5TPu+qlifdfsaL0WRRdszuvezcndE1732t3Ucwi1/cj yt15zjpcvc7Iq2mKn2d045r9P4efUkNDQyStOtGX5A6eeUw3v1X+XnkSGB0onMBZlE3A j6A/JLnpbOVtg9YZS08mrvyeVxW5DJDvl5NgNg4eyCMjSz0OQHaFrAS5eNbd72tHBj/w mbRA== X-Gm-Message-State: AO0yUKWQtkBOMtFLRQLs+3qP0vgJmCK4pUuOiLiK3n1kcaPgMYbUwI6a 6sb5h7Vv03snuC+gBOZO/KB0Dsvd8JRi X-Google-Smtp-Source: AK7set9x9mJnUYIPvLaxq+mW4hkjTEtu0lhCvSA8e5Al797fNHvzz7ZeDQoXvhWboY/INSxnq62Zmi3F8bIn X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a0d:eac3:0:b0:52e:c5c0:1d0e with SMTP id t186-20020a0deac3000000b0052ec5c01d0emr1538030ywe.418.1676311366096; Mon, 13 Feb 2023 10:02:46 -0800 (PST) Date: Mon, 13 Feb 2023 18:02:27 +0000 In-Reply-To: <20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-7-rananta@google.com> Subject: [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event 
filter test From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER attribute by applying a series of filters to allow or deny events from the userspace. Validation is done by the guest in a way that it should be able to count only the events that are allowed. The workload to execute a precise number of instructions (execute_precise_instrs() and precise_instrs_loop()) is taken from the kvm-unit-tests' arm/pmu.c. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++- 1 file changed, 258 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index 2b3a4fa3afa9c..3dfb770b538e9 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -2,12 +2,21 @@ /* * vpmu_test - Test the vPMU * - * Copyright (c) 2022 Google LLC. + * The test suit contains a series of checks to validate the vPMU + * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is + * supported on the host. The tests include: * - * This test checks if the guest can see the same number of the PMU event + * 1. Check if the guest can see the same number of the PMU event * counters (PMCR_EL0.N) that userspace sets, if the guest can access * those counters, and if the guest cannot access any other counters. - * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. + * + * 2. 
Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER + * attribute by applying a series of filters in various combinations + * of allowing or denying the events. The guest validates it by + * checking if it's able to count only the events that are allowed. + * + * Copyright (c) 2022 Google LLC. + * */ #include #include @@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] =3D { =20 #define MAX_EVENT_FILTERS_PER_VM 10 =20 +#define EVENT_ALLOW(ev) \ + {.base_event =3D ev, .nevents =3D 1, .action =3D KVM_PMU_EVENT_ALLOW} + +#define EVENT_DENY(ev) \ + {.base_event =3D ev, .nevents =3D 1, .action =3D KVM_PMU_EVENT_DENY} + #define INVALID_EC (-1ul) uint64_t expected_ec =3D INVALID_EC; uint64_t op_end_addr; @@ -243,11 +258,13 @@ struct vpmu_vm { =20 enum test_stage { TEST_STAGE_COUNTER_ACCESS =3D 1, + TEST_STAGE_KVM_EVENT_FILTER, }; =20 struct guest_data { enum test_stage test_stage; uint64_t expected_pmcr_n; + unsigned long *pmu_filter; }; =20 static struct guest_data guest_data; @@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event) GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ } =20 + +/* + * Extra instructions inserted by the compiler would be difficult to compe= nsate + * for, so hand assemble everything between, and including, the PMCR acces= ses + * to start and stop counting. isb instructions are inserted to make sure + * pmccntr read after this function returns the exact instructions executed + * in the controlled block. Total instrs =3D isb + nop + 2*loop =3D 2 + 2*= loop. + */ +static inline void precise_instrs_loop(int loop, uint32_t pmcr) +{ + uint64_t pmcr64 =3D pmcr; + + asm volatile( + " msr pmcr_el0, %[pmcr]\n" + " isb\n" + "1: subs %w[loop], %w[loop], #1\n" + " b.gt 1b\n" + " nop\n" + " msr pmcr_el0, xzr\n" + " isb\n" + : [loop] "+r" (loop) + : [pmcr] "r" (pmcr64) + : "cc"); +} + +/* + * Execute a known number of guest instructions. 
Only even instruction cou= nts + * greater than or equal to 4 are supported by the in-line assembly code. = The + * control register (PMCR_EL0) is initialized with the provided value (all= owing + * for example for the cycle counter or event counters to be reset). At th= e end + * of the exact instruction loop, zero is written to PMCR_EL0 to disable + * counting, allowing the cycle counter or event counters to be read at the + * leisure of the calling code. + */ +static void execute_precise_instrs(int num, uint32_t pmcr) +{ + int loop =3D (num - 2) / 2; + + GUEST_ASSERT_2(num >=3D 4 && ((num - 2) % 2 =3D=3D 0), num, loop); + precise_instrs_loop(loop, pmcr); +} + +static void test_instructions_count(int pmc_idx, bool expect_count) +{ + int i; + struct pmc_accessor *acc; + uint64_t cnt; + int instrs_count =3D 100; + + enable_counter(pmc_idx); + + /* Test the event using all the possible way to configure the event */ + for (i =3D 0; i < ARRAY_SIZE(pmc_accessors); i++) { + acc =3D &pmc_accessors[i]; + + pmu_disable_reset(); + + acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED); + + /* Enable the PMU and execute precisely number of instructions as a work= load */ + execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_P= MCR_E); + + /* If a count is expected, the counter should be increased by 'instrs_co= unt' */ + cnt =3D acc->read_cntr(pmc_idx); + GUEST_ASSERT_4(expect_count =3D=3D (cnt =3D=3D instrs_count), + i, expect_count, cnt, instrs_count); + } + + disable_counter(pmc_idx); +} + +static void test_cycles_count(bool expect_count) +{ + uint64_t cnt; + + pmu_enable(); + reset_cycle_counter(); + + /* Count cycles in EL0 and EL1 */ + write_pmccfiltr(0); + enable_cycle_counter(); + + cnt =3D read_cycle_counter(); + + /* + * If a count is expected by the test, the cycle counter should be increa= sed by + * at least 1, as there is at least one instruction between enabling the + * counter and reading the counter. 
+ */ + GUEST_ASSERT_2(expect_count =3D=3D (cnt > 0), cnt, expect_count); + + disable_cycle_counter(); + pmu_disable_reset(); +} + +static void test_event_count(uint64_t event, int pmc_idx, bool expect_coun= t) +{ + switch (event) { + case ARMV8_PMUV3_PERFCTR_INST_RETIRED: + test_instructions_count(pmc_idx, expect_count); + break; + case ARMV8_PMUV3_PERFCTR_CPU_CYCLES: + test_cycles_count(expect_count); + break; + } +} + /* * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers * are set or cleared as specified in @set_expected. @@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expect= ed_pmcr_n) } } =20 +static void guest_event_filter_test(unsigned long *pmu_filter) +{ + uint64_t event; + + /* + * Check if PMCEIDx_EL0 is advertized as configured by the userspace. + * It's possible that even though the userspace allowed it, it may not be= supported + * by the hardware and could be advertized as 'disabled'. Hence, only val= idate against + * the events that are advertized. + * + * Furthermore, check if the event is in fact counting if enabled, or vic= e-versa. + */ + for (event =3D 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) { + if (pmu_event_is_supported(event)) { + GUEST_ASSERT_1(test_bit(event, pmu_filter), event); + test_event_count(event, 0, true); + } else { + test_event_count(event, 0, false); + } + } +} + static void guest_code(void) { switch (guest_data.test_stage) { case TEST_STAGE_COUNTER_ACCESS: guest_counter_access_test(guest_data.expected_pmcr_n); break; + case TEST_STAGE_KVM_EVENT_FILTER: + guest_event_filter_test(guest_data.pmu_filter); + break; default: GUEST_ASSERT_1(0, guest_data.test_stage); } @@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n) run_counter_access_error_test(i); } =20 +static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_P= ER_VM] =3D { + /* + * Each set of events denotes a filter configuration for that VM. 
+ * During VM creation, the filters will be applied in the sequence mentio= ned here. + */ + { + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + }, + { + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + }, + { + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + }, + { + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + }, + { + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + }, + { + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + }, + { + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES), + }, + { + EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED), + }, +}; + +static void run_kvm_event_filter_error_tests(void) +{ + int ret; + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + struct vpmu_vm *vpmu_vm; + struct kvm_vcpu_init init; + struct kvm_pmu_event_filter pmu_event_filter =3D EVENT_ALLOW(ARMV8_PMUV3_= PERFCTR_CPU_CYCLES); + struct kvm_device_attr filter_attr =3D { + .group =3D KVM_ARM_VCPU_PMU_V3_CTRL, + .attr =3D KVM_ARM_VCPU_PMU_V3_FILTER, + .addr =3D (uint64_t) &pmu_event_filter, + }; + + /* KVM should not allow configuring filters after the PMU is initialized = */ + vpmu_vm =3D create_vpmu_vm(guest_code, NULL); + ret =3D __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr); + TEST_ASSERT(ret =3D=3D -1 && errno =3D=3D EBUSY, + "Failed to disallow setting an event filter after PMU init"); + destroy_vpmu_vm(vpmu_vm); + + /* Check for invalid event filter setting */ + vm =3D vm_create(1); + vm_ioctl(vm, 
KVM_ARM_PREFERRED_TARGET, &init); + init.features[0] |=3D (1 << KVM_ARM_VCPU_PMU_V3); + vcpu =3D aarch64_vcpu_add(vm, 0, &init, guest_code); + + pmu_event_filter.base_event =3D UINT16_MAX; + pmu_event_filter.nevents =3D 5; + ret =3D __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr); + TEST_ASSERT(ret =3D=3D -1 && errno =3D=3D EINVAL, "Failed check for inval= id filter configuration"); + kvm_vm_free(vm); +} + +static void run_kvm_event_filter_test(void) +{ + int i; + struct vpmu_vm *vpmu_vm; + struct kvm_vm *vm; + vm_vaddr_t pmu_filter_gva; + size_t pmu_filter_bmap_sz =3D BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeo= f(unsigned long); + + guest_data.test_stage =3D TEST_STAGE_KVM_EVENT_FILTER; + + /* Test for valid filter configurations */ + for (i =3D 0; i < ARRAY_SIZE(pmu_event_filters); i++) { + vpmu_vm =3D create_vpmu_vm(guest_code, pmu_event_filters[i]); + vm =3D vpmu_vm->vm; + + pmu_filter_gva =3D vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_V= ADDR); + memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter= _bmap_sz); + guest_data.pmu_filter =3D (unsigned long *) pmu_filter_gva; + + run_vcpu(vpmu_vm->vcpu); + + destroy_vpmu_vm(vpmu_vm); + } + + /* Check if KVM is handling the errors correctly */ + run_kvm_event_filter_error_tests(); +} + static void run_tests(uint64_t pmcr_n) { run_counter_access_tests(pmcr_n); + run_kvm_event_filter_test(); } =20 /* --=20 2.39.1.581.gbfd45094c4-goog From nobody Fri Sep 12 04:30:52 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B172C6379F for ; Mon, 13 Feb 2023 18:03:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230505AbjBMSDT (ORCPT ); Mon, 13 Feb 2023 13:03:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46602 "EHLO lindbergh.monkeyblade.net" 
(user=rananta job=sendgmr) by 2002:a92:9409:0:b0:310:9fc1:a92b with SMTP id c9-20020a929409000000b003109fc1a92bmr2739337ili.0.1676311367115; Mon, 13 Feb 2023 10:02:47 -0800 (PST) Date: Mon, 13 Feb 2023 18:02:28 +0000 In-Reply-To: <20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-8-rananta@google.com> Subject: [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" KVM doest't allow the guests to modify the filter types such counting events in nonsecure/secure-EL2, EL3, and so on. Validate the same by force-configuring the bits in PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0 registers. The test extends further by trying to create an event for counting only in EL2 and validates if the counter is not progressing. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++ 1 file changed, 85 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index 3dfb770b538e9..5c166df245589 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -15,6 +15,10 @@ * of allowing or denying the events. The guest validates it by * checking if it's able to count only the events that are allowed. * + * 3. 
KVM doesn't allow the guest to count the events attributed with + * higher exception levels (EL2, EL3). Verify this functionality by + * configuring and trying to count the events for EL2 in the guest. + * * Copyright (c) 2022 Google LLC. * */ @@ -23,6 +27,7 @@ #include #include #include +#include #include #include =20 @@ -259,6 +264,7 @@ struct vpmu_vm { enum test_stage { TEST_STAGE_COUNTER_ACCESS =3D 1, TEST_STAGE_KVM_EVENT_FILTER, + TEST_STAGE_KVM_EVTYPE_FILTER, }; =20 struct guest_data { @@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu= _filter) } } =20 +static void guest_evtype_filter_test(void) +{ + int i; + struct pmc_accessor *acc; + uint64_t typer, cnt; + struct arm_smccc_res res; + + pmu_enable(); + + /* + * KVM blocks the guests from creating events for counting in Secure/Non-= Secure Hyp (EL2), + * Monitor (EL3), and Multithreading configuration. It applies the mask + * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTY= PERn_EL0, + * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this = using all possible + * ways to configure the EVTYPER. + */ + for (i =3D 0; i < ARRAY_SIZE(pmc_accessors); i++) { + acc =3D &pmc_accessors[i]; + + /* Set all filter bits (31-24), readback, and check against the mask */ + acc->write_typer(0, 0xff000000); + typer =3D acc->read_typer(0); + + GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) =3D=3D ARMV8_PMU_EVTYPE_= MASK, + typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK); + + /* + * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv + * to not count NS-EL2 events. Verify this functionality by configuring + * a NS-EL2 event, for which the couunt shouldn't increment. 
+ */ + typer =3D ARMV8_PMUV3_PERFCTR_INST_RETIRED; + typer |=3D ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXC= LUDE_EL0; + acc->write_typer(0, typer); + acc->write_cntr(0, 0); + enable_counter(0); + + /* Issue a hypercall to enter EL2 and return */ + memset(&res, 0, sizeof(res)); + smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res); + + cnt =3D acc->read_cntr(0); + GUEST_ASSERT_3(cnt =3D=3D 0, cnt, typer, i); + } + + /* Check the same sequence for the Cycle counter */ + write_pmccfiltr(0xff000000); + typer =3D read_pmccfiltr(); + GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) =3D=3D ARMV8_PMU_EVTYPE_M= ASK, + typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK); + + typer =3D ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLU= DE_EL0; + write_pmccfiltr(typer); + reset_cycle_counter(); + enable_cycle_counter(); + + /* Issue a hypercall to enter EL2 and return */ + memset(&res, 0, sizeof(res)); + smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res); + + cnt =3D read_cycle_counter(); + GUEST_ASSERT_2(cnt =3D=3D 0, cnt, typer); +} + static void guest_code(void) { switch (guest_data.test_stage) { @@ -687,6 +757,9 @@ static void guest_code(void) case TEST_STAGE_KVM_EVENT_FILTER: guest_event_filter_test(guest_data.pmu_filter); break; + case TEST_STAGE_KVM_EVTYPE_FILTER: + guest_evtype_filter_test(); + break; default: GUEST_ASSERT_1(0, guest_data.test_stage); } @@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void) run_kvm_event_filter_error_tests(); } =20 +static void run_kvm_evtype_filter_test(void) +{ + struct vpmu_vm *vpmu_vm; + + guest_data.test_stage =3D TEST_STAGE_KVM_EVTYPE_FILTER; + + vpmu_vm =3D create_vpmu_vm(guest_code, NULL); + run_vcpu(vpmu_vm->vcpu); + destroy_vpmu_vm(vpmu_vm); +} + static void run_tests(uint64_t pmcr_n) { run_counter_access_tests(pmcr_n); run_kvm_event_filter_test(); + run_kvm_evtype_filter_test(); } =20 /* --=20 2.39.1.581.gbfd45094c4-goog From nobody Fri Sep 12 04:30:52 
2025
2ssHE3qKE/jgQIyHUxtY1Ct9yp5WZzMgWHKOPZIihFdxmNvTdmu5Lbmdbat3Rz9X5KKf olgDbZj+UbcD5iHlMYGXiRD8aCOofJ3COQ4G/ZCZBxYKubYjUfrp3qXTS1rB9AIW63Lx xCkD70YEysx4tZaIcbkyKGZ1Z8vsvsd28fzbw0BXAZyresf3blYZRyMnPVqLSiEGwWy0 g8Tw== X-Gm-Message-State: AO0yUKVaBGosBds35SvNtIVB8+Z3+kCobKD2Y+up8EKIL1Vzf4aBxK3e qBBUf/Qo83zsiiOKwEMOl1Q+ik7/tBe3 X-Google-Smtp-Source: AK7set/5XtGkWzBFYSheugf2dkdONWpj2qzeqEPgWjWfAnOy0vxdZ74FjxG8HtTLpC55TP3sfOr6vpeLRU29 X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a92:9407:0:b0:313:f870:58fb with SMTP id c7-20020a929407000000b00313f87058fbmr2508047ili.2.1676311368215; Mon, 13 Feb 2023 10:02:48 -0800 (PST) Date: Mon, 13 Feb 2023 18:02:29 +0000 In-Reply-To: <20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-9-rananta@google.com> Subject: [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement a stress test for KVM by frequently force-migrating the vCPU to random pCPUs in the system. This would validate the save/restore functionality of KVM and starting/stopping of PMU counters as necessary. 
Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++- 1 file changed, 193 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index 5c166df245589..0c9d801f4e602 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -19,9 +19,15 @@ * higher exception levels (EL2, EL3). Verify this functionality by * configuring and trying to count the events for EL2 in the guest. * + * 4. Since the PMU registers are per-cpu, stress KVM by frequently + * migrating the guest vCPU to random pCPUs in the system, and check + * if the vPMU is still behaving as expected. + * * Copyright (c) 2022 Google LLC. * */ +#define _GNU_SOURCE + #include #include #include @@ -30,6 +36,11 @@ #include #include #include +#include +#include +#include + +#include "delay.h" =20 /* The max number of the PMU event counters (excluding the cycle counter) = */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) @@ -37,6 +48,8 @@ /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 =20 +#define msecs_to_usecs(msec) ((msec) * 1000LL) + /* * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_= EL0 * were basically copied from arch/arm64/kernel/perf_event.c. 
@@ -265,6 +278,7 @@ enum test_stage { TEST_STAGE_COUNTER_ACCESS =3D 1, TEST_STAGE_KVM_EVENT_FILTER, TEST_STAGE_KVM_EVTYPE_FILTER, + TEST_STAGE_VCPU_MIGRATION, }; =20 struct guest_data { @@ -275,6 +289,19 @@ struct guest_data { =20 static struct guest_data guest_data; =20 +#define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 +#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 + +struct test_args { + int vcpu_migration_test_iter; + int vcpu_migration_test_migrate_freq_ms; +}; + +static struct test_args test_args =3D { + .vcpu_migration_test_iter =3D VCPU_MIGRATIONS_TEST_ITERS_DEF, + .vcpu_migration_test_migrate_freq_ms =3D VCPU_MIGRATIONS_TEST_MIGRATION_F= REQ_MS, +}; + static void guest_sync_handler(struct ex_regs *regs) { uint64_t esr, ec; @@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event) GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ } =20 - /* * Extra instructions inserted by the compiler would be difficult to compe= nsate * for, so hand assemble everything between, and including, the PMCR acces= ses @@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_i= dx, bool expect_count) } } =20 +static void test_basic_pmu_functionality(void) +{ + /* Test events on generic and cycle counters */ + test_instructions_count(0, true); + test_cycles_count(true); +} + /* * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers * are set or cleared as specified in @set_expected. @@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void) GUEST_ASSERT_2(cnt =3D=3D 0, cnt, typer); } =20 +static void guest_vcpu_migration_test(void) +{ + /* + * While the userspace continuously migrates this vCPU to random pCPUs, + * run basic PMU functionalities and verify the results. 
+ */ + while (test_args.vcpu_migration_test_iter--) + test_basic_pmu_functionality(); +} + static void guest_code(void) { switch (guest_data.test_stage) { @@ -760,6 +803,9 @@ static void guest_code(void) case TEST_STAGE_KVM_EVTYPE_FILTER: guest_evtype_filter_test(); break; + case TEST_STAGE_VCPU_MIGRATION: + guest_vcpu_migration_test(); + break; default: GUEST_ASSERT_1(0, guest_data.test_stage); } @@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_f= ilter *pmu_event_filters) =20 vpmu_vm->vm =3D vm =3D vm_create(1); vm_init_descriptor_tables(vm); + /* Catch exceptions for easier debugging */ for (ec =3D 0; ec < ESR_EC_NUM; ec++) { vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec, @@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu) struct ucall uc; =20 sync_global_to_guest(vcpu->vm, guest_data); + sync_global_to_guest(vcpu->vm, test_args); + vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: @@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void) destroy_vpmu_vm(vpmu_vm); } =20 +struct vcpu_migrate_data { + struct vpmu_vm *vpmu_vm; + pthread_t *pt_vcpu; + bool vcpu_done; +}; + +static void *run_vcpus_migrate_test_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data =3D arg; + struct vpmu_vm *vpmu_vm =3D migrate_data->vpmu_vm; + + run_vcpu(vpmu_vm->vcpu); + migrate_data->vcpu_done =3D true; + + return NULL; +} + +static uint32_t get_pcpu(void) +{ + uint32_t pcpu; + unsigned int nproc_conf; + cpu_set_t online_cpuset; + + nproc_conf =3D get_nprocs_conf(); + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); + + /* Randomly find an available pCPU to place the vCPU on */ + do { + pcpu =3D rand() % nproc_conf; + } while (!CPU_ISSET(pcpu, &online_cpuset)); + + return pcpu; +} + +static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) +{ + int ret; + cpu_set_t cpuset; + uint32_t new_pcpu =3D get_pcpu(); + + CPU_ZERO(&cpuset); + CPU_SET(new_pcpu, &cpuset); + + pr_debug("Migrating vCPU to 
pCPU: %u\n", new_pcpu); + + ret =3D pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &c= puset); + + /* Allow the error where the vCPU thread is already finished */ + TEST_ASSERT(ret =3D=3D 0 || ret =3D=3D ESRCH, + "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret); + + return ret; +} + +static void *vcpus_migrate_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data =3D arg; + + while (!migrate_data->vcpu_done) { + usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms)); + migrate_vcpu(migrate_data); + } + + return NULL; +} + +static void run_vcpu_migration_test(uint64_t pmcr_n) +{ + int ret; + struct vpmu_vm *vpmu_vm; + pthread_t pt_vcpu, pt_sched; + struct vcpu_migrate_data migrate_data =3D { + .pt_vcpu =3D &pt_vcpu, + .vcpu_done =3D false, + }; + + __TEST_REQUIRE(get_nprocs() >=3D 2, "At least two pCPUs needed for vCPU m= igration test"); + + guest_data.test_stage =3D TEST_STAGE_VCPU_MIGRATION; + guest_data.expected_pmcr_n =3D pmcr_n; + + migrate_data.vpmu_vm =3D vpmu_vm =3D create_vpmu_vm(guest_code, NULL); + + /* Initialize random number generation for migrating vCPUs to random pCPU= s */ + srand(time(NULL)); + + /* Spawn a vCPU thread */ + ret =3D pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migr= ate_data); + TEST_ASSERT(!ret, "Failed to create the vCPU thread"); + + /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */ + ret =3D pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data= ); + TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating th= e vCPUs"); + + pthread_join(pt_sched, NULL); + pthread_join(pt_vcpu, NULL); + + destroy_vpmu_vm(vpmu_vm); +} + static void run_tests(uint64_t pmcr_n) { run_counter_access_tests(pmcr_n); run_kvm_event_filter_test(); run_kvm_evtype_filter_test(); + run_vcpu_migration_test(pmcr_n); } =20 /* @@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void) return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); } =20 -int 
main(void) +static void print_help(char *name) +{ + pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n", + name); + pr_info("\t-i: Number of iterations of the vCPU migration test (default: %u)\n", + VCPU_MIGRATIONS_TEST_ITERS_DEF); + pr_info("\t-m: Interval (in ms) at which the vCPU is migrated to a different pCPU (default: %u)\n", + VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS); + pr_info("\t-h: print this help screen\n"); +} + +static bool parse_args(int argc, char *argv[]) +{ + int opt; + + while ((opt =3D getopt(argc, argv, "hi:m:")) !=3D -1) { + switch (opt) { + case 'i': + test_args.vcpu_migration_test_iter =3D + atoi_positive("Nr vCPU migration iterations", optarg); + break; + case 'm': + test_args.vcpu_migration_test_migrate_freq_ms =3D + atoi_positive("vCPU migration frequency", optarg); + break; + case 'h': + default: + goto err; + } + } + + return true; + +err: + print_help(argv[0]); + return false; +} + +int main(int argc, char *argv[]) { uint64_t pmcr_n; =20 TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); =20 + if (!parse_args(argc, argv)) + exit(KSFT_SKIP); + pmcr_n =3D get_pmcr_n_limit(); run_tests(pmcr_n); =20 --=20 2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:30 +0000 In-Reply-To:
<20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-10-rananta@google.com> Subject: [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Extend the vCPU migration test to also validate the vPMU's functionality when set up for overflow conditions. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++-- 1 file changed, 198 insertions(+), 25 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index 0c9d801f4e602..066dc17fa3906 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -21,7 +21,9 @@ * * 4. Since the PMU registers are per-cpu, stress KVM by frequently * migrating the guest vCPU to random pCPUs in the system, and check - * if the vPMU is still behaving as expected. + * if the vPMU is still behaving as expected. The sub-tests include + * testing basic functionalities such as basic counters behavior, + * overflow, and overflow interrupts. * * Copyright (c) 2022 Google LLC. 
* @@ -41,13 +43,27 @@ #include =20 #include "delay.h" +#include "gic.h" +#include "spinlock.h" =20 /* The max number of the PMU event counters (excluding the cycle counter) = */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) =20 +/* The cycle counter bit position that's common among the PMU registers */ +#define ARMV8_PMU_CYCLE_COUNTER_IDX 31 + /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 =20 +#define PMU_IRQ 23 + +#define COUNT_TO_OVERFLOW 0xFULL +#define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) +#define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1) + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + #define msecs_to_usecs(msec) ((msec) * 1000LL) =20 /* @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned= long val) isb(); } =20 +static inline void write_pmovsclr(unsigned long val) +{ + write_sysreg(val, pmovsclr_el0); + isb(); +} + +static unsigned long read_pmovsclr(void) +{ + return read_sysreg(pmovsclr_el0); +} + static inline void enable_counter(int idx) { uint64_t v =3D read_sysreg(pmcntenset_el0); @@ -178,11 +205,33 @@ static inline void disable_counter(int idx) isb(); } =20 +static inline void enable_irq(int idx) +{ + /* Read-modify-write PMINTENSET_EL1 (not PMCNTENSET_EL0) */ + uint64_t v =3D read_sysreg(pmintenset_el1); + + write_sysreg(BIT(idx) | v, pmintenset_el1); + isb(); +} + +static inline void disable_irq(int idx) +{ + /* Writing BIT(idx) to the CLR register clears only that bit */ + write_sysreg(BIT(idx), pmintenclr_el1); + isb(); +} + static inline uint64_t read_cycle_counter(void) { return read_sysreg(pmccntr_el0); } =20 +static inline void write_cycle_counter(uint64_t v) +{ + write_sysreg(v, pmccntr_el0); + isb(); +} + static inline void reset_cycle_counter(void) { uint64_t v =3D read_sysreg(pmcr_el0); @@ -289,6 +338,15 @@ struct guest_data { =20 static struct guest_data guest_data; =20 +/* Data to communicate among guest threads */ +struct guest_irq_data { + uint32_t pmc_idx_bmap; +
uint32_t irq_received_bmap; + struct spinlock lock; +}; + +static struct guest_irq_data guest_irq_data; + #define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 =20 @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs) expected_ec =3D INVALID_EC; } =20 +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pm= c_idx_bmap) +{ + /* + * Fail if there's an interrupt from unexpected PMCs. + * All the expected events' IRQs may not arrive at the same time. + * Hence, check if the interrupt is valid only if it's expected. + */ + if (pmovsclr & BIT(pmc_idx)) { + GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_b= map); + write_pmovsclr(BIT(pmc_idx)); + } +} + +static void guest_irq_handler(struct ex_regs *regs) +{ + uint32_t pmc_idx_bmap; + uint64_t i, pmcr_n =3D get_pmcr_n(); + uint32_t pmovsclr =3D read_pmovsclr(); + unsigned int intid =3D gic_get_and_ack_irq(); + + /* No other IRQ apart from the PMU IRQ is expected */ + GUEST_ASSERT_1(intid =3D=3D PMU_IRQ, intid); + + spin_lock(&guest_irq_data.lock); + pmc_idx_bmap =3D READ_ONCE(guest_irq_data.pmc_idx_bmap); + + for (i =3D 0; i < pmcr_n; i++) + guest_validate_irq(i, pmovsclr, pmc_idx_bmap); + guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap); + + /* Mark IRQ as received for the corresponding PMCs */ + WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr); + spin_unlock(&guest_irq_data.lock); + + gic_set_eoi(intid); +} + +static int pmu_irq_received(int pmc_idx) +{ + bool irq_received; + + spin_lock(&guest_irq_data.lock); + irq_received =3D READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_id= x); + WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap = & ~BIT(pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + return irq_received; +} + +static void pmu_irq_init(int pmc_idx) +{ + write_pmovsclr(BIT(pmc_idx)); + + spin_lock(&guest_irq_data.lock); +
WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap = & ~BIT(pmc_idx)); + WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT= (pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + enable_irq(pmc_idx); +} + +static void pmu_irq_exit(int pmc_idx) +{ + write_pmovsclr(BIT(pmc_idx)); + + spin_lock(&guest_irq_data.lock); + WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap = & ~BIT(pmc_idx)); + WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BI= T(pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + disable_irq(pmc_idx); +} + /* * Run the given operation that should trigger an exception with the * given exception class. The exception handler (guest_sync_handler) @@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t = pmcr) precise_instrs_loop(loop, pmcr); } =20 -static void test_instructions_count(int pmc_idx, bool expect_count) +static void test_instructions_count(int pmc_idx, bool expect_count, bool t= est_overflow) { int i; struct pmc_accessor *acc; - uint64_t cnt; - int instrs_count =3D 100; + uint64_t cntr_val =3D 0; + int instrs_count =3D 500; + + if (test_overflow) { + /* Overflow scenarios can only be tested when a count is expected */ + GUEST_ASSERT_1(expect_count, pmc_idx); + + cntr_val =3D PRE_OVERFLOW_32; + pmu_irq_init(pmc_idx); + } =20 enable_counter(pmc_idx); =20 @@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool= expect_count) for (i =3D 0; i < ARRAY_SIZE(pmc_accessors); i++) { acc =3D &pmc_accessors[i]; =20 - pmu_disable_reset(); - + acc->write_cntr(pmc_idx, cntr_val); acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED); =20 - /* Enable the PMU and execute precisely number of instructions as a work= load */ - execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_P= MCR_E); + /* + * Enable the PMU and execute a precise number of instructions as a work= load. 
+ * Since execute_precise_instrs() disables the PMU at the end, 'instrs_c= ount' + * should have enough instructions to raise an IRQ. + */ + execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E); =20 - /* If a count is expected, the counter should be increased by 'instrs_co= unt' */ - cnt =3D acc->read_cntr(pmc_idx); - GUEST_ASSERT_4(expect_count =3D=3D (cnt =3D=3D instrs_count), - i, expect_count, cnt, instrs_count); + /* + * If an overflow is expected, only check for the overflow flag. + * As the overflow interrupt is enabled, the interrupt would add additional + * instructions and mess up the precise instruction count. Hence, measure + * the instruction count only when the test is not set up for an overflow. + */ + if (test_overflow) { + GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i); + } else { + uint64_t cnt =3D acc->read_cntr(pmc_idx); + + GUEST_ASSERT_4(expect_count =3D=3D (cnt =3D=3D instrs_count), + pmc_idx, i, cnt, expect_count); + } } =20 - disable_counter(pmc_idx); + if (test_overflow) + pmu_irq_exit(pmc_idx); } =20 -static void test_cycles_count(bool expect_count) +static void test_cycles_count(bool expect_count, bool test_overflow) { uint64_t cnt; =20 - pmu_enable(); - reset_cycle_counter(); + if (test_overflow) { + /* Overflow scenarios can only be tested when a count is expected */ + GUEST_ASSERT(expect_count); + + write_cycle_counter(PRE_OVERFLOW_64); + pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX); + } else { + reset_cycle_counter(); + } =20 /* Count cycles in EL0 and EL1 */ write_pmccfiltr(0); enable_cycle_counter(); =20 + /* Enable the PMU and execute a precise number of instructions as a workload */ + execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E); cnt =3D read_cycle_counter(); =20 /* * If a count is expected by the test, the cycle counter should be increa= sed by - at least 1, as there is at least one instruction between enabling the + * at least 1, as there are a number of instructions between enabling the + *
counter and reading the counter. */ GUEST_ASSERT_2(expect_count =3D=3D (cnt > 0), cnt, expect_count); + if (test_overflow) { + GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expec= t_count); + pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX); + } =20 disable_cycle_counter(); pmu_disable_reset(); @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_= idx, bool expect_count) { switch (event) { case ARMV8_PMUV3_PERFCTR_INST_RETIRED: - test_instructions_count(pmc_idx, expect_count); + test_instructions_count(pmc_idx, expect_count, false); break; case ARMV8_PMUV3_PERFCTR_CPU_CYCLES: - test_cycles_count(expect_count); + test_cycles_count(expect_count, false); break; } } =20 static void test_basic_pmu_functionality(void) { + local_irq_disable(); + gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA); + gic_irq_enable(PMU_IRQ); + local_irq_enable(); + /* Test events on generic and cycle counters */ - test_instructions_count(0, true); - test_cycles_count(true); + test_instructions_count(0, true, false); + test_cycles_count(true, false); + + /* Test overflow with interrupts on generic and cycle counters */ + test_instructions_count(0, true, true); + test_cycles_count(true, true); } =20 /* @@ -813,9 +988,6 @@ static void guest_code(void) GUEST_DONE(); } =20 -#define GICD_BASE_GPA 0x8000000ULL -#define GICR_BASE_GPA 0x80A0000ULL - static unsigned long * set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_= event_filters) { @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_= filter *pmu_event_filters) struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; uint8_t pmuver, ec; - uint64_t dfr0, irq =3D 23; + uint64_t dfr0, irq =3D PMU_IRQ; struct vpmu_vm *vpmu_vm; struct kvm_device_attr irq_attr =3D { .group =3D KVM_ARM_VCPU_PMU_V3_CTRL, @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_= filter *pmu_event_filters) =20 vpmu_vm->vm =3D vm =3D vm_create(1); 
vm_init_descriptor_tables(vm); + vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler); =20 /* Catch exceptions for easier debugging */ for (ec =3D 0; ec < ESR_EC_NUM; ec++) { --=20 2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:31 +0000 In-Reply-To: <20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-11-rananta@google.com> Subject: [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Extend the vPMU's vCPU migration test to validate chained events, and their overflow conditions.
Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++- 1 file changed, 75 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index 066dc17fa3906..de725f4339ad5 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -23,7 +23,7 @@ * migrating the guest vCPU to random pCPUs in the system, and check * if the vPMU is still behaving as expected. The sub-tests include * testing basic functionalities such as basic counters behavior, - * overflow, and overflow interrupts. + * overflow, overflow interrupts, and chained events. * @@ -61,6 +61,8 @@ #define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) #define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1) =20 +#define ALL_SET_64 GENMASK(63, 0) + #define GICD_BASE_GPA 0x8000000ULL #define GICR_BASE_GPA 0x80A0000ULL =20 @@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool = test_overflow) pmu_disable_reset(); } =20 +static void test_chained_count(int pmc_idx) +{ + int i, chained_pmc_idx; + struct pmc_accessor *acc; + uint64_t pmcr_n, cnt, cntr_val; + + /* The test needs at least two PMCs */ + pmcr_n =3D get_pmcr_n(); + GUEST_ASSERT_1(pmcr_n >=3D 2, pmcr_n); + + /* + * The chained counter always sits at (pmc_idx + 1). + * pmc_idx should be even as the chained event doesn't count on + * odd numbered counters. + */ + GUEST_ASSERT_1(pmc_idx % 2 =3D=3D 0, pmc_idx); + + /* + * The max counter idx that the chained counter can occupy is + * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
+ */ + chained_pmc_idx =3D pmc_idx + 1; + GUEST_ASSERT(chained_pmc_idx < pmcr_n); + + enable_counter(chained_pmc_idx); + pmu_irq_init(chained_pmc_idx); + + /* Configure the chained event using all the possible ways */ + for (i =3D 0; i < ARRAY_SIZE(pmc_accessors); i++) { + acc =3D &pmc_accessors[i]; + + /* Test if the chained counter increments when the base event overflows = */ + + cntr_val =3D 1; + acc->write_cntr(chained_pmc_idx, cntr_val); + acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN); + + /* Chain the counter with pmc_idx that's configured for an overflow */ + test_instructions_count(pmc_idx, true, true); + + /* + * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessor= s) + * combinations. Hence, the chained counter (chained_pmc_idx) is expected + * to read cntr_val + ARRAY_SIZE(pmc_accessors). + */ + cnt =3D acc->read_cntr(chained_pmc_idx); + GUEST_ASSERT_4(cnt =3D=3D cntr_val + ARRAY_SIZE(pmc_accessors), + pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors)); + + /* Test for the overflow of the chained counter itself */ + + cntr_val =3D ALL_SET_64; + acc->write_cntr(chained_pmc_idx, cntr_val); + + test_instructions_count(pmc_idx, true, true); + + /* + * At this point, an interrupt should've been fired for the chained + * counter (which validates the overflow bit), and the counter should've
+ */ + cnt =3D acc->read_cntr(chained_pmc_idx); + GUEST_ASSERT_4(cnt =3D=3D ARRAY_SIZE(pmc_accessors) - 1, + pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors)); + } + + pmu_irq_exit(chained_pmc_idx); +} + static void test_event_count(uint64_t event, int pmc_idx, bool expect_coun= t) { switch (event) { @@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void) /* Test overflow with interrupts on generic and cycle counters */ test_instructions_count(0, true, true); test_cycles_count(true, true); + + /* Test chained events */ + test_chained_count(0); } =20 /* --=20 2.39.1.581.gbfd45094c4-goog From nobody Fri Sep 12 04:30:52 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9B7AC636CC for ; Mon, 13 Feb 2023 18:03:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229751AbjBMSDl (ORCPT ); Mon, 13 Feb 2023 13:03:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231194AbjBMSDN (ORCPT ); Mon, 13 Feb 2023 13:03:13 -0500 Received: from mail-il1-x14a.google.com (mail-il1-x14a.google.com [IPv6:2607:f8b0:4864:20::14a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FD31213E for ; Mon, 13 Feb 2023 10:02:52 -0800 (PST) Received: by mail-il1-x14a.google.com with SMTP id s8-20020a056e0210c800b003155720bbbfso382287ilj.22 for ; Mon, 13 Feb 2023 10:02:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=E305a934JyKR6d0LbQnwDb1KLXy1uRhITDhO1feUNVQ=; b=klshuBOzJGsDndpswreQ0XlinbMaJK4kRqAQ+37wZY8lmbOhHgx/ZdZOFpaN1lJ6Yy 
GKSvylJPXQGCRQtm8PG+CoJXz3i/KEj97JV/Ab045b8U+Wj43GYkWufOUVvppWmGFm0v WaQbsZTE/P4Putg06m5v8Od47rRyUpcHCvrxvuzKG+FvCNge/NQA8/pfj9YEQUHWkd1n t1T8Jk93C0yORnKG07Q1162qgwKRkb9xUo9DJnwrkywAG4Rmh/eaGJg6ea6LmI7s1gpt sHUJKKq/+DnRWgfVNv9ltO9gOAFPNz18MlmUrWzqIN8X8XsBaKgJsikEtqscWqdJj/h9 2Zgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=E305a934JyKR6d0LbQnwDb1KLXy1uRhITDhO1feUNVQ=; b=WKq7SvT+z8ocefc1o47J04R7aeYFlhcP+HbXhshk1xArYbRi4KO3xrr7+uR9gGzp5x Zx2gPQrZtJOWQ+lhZP5pqT6ROuUaU+oGGQog0OItOEwNFyMXazrinFrL2xzfFFx82zLs AY+E3vIQHRf3SLdiPyDDvrPc3TGy5SvRSy1wSgUaTHAyVBAXVgg9InRn5nTSxKsZTRVd 0bzXLpPf+3amvyFi48+wn4F1hFlJXHqoWXJV2X31oTB5et02/+Ys70ix5ZhtZsmYyvBh 2Rnhh1OwObhPps0iXTn9oruGhp9/r/lbDEF9b9GpjlikWK1MTgAo++7BgFXoOPqbRHV7 C1Sg== X-Gm-Message-State: AO0yUKXENhlT7fFR53OvdThpFKq5zDmwiAKtGgRPCr/najUJetQTrwew nJoXjvCVnX1n1yZ/RCPxNF3xLbOS0vzP X-Google-Smtp-Source: AK7set9XcTIYijLiXD+fG0978y9t+ze0/330Z/53BuCCPZ7jl1ZgND52wYhOh70n0GJQrhaPSQBmbepj7EJS X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6638:d0c:b0:3a7:e46f:1018 with SMTP id q12-20020a0566380d0c00b003a7e46f1018mr113jaj.2.1676311371815; Mon, 13 Feb 2023 10:02:51 -0800 (PST) Date: Mon, 13 Feb 2023 18:02:32 +0000 In-Reply-To: <20230213180234.2885032-1-rananta@google.com> Mime-Version: 1.0 References: <20230213180234.2885032-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.581.gbfd45094c4-goog Message-ID: <20230213180234.2885032-12-rananta@google.com> Subject: [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters From: Raghavendra Rao Ananta To: Oliver Upton , Reiji Watanabe , Marc Zyngier , Ricardo Koller , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , 
linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Extend the vCPU migration test to occupy all the vPMU counters, by configuring chained events on alternate counter-ids, chaining each with its corresponding predecessor counter, and verifying the extended behavior. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testin= g/selftests/kvm/aarch64/vpmu_test.c index de725f4339ad5..fd00acb9391c8 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx) pmu_irq_exit(chained_pmc_idx); } =20 +static void test_chain_all_counters(void) +{ + int i; + uint64_t cnt, pmcr_n =3D get_pmcr_n(); + struct pmc_accessor *acc =3D &pmc_accessors[0]; + + /* + * Test the occupancy of all the event counters, by chaining the + * alternate counters. The test assumes that the host hasn't + * occupied any counters. Hence, if the test fails, it could be + * because all the counters weren't available to the guest or + * there's actually a bug in KVM. + */ + + /* + * Configure even numbered counters to count cpu-cycles, and chain + * each of them with its odd numbered counter.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
+			acc->write_cntr(i, 1);
+		} else {
+			pmu_irq_init(i);
+			acc->write_cntr(i, PRE_OVERFLOW_32);
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+		}
+		enable_counter(i);
+	}
+
+	/* Introduce some cycles */
+	execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
+
+	/*
+	 * An overflow interrupt should've arrived for all the even numbered
+	 * counters but none for the odd numbered ones. The odd numbered ones
+	 * should've incremented exactly by 1.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			GUEST_ASSERT_1(!pmu_irq_received(i), i);
+
+			cnt = acc->read_cntr(i);
+			GUEST_ASSERT_2(cnt == 2, i, cnt);
+		} else {
+			GUEST_ASSERT_1(pmu_irq_received(i), i);
+		}
+	}
+
+	/* Cleanup the states */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2 == 0)
+			pmu_irq_exit(i);
+		disable_counter(i);
+	}
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
 
 	/* Test chained events */
 	test_chained_count(0);
+
+	/* Test running chained events on all the implemented counters */
+	test_chain_all_counters();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:33 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-13-rananta@google.com>
Subject: [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The PMU test's create_vpmu_vm() currently creates a VM with only one vCPU. Extend it to accept the number of vCPUs as an argument and create a multi-vCPU VM. This will help the upcoming patches test the vPMU context across multiple vCPUs. No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++--------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index fd00acb9391c8..239fc7e06b3b9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -320,7 +320,8 @@ uint64_t op_end_addr;
 
 struct vpmu_vm {
 	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
+	int nr_vcpus;
+	struct kvm_vcpu **vcpus;
 	int gic_fd;
 	unsigned long *pmu_filter;
 };
@@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_
 	return pmu_filter;
 }
 
-/* Create a VM that has one vCPU with PMUv3 configured. */
+/* Create a VM with PMUv3 configured.
+ */
 static struct vpmu_vm *
-create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
+create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
+	int i;
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
@@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
 	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
 
-	vpmu_vm->vm = vm = vm_create(1);
+	vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *));
+	TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus");
+	vpmu_vm->nr_vcpus = nr_vcpus;
+
+	vpmu_vm->vm = vm = vm_create(nr_vcpus);
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
@@ -1197,26 +1203,35 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 		guest_sync_handler);
 	}
 
-	/* Create vCPU with PMUv3 */
+	/* Create vCPUs with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vcpu);
-	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+	for (i = 0; i < nr_vcpus; i++) {
+		vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code);
+		vcpu_init_descriptor_tables(vcpu);
+	}
 
-	/* Initialize vPMU */
-	if (pmu_event_filters)
-		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+	/* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	for (i = 0; i < nr_vcpus; i++) {
+		vcpu = vpmu_vm->vcpus[i];
+
+		/* Make sure that PMUv3 support is indicated in the ID register */
+		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+		pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+		TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+			    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+			    "Unexpected PMUVER (0x%x) on the vCPU %d with PMUv3", i, pmuver);
+
+		/* Initialize vPMU */
+		if (pmu_event_filters)
+			vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	}
 
 	return vpmu_vm;
 }
@@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm->vcpus);
 	free(vpmu_vm);
 }
 
@@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void)
 	};
 
 	/* KVM should not allow configuring filters after the PMU is initialized */
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr);
 	TEST_ASSERT(ret == -1 && errno == EBUSY,
 		    "Failed to disallow setting an event filter after PMU init");
 	destroy_vpmu_vm(vpmu_vm);
@@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void)
 
 	/* Test for valid filter configurations */
 	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
-		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]);
 		vm = vpmu_vm->vm;
 
 		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
 		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
 		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
 
-		run_vcpu(vpmu_vm->vcpu);
+		run_vcpu(vpmu_vm->vcpus[0]);
 
 		destroy_vpmu_vm(vpmu_vm);
 	}
@@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void)
 
 	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	run_vcpu(vpmu_vm->vcpu);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	destroy_vpmu_vm(vpmu_vm);
 }
 
@@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg)
 	struct vcpu_migrate_data *migrate_data = arg;
 	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
 
-	run_vcpu(vpmu_vm->vcpu);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	migrate_data->vcpu_done = true;
 
 	return NULL;
@@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n)
 	guest_data.test_stage =
		TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
@@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
 
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
-- 
2.39.1.581.gbfd45094c4-goog

From nobody Fri Sep 12 04:30:52 2025
Date: Mon, 13 Feb 2023 18:02:34 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-14-rananta@google.com>
Subject: [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

To test KVM's handling of multiple vCPU contexts that are frequently migrated across random pCPUs in the system, extend the test to create a VM with multiple vCPUs and validate the behavior.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 239fc7e06b3b9..c9d8e5f9a22ab 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,11 +19,12 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
- * 4. Since the PMU registers are per-cpu, stress KVM by frequently
- * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected. The sub-tests include
- * testing basic functionalities such as basic counters behavior,
- * overflow, overflow interrupts, and chained events.
+ * 4. Since the PMU registers are per-cpu, stress KVM by creating a
+ * multi-vCPU VM, then frequently migrate the guest vCPUs to random
+ * pCPUs in the system, and check if the vPMU is still behaving as
+ * expected. The sub-tests include testing basic functionalities such
+ * as basic counters behavior, overflow, overflow interrupts, and
+ * chained events.
  *
  * Copyright (c) 2022 Google LLC.
 *
@@ -348,19 +349,22 @@ struct guest_irq_data {
 	struct spinlock lock;
 };
 
-static struct guest_irq_data guest_irq_data;
+static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];
 
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+#define VCPU_MIGRATIONS_TEST_NR_VPUS_DEF	2
 
 struct test_args {
 	int vcpu_migration_test_iter;
 	int vcpu_migration_test_migrate_freq_ms;
+	int vcpu_migration_test_nr_vcpus;
 };
 
 static struct test_args test_args = {
 	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
 	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
	.vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VPUS_DEF,
 };
 
 static void guest_sync_handler(struct ex_regs *regs)
@@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
 	}
 }
 
+static struct guest_irq_data *get_irq_data(void)
+{
+	uint32_t cpu = guest_get_vcpuid();
+
+	return &guest_irq_data[cpu];
+}
+
 static void guest_irq_handler(struct ex_regs *regs)
 {
 	uint32_t pmc_idx_bmap;
 	uint64_t i, pmcr_n = get_pmcr_n();
 	uint32_t pmovsclr = read_pmovsclr();
 	unsigned int intid = gic_get_and_ack_irq();
+	struct guest_irq_data *irq_data = get_irq_data();
 
 	/* No other IRQ apart from the PMU IRQ is expected */
 	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
 
-	spin_lock(&guest_irq_data.lock);
-	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+	spin_lock(&irq_data->lock);
+	pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);
 
 	for (i = 0; i < pmcr_n; i++)
 		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
 	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
 
 	/* Mark IRQ as received for the corresponding PMCs */
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
-	spin_unlock(&guest_irq_data.lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
+	spin_unlock(&irq_data->lock);
 
 	gic_set_eoi(intid);
 }
@@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
 static int pmu_irq_received(int pmc_idx)
 {
 	bool irq_received;
+	struct guest_irq_data *irq_data = get_irq_data();
 
-	spin_lock(&guest_irq_data.lock);
-	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	return irq_received;
 }
 
 static void pmu_irq_init(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	enable_irq(pmc_idx);
 }
 
 static void pmu_irq_exit(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	disable_irq(pmc_idx);
 }
@@ -783,7 +800,8 @@ static void
test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 static void test_basic_pmu_functionality(void)
 {
 	local_irq_disable();
-	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
+		 (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
 	gic_irq_enable(PMU_IRQ);
 	local_irq_enable();
 
@@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)
 
 static void guest_vcpu_migration_test(void)
 {
+	int iter = test_args.vcpu_migration_test_iter;
+
 	/*
 	 * While the userspace continuously migrates this vCPU to random pCPUs,
 	 * run basic PMU functionalities and verify the results.
 	 */
-	while (test_args.vcpu_migration_test_iter--)
+	while (iter--)
 		test_basic_pmu_functionality();
 }
 
@@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)
 
 struct vcpu_migrate_data {
 	struct vpmu_vm *vpmu_vm;
-	pthread_t *pt_vcpu;
-	bool vcpu_done;
+	pthread_t *pt_vcpus;
+	unsigned long *vcpu_done_map;
+	pthread_mutex_t vcpu_done_map_lock;
 };
 
+struct vcpu_migrate_data migrate_data;
+
 static void *run_vcpus_migrate_test_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
-	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	unsigned int vcpu_idx = (unsigned long)arg;
 
-	run_vcpu(vpmu_vm->vcpus[0]);
-	migrate_data->vcpu_done = true;
+	run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
+
+	pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+	__set_bit(vcpu_idx, migrate_data.vcpu_done_map);
+	pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
 
 	return NULL;
 }
@@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
 	return pcpu;
 }
 
-static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+static int migrate_vcpu(int vcpu_idx)
 {
 	int ret;
 	cpu_set_t cpuset;
@@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);
 
-	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+	pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);
 
-	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+	ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);
 
 	/* Allow the error where the vCPU thread is already finished */
 	TEST_ASSERT(ret == 0 || ret == ESRCH,
@@ -1526,48 +1552,74 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 
 static void *vcpus_migrate_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
+	bool vcpu_done;
 
-	while (!migrate_data->vcpu_done) {
+	do {
 		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
-		migrate_vcpu(migrate_data);
-	}
+		for (n_done = 0, i = 0; i < nr_vcpus; i++) {
+			pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+			vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
+			pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			migrate_vcpu(i);
+		}
+
+	} while (nr_vcpus != n_done);
 
 	return NULL;
 }
 
 static void run_vcpu_migration_test(uint64_t pmcr_n)
 {
-	int ret;
+	int i, nr_vcpus, ret;
 	struct vpmu_vm *vpmu_vm;
-	pthread_t pt_vcpu, pt_sched;
-	struct vcpu_migrate_data migrate_data = {
-		.pt_vcpu = &pt_vcpu,
-		.vcpu_done = false,
-	};
+	pthread_t pt_sched, *pt_vcpus;
 
 	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
 
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
+
+	migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
+	TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
+	pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
+
+	migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
+	TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
 
-	/* Spawn a vCPU thread */
-	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
-	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+	/* Spawn vCPU threads */
+	for (i = 0; i < nr_vcpus; i++) {
+		ret = pthread_create(&pt_vcpus[i], NULL,
+				     run_vcpus_migrate_test_func, (void *)(unsigned long)i);
+		TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
+	}
 
 	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
-	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
 	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
 
 	pthread_join(pt_sched, NULL);
-	pthread_join(pt_vcpu, NULL);
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(pt_vcpus[i], NULL);
 
 	destroy_vpmu_vm(vpmu_vm);
+	free(pt_vcpus);
+	bitmap_free(migrate_data.vcpu_done_map);
 }
 
 static void run_tests(uint64_t pmcr_n)
@@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)
 
 static void print_help(char *name)
 {
-	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
-		name);
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
+		" [-n vcpu_migration_nr_vcpus]\n", name);
 	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_ITERS_DEF);
 	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-n: Number of vCPUs for vCPU migrations test. (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_NR_VPUS_DEF);
 	pr_info("\t-h: print this help screen\n");
 }
 
@@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
 		switch (opt) {
 		case 'i':
 			test_args.vcpu_migration_test_iter =
@@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
 			test_args.vcpu_migration_test_migrate_freq_ms =
 				atoi_positive("vCPU migration frequency", optarg);
 			break;
+		case 'n':
+			test_args.vcpu_migration_test_nr_vcpus =
+				atoi_positive("Nr vCPUs for vCPU migrations", optarg);
+			if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
+				pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
			goto err;
-- 
2.39.1.581.gbfd45094c4-goog