From: Raghavendra Rao Ananta
Date: Tue, 26 Sep 2023 23:40:07 +0000
Subject: [PATCH v6 10/11] KVM: selftests: aarch64: vPMU register test for implemented counters
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
 Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Message-ID: <20230926234008.2348607-11-rananta@google.com>
In-Reply-To: <20230926234008.2348607-1-rananta@google.com>
References: <20230926234008.2348607-1-rananta@google.com>

From: Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 .../kvm/aarch64/vpmu_counter_access.c         | 270 +++++++++++++++++-
 1 file changed, 268 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 58949b17d76e5..e92af3c0db039 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets.
+ * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include
@@ -37,6 +38,259 @@ static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
 	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 }
 
+/* Read PMEVCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/* PMCNTENCLR_EL0 is write-1-to-clear; clear only this counter's bit */
+
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
+ * accessors that test cases will use. Each accessor either accesses
+ * PMEV{CNTR,TYPER}_EL0 directly
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or accesses them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVCNTR_EL0 */
+	unsigned long (*read_cntr)(int idx);
+	/* A function to be used to write PMEVCNTR_EL0 */
+	void (*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER_EL0 */
+	unsigned long (*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER_EL0 */
+	void (*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+/*
+ * Convert a pmc_accessor pointer to an index into pmc_accessors[],
+ * assuming that the pointer is one of the entries in pmc_accessors[].
+ */
+#define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0])
+
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)	\
+{	\
+	uint64_t _tval = read_sysreg(regname);	\
+	\
+	if (set_expected)	\
+		__GUEST_ASSERT((_tval & mask),	\
+				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx",	\
+				_tval, mask, set_expected);	\
+	else	\
+		__GUEST_ASSERT(!(_tval & mask),	\
+				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx",	\
+				_tval, mask, set_expected);	\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmintenset_el1);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n) ? true : false;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmintenclr_el1);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers.
+	 */
+
+	/* Make sure that the bits in those registers are cleared to 0 */
+	test_bitmap_pmu_regs(pmc_idx, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 to 0x003F,
+	 * the value indicated in the PMEVTYPER_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	__GUEST_ASSERT(read_data == write_data,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as it has not been used since the reset */
+	__GUEST_ASSERT(read_data == 0,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	__GUEST_ASSERT(read_data == write_data,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+}
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -49,11 +303,14 @@ static void guest_sync_handler(struct ex_regs *regs)
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
 		       "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
@@ -67,6 +324,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 		       "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
 		       pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
-- 
2.42.0.582.g8ccd20d70d-goog
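
Note: the direct accessors in the patch (read_pmevcntrn(), write_pmevcntrn(),
read_pmevtypern(), write_pmevtypern()) rely on the PMEVN_SWITCH() helper from
the arm_pmuv3.h definitions shared with the kernel, which turns a runtime
counter index into a compile-time sysreg access, since PMEVCNTR<n>_EL0 and
PMEVTYPER<n>_EL0 can only be named with a literal <n>. The sketch below is an
abbreviated approximation of that dispatch pattern, not the header's exact
contents; the case list is shortened and the out-of-range handling is a
placeholder.

/*
 * Abbreviated sketch of the PMEVN_CASE()/PMEVN_SWITCH() dispatch used by
 * the direct accessors (not the exact header contents). Each case expands
 * case_macro(n) with a literal n, so the generated read_sysreg()/
 * write_sysreg() names a concrete pmevcntr<n>_el0 or pmevtyper<n>_el0.
 */
#define PMEVN_CASE(n, case_macro)			\
	case n: case_macro(n); break

#define PMEVN_SWITCH(x, case_macro)			\
	do {						\
		switch (x) {				\
		PMEVN_CASE(0, case_macro);		\
		PMEVN_CASE(1, case_macro);		\
		/* ...one case per counter index... */	\
		PMEVN_CASE(30, case_macro);		\
		default:				\
			/* placeholder for out-of-range n */	\
			GUEST_ASSERT(0);		\
		}					\
	} while (0)

With that in place, write_pmevcntrn(2, val) selects WRITE_PMEVCNTRN(2) and
ends up as write_sysreg(val, pmevcntr2_el0), which is why the test can drive
any implemented counter index through the same pmc_accessors[] table.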