From nobody Tue Feb 10 22:00:08 2026
Date: Mon, 9 Feb 2026 22:14:01 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-7-coltonlewis@google.com>
Subject: [PATCH v6 06/19] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis <coltonlewis@google.com>

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Define some
functions that determine whether the PMU is partitioned and construct
mutually exclusive bitmaps for testing which partition a particular
counter is in.

Note that despite their separate positions in the bitmap, the cycle
and instruction counters are always in the guest partition.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
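Note below the scissors: a quick way to sanity-check the partition
scheme described above, without building the kernel, is the
standalone C sketch here. It is illustrative only: GENMASK64() is a
local stand-in for the kernel's GENMASK(), host_counter_mask() and
guest_counter_mask() are hypothetical userspace mirrors of the
kvm_pmu_*_counter_mask() helpers added by this patch, and the
cycle/instruction counter bit positions (31 and 32) are assumed from
the PMUv3 enable-register layout rather than taken from this diff.

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the kernel's GENMASK(); not the kernel macro. */
#define GENMASK64(h, l)	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

#define CNT_MASK_CYCLE	(1ULL << 31)	/* assumed cycle counter bit */
#define CNT_MASK_INSTR	(1ULL << 32)	/* assumed instruction counter bit */

/* Host partition: general-purpose counters HPMN..N-1. */
static uint64_t host_counter_mask(unsigned int nr_counters, unsigned int hpmn)
{
	return GENMASK64(nr_counters - 1, hpmn);
}

/* Guest partition: counters 0..HPMN-1 plus cycle and instruction. */
static uint64_t guest_counter_mask(unsigned int hpmn)
{
	return CNT_MASK_CYCLE | CNT_MASK_INSTR | GENMASK64(hpmn - 1, 0);
}

int main(void)
{
	/* Example: 10 general-purpose counters, HPMN = 6. */
	uint64_t host = host_counter_mask(10, 6);
	uint64_t guest = guest_counter_mask(6);

	printf("host  %#llx\n", (unsigned long long)host);
	printf("guest %#llx\n", (unsigned long long)guest);
	printf("disjoint: %s\n", (host & guest) ? "no" : "yes");
	return 0;
}

Compiled with any C99 compiler, this prints host 0x3c0 and guest
0x18000003f and confirms the two masks are disjoint, with the cycle
(bit 31) and instruction (bit 32) counters landing on the guest side.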
 arch/arm/include/asm/arm_pmuv3.h | 18 +++++++
 arch/arm64/kvm/pmu-direct.c      | 86 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 40 +++++++++++++--
 include/kvm/arm_pmu.h            | 24 +++++++++
 4 files changed, 164 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 154503f054886..bed4dfa755681 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI 0
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 74e40e4915416..05ac38ec3ea20 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -5,6 +5,8 @@
  */
 
 #include
+#include
+#include
 
 #include
 
@@ -20,3 +22,87 @@ bool has_host_pmu_partition_support(void)
 {
	return has_vhe() && system_supports_pmuv3();
 }
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if given PMU is partitioned by looking at hpmn field. The
+ * PMU is partitioned if this field is less than the number of
+ * counters in the system.
+ *
+ * Return: True if the PMU is partitioned, false otherwise
+ */
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	if (!pmu)
+		return false;
+
+	return pmu->max_guest_counters >= 0 &&
+	       pmu->max_guest_counters <= *host_data_ptr(nr_event_counters);
+}
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in HPMN..N.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	if (!kvm_pmu_is_partitioned(pmu))
+		return ARMV8_PMU_CNT_MASK_ALL;
+
+	return GENMASK(nr_counters - 1, pmu->max_guest_counters);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN and the cycle and instruction counters.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ARMV8_PMU_CNT_MASK_C | ARMV8_PMU_CNT_MASK_F |
+	       GENMASK(pmu->max_guest_counters - 1, 0);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned the enable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr |= MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned the disable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr &= ~MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index b37908fad3249..6395b6deb78c2 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -871,6 +871,9 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
		brbe_enable(cpu_pmu);
 
	/* Enable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_enable();
+
	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
 
@@ -882,6 +885,9 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
		brbe_disable();
 
	/* Disable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_disable();
+
	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
@@ -1028,6 +1034,12 @@ static bool armv8pmu_can_use_pmccntr(struct pmu_hw_events *cpuc,
	if (cpu_pmu->has_smt)
		return false;
 
+	/*
+	 * If partitioned at all, pmccntr belongs to the guest.
+	 */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		return false;
+
	return true;
 }
 
@@ -1054,6 +1066,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
	 * may not know how to handle it.
	 */
	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
	    !armv8pmu_event_get_threshold(&event->attr) &&
	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
	    !armv8pmu_event_want_user_access(event)) {
@@ -1065,7 +1078,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
	 * Otherwise use events counters
	 */
	if (armv8pmu_event_is_chained(event))
-		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
	else
		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1177,6 +1190,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1184,6 +1205,9 @@ static void armv8pmu_reset(void *info)
 
	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &= kvm_pmu_host_counter_mask(cpu_pmu);
+
	/* The counter and interrupt enable registers are unknown at reset. */
	armv8pmu_disable_counter(mask);
	armv8pmu_disable_intens(mask);
@@ -1196,11 +1220,19 @@ static void armv8pmu_reset(void *info)
		brbe_invalidate();
	}
 
+	pmcr = ARMV8_PMU_PMCR_LC;
+
	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
	 */
-	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr |= ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
	/* Enable long event counter support where available */
	if (armv8pmu_has_long_event(cpu_pmu))
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e7172db1e897d..accfcb79723c8 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,6 +92,12 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu) \
	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -228,6 +234,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_pmu_is_partitioned(void *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(void *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(void *pmu)
+{
+	return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
-- 
2.53.0.rc2.204.g2597b5adb4-goog
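A closing note on the MDCR_EL2.HPME vs. PMCR_EL0.E split that the
kerneldoc above describes: once the PMU is partitioned, the global
enable for a counter depends on which partition it sits in, while the
per-counter PMCNTENSET bit applies either way. The sketch below is a
simplified userspace model of that gating, not kernel code;
counter_counts() and its parameters are hypothetical names chosen for
the illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Model: a counter's global enable comes from PMCR_EL0.E if it sits
 * in the guest partition and from MDCR_EL2.HPME if it sits in the
 * host partition; its PMCNTENSET bit must be set in either case.
 */
static bool counter_counts(uint64_t guest_mask, unsigned int idx,
			   bool pmcr_e, bool mdcr_hpme, uint64_t pmcntenset)
{
	bool global = (guest_mask & (1ULL << idx)) ? pmcr_e : mdcr_hpme;

	return global && (pmcntenset & (1ULL << idx));
}

int main(void)
{
	/* Guest owns counters 0-5 plus cycle/instr; host owns 6-9. */
	uint64_t guest_mask = (1ULL << 31) | (1ULL << 32) | 0x3f;

	/* Counter 7 is host-side: PMCR_EL0.E alone does not enable it. */
	printf("%d\n", counter_counts(guest_mask, 7, true, false, 1ULL << 7));
	printf("%d\n", counter_counts(guest_mask, 7, false, true, 1ULL << 7));
	return 0;
}

This prints 0 then 1: with only PMCR_EL0.E set a host-partition
counter stays idle, which is why armv8pmu_start()/armv8pmu_stop()
above must also toggle MDCR_EL2.HPME when the PMU is partitioned.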