From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis <coltonlewis@google.com>
Subject: [PATCH v3 08/22] perf: arm_pmuv3: Keep out of guest counter partition
Date: Thu, 26 Jun 2025 20:04:44 +0000
Message-ID: <20250626200459.1153955-9-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>

If the PMU is partitioned, keep the driver out of the guest counter
partition and use only the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field, and the maximum value KVM
can use is saved in cpu_pmu->hpmn_max. The range 0..HPMN-1 is
accessible by EL1 and EL0, while HPMN..PMCR.N is reserved for EL2.

Define functions that use the HPMN value to construct mutually
exclusive bitmaps for testing which partition a particular counter is
in. Note that, despite their different positions in the bitmap, the
cycle and instruction counters are always in the guest partition.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
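
As an illustration (not part of the patch), here is a minimal
standalone sketch of the mask arithmetic described above. It assumes a
hypothetical machine with 10 general-purpose counters and HPMN = 6; the
fixed-counter bit positions (31 = cycle, 32 = instruction) follow the
driver's ARMV8_PMU_CYCLE_IDX/ARMV8_PMU_INSTR_IDX convention, and
GENMASK64() is a userspace stand-in for the kernel's GENMASK():

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's GENMASK(); h and l are inclusive. */
#define GENMASK64(h, l) (((~0ULL) >> (63 - (h))) & ~((1ULL << (l)) - 1))

#define CYCLE_IDX 31	/* assumed: ARMV8_PMU_CYCLE_IDX */
#define INSTR_IDX 32	/* assumed: ARMV8_PMU_INSTR_IDX */

int main(void)
{
	unsigned int nr_counters = 10;	/* assumed PMCR_EL0.N */
	unsigned int hpmn = 6;		/* assumed MDCR_EL2.HPMN */

	/* Host partition: general-purpose counters HPMN..N-1. */
	uint64_t host = GENMASK64(nr_counters - 1, hpmn);

	/*
	 * Guest partition: general-purpose counters 0..HPMN-1 plus the
	 * cycle and instruction counters, i.e. everything else.
	 */
	uint64_t all = GENMASK64(nr_counters - 1, 0) |
		       (1ULL << CYCLE_IDX) | (1ULL << INSTR_IDX);
	uint64_t guest = all & ~host;

	printf("host:  0x%016llx\n", (unsigned long long)host);
	printf("guest: 0x%016llx\n", (unsigned long long)guest);
	return 0;
}

With these values the host owns counters 6-9 (mask 0x3c0) and the guest
owns counters 0-5 plus bits 31 and 32 (mask 0x18000003f), matching the
mutual exclusion the commit message describes.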
 arch/arm/include/asm/arm_pmuv3.h | 18 +++++++
 arch/arm64/include/asm/kvm_pmu.h | 24 +++++++++
 arch/arm64/kvm/pmu-part.c        | 84 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 36 ++++++++++++--
 4 files changed, 158 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 49b1f2d7842d..5f6269039f44 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
 
 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 8a2ed02e157d..6328e90952ba 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -88,6 +88,12 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu)				\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -220,6 +226,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index f04c580c19c0..4f06a48175e2 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -5,7 +5,10 @@
  */
 
 #include
+#include
+#include
 
+#include
 #include
 
 /**
@@ -21,3 +24,84 @@ bool kvm_pmu_partition_supported(void)
 	return kvm_supports_guest_pmuv3() && has_vhe();
 }
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if the given PMU is partitioned by looking at the hpmn
+ * field. The PMU is partitioned if this field is less than the number
+ * of counters in the system.
+ *
+ * Return: True if the PMU is partitioned, false otherwise
+ */
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return pmu->hpmn_max >= 0 &&
+	       pmu->hpmn_max <= *host_data_ptr(nr_event_counters);
+}
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in HPMN..N.
+ *
+ * Assumes pmu is partitioned and hpmn_max is a valid value.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	return GENMASK(nr_counters - 1, pmu->hpmn_max);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN and the cycle and instruction counters.
+ *
+ * Assumes pmu is partitioned and hpmn_max is a valid value.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned, the enable bit for the host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr |= MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned, the disable bit for the host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr &= ~MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 3bc016afea34..5836672a3dd9 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -834,12 +834,18 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 	kvm_vcpu_pmu_resync_el0();
 
 	/* Enable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_enable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	/* Disable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_disable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
@@ -949,6 +955,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
 		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
 			return ARMV8_PMU_CYCLE_IDX;
@@ -964,6 +971,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * may not know how to handle it.
 	 */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -975,7 +983,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return  armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1067,6 +1075,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1074,6 +1090,9 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &= kvm_pmu_host_counter_mask(cpu_pmu);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1081,11 +1100,20 @@ static void armv8pmu_reset(void *info)
 	/* Clear the counters we flip at guest entry/exit */
 	kvm_clr_pmu_events(mask);
 
+
+	pmcr = ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
 	 */
-	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
-- 
2.50.0.727.gbf7dc18ff4-goog
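
As a final illustration (again not part of the patch), the enable-bit
split that kvm_pmu_host_counters_enable()/_disable() rely on can be
modeled in a few lines: once partitioned, general-purpose counters at
index HPMN and above are gated by MDCR_EL2.HPME, while the guest
partition remains gated by PMCR_EL0.E. The struct and helper names
below are hypothetical, chosen only for this sketch:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of the two global enable bits after partitioning. */
struct pmu_model {
	unsigned int hpmn;	/* MDCR_EL2.HPMN: first host-owned counter */
	bool pmcr_e;		/* PMCR_EL0.E: gates guest-owned counters */
	bool hpme;		/* MDCR_EL2.HPME: gates host-owned counters */
};

/* A general-purpose counter counts only if its partition's bit is set. */
static bool counter_enabled(const struct pmu_model *m, unsigned int idx)
{
	return (idx >= m->hpmn) ? m->hpme : m->pmcr_e;
}

int main(void)
{
	/* Example: host partition enabled while the guest's PMCR.E is clear. */
	struct pmu_model m = { .hpmn = 6, .pmcr_e = false, .hpme = true };

	for (unsigned int i = 0; i < 10; i++)
		printf("counter %u: %s\n", i,
		       counter_enabled(&m, i) ? "counting" : "stopped");
	return 0;
}

This mirrors why armv8pmu_start() and armv8pmu_stop() must now toggle
HPME in addition to PMCR_EL0.E when the PMU is partitioned.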