From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:13:56 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-2-coltonlewis@google.com>
Subject: [PATCH v6 01/19] arm64: cpufeature: Add cpucap for HPMN0
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
is allowed to specify that 0 counters are reserved for the guest.

This requires changing HPMN0 to an UnsignedEnum in tools/sysreg,
because otherwise not all of the macros needed to add the feature to
the arm64_features[] table of arm64_cpu_capabilities are generated.

Acked-by: Mark Rutland
Reviewed-by: Suzuki K Poulose
Signed-off-by: Colton Lewis
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/kvm/sys_regs.c      | 3 ++-
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef95..e6a8373d8625b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -555,6 +555,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2950,6 +2951,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "HPMN0",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 88a57ca36d96c..a460e93b1ad0a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3229,7 +3229,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 		   ID_AA64DFR0_EL1_DoubleLock_MASK |
 		   ID_AA64DFR0_EL1_WRPs_MASK |
 		   ID_AA64DFR0_EL1_PMUVer_MASK |
-		   ID_AA64DFR0_EL1_DebugVer_MASK),
+		   ID_AA64DFR0_EL1_DebugVer_MASK |
+		   ID_AA64DFR0_EL1_HPMN0_MASK),
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 0fac75f015343..1e3f6e9cc2c86 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -42,6 +42,7 @@ HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_ICH_HCR_EL2_TDIR
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8921b51866d64..c9cf3d139c2da 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1666,9 +1666,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.53.0.rc2.204.g2597b5adb4-goog
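
As context for reviewers: once the cpucap above is established, kernel
code can gate HPMN=0 handling on it with the existing arm64 cpucap
API. A minimal sketch (the helper below is hypothetical and not part
of this patch; cpus_have_final_cap() is the existing predicate):

	/* Hypothetical caller, for illustration only. */
	static bool hpmn_can_be_zero(void)
	{
		/* ARM64_HAS_HPMN0 is the cpucap added by this patch. */
		return cpus_have_final_cap(ARM64_HAS_HPMN0);
	}
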
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:13:57 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-3-coltonlewis@google.com>
Subject: [PATCH v6 02/19] KVM: arm64: Reorganize PMU includes
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

From: Marc Zyngier

Including *all* of asm/kvm_host.h in asm/arm_pmuv3.h is a bad idea,
because it pulls in far more than arm_pmuv3.h logically needs and
creates a circular dependency that makes it easy to introduce compiler
errors when editing this code:

  asm/kvm_host.h
    includes kvm/arm_pmu.h
      includes perf/arm_pmuv3.h
        includes asm/arm_pmuv3.h
          includes asm/kvm_host.h

Reorganize the PMU includes to be more sane. In particular:

* Remove the circular dependency by dropping the kvm_host.h include
  from asm/arm_pmuv3.h, since 99% of it isn't needed there.

* Move the remaining tiny bit of the KVM/PMU interface from kvm_host.h
  into arm_pmu.h.

* On ARM64, conditionally include the more targeted arm_pmu.h directly
  in the arm_pmuv3.c driver.
Signed-off-by: Marc Zyngier
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h |  2 --
 arch/arm64/include/asm/kvm_host.h  | 14 --------------
 drivers/perf/arm_pmuv3.c           |  5 +++++
 include/kvm/arm_pmu.h              | 19 +++++++++++++++++++
 4 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88a..cf2b2212e00a2 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -6,8 +6,6 @@
 #ifndef __ASM_PMUV3_H
 #define __ASM_PMUV3_H
 
-#include <asm/kvm_host.h>
-
 #include 
 #include 
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac7f970c78830..8e09865490a9f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1414,25 +1414,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
-static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
-{
-	return (!has_vhe() && attr->exclude_host);
-}
-
 #ifdef CONFIG_KVM
-void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
-void kvm_clr_pmu_events(u64 clr);
-bool kvm_set_pmuserenr(u64 val);
 void kvm_enable_trbe(void);
 void kvm_disable_trbe(void);
 void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
 #else
-static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u64 clr) {}
-static inline bool kvm_set_pmuserenr(u64 val)
-{
-	return false;
-}
 static inline void kvm_enable_trbe(void) {}
 static inline void kvm_disable_trbe(void) {}
 static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 8014ff766cff5..8d3b832cd633a 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -9,6 +9,11 @@
  */
 
 #include 
+
+#if defined(CONFIG_ARM64)
+#include 
+#endif
+
 #include 
 #include 
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 96754b51b4116..e91d15a7a564b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -9,9 +9,19 @@
 
 #include 
 #include 
+#include 
 
 #define KVM_ARMV8_PMU_MAX_COUNTERS 32
 
+#define kvm_pmu_counter_deferred(attr)			\
+	({						\
+		!has_vhe() && (attr)->exclude_host;	\
+	})
+
+struct kvm;
+struct kvm_device_attr;
+struct kvm_vcpu;
+
 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
 struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
@@ -66,6 +76,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 
 struct kvm_pmu_events *kvm_get_pmu_events(void);
+void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
+void kvm_clr_pmu_events(u64 clr);
+bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -159,6 +172,12 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 
 #define kvm_vcpu_has_pmu(vcpu)	({ false; })
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
+static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
+static inline void kvm_clr_pmu_events(u64 clr) {}
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
-- 
2.53.0.rc2.204.g2597b5adb4-goog
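
As context for reviewers: the moves above rely on the usual
declaration/stub split, so callers compile whether or not the real
implementation is built in. A generic sketch of the idiom as it
appears after this patch (illustrative only, not the exact arm_pmu.h
contents):

	#if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
	/* Real implementation lives in the KVM PMU code. */
	void kvm_clr_pmu_events(u64 clr);
	#else
	/* No PMU support: calls to this compile away entirely. */
	static inline void kvm_clr_pmu_events(u64 clr) {}
	#endif
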
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:13:58 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-4-coltonlewis@google.com>
Subject: [PATCH v6 03/19] KVM: arm64: Reorganize PMU functions
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

A lot of the functions in pmu-emul.c aren't specific to the emulated
PMU implementation. Move them to the more appropriate pmu.c file,
where shared PMU functions should live.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-emul.c | 672 +-------------------------------------
 arch/arm64/kvm/pmu.c      | 676 ++++++++++++++++++++++++++++++++++++++
 include/kvm/arm_pmu.h     |   7 +
 3 files changed, 684 insertions(+), 671 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index b03dbda7f1ab9..a40db0d5120ff 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -17,19 +17,10 @@
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
-static LIST_HEAD(arm_pmus);
-static DEFINE_MUTEX(arm_pmus_lock);
-
 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
-bool kvm_supports_guest_pmuv3(void)
-{
-	guard(mutex)(&arm_pmus_lock);
-	return !list_empty(&arm_pmus);
-}
-
 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
 {
 	return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@@ -40,46 +31,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 __kvm_pmu_event_mask(unsigned int pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return GENMASK(9, 0);
-	case ID_AA64DFR0_EL1_PMUVer_V3P1:
-	case ID_AA64DFR0_EL1_PMUVer_V3P4:
-	case ID_AA64DFR0_EL1_PMUVer_V3P5:
-	case ID_AA64DFR0_EL1_PMUVer_V3P7:
-		return GENMASK(15, 0);
-	default:	/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
-		return 0;
-	}
-}
-
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
-{
-	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
-
-	return __kvm_pmu_event_mask(pmuver);
-}
-
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
-		   kvm_pmu_event_mask(kvm);
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
-		mask |= ARMV8_PMU_INCLUDE_EL2;
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
-		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
-			ARMV8_PMU_EXCLUDE_NS_EL1 |
-			ARMV8_PMU_EXCLUDE_EL3;
-
-	return mask;
-}
-
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -272,59 +223,6 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
-static u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu)
-{
-	unsigned int hpmn, n;
-
-	if (!vcpu_has_nv(vcpu))
-		return 0;
-
-	hpmn = SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-	n = vcpu->kvm->arch.nr_pmu_counters;
-
-	/*
-	 * Programming HPMN to a value greater than PMCR_EL0.N is
-	 * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an
-	 * UNKNOWN number of counters (in our case, zero) are reserved for EL2.
-	 */
-	if (hpmn >= n)
-		return 0;
-
-	/*
-	 * Programming HPMN=0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't
-	 * implemented. Since KVM's ability to emulate HPMN=0 does not directly
-	 * depend on hardware (all PMU registers are trapped), make the
-	 * implementation choice that all counters are included in the second
-	 * range reserved for EL2/EL3.
-	 */
-	return GENMASK(n - 1, hpmn);
-}
-
-bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
-}
-
-u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
-
-	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
-		return mask;
-
-	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
-}
-
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
-
-	if (val == 0)
-		return BIT(ARMV8_PMU_CYCLE_IDX);
-	else
-		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
-}
-
 static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
 {
 	if (!pmc->perf_event) {
@@ -370,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -393,24 +291,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 	return reg;
 }
 
-static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	bool overflow;
-
-	overflow = kvm_pmu_overflow_status(vcpu);
-	if (pmu->irq_level == overflow)
-		return;
-
-	pmu->irq_level = overflow;
-
-	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
-					      pmu->irq_num, overflow, pmu);
-		WARN_ON(ret);
-	}
-}
-
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -436,43 +316,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
 	regs->device_irq_level |= KVM_ARM_DEV_PMU;
 }
 
-/**
- * kvm_pmu_flush_hwstate - flush pmu state to cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the host, and inject
- * an interrupt if that was the case.
- */
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/**
- * kvm_pmu_sync_hwstate - sync pmu state from cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the guest, and
- * inject an interrupt if that was the case.
- */
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/*
- * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
- * to the event.
- * This is why we need a callback to do it once outside of the NMI context.
- */
-static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
-	kvm_vcpu_kick(vcpu);
-}
-
 /*
  * Perform an increment on any of the counters described in @mask,
  * generating the overflow if required, and propagate it as a chained
@@ -784,132 +627,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
-void kvm_host_pmu_init(struct arm_pmu *pmu)
-{
-	struct arm_pmu_entry *entry;
-
-	/*
-	 * Check the sanitised PMU version for the system, as KVM does not
-	 * support implementations where PMUv3 exists on a subset of CPUs.
-	 */
-	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
-		return;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
-	if (!entry)
-		return;
-
-	entry->arm_pmu = pmu;
-	list_add_tail(&entry->entry, &arm_pmus);
-}
-
-static struct arm_pmu *kvm_pmu_probe_armpmu(void)
-{
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *pmu;
-	int cpu;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	/*
-	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
-	 * the same value is used for the entirety of the loop. Given this, and
-	 * the fact that no percpu data is used for the lookup there is no need
-	 * to disable preemption.
-	 *
-	 * It is still necessary to get a valid cpu, though, to probe for the
-	 * default PMU instance as userspace is not required to specify a PMU
-	 * type. In order to uphold the preexisting behavior KVM selects the
-	 * PMU instance for the core during vcpu init. A dependent use
-	 * case would be a user with disdain of all things big.LITTLE that
-	 * affines the VMM to a particular cluster of cores.
-	 *
-	 * In any case, userspace should just do the sane thing and use the UAPI
-	 * to select a PMU type directly. But, be wary of the baggage being
-	 * carried here.
-	 */
-	cpu = raw_smp_processor_id();
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		pmu = entry->arm_pmu;
-
-		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
-			return pmu;
-	}
-
-	return NULL;
-}
-
-static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
-{
-	u32 hi[2], lo[2];
-
-	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-
-	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
-}
-
-static u64 compute_pmceid0(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 0);
-
-	/* always support SW_INCR */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
-	/* always support CHAIN */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
-	return val;
-}
-
-static u64 compute_pmceid1(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 1);
-
-	/*
-	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
-	 * as RAZ
-	 */
-	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
-	return val;
-}
-
-u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
-{
-	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
-	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
-	u64 val, mask = 0;
-	int base, i, nr_events;
-
-	if (!pmceid1) {
-		val = compute_pmceid0(cpu_pmu);
-		base = 0;
-	} else {
-		val = compute_pmceid1(cpu_pmu);
-		base = 32;
-	}
-
-	if (!bmap)
-		return val;
-
-	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
-
-	for (i = 0; i < 32; i += 8) {
-		u64 byte;
-
-		byte = bitmap_get_value8(bmap, base + i);
-		mask |= byte << i;
-		if (nr_events >= (0x4000 + base + 32)) {
-			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
-			mask |= byte << (32 + i);
-		}
-	}
-
-	return val & mask;
-}
-
 void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 {
 	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
@@ -921,393 +638,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 	kvm_pmu_reprogram_counter_mask(vcpu, mask);
 }
 
-int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
-{
-	if (!vcpu->arch.pmu.created)
-		return -EINVAL;
-
-	/*
-	 * A valid interrupt configuration for the PMU is either to have a
-	 * properly configured interrupt number and using an in-kernel
-	 * irqchip, or to not have an in-kernel GIC and not set an IRQ.
-	 */
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int irq = vcpu->arch.pmu.irq_num;
-		/*
-		 * If we are using an in-kernel vgic, at this point we know
-		 * the vgic will be initialized, so we can check the PMU irq
-		 * number against the dimensions of the vgic and make sure
-		 * it's valid.
-		 */
-		if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq))
-			return -EINVAL;
-	} else if (kvm_arm_pmu_irq_initialized(vcpu)) {
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
-{
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int ret;
-
-		/*
-		 * If using the PMU with an in-kernel virtual GIC
-		 * implementation, we require the GIC to be already
-		 * initialized when initializing the PMU.
-		 */
-		if (!vgic_initialized(vcpu->kvm))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		ret = kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num,
-					 &vcpu->arch.pmu);
-		if (ret)
-			return ret;
-	}
-
-	init_irq_work(&vcpu->arch.pmu.overflow_work,
-		      kvm_pmu_perf_overflow_notify_vcpu);
-
-	vcpu->arch.pmu.created = true;
-	return 0;
-}
-
-/*
- * For one VM the interrupt type must be same for each vcpu.
- * As a PPI, the interrupt number is the same for all vcpus,
- * while as an SPI it must be a separate number per vcpu.
- */
-static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
-{
-	unsigned long i;
-	struct kvm_vcpu *vcpu;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			continue;
-
-		if (irq_is_ppi(irq)) {
-			if (vcpu->arch.pmu.irq_num != irq)
-				return false;
-		} else {
-			if (vcpu->arch.pmu.irq_num == irq)
-				return false;
-		}
-	}
-
-	return true;
-}
-
-/**
- * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
- * @kvm: The kvm pointer
- */
-u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
-
-	/*
-	 * PMUv3 requires that all event counters are capable of counting any
-	 * event, though the same may not be true of non-PMUv3 hardware.
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return 1;
-
-	/*
-	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
-	 * Ignore those and return only the general-purpose counters.
-	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
-}
-
-static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
-{
-	kvm->arch.nr_pmu_counters = nr;
-
-	/* Reset MDCR_EL2.HPMN behind the vcpus' back... */
-	if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) {
-		struct kvm_vcpu *vcpu;
-		unsigned long i;
-
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
-			val &= ~MDCR_EL2_HPMN;
-			val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
-			__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
-		}
-	}
-}
-
-static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
-{
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	kvm->arch.arm_pmu = arm_pmu;
-	kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm));
-}
-
-/**
- * kvm_arm_set_default_pmu - No PMU set, get the default one.
- * @kvm: The kvm pointer
- *
- * The observant among you will notice that the supported_cpus
- * mask does not get updated for the default PMU even though it
- * is quite possible the selected instance supports only a
- * subset of cores in the system. This is intentional, and
- * upholds the preexisting behavior on heterogeneous systems
- * where vCPUs can be scheduled on any core but the guest
- * counters could stop working.
- */
-int kvm_arm_set_default_pmu(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
-
-	if (!arm_pmu)
-		return -ENODEV;
-
-	kvm_arm_set_pmu(kvm, arm_pmu);
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
-{
-	struct kvm *kvm = vcpu->kvm;
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *arm_pmu;
-	int ret = -ENXIO;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-	mutex_lock(&arm_pmus_lock);
-
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		arm_pmu = entry->arm_pmu;
-		if (arm_pmu->pmu.type == pmu_id) {
-			if (kvm_vm_has_ran_once(kvm) ||
-			    (kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
-				ret = -EBUSY;
-				break;
-			}
-
-			kvm_arm_set_pmu(kvm, arm_pmu);
-			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-			ret = 0;
-			break;
-		}
-	}
-
-	mutex_unlock(&arm_pmus_lock);
-	return ret;
-}
-
-static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned int n)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	if (!kvm->arch.arm_pmu)
-		return -EINVAL;
-
-	if (n > kvm_arm_pmu_get_max_counters(kvm))
-		return -EINVAL;
-
-	kvm_arm_set_nr_counters(kvm, n);
-	return 0;
-}
-
-int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return -ENODEV;
-
-	if (vcpu->arch.pmu.created)
-		return -EBUSY;
-
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(kvm))
-			return -EINVAL;
-
-		if (get_user(irq, uaddr))
-			return -EFAULT;
-
-		/* The PMU overflow interrupt can be a PPI or a valid SPI. */
-		if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
-			return -EINVAL;
-
-		if (!pmu_irq_is_valid(kvm, irq))
-			return -EINVAL;
-
-		if (kvm_arm_pmu_irq_initialized(vcpu))
-			return -EBUSY;
-
-		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
-		vcpu->arch.pmu.irq_num = irq;
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_FILTER: {
-		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
-		struct kvm_pmu_event_filter __user *uaddr;
-		struct kvm_pmu_event_filter filter;
-		int nr_events;
-
-		/*
-		 * Allow userspace to specify an event filter for the entire
-		 * event range supported by PMUVer of the hardware, rather
-		 * than the guest's PMUVer for KVM backward compatibility.
-		 */
-		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
-
-		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
-
-		if (copy_from_user(&filter, uaddr, sizeof(filter)))
-			return -EFAULT;
-
-		if (((u32)filter.base_event + filter.nevents) > nr_events ||
-		    (filter.action != KVM_PMU_EVENT_ALLOW &&
-		     filter.action != KVM_PMU_EVENT_DENY))
-			return -EINVAL;
-
-		if (kvm_vm_has_ran_once(kvm))
-			return -EBUSY;
-
-		if (!kvm->arch.pmu_filter) {
-			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
-			if (!kvm->arch.pmu_filter)
-				return -ENOMEM;
-
-			/*
-			 * The default depends on the first applied filter.
-			 * If it allows events, the default is to deny.
-			 * Conversely, if the first filter denies a set of
-			 * events, the default is to allow.
-			 */
-			if (filter.action == KVM_PMU_EVENT_ALLOW)
-				bitmap_zero(kvm->arch.pmu_filter, nr_events);
-			else
-				bitmap_fill(kvm->arch.pmu_filter, nr_events);
-		}
-
-		if (filter.action == KVM_PMU_EVENT_ALLOW)
-			bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-		else
-			bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int pmu_id;
-
-		if (get_user(pmu_id, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id);
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: {
-		unsigned int __user *uaddr = (unsigned int __user *)(long)attr->addr;
-		unsigned int n;
-
-		if (get_user(n, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_nr_counters(vcpu, n);
-	}
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-		return kvm_arm_pmu_v3_init(vcpu);
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(vcpu->kvm))
-			return -EINVAL;
-
-		if (!kvm_vcpu_has_pmu(vcpu))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		irq = vcpu->arch.pmu.irq_num;
-		return put_user(irq, uaddr);
-	}
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ:
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-	case KVM_ARM_VCPU_PMU_V3_FILTER:
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU:
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS:
-		if (kvm_vcpu_has_pmu(vcpu))
-			return 0;
-	}
-
-	return -ENXIO;
-}
-
-u8 kvm_arm_pmu_get_pmuver_limit(void)
-{
-	unsigned int pmuver;
-
-	pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer,
-			       read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1));
-
-	/*
-	 * Spoof a barebones PMUv3 implementation if the system supports IMPDEF
-	 * traps of the PMUv3 sysregs
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return ID_AA64DFR0_EL1_PMUVer_IMP;
-
-	/*
-	 * Otherwise, treat IMPLEMENTATION DEFINED functionality as
-	 * unimplemented
-	 */
-	if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-		return 0;
-
-	return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
-}
-
-/**
- * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
- * @vcpu: The vcpu pointer
- */
-u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
-{
-	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
-
-	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
-		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-
-	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
-}
-
 void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu)
 {
 	bool reprogrammed = false;
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 6b48a3d16d0d5..74a5d35edb244 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,8 +8,22 @@
 #include 
 #include 
 
+#include 
+
+#include 
+
+static LIST_HEAD(arm_pmus);
+static DEFINE_MUTEX(arm_pmus_lock);
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
+
+bool kvm_supports_guest_pmuv3(void)
+{
+	guard(mutex)(&arm_pmus_lock);
+	return !list_empty(&arm_pmus);
+}
+
 /*
  * Given the perf event attributes and system type, determine
  * if we are going to need to switch counters at guest entry/exit.
@@ -209,3 +223,665 @@ void kvm_vcpu_pmu_resync_el0(void) =20 kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu); } + +void kvm_host_pmu_init(struct arm_pmu *pmu) +{ + struct arm_pmu_entry *entry; + + /* + * Check the sanitised PMU version for the system, as KVM does not + * support implementations where PMUv3 exists on a subset of CPUs. + */ + if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) + return; + + guard(mutex)(&arm_pmus_lock); + + entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return; + + entry->arm_pmu =3D pmu; + list_add_tail(&entry->entry, &arm_pmus); +} + +static struct arm_pmu *kvm_pmu_probe_armpmu(void) +{ + struct arm_pmu_entry *entry; + struct arm_pmu *pmu; + int cpu; + + guard(mutex)(&arm_pmus_lock); + + /* + * It is safe to use a stale cpu to iterate the list of PMUs so long as + * the same value is used for the entirety of the loop. Given this, and + * the fact that no percpu data is used for the lookup there is no need + * to disable preemption. + * + * It is still necessary to get a valid cpu, though, to probe for the + * default PMU instance as userspace is not required to specify a PMU + * type. In order to uphold the preexisting behavior KVM selects the + * PMU instance for the core during vcpu init. A dependent use + * case would be a user with disdain of all things big.LITTLE that + * affines the VMM to a particular cluster of cores. + * + * In any case, userspace should just do the sane thing and use the UAPI + * to select a PMU type directly. But, be wary of the baggage being + * carried here. + */ + cpu =3D raw_smp_processor_id(); + list_for_each_entry(entry, &arm_pmus, entry) { + pmu =3D entry->arm_pmu; + + if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) + return pmu; + } + + return NULL; +} + +static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) +{ + u32 hi[2], lo[2]; + + bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); + bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); + + return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; +} + +static u64 compute_pmceid0(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 0); + + /* always support SW_INCR */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); + /* always support CHAIN */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); + return val; +} + +static u64 compute_pmceid1(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 1); + + /* + * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled + * as RAZ + */ + val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); + return val; +} + +u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) +{ + struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; + unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; + u64 val, mask =3D 0; + int base, i, nr_events; + + if (!pmceid1) { + val =3D compute_pmceid0(cpu_pmu); + base =3D 0; + } else { + val =3D compute_pmceid1(cpu_pmu); + base =3D 32; + } + + if (!bmap) + return val; + + nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; + + for (i =3D 0; i < 32; i +=3D 8) { + u64 byte; + + byte =3D bitmap_get_value8(bmap, base + i); + mask |=3D byte << i; + if (nr_events >=3D (0x4000 + base + 32)) { + byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); + mask |=3D byte << (32 + i); + } + } + + return val & mask; +} + +/* + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresp= onding + * to the event. 
+ * This is why we need a callback to do it once outside of the NMI context. + */ +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) +{ + struct kvm_vcpu *vcpu; + + vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); + kvm_vcpu_kick(vcpu); +} + +static u32 __kvm_pmu_event_mask(unsigned int pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return GENMASK(9, 0); + case ID_AA64DFR0_EL1_PMUVer_V3P1: + case ID_AA64DFR0_EL1_PMUVer_V3P4: + case ID_AA64DFR0_EL1_PMUVer_V3P5: + case ID_AA64DFR0_EL1_PMUVer_V3P7: + return GENMASK(15, 0); + default: /* Shouldn't be here, just for sanity */ + WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); + return 0; + } +} + +u32 kvm_pmu_event_mask(struct kvm *kvm) +{ + u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); + u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); + + return __kvm_pmu_event_mask(pmuver); +} + +u64 kvm_pmu_evtyper_mask(struct kvm *kvm) +{ + u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | + kvm_pmu_event_mask(kvm); + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) + mask |=3D ARMV8_PMU_INCLUDE_EL2; + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) + mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | + ARMV8_PMU_EXCLUDE_NS_EL1 | + ARMV8_PMU_EXCLUDE_EL3; + + return mask; +} + +static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D &vcpu->arch.pmu; + bool overflow; + + overflow =3D kvm_pmu_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) + return; + + pmu->irq_level =3D overflow; + + if (likely(irqchip_in_kernel(vcpu->kvm))) { + int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, + pmu->irq_num, overflow, pmu); + WARN_ON(ret); + } +} + +/** + * kvm_pmu_flush_hwstate - flush pmu state to cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the host, and = inject + * an interrupt if that was the case. + */ +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +/** + * kvm_pmu_sync_hwstate - sync pmu state from cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the guest, and + * inject an interrupt if that was the case. + */ +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.pmu.created) + return -EINVAL; + + /* + * A valid interrupt configuration for the PMU is either to have a + * properly configured interrupt number and using an in-kernel + * irqchip, or to not have an in-kernel GIC and not set an IRQ. + */ + if (irqchip_in_kernel(vcpu->kvm)) { + int irq =3D vcpu->arch.pmu.irq_num; + /* + * If we are using an in-kernel vgic, at this point we know + * the vgic will be initialized, so we can check the PMU irq + * number against the dimensions of the vgic and make sure + * it's valid. + */ + if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) + return -EINVAL; + } else if (kvm_arm_pmu_irq_initialized(vcpu)) { + return -EINVAL; + } + + return 0; +} + +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) +{ + if (irqchip_in_kernel(vcpu->kvm)) { + int ret; + + /* + * If using the PMU with an in-kernel virtual GIC + * implementation, we require the GIC to be already + * initialized when initializing the PMU. 
+ */ + if (!vgic_initialized(vcpu->kvm)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, + &vcpu->arch.pmu); + if (ret) + return ret; + } + + init_irq_work(&vcpu->arch.pmu.overflow_work, + kvm_pmu_perf_overflow_notify_vcpu); + + vcpu->arch.pmu.created =3D true; + return 0; +} + +/* + * For one VM the interrupt type must be same for each vcpu. + * As a PPI, the interrupt number is the same for all vcpus, + * while as an SPI it must be a separate number per vcpu. + */ +static bool pmu_irq_is_valid(struct kvm *kvm, int irq) +{ + unsigned long i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (!kvm_arm_pmu_irq_initialized(vcpu)) + continue; + + if (irq_is_ppi(irq)) { + if (vcpu->arch.pmu.irq_num !=3D irq) + return false; + } else { + if (vcpu->arch.pmu.irq_num =3D=3D irq) + return false; + } + } + + return true; +} + +/** + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. + * @kvm: The kvm pointer + */ +u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; + + /* + * PMUv3 requires that all event counters are capable of counting any + * event, though the same may not be true of non-PMUv3 hardware. + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return 1; + + /* + * The arm_pmu->cntr_mask considers the fixed counter(s) as well. + * Ignore those and return only the general-purpose counters. + */ + return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); +} + +static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) +{ + kvm->arch.nr_pmu_counters =3D nr; + + /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ + if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { + struct kvm_vcpu *vcpu; + unsigned long i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); + + val &=3D ~MDCR_EL2_HPMN; + val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); + } + } +} + +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) +{ + lockdep_assert_held(&kvm->arch.config_lock); + + kvm->arch.arm_pmu =3D arm_pmu; + kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); +} + +/** + * kvm_arm_set_default_pmu - No PMU set, get the default one. + * @kvm: The kvm pointer + * + * The observant among you will notice that the supported_cpus + * mask does not get updated for the default PMU even though it + * is quite possible the selected instance supports only a + * subset of cores in the system. This is intentional, and + * upholds the preexisting behavior on heterogeneous systems + * where vCPUs can be scheduled on any core but the guest + * counters could stop working. 
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} + +u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) +{ + u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); + + if (val =3D=3D 0) + return BIT(ARMV8_PMU_CYCLE_IDX); + else + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); +} + +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) +{ + unsigned int hpmn, n; + + if (!vcpu_has_nv(vcpu)) + return 0; + + hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); + n =3D vcpu->kvm->arch.nr_pmu_counters; + + /* + * Programming HPMN to a value greater than PMCR_EL0.N is + * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an + * UNKNOWN number of counters (in our case, zero) are reserved for EL2. + */ + if (hpmn >=3D n) + return 0; + + /* + * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't + * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly + * depend on hardware (all PMU registers are trapped), make the + * implementation choice that all counters are included in the second + * range reserved for EL2/EL3. 
+ */
+	return GENMASK(n - 1, hpmn);
+}
+
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
+}
+
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu);
+
+	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+		return mask;
+
+	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
+}
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	u64 pmcr =3D __vcpu_sys_reg(vcpu, PMCR_EL0);
+	u64 n =3D vcpu->kvm->arch.nr_pmu_counters;
+
+	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
+		n =3D FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
+
+	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e91d15a7a564b..24a471cf59d56 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -53,13 +53,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu);
+u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu);
 u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
+u32 kvm_pmu_event_mask(struct kvm *kvm);
 u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
 void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
@@ -132,6 +135,10 @@ static inline u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return 0;
+}
 static inline void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
--=20
2.53.0.rc2.204.g2597b5adb4-goog

From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:13:59 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-5-coltonlewis@google.com>
Subject: [PATCH v6 04/19] perf: arm_pmuv3: Introduce method to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU
counters into two ranges where counters 0..HPMN-1 are accessible by
EL1 and, if allowed, EL0, while counters HPMN..N are only accessible
by EL2.

Create module parameter reserved_host_counters to reserve a number
of counters for the host. This number is set at boot because the perf
subsystem assumes the number of counters will not change after the
PMU is probed.

Introduce the function armv8pmu_partition() to modify the PMU
driver's cntr_mask of available counters to exclude the counters
being reserved for the guest and record the resulting HPMN limit in
max_guest_counters.

Due to the difficulty this feature would create for the driver
running in nVHE mode, partitioning is only allowed in VHE mode. In
order to support partitioning on nVHE we'd need to explicitly
disable guest counters on every exit and reset HPMN to place all
counters in the first range.

Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   |  4 ++
 arch/arm64/include/asm/arm_pmuv3.h |  5 ++
 arch/arm64/kvm/Makefile            |  2 +-
 arch/arm64/kvm/pmu-direct.c        | 22 +++++++++
 drivers/perf/arm_pmuv3.c           | 78 +++++++++++++++++++++++++++++-
 include/kvm/arm_pmu.h              |  8 +++
 include/linux/perf/arm_pmu.h       |  1 +
 7 files changed, 117 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kvm/pmu-direct.c

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc98..154503f054886 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -221,6 +221,10 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 	return false;
 }
=20
+static inline bool has_host_pmu_partition_support(void)
+{
+	return false;
+}
 static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index cf2b2212e00a2..27c4d6d47da31 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -171,6 +171,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_NI);
 }
=20
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P4;
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 3ebc0570345cc..baf0f296c0e53 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -26,7 +26,7 @@ kvm-y +=3D arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o \
 	 vgic/vgic-v5.o
=20
-kvm-$(CONFIG_HW_PERF_EVENTS) +=3D pmu-emul.o pmu.o
+kvm-$(CONFIG_HW_PERF_EVENTS) +=3D pmu-emul.o pmu-direct.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH) +=3D pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) +=3D ptdump.o
=20
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
new file mode 100644
index 0000000000000..74e40e4915416
--- /dev/null
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Colton Lewis
+ */
+
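+/*
+ * (Editor's note, added for orientation.) pmu-direct.c collects the
+ * helpers for the partitioned ("direct") PMU introduced by this
+ * series, shared between KVM and the PMUv3 driver.
+ */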
+#include + +#include + +/** + * has_host_pmu_partition_support() - Determine if partitioning is possible + * + * Partitioning is only supported in VHE mode with PMUv3 + * + * Return: True if partitioning is possible, false otherwise + */ +bool has_host_pmu_partition_support(void) +{ + return has_vhe() && + system_supports_pmuv3(); +} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 8d3b832cd633a..798c93678e97c 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -42,6 +42,13 @@ #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS 0xEC #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS 0xED =20 +static int reserved_host_counters __read_mostly =3D -1; +int armv8pmu_max_guest_counters =3D -1; + +module_param(reserved_host_counters, int, 0); +MODULE_PARM_DESC(reserved_host_counters, + "PMU Partition: -1 =3D No partition; +N =3D Reserve N counters for the = host"); + /* * ARMv8 Architectural defined events, not all of these may * be supported on any given implementation. Unsupported events will @@ -532,6 +539,11 @@ static void armv8pmu_pmcr_write(u64 val) write_pmcr(val); } =20 +static u64 armv8pmu_pmcr_n_read(void) +{ + return FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()); +} + static int armv8pmu_has_overflowed(u64 pmovsr) { return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK); @@ -1309,6 +1321,61 @@ struct armv8pmu_probe_info { bool present; }; =20 +/** + * armv8pmu_reservation_is_valid() - Determine if reservation is allowed + * @host_counters: Number of host counters to reserve + * + * Determine if the number of host counters in the argument is an + * allowed reservation, 0 to NR_COUNTERS inclusive. + * + * Return: True if reservation allowed, false otherwise + */ +static bool armv8pmu_reservation_is_valid(int host_counters) +{ + return host_counters >=3D 0 && + host_counters <=3D armv8pmu_pmcr_n_read(); +} + +/** + * armv8pmu_partition() - Partition the PMU + * @pmu: Pointer to pmu being partitioned + * @host_counters: Number of host counters to reserve + * + * Partition the given PMU by taking a number of host counters to + * reserve and, if it is a valid reservation, recording the + * corresponding HPMN value in the max_guest_counters field of the PMU and + * clearing the guest-reserved counters from the counter mask. 
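+ * For example (editor's illustration): with 10 general counters and
+ * host_counters =3D 2, HPMN becomes 8; counters 0-7 are cleared from
+ * the driver's mask for the guest and counters 8-9 remain usable by
+ * the host.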
+ * + * Return: 0 on success, -ERROR otherwise + */ +static int armv8pmu_partition(struct arm_pmu *pmu, int host_counters) +{ + u8 nr_counters; + u8 hpmn; + + if (!armv8pmu_reservation_is_valid(host_counters)) { + pr_err("PMU partition reservation of %d host counters is not valid", hos= t_counters); + return -EINVAL; + } + + nr_counters =3D armv8pmu_pmcr_n_read(); + hpmn =3D nr_counters - host_counters; + + pmu->max_guest_counters =3D hpmn; + armv8pmu_max_guest_counters =3D hpmn; + + bitmap_clear(pmu->cntr_mask, 0, hpmn); + bitmap_set(pmu->cntr_mask, hpmn, host_counters); + clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask); + + if (pmuv3_has_icntr()) + clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask); + + pr_info("Partitioned PMU with %d host counters -> %u guest counters", hos= t_counters, hpmn); + + return 0; +} + static void __armv8pmu_probe_pmu(void *info) { struct armv8pmu_probe_info *probe =3D info; @@ -1323,10 +1390,10 @@ static void __armv8pmu_probe_pmu(void *info) =20 cpu_pmu->pmuver =3D pmuver; probe->present =3D true; + cpu_pmu->max_guest_counters =3D -1; =20 /* Read the nb of CNTx counters supported from PMNC */ - bitmap_set(cpu_pmu->cntr_mask, - 0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read())); + bitmap_set(cpu_pmu->cntr_mask, 0, armv8pmu_pmcr_n_read()); =20 /* Add the CPU cycles counter */ set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask); @@ -1335,6 +1402,13 @@ static void __armv8pmu_probe_pmu(void *info) if (pmuv3_has_icntr()) set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask); =20 + if (reserved_host_counters >=3D 0) { + if (has_host_pmu_partition_support()) + armv8pmu_partition(cpu_pmu, reserved_host_counters); + else + pr_err("PMU partition is not supported"); + } + pmceid[0] =3D pmceid_raw[0] =3D read_pmceid0(); pmceid[1] =3D pmceid_raw[1] =3D read_pmceid1(); =20 diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 24a471cf59d56..e7172db1e897d 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -47,7 +47,10 @@ struct arm_pmu_entry { struct arm_pmu *arm_pmu; }; =20 +extern int armv8pmu_max_guest_counters; + bool kvm_supports_guest_pmuv3(void); +bool has_host_pmu_partition_support(void); #define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >=3D VGIC_NR= _SGIS) u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 = val); @@ -117,6 +120,11 @@ static inline bool kvm_supports_guest_pmuv3(void) return false; } =20 +static inline bool has_host_pmu_partition_support(void) +{ + return false; +} + #define kvm_arm_pmu_irq_initialized(v) (false) static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx) diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index 52b37f7bdbf9e..1bee8c6eba46b 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -129,6 +129,7 @@ struct arm_pmu { =20 /* Only to be used by ACPI probing code */ unsigned long acpi_cpuid; + int max_guest_counters; }; =20 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oo1-f73.google.com (mail-oo1-f73.google.com [209.85.161.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1D5A83195F9 for ; Mon, 9 Feb 2026 22:40:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.73 
Date: Mon, 9 Feb 2026 22:14:00 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-6-coltonlewis@google.com> Subject: [PATCH v6 05/19] perf: arm_pmuv3: Generalize counter bitmasks From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The OVSR bitmasks are valid for enable and interrupt registers as well as overflow registers. Generalize the names. Acked-by: Mark Rutland Signed-off-by: Colton Lewis --- drivers/perf/arm_pmuv3.c | 4 ++-- include/linux/perf/arm_pmuv3.h | 14 +++++++------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 798c93678e97c..b37908fad3249 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -546,7 +546,7 @@ static u64 armv8pmu_pmcr_n_read(void) =20 static int armv8pmu_has_overflowed(u64 pmovsr) { - return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK); + return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL); } =20 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx) @@ -782,7 +782,7 @@ static u64 armv8pmu_getreset_flags(void) value =3D read_pmovsclr(); =20 /* Write to clear flags */ - value &=3D ARMV8_PMU_OVERFLOWED_MASK; + value &=3D ARMV8_PMU_CNT_MASK_ALL; write_pmovsclr(value); =20 return value; diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h index d698efba28a27..fd2a34b4a64d1 100644 --- a/include/linux/perf/arm_pmuv3.h +++ b/include/linux/perf/arm_pmuv3.h @@ -224,14 +224,14 @@ ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP) =20 /* - * PMOVSR: counters overflow flag status reg + * Counter bitmask layouts for overflow, enable, and interrupts */ -#define ARMV8_PMU_OVSR_P GENMASK(30, 0) -#define ARMV8_PMU_OVSR_C BIT(31) -#define ARMV8_PMU_OVSR_F BIT_ULL(32) /* arm64 only */ -/* Mask for writable bits is both P and C fields */ -#define ARMV8_PMU_OVERFLOWED_MASK (ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \ - ARMV8_PMU_OVSR_F) +#define ARMV8_PMU_CNT_MASK_P GENMASK(30, 0) +#define ARMV8_PMU_CNT_MASK_C BIT(31) +#define ARMV8_PMU_CNT_MASK_F BIT_ULL(32) /* arm64 only */ +#define ARMV8_PMU_CNT_MASK_ALL (ARMV8_PMU_CNT_MASK_P | \ + ARMV8_PMU_CNT_MASK_C | \ + ARMV8_PMU_CNT_MASK_F) =20 /* * PMXEVTYPER: Event selection reg --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oo1-f73.google.com (mail-oo1-f73.google.com [209.85.161.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2D922318EF5 for ; Mon, 9 Feb 2026 22:40:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676855; cv=none; b=f0/X+gx2k0ENCPxHRQmkW92pPx6PTEPcoS58mP69AIsomeDSQ9PsJ6jkyJTQmrBkQH+L9zb+UoGwCR642LiMXVo96j6KUHFMfQQJfBipSaYDzsW7GCx23mSLPvVHN+mS5rStOYRIUfiyCm6ngI0ZcyKzMRN55vYmiOcfBRE19FM= ARC-Message-Signature: 
Date: Mon, 9 Feb 2026 22:14:01 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-7-coltonlewis@google.com>
Subject: [PATCH v6 06/19] perf: arm_pmuv3: Keep out of guest
counter partition From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" If the PMU is partitioned, keep the driver out of the guest counter partition and only use the host counter partition. Define some functions that determine whether the PMU is partitioned and construct mutually exclusive bitmaps for testing which partition a particular counter is in. Note that despite their separate position in the bitmap, the cycle and instruction counters are always in the guest partition. Signed-off-by: Colton Lewis --- arch/arm/include/asm/arm_pmuv3.h | 18 +++++++ arch/arm64/kvm/pmu-direct.c | 86 ++++++++++++++++++++++++++++++++ drivers/perf/arm_pmuv3.c | 40 +++++++++++++-- include/kvm/arm_pmu.h | 24 +++++++++ 4 files changed, 164 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pm= uv3.h index 154503f054886..bed4dfa755681 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val) } =20 static inline void kvm_vcpu_pmu_resync_el0(void) {} +static inline void kvm_pmu_host_counters_enable(void) {} +static inline void kvm_pmu_host_counters_disable(void) {} + +static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + return false; +} + +static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + +static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + =20 /* PMU Version in DFR Register */ #define ARMV8_PMU_DFR_VER_NI 0 diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c index 74e40e4915416..05ac38ec3ea20 100644 --- a/arch/arm64/kvm/pmu-direct.c +++ b/arch/arm64/kvm/pmu-direct.c @@ -5,6 +5,8 @@ */ =20 #include +#include +#include =20 #include =20 @@ -20,3 +22,87 @@ bool has_host_pmu_partition_support(void) return has_vhe() && system_supports_pmuv3(); } + +/** + * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned + * @pmu: Pointer to arm_pmu struct + * + * Determine if given PMU is partitioned by looking at hpmn field. The + * PMU is partitioned if this field is less than the number of + * counters in the system. + * + * Return: True if the PMU is partitioned, false otherwise + */ +bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + if (!pmu) + return false; + + return pmu->max_guest_counters >=3D 0 && + pmu->max_guest_counters <=3D *host_data_ptr(nr_event_counters); +} + +/** + * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters + * @pmu: Pointer to arm_pmu struct + * + * Compute the bitmask that selects the host-reserved counters in the + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. 
These are the counters
+ * in HPMN..N.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	u8 nr_counters =3D *host_data_ptr(nr_event_counters);
+
+	if (!kvm_pmu_is_partitioned(pmu))
+		return ARMV8_PMU_CNT_MASK_ALL;
+
+	return GENMASK(nr_counters - 1, pmu->max_guest_counters);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN-1 and the cycle and instruction counters, i.e. the
+ * complement of the host mask.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned the enable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+	u64 mdcr =3D read_sysreg(mdcr_el2);
+
+	mdcr |=3D MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned the disable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+	u64 mdcr =3D read_sysreg(mdcr_el2);
+
+	mdcr &=3D ~MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index b37908fad3249..6395b6deb78c2 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -871,6 +871,9 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 		brbe_enable(cpu_pmu);
=20
 	/* Enable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_enable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
=20
@@ -882,6 +885,9 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 		brbe_disable();
=20
 	/* Disable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_disable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
=20
@@ -1028,6 +1034,12 @@ static bool armv8pmu_can_use_pmccntr(struct pmu_hw_events *cpuc,
 	if (cpu_pmu->has_smt)
 		return false;
=20
+	/*
+	 * If partitioned at all, pmccntr belongs to the guest.
+	 */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		return false;
+
 	return true;
 }
=20
@@ -1054,6 +1066,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * may not know how to handle it.
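	 * (Editor's note: when the PMU is partitioned, the check added
	 * below also keeps such events off the fixed instruction counter,
	 * which always belongs to the guest partition.)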
	 */
	if ((evtype =3D=3D ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
	    !armv8pmu_event_get_threshold(&event->attr) &&
	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
	    !armv8pmu_event_want_user_access(event)) {
@@ -1065,7 +1078,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1177,6 +1190,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
=20
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu =3D (struct arm_pmu *)info;
@@ -1184,6 +1205,9 @@ static void armv8pmu_reset(void *info)
=20
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
=20
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &=3D kvm_pmu_host_counter_mask(cpu_pmu);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1196,11 +1220,19 @@ static void armv8pmu_reset(void *info)
 		brbe_invalidate();
 	}
=20
+	pmcr =3D ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
 	 */
-	pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
=20
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e7172db1e897d..accfcb79723c8 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,6 +92,12 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
=20
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
 /*
 * Updates the vcpu's view of the pmu events for this cpu.
* Must be called before every vcpu run after disabling interrupts, to ens= ure @@ -228,6 +234,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_v= cpu *vcpu, unsigned int id =20 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {} =20 +static inline bool kvm_pmu_is_partitioned(void *pmu) +{ + return false; +} + +static inline u64 kvm_pmu_host_counter_mask(void *pmu) +{ + return ~0; +} + +static inline u64 kvm_pmu_guest_counter_mask(void *pmu) +{ + return ~0; +} + +static inline void kvm_pmu_host_counters_enable(void) {} +static inline void kvm_pmu_host_counters_disable(void) {} + #endif =20 #endif --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oi1-f202.google.com (mail-oi1-f202.google.com [209.85.167.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 69917319871 for ; Mon, 9 Feb 2026 22:40:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.167.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676859; cv=none; b=GHP1Snqtw3euW1NQGLLLxtlyBHTqwZh/lhMXMbU8f+oQ024TROQphLG3neaPtWQUsOOdth4zRAHGgYvzB6C5e4ewA6B+NJMsB1So74+ZEnv3J15kLN6nA7Y0+mdePuNZEKLLv2hpto77TmcHcfCXBwQ8z27HdYUnBRi1CdmYLfw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676859; c=relaxed/simple; bh=rirnVzGkSFumuowwu7y/h7aq9o6GqwBPfUtAR73WACo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=VTpInU5hW4Sgzh4jp7s5ApVtVnUlpz16H5hwdiBYQgMclAjkqEFqKZ/02lSjhIC2gTBWcyGBuLHSi5KVCKGKaSFSy/6t7RUN9YBvvPMN29L/9NDz72ivzu6hAeAjPbfSrX7hpx55ehq+cuFomhm2iUAjhycNjMqqDWxVVunx2vw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=kmcpNg7C; arc=none smtp.client-ip=209.85.167.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="kmcpNg7C" Received: by mail-oi1-f202.google.com with SMTP id 5614622812f47-45f2c24ff41so10745018b6e.1 for ; Mon, 09 Feb 2026 14:40:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1770676851; x=1771281651; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=KEehwDOtOjkxJnHY0L2puQVsuBSwId81nFU3636AvNU=; b=kmcpNg7Cx8y8RvvtUygmIH+9lFzDaQD9bO86vdxp79V0/dRrLJ/+xFXE55BOWB98VH bFOKwFdvqgeMTagZxE4vHYggWMiPh0qYY3h0tCpPmG0qUHi8IUVrlhoWR/wNmkAHe0Nu 8QuDZoukzQjRKl2r9S6fxuwS9ANTEW2BKxOk/lsmh6Egomlc4ovtBh4gu5V9lZ8aGVeD WQKDVt1uHPsuMkNiQUFGlbzAp5FoUdFluL2WIYxjd51pQNkGUTRpJZJS6e15yJdlclFZ exPwR8Rd5gkev1y6pVW2MfowzeWV6QUkzURYenVmfXO8g+03YJyt59mGpKhsuW/9Htl4 vvHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676851; x=1771281651; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=KEehwDOtOjkxJnHY0L2puQVsuBSwId81nFU3636AvNU=; 
Date: Mon, 9 Feb 2026 22:14:02 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-8-coltonlewis@google.com>
Subject: [PATCH v6 07/19] KVM: arm64: Set up FGT for Partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org

In order to gain the best performance benefit from partitioning the
PMU, utilize fine-grained traps (FEAT_FGT and FEAT_FGT2) to avoid
trapping common PMU register accesses by the guest, removing that
overhead.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1
* PMEVCNTRn_EL0

These are safe to untrap because writing MDCR_EL2.HPMN, as this
series does, limits the effect of writes to any of these registers to
the partition of counters 0..HPMN-1. Reads from these registers will
not leak information between guests, as all these registers are
context swapped by a later patch in this series. Reads from these
registers also do not leak any information about the host's hardware
beyond what is promised by PMUv3.

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMCCFILTR_EL0
* PMICNTR_EL0
* PMICFILTR_EL0
* PMCEIDn_EL0
* PMMIR_EL1

PMOVS remains trapped so KVM can track overflow IRQs that will need
to be injected into the guest. PMICNTR and PMICFILTR remain trapped
because KVM is not handling them yet. PMEVTYPERn remains trapped so
KVM can limit which events guests can count, such as disallowing
counting at EL2. PMCCFILTR and PMICFILTR are special cases of the
same. PMCEIDn and PMMIR remain trapped because they can leak
information specific to the host hardware implementation.

NOTE: This patch temporarily forces kvm_vcpu_pmu_is_partitioned() to
be false to prevent partial feature activation for easier debugging.
Signed-off-by: Colton Lewis --- arch/arm64/kvm/config.c | 41 ++++++++++++++++++++++++++++++++++--- arch/arm64/kvm/pmu-direct.c | 33 +++++++++++++++++++++++++++++ include/kvm/arm_pmu.h | 23 +++++++++++++++++++++ 3 files changed, 94 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c index 24bb3f36e9d59..7daba2537601d 100644 --- a/arch/arm64/kvm/config.c +++ b/arch/arm64/kvm/config.c @@ -1489,12 +1489,47 @@ static void __compute_hfgwtr(struct kvm_vcpu *vcpu) *vcpu_fgt(vcpu, HFGWTR_EL2) |=3D HFGWTR_EL2_TCR_EL1; } =20 +static void __compute_hdfgrtr(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGRTR_EL2); + + *vcpu_fgt(vcpu, HDFGRTR_EL2) |=3D + HDFGRTR_EL2_PMOVS + | HDFGRTR_EL2_PMCCFILTR_EL0 + | HDFGRTR_EL2_PMEVTYPERn_EL0 + | HDFGRTR_EL2_PMCEIDn_EL0 + | HDFGRTR_EL2_PMMIR_EL1; +} + static void __compute_hdfgwtr(struct kvm_vcpu *vcpu) { __compute_fgt(vcpu, HDFGWTR_EL2); =20 if (is_hyp_ctxt(vcpu)) *vcpu_fgt(vcpu, HDFGWTR_EL2) |=3D HDFGWTR_EL2_MDSCR_EL1; + + *vcpu_fgt(vcpu, HDFGWTR_EL2) |=3D + HDFGWTR_EL2_PMOVS + | HDFGWTR_EL2_PMCCFILTR_EL0 + | HDFGWTR_EL2_PMEVTYPERn_EL0; +} + +static void __compute_hdfgrtr2(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGRTR2_EL2); + + *vcpu_fgt(vcpu, HDFGRTR2_EL2) &=3D + ~(HDFGRTR2_EL2_nPMICFILTR_EL0 + | HDFGRTR2_EL2_nPMICNTR_EL0); +} + +static void __compute_hdfgwtr2(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGWTR2_EL2); + + *vcpu_fgt(vcpu, HDFGWTR2_EL2) &=3D + ~(HDFGWTR2_EL2_nPMICFILTR_EL0 + | HDFGWTR2_EL2_nPMICNTR_EL0); } =20 void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) @@ -1505,7 +1540,7 @@ void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) __compute_fgt(vcpu, HFGRTR_EL2); __compute_hfgwtr(vcpu); __compute_fgt(vcpu, HFGITR_EL2); - __compute_fgt(vcpu, HDFGRTR_EL2); + __compute_hdfgrtr(vcpu); __compute_hdfgwtr(vcpu); __compute_fgt(vcpu, HAFGRTR_EL2); =20 @@ -1515,6 +1550,6 @@ void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) __compute_fgt(vcpu, HFGRTR2_EL2); __compute_fgt(vcpu, HFGWTR2_EL2); __compute_fgt(vcpu, HFGITR2_EL2); - __compute_fgt(vcpu, HDFGRTR2_EL2); - __compute_fgt(vcpu, HDFGWTR2_EL2); + __compute_hdfgrtr2(vcpu); + __compute_hdfgwtr2(vcpu); } diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c index 05ac38ec3ea20..275bd4156871e 100644 --- a/arch/arm64/kvm/pmu-direct.c +++ b/arch/arm64/kvm/pmu-direct.c @@ -42,6 +42,39 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) pmu->max_guest_counters <=3D *host_data_ptr(nr_event_counters); } =20 +/** + * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partition= ed PMU + * @vcpu: Pointer to kvm_vcpu struct + * + * Determine if given VCPU has a partitioned PMU by extracting that + * field and passing it to :c:func:`kvm_pmu_is_partitioned` + * + * Return: True if the VCPU PMU is partitioned, false otherwise + */ +bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) +{ + return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) && + false; +} + +/** + * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT + * @vcpu: Pointer to struct kvm_vcpu + * + * Determine if we can use FGT for direct access to registers. We can + * if capabilities permit the number of guest counters requested. 
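+ * For example (editor's note), a configuration with HPMN =3D 0
+ * additionally requires FEAT_HPMN0, hence the ARM64_HAS_HPMN0 check
+ * below.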
+ * + * Return: True if we can use FGT, false otherwise + */ +bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu) +{ + u8 hpmn =3D vcpu->kvm->arch.nr_pmu_counters; + + return kvm_vcpu_pmu_is_partitioned(vcpu) && + cpus_have_final_cap(ARM64_HAS_FGT) && + (hpmn !=3D 0 || cpus_have_final_cap(ARM64_HAS_HPMN0)); +} + /** * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters * @pmu: Pointer to arm_pmu struct diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index accfcb79723c8..50983cdbec045 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -98,6 +98,21 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu); void kvm_pmu_host_counters_enable(void); void kvm_pmu_host_counters_disable(void); =20 +#if !defined(__KVM_NVHE_HYPERVISOR__) +bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu); +bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu); +#else +static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) +{ + return false; +} + +static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu) +{ + return false; +} +#endif + /* * Updates the vcpu's view of the pmu events for this cpu. * Must be called before every vcpu run after disabling interrupts, to ens= ure @@ -137,6 +152,14 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm= _vcpu *vcpu, { return 0; } +static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) +{ + return false; +} +static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu) +{ + return false; +} static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) {} static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-ot1-f74.google.com (mail-ot1-f74.google.com [209.85.210.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A952531A576 for ; Mon, 9 Feb 2026 22:40:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676856; cv=none; b=Qy/k1FPklC9NhV0nLvVldvnlt4U3Ibd/ha0wUuPikRciXfPl2z9NeqSUK5ASfagTeDTdhJI/qwqQTlrR8LKZfIwnA6j/JMrxg/3dXy8c7JMEzCOhauDRm0RIuZC12Ml8Dbt45NqjvUhCyG9bhN5E6ygCRlm2vh8j0zQqiMa4D+k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676856; c=relaxed/simple; bh=B9HZV/XvGCtJKEIzSCyXAxqSnw0EEFic83RP1XipC3Y=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Ri179eBtk91rMIgYwso4F1fnV2mI8fJioSDznrgZv8MCyaT47fCCrTT+eugDkzIj1bnWjmzunxizWW3t9HAGqIswpTOkiL/K3I8TNDKdtbd+x8IccBeQ4selLV55k478Dyu4u0oRDd8UfnfjWsR7AP1IOCoLo6hvI2ldhwoEzz4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=NnzDkZsd; arc=none smtp.client-ip=209.85.210.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="NnzDkZsd" Received: by mail-ot1-f74.google.com with SMTP id 
Date: Mon, 9 Feb 2026 22:14:03 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-9-coltonlewis@google.com>
Subject: [PATCH v6 08/19] KVM: arm64: Define access helpers for PMUSERENR and PMSELR
From: Colton Lewis
To: kvm@vger.kernel.org

In order to ensure register permission checks will have consistent
results whether or not the PMU is partitioned, define some access
helpers for PMUSERENR and PMSELR that always return the canonical
value for those registers, whether it lives in a physical or virtual
register.
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c      | 16 ++++++++++++++++
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     | 12 ++++++++++++
 3 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 74a5d35edb244..344ed9d8329a6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -885,3 +885,19 @@ u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
=20
 	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
 }
+
+u64 kvm_vcpu_read_pmselr(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmselr_el0);
+	else
+		return __vcpu_sys_reg(vcpu, PMSELR_EL0);
+}
+
+u64 kvm_vcpu_read_pmuserenr(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmuserenr_el0);
+	else
+		return __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a460e93b1ad0a..9e893859a41c9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -987,7 +987,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
=20
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
-	u64 reg =3D __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	u64 reg =3D kvm_vcpu_read_pmuserenr(vcpu);
 	bool enabled =3D (reg & flags) || vcpu_mode_priv(vcpu);
=20
 	if (!enabled)
@@ -1141,7 +1141,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 			return false;
=20
 		idx =3D SYS_FIELD_GET(PMSELR_EL0, SEL,
-				      __vcpu_sys_reg(vcpu, PMSELR_EL0));
+				      kvm_vcpu_read_pmselr(vcpu));
 	} else if (r->Op2 =3D=3D 0) {
 		/* PMCCNTR_EL0 */
 		if (pmu_access_cycle_counter_el0_disabled(vcpu))
@@ -1191,7 +1191,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
=20
 	if (r->CRn =3D=3D 9 && r->CRm =3D=3D 13 && r->Op2 =3D=3D 1) {
 		/* PMXEVTYPER_EL0 */
-		idx =3D SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		idx =3D SYS_FIELD_GET(PMSELR_EL0, SEL, kvm_vcpu_read_pmselr(vcpu));
 		reg =3D PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn =3D=3D 14 && (r->CRm & 12) =3D=3D 12) {
 		idx =3D ((r->CRm & 3) << 3) | (r->Op2 & 7);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 50983cdbec045..f21439000129b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -130,6 +130,8 @@ int kvm_arm_set_default_pmu(struct kvm *kvm);
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm);
=20
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
+u64 kvm_vcpu_read_pmselr(struct kvm_vcpu *vcpu);
+u64 kvm_vcpu_read_pmuserenr(struct kvm_vcpu *vcpu);
 bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx);
 void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu);
 #else
@@ -250,6 +252,16 @@ static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 	return 0;
 }
+
+static inline u64 kvm_vcpu_read_pmselr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+static inline u64 kvm_vcpu_read_pmuserenr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	return false;
--=20
2.53.0.rc2.204.g2597b5adb4-goog

From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:04 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-10-coltonlewis@google.com>
Subject: [PATCH v6 09/19] KVM: arm64: Write fast path PMU register handlers
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org

We may want a partitioned PMU but not have FEAT_FGT to untrap the
specific registers that would normally be untrapped. Add a handler for
those registers in the fast path so we can still get a performance
boost from partitioning.

The idea is to handle traps for all the PMU registers quickly by
writing directly to the hardware when possible, instead of hooking
into the emulated vPMU as the standard handlers in sys_regs.c do. For
registers that can't be written to hardware because they require
special handling (PMEVTYPER and PMOVS), write to the virtual
register. A later patch will ensure these are handled correctly at
vcpu_load time.
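As a worked example of the trap decoding the handler relies on: for
the PMEVCNTRn_EL0 and PMEVTYPERn_EL0 ranges, the architecture encodes
the counter number n in the trapped instruction as CRm[1:0]:Op2[2:0].
A standalone model of that decode (plain C; the function name is
illustrative, not kernel API):

#include <stdio.h>

/* Recover counter number n from the ISS fields CRm[1:0]:Op2[2:0]. */
static unsigned int pmev_idx(unsigned int crm, unsigned int op2)
{
	return ((crm & 3) << 3) | (op2 & 7);
}

int main(void)
{
	/* PMEVCNTR19_EL0 traps with CRm = 0b1010 and Op2 = 0b011 */
	printf("idx = %u\n", pmev_idx(0xa, 0x3));	/* prints idx = 19 */
	return 0;
}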
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/hyp/vhe/switch.c | 238 ++++++++++++++++++++++++++++++++
 1 file changed, 238 insertions(+)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 9db3f11a4754d..154da70146d98 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -28,6 +28,8 @@
 #include <asm/thread_info.h>
 #include <asm/vectors.h>
 
+#include <../../sys_regs.h>
+
 /* VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
@@ -482,6 +484,239 @@ static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
+/**
+ * kvm_hyp_handle_pmu_regs() - Fast handler for PMU registers
+ * @vcpu: Pointer to vcpu struct
+ *
+ * This handler immediately writes through certain PMU registers when
+ * we have a partitioned PMU (that is, MDCR_EL2.HPMN is set to reserve
+ * a range of counters for the guest) but the machine does not have
+ * FEAT_FGT to selectively untrap the registers we want.
+ *
+ * Return: True if the exception was successfully handled, false otherwise
+ */
+static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
+{
+	struct sys_reg_params p;
+	u64 pmuser;
+	u64 pmselr;
+	u64 esr;
+	u64 val;
+	u64 mask;
+	u32 sysreg;
+	u8 nr_cnt;
+	u8 rt;
+	u8 idx;
+	bool ret;
+
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return false;
+
+	pmuser = kvm_vcpu_read_pmuserenr(vcpu);
+
+	if (!(pmuser & ARMV8_PMU_USERENR_EN))
+		return false;
+
+	esr = kvm_vcpu_get_esr(vcpu);
+	p = esr_sys64_to_params(esr);
+	sysreg = esr_sys64_to_sysreg(esr);
+	rt = kvm_vcpu_sys_get_rt(vcpu);
+	val = vcpu_get_reg(vcpu, rt);
+	nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
+
+	switch (sysreg) {
+	case SYS_PMCR_EL0:
+		mask = ARMV8_PMU_PMCR_MASK;
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmcr_el0);
+		} else {
+			mask |= ARMV8_PMU_PMCR_N;
+			val = u64_replace_bits(
+				read_sysreg(pmcr_el0),
+				nr_cnt,
+				ARMV8_PMU_PMCR_N);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMUSERENR_EL0:
+		mask = ARMV8_PMU_USERENR_MASK;
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmuserenr_el0);
+		} else {
+			val = read_sysreg(pmuserenr_el0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMSELR_EL0:
+		mask = PMSELR_EL0_SEL_MASK;
+		val &= mask;
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmselr_el0);
+		} else {
+			val = read_sysreg(pmselr_el0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMINTENCLR_EL1:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmintenclr_el1);
+		} else {
+			val = read_sysreg(pmintenclr_el1);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMINTENSET_EL1:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmintenset_el1);
+		} else {
+			val = read_sysreg(pmintenset_el1);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMCNTENCLR_EL0:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmcntenclr_el0);
+		} else {
+			val = read_sysreg(pmcntenclr_el0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMCNTENSET_EL0:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			write_sysreg(val & mask, pmcntenset_el0);
+		} else {
+			val = read_sysreg(pmcntenset_el0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMOVSCLR_EL0:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(val & mask));
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMOVSSET_EL0:
+		mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, val & mask);
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+			vcpu_set_reg(vcpu, rt, val & mask);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMCCNTR_EL0:
+	case SYS_PMXEVCNTR_EL0:
+	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
+		if (sysreg == SYS_PMCCNTR_EL0)
+			idx = ARMV8_PMU_CYCLE_IDX;
+		else if (sysreg == SYS_PMXEVCNTR_EL0)
+			idx = FIELD_GET(PMSELR_EL0_SEL, kvm_vcpu_read_pmselr(vcpu));
+		else
+			idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);
+
+		if (idx == ARMV8_PMU_CYCLE_IDX &&
+		    !(pmuser & ARMV8_PMU_USERENR_CR)) {
+			ret = false;
+			break;
+		} else if (!(pmuser & ARMV8_PMU_USERENR_ER)) {
+			ret = false;
+			break;
+		}
+
+		if (idx >= nr_cnt && idx < ARMV8_PMU_CYCLE_IDX) {
+			ret = false;
+			break;
+		}
+
+		pmselr = read_sysreg(pmselr_el0);
+		write_sysreg(idx, pmselr_el0);
+
+		if (p.is_write) {
+			write_sysreg(val, pmxevcntr_el0);
+		} else {
+			val = read_sysreg(pmxevcntr_el0);
+			vcpu_set_reg(vcpu, rt, val);
+		}
+
+		write_sysreg(pmselr, pmselr_el0);
+		ret = true;
+		break;
+	case SYS_PMCCFILTR_EL0:
+	case SYS_PMXEVTYPER_EL0:
+	case SYS_PMEVTYPERn_EL0(0) ... SYS_PMEVTYPERn_EL0(30):
+		if (sysreg == SYS_PMCCFILTR_EL0)
+			idx = ARMV8_PMU_CYCLE_IDX;
+		else if (sysreg == SYS_PMXEVTYPER_EL0)
+			idx = FIELD_GET(PMSELR_EL0_SEL, kvm_vcpu_read_pmselr(vcpu));
+		else
+			idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);
+
+		if (idx == ARMV8_PMU_CYCLE_IDX &&
+		    !(pmuser & ARMV8_PMU_USERENR_CR)) {
+			ret = false;
+			break;
+		} else if (!(pmuser & ARMV8_PMU_USERENR_ER)) {
+			ret = false;
+			break;
+		}
+
+		if (idx >= nr_cnt && idx < ARMV8_PMU_CYCLE_IDX) {
+			ret = false;
+			break;
+		}
+
+		if (p.is_write) {
+			__vcpu_assign_sys_reg(vcpu, PMEVTYPER0_EL0 + idx, val);
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx);
+			vcpu_set_reg(vcpu, rt, val);
+		}
+
+		ret = true;
+		break;
+	default:
+		ret = false;
+	}
+
+	if (ret)
+		__kvm_skip_instr(vcpu);
+
+	return ret;
+}
+
 static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (kvm_hyp_handle_tlbi_el2(vcpu, exit_code))
@@ -496,6 +731,9 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (kvm_hyp_handle_zcr_el2(vcpu, exit_code))
 		return true;
 
+	if (kvm_hyp_handle_pmu_regs(vcpu))
+		return true;
+
 	return kvm_hyp_handle_sysreg(vcpu, exit_code);
 }
 
-- 
2.53.0.rc2.204.g2597b5adb4-goog
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:05 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-11-coltonlewis@google.com>
Subject: [PATCH v6 10/19] KVM: arm64: Setup MDCR_EL2 to handle a partitioned PMU
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Setup MDCR_EL2 to handle a partitioned PMU. That means calculating an
appropriate value for HPMN instead of the default maximum the host
allows (which implies no partition), so the hardware enforces that a
guest only sees the counters in the guest partition.

Setting HPMN to a non-default value means the global enable bit for
the host counters is now MDCR_EL2.HPME instead of the usual
PMCR_EL0.E. Enable the HPME bit to allow the host to count guest
events. Since HPME only has an effect when HPMN is set, which we only
do for the guest, it is safe to enable it unconditionally here.

Unset the TPM and TPMCR bits, which trap all PMU accesses, if FGT
(fine-grained trapping) is being used. If available, set the
filtering bits HPMD and HCCD to be extra sure nothing in the guest
counts at EL2.
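To make the partition concrete: with 10 event counters and HPMN = 6,
the guest owns counters 0-5 and the host owns 6-9. A standalone sketch
of the mask arithmetic (userspace C; GENMASK64 is a local stand-in for
the kernel's GENMASK_ULL, and the HPMN = 0 case, which requires
FEAT_HPMN0, is omitted):

#include <stdint.h>
#include <stdio.h>

#define GENMASK64(h, l) (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

int main(void)
{
	unsigned int nr_counters = 10;	/* assumed PMCR_EL0.N for this CPU */
	unsigned int hpmn = 6;		/* MDCR_EL2.HPMN chosen for the guest */

	/* Counters [0, HPMN) are guest-reserved, [HPMN, N) host-reserved. */
	uint64_t guest = GENMASK64(hpmn - 1, 0);
	uint64_t host = GENMASK64(nr_counters - 1, hpmn);

	printf("guest counters: %#llx\n", (unsigned long long)guest); /* 0x3f */
	printf("host counters:  %#llx\n", (unsigned long long)host);  /* 0x3c0 */
	return 0;
}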
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/debug.c      | 29 ++++++++++++++++++++++++++---
 arch/arm64/kvm/pmu-direct.c | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c        |  7 +++++++
 include/kvm/arm_pmu.h       | 11 +++++++++++
 4 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 3ad6b7c6e4ba7..0ab89c91e19cb 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -36,20 +36,43 @@ static int cpu_has_spe(u64 dfr0)
  */
 static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
+	int hpmn = kvm_pmu_hpmn(vcpu);
+
 	preempt_disable();
 
 	/*
	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
	 * to disable guest access to the profiling and trace buffers
	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
+
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, hpmn);
 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
-				MDCR_EL2_TDOSA);
+				MDCR_EL2_TDOSA |
+				MDCR_EL2_HPME);
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		/*
+		 * Filtering these should be redundant because we trap
+		 * all the TYPER and FILTR registers anyway and ensure
+		 * they filter EL2, but set the bits if they are here.
+		 */
+		if (is_pmuv3p1(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HPMD;
+		if (is_pmuv3p5(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HCCD;
+
+		/*
+		 * Take out the coarse grain traps if we are using
+		 * fine grain traps.
+		 */
+		if (kvm_vcpu_pmu_use_fgt(vcpu))
+			vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_TPM | MDCR_EL2_TPMCR);
+
+	}
 
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 275bd4156871e..f2e6b1eea8bd6 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -139,3 +139,27 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_hpmn() - Calculate HPMN field value
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the appropriate value to set for MDCR_EL2.HPMN. If
+ * partitioned, this is the number of counters set for the guest if
+ * supported, falling back to max_guest_counters if needed. If we are
+ * not partitioned or can't set the implied HPMN value, fall back to
+ * the host value.
+ *
+ * Return: A valid HPMN value
+ */
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	u8 nr_guest_cntr = vcpu->kvm->arch.nr_pmu_counters;
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) &&
+	    !vcpu_on_unsupported_cpu(vcpu) &&
+	    (cpus_have_final_cap(ARM64_HAS_HPMN0) || nr_guest_cntr > 0))
+		return nr_guest_cntr;
+
+	return *host_data_ptr(nr_event_counters);
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 344ed9d8329a6..b198356d772ca 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -542,6 +542,13 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
 		return 1;
 
+	/*
+	 * If partitioned then we are limited by the max counters in
+	 * the guest partition.
+	 */
+	if (kvm_pmu_is_partitioned(arm_pmu))
+		return arm_pmu->max_guest_counters;
+
 	/*
 	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
 	 * Ignore those and return only the general-purpose counters.
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index f21439000129b..8fab533fa3ebc 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -98,6 +98,9 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -162,6 +165,14 @@ static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
+static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
-- 
2.53.0.rc2.204.g2597b5adb4-goog
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:06 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-12-coltonlewis@google.com>
Subject: [PATCH v6 11/19] KVM: arm64: Context swap Partitioned PMU guest registers
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Save and restore newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned:

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If we know we are not partitioned (that is, using the emulated vPMU),
then return immediately. A later patch will make this lazy so the
context swaps don't happen unless the guest has accessed the PMU.

PMEVTYPER is handled in a following patch since we must apply the KVM
event filter before writing values to hardware.

PMOVS guest counters are cleared to avoid the possibility of
generating spurious interrupts when PMINTEN is written. This is fine
because the virtual register for PMOVS is always the canonical value.
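The per-counter copies below go through the PMSELR_EL0/PMXEVCNTR_EL0
indirection: selecting a counter in PMSELR routes subsequent PMXEVCNTR
accesses to it. A self-contained model of that idiom (userspace C;
arrays stand in for the hardware registers):

#include <stdint.h>
#include <stdio.h>

static uint64_t hw_evcntr[32];	/* models the event counter bank */
static uint64_t hw_pmselr;	/* models PMSELR_EL0 */

static void write_pmselr(uint64_t v) { hw_pmselr = v & 0x1f; }
static void write_pmxevcntr(uint64_t v) { hw_evcntr[hw_pmselr] = v; }
static uint64_t read_pmxevcntr(void) { return hw_evcntr[hw_pmselr]; }

int main(void)
{
	uint64_t vcpu_evcntr[32] = { [0] = 100, [1] = 200 };
	uint64_t guest_mask = 0x3;	/* counters 0 and 1 guest-owned */

	/* load: copy virtual counter state into the selected hw counter */
	for (int i = 0; i < 32; i++) {
		if (!(guest_mask & (1ULL << i)))
			continue;
		write_pmselr(i);
		write_pmxevcntr(vcpu_evcntr[i]);
	}

	write_pmselr(1);
	printf("counter 1 = %llu\n", (unsigned long long)read_pmxevcntr());
	return 0;
}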
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/arm.c        |   2 +
 arch/arm64/kvm/pmu-direct.c | 123 ++++++++++++++++++++++++++++++++++++
 include/kvm/arm_pmu.h       |   4 ++
 3 files changed, 129 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 620a465248d1b..adbe79264c032 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -635,6 +635,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -676,6 +677,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index f2e6b1eea8bd6..b07b521543478 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -9,6 +9,7 @@
 #include
 
 #include
+#include
 
 /**
  * has_host_pmu_partition_support() - Determine if partitioning is possible
@@ -163,3 +164,125 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return *host_data_ptr(nr_event_counters);
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	unsigned long guest_counters;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't guest-owned then we know the guest isn't using
+	 * the PMU anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return;
+
+	preempt_disable();
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+
+		write_sysreg(i, pmselr_el0);
+		write_sysreg(val, pmxevcntr_el0);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_sysreg(val, pmselr_el0);
+
+	/* Restore only the stateful writable bits. */
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	mask = ARMV8_PMU_PMCR_MASK &
+	       ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
+	write_sysreg(val & mask, pmcr_el0);
+
+	/*
+	 * When handling these:
+	 * 1. Apply only the bits for guest counters (indicated by mask)
+	 * 2. Use the different registers for set and clear
+	 */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	/*
+	 * Clear the hardware overflow flags so there is no chance of
+	 * creating spurious interrupts. The hardware here is never
+	 * the canonical version anyway.
+	 */
+	write_sysreg(mask, pmovsclr_el0);
+
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_sysreg(val & mask, pmcntenset_el0);
+	write_sysreg(~val & mask, pmcntenclr_el0);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_sysreg(val & mask, pmintenset_el1);
+	write_sysreg(~val & mask, pmintenclr_el1);
+
+	preempt_enable();
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Save all untrapped PMU registers from the PCPU back into the
+ * VCPU. Mask to only bits belonging to guest-reserved counters and
+ * leave host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	unsigned long guest_counters;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't guest-owned then we know the guest is not
+	 * accessing the PMU anyway, so no need to bother with the
+	 * swap.
+	 */
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return;
+
+	preempt_disable();
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		write_sysreg(i, pmselr_el0);
+		val = read_sysreg(pmxevcntr_el0);
+
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_sysreg(pmselr_el0);
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_sysreg(pmcr_el0);
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	val = read_sysreg(pmcntenset_el0);
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_sysreg(pmintenset_el1);
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+
+	preempt_enable();
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8fab533fa3ebc..93ccda941aa46 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -100,6 +100,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -173,6 +175,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
-- 
2.53.0.rc2.204.g2597b5adb4-goog
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:07 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-13-coltonlewis@google.com>
Subject: [PATCH v6 12/19] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must be
rechecked on every load, since it might have changed since the last
time the guest wrote a value. If the event is filtered, exclude
counting at all exception levels before writing the hardware.
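A small model of that filtering rule (userspace C; the bit positions
follow the PMEVTYPER_EL0 layout used by the ARMV8_PMU_* macros: P =
bit 31, U = bit 30, NSH = bit 27; the function name is illustrative):

#include <stdint.h>
#include <stdio.h>

#define EXCLUDE_EL1	(1ULL << 31)	/* P */
#define EXCLUDE_EL0	(1ULL << 30)	/* U */
#define INCLUDE_EL2	(1ULL << 27)	/* NSH */

static uint64_t apply_filter(uint64_t evtyper, int filtered)
{
	evtyper &= ~INCLUDE_EL2;		/* never count at EL2 */
	if (filtered)
		evtyper |= EXCLUDE_EL0 | EXCLUDE_EL1;	/* count nowhere */
	return evtyper;
}

int main(void)
{
	uint64_t t = 0x11;	/* an arbitrary event number */

	printf("allowed:  %#llx\n", (unsigned long long)apply_filter(t, 0));
	printf("filtered: %#llx\n", (unsigned long long)apply_filter(t, 1));
	return 0;
}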
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/pmu-direct.c | 48 +++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index b07b521543478..4bcacc55c507f 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -165,6 +165,53 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Apply the KVM PMU event filter
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts when its event is filtered. Accomplish this by
+ * excluding all exception levels for filtered events.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	unsigned long guest_counters;
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+			  ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	bool guest_include_el2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		if (i == ARMV8_PMU_CYCLE_IDX) {
+			val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+			evsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+			evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+		}
+
+		guest_include_el2 = (val & ARMV8_PMU_INCLUDE_EL2);
+		val &= ~evtyper_clr;
+
+		if (unlikely(is_hyp_ctxt(vcpu)) && guest_include_el2)
+			val &= ~ARMV8_PMU_EXCLUDE_EL1;
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		write_sysreg(i, pmselr_el0);
+		write_sysreg(val, pmxevtyper_el0);
+	}
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -192,6 +239,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 
 	pmu = vcpu->kvm->arch.arm_pmu;
 	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+	kvm_pmu_apply_event_filter(vcpu);
 
 	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
-- 
2.53.0.rc2.204.g2597b5adb4-goog

From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:08 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-14-coltonlewis@google.com>
Subject: [PATCH v6 13/19] KVM: arm64: Implement lazy PMU context swaps
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org

Since many guests will never touch the PMU, they need not pay the
cost of context swapping those registers.

Use an enum to implement a simple state machine for PMU register
access: the PMU is either free or guest-owned, and we only need to
context swap if the PMU registers are guest-owned. The PMU initially
starts free and only transitions to guest-owned once a guest has
touched the PMU registers.
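Distilled to its essentials, the state machine looks like this (plain
C model; the enum mirrors the one added below, everything else is
illustrative):

#include <stdio.h>

enum pmu_access { PMU_FREE, PMU_GUEST_OWNED };

int main(void)
{
	enum pmu_access access = PMU_FREE;

	/* No guest PMU access yet: vcpu_load/vcpu_put skip the swap. */
	printf("swap needed: %d\n", access == PMU_GUEST_OWNED);

	/* The first trapped PMU access flips the state, permanently. */
	access = PMU_GUEST_OWNED;

	/* From now on every vcpu_load/vcpu_put pays the context swap. */
	printf("swap needed: %d\n", access == PMU_GUEST_OWNED);
	return 0;
}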
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_host.h  |  1 +
 arch/arm64/include/asm/kvm_types.h |  6 +++++-
 arch/arm64/kvm/debug.c             |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c    |  2 ++
 arch/arm64/kvm/pmu-direct.c        | 26 ++++++++++++++++++++++++--
 include/kvm/arm_pmu.h              |  5 +++++
 6 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8e09865490a9f..41577ede0254f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1377,6 +1377,7 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
 	return cpus_have_final_cap(ARM64_SPECTRE_V3A);
 }
 
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu);
 void kvm_init_host_debug_data(void);
 void kvm_debug_init_vhe(void);
 void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
index 9a126b9e2d7c9..4e39cbc80aa0b 100644
--- a/arch/arm64/include/asm/kvm_types.h
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -4,5 +4,9 @@
 
 #define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
 
-#endif /* _ASM_ARM64_KVM_TYPES_H */
+enum vcpu_pmu_register_access {
+	VCPU_PMU_ACCESS_FREE,
+	VCPU_PMU_ACCESS_GUEST_OWNED,
+};
 
+#endif /* _ASM_ARM64_KVM_TYPES_H */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 0ab89c91e19cb..c2cf6b308ec60 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -34,7 +34,7 @@ static int cpu_has_spe(u64 dfr0)
  * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
  * - Self-hosted Trace (MDCR_EL2_TTRF/MDCR_EL2_E2TB)
  */
-static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
 	int hpmn = kvm_pmu_hpmn(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 154da70146d98..b374308e786d7 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -524,6 +524,8 @@ static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
 	val = vcpu_get_reg(vcpu, rt);
 	nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
 
+	kvm_pmu_set_physical_access(vcpu);
+
 	switch (sysreg) {
 	case SYS_PMCR_EL0:
 		mask = ARMV8_PMU_PMCR_MASK;
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 4bcacc55c507f..11fae54cd6534 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -72,10 +72,30 @@ bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
 
 	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+	       vcpu->arch.pmu.access == VCPU_PMU_ACCESS_GUEST_OWNED &&
 	       cpus_have_final_cap(ARM64_HAS_FGT) &&
 	       (hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
 }
 
+/**
+ * kvm_pmu_set_physical_access() - Grant the guest direct PMU access
+ * @vcpu: Pointer to vcpu struct
+ *
+ * Reconfigure the guest for physical access of PMU hardware if
+ * allowed. This means reconfiguring mdcr_el2 and loading the vCPU
+ * state onto hardware.
+ */
+void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) &&
+	    vcpu->arch.pmu.access == VCPU_PMU_ACCESS_FREE) {
+		vcpu->arch.pmu.access = VCPU_PMU_ACCESS_GUEST_OWNED;
+		kvm_arm_setup_mdcr_el2(vcpu);
+	}
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
@@ -232,7 +252,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	 * If we aren't guest-owned then we know the guest isn't using
 	 * the PMU anyway, so no need to bother with the swap.
 	 */
-	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+	    vcpu->arch.pmu.access != VCPU_PMU_ACCESS_GUEST_OWNED)
 		return;
 
 	preempt_disable();
@@ -302,7 +323,8 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	 * accessing the PMU anyway, so no need to bother with the
 	 * swap.
 	 */
-	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+	    vcpu->arch.pmu.access != VCPU_PMU_ACCESS_GUEST_OWNED)
 		return;
 
 	preempt_disable();
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 93ccda941aa46..82665d54258df 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -7,6 +7,7 @@
 #ifndef __ASM_ARM_KVM_PMU_H
 #define __ASM_ARM_KVM_PMU_H
 
+#include <asm/kvm_types.h>
 #include <linux/perf_event.h>
 #include <linux/kvm_host.h>
 #include <asm/perf_event.h>
@@ -40,6 +41,7 @@ struct kvm_pmu {
 	int irq_num;
 	bool created;
 	bool irq_level;
+	enum vcpu_pmu_register_access access;
 };
 
 struct arm_pmu_entry {
@@ -103,6 +105,8 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
 void kvm_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
+void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -177,6 +181,7 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 }
 static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
-- 
2.53.0.rc2.204.g2597b5adb4-goog
From nobody Tue Feb 10 20:28:56 2026
Date: Mon, 9 Feb 2026 22:14:09 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-15-coltonlewis@google.com>
Subject: [PATCH v6 14/19] perf: arm_pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org

Because ARM hardware is not yet capable of direct PPI injection into
guests, guest counters will still trigger interrupts that need to be
handled by the host PMU interrupt handler. Clear the overflow flags
in hardware to handle the interrupt as normal, but set the virtual
overflow register so the interrupt can later be injected into the
guest.
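A worked example of how the overflow flags are routed (userspace
model; the masks and values are arbitrary examples):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t guest_mask = 0x0f;	/* counters 0-3 guest-reserved */
	uint64_t pmovsr = 0x12;		/* counters 1 and 4 overflowed */

	uint64_t guest_ovf = pmovsr & guest_mask;	/* 0x02 */
	uint64_t host_ovf = pmovsr & ~guest_mask;	/* 0x10 */

	/* Both halves are cleared in hardware... */
	uint64_t pmovsclr = host_ovf | guest_ovf;

	/*
	 * ...but guest flags are also latched into the virtual
	 * PMOVSSET so the overflow can be injected into the guest.
	 */
	uint64_t vcpu_pmovsset = 0;
	vcpu_pmovsset |= guest_ovf;

	printf("cleared in hw: %#llx, latched for guest: %#llx\n",
	       (unsigned long long)pmovsclr,
	       (unsigned long long)vcpu_pmovsset);
	return 0;
}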
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm/include/asm/arm_pmuv3.h   |  6 ++++++
 arch/arm64/include/asm/arm_pmuv3.h |  5 +++++
 arch/arm64/kvm/pmu-direct.c        | 22 ++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c           | 24 +++++++++++++++++-------
 include/kvm/arm_pmu.h              |  2 ++
 5 files changed, 52 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index bed4dfa755681..d2ed4f2f02b25 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }
 
+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -249,6 +254,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }
 
+static inline void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovsr) {}
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI 0
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 27c4d6d47da31..69ff4d014bf39 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -110,6 +110,11 @@ static inline void write_pmintenset(u64 val)
 	write_sysreg(val, pmintenset_el1);
 }
 
+static inline u64 read_pmintenset(void)
+{
+	return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
 	write_sysreg(val, pmintenclr_el1);
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 11fae54cd6534..79d13a0aa2fd6 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -356,3 +356,25 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 
 	preempt_enable();
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @pmu: PMU to check for overflows
+ * @pmovsr: Overflow flags reported by driver
+ *
+ * Set overflow flags in guest-reserved counters in the VCPU register
+ * for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovsr)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 govf = pmovsr & mask;
+
+	write_pmovsclr(govf);
+
+	if (!vcpu)
+		return;
+
+	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 6395b6deb78c2..9520634991305 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -774,16 +774,15 @@ static void armv8pmu_disable_event_irq(struct perf_event *event)
 	armv8pmu_disable_intens(BIT(event->hw.idx));
 }
 
-static u64 armv8pmu_getreset_flags(void)
+static u64 armv8pmu_getovf_flags(void)
 {
 	u64 value;
 
 	/* Read */
 	value = read_pmovsclr();
 
-	/* Write to clear flags */
-	value &= ARMV8_PMU_CNT_MASK_ALL;
-	write_pmovsclr(value);
+	/* Only report interrupt enabled counters. */
+	value &= read_pmintenset();
 
 	return value;
 }
@@ -903,16 +902,17 @@ static void read_branch_records(struct pmu_hw_events *cpuc,
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
-	u64 pmovsr;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
+	u64 host_set = kvm_pmu_host_counter_mask(cpu_pmu);
+	u64 pmovsr;
 	int idx;
 
 	/*
	 * Get the IRQ flags
 	 */
-	pmovsr = armv8pmu_getreset_flags();
+	pmovsr = armv8pmu_getovf_flags();
 
 	/*
 	 * Did an overflow occur?
@@ -920,6 +920,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	if (!armv8pmu_has_overflowed(pmovsr))
 		return IRQ_NONE;
 
+	/*
+	 * Guest flag reset is handled by the KVM hook at the bottom
+	 * of this function.
+	 */
+	write_pmovsclr(pmovsr & host_set);
+
 	/*
 	 * Handle the counter(s) overflow(s)
 	 */
@@ -961,6 +967,10 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	 */
 		perf_event_overflow(event, &data, regs);
 	}
+
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_handle_guest_irq(cpu_pmu, pmovsr);
+
 	armv8pmu_start(cpu_pmu);
 
 	return IRQ_HANDLED;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 82665d54258df..3d922bd145d4e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -99,6 +99,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovsr);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
@@ -306,6 +307,7 @@ static inline u64 kvm_pmu_guest_counter_mask(void *pmu)
 
 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovsr) {}
 
 #endif
 
-- 
2.53.0.rc2.204.g2597b5adb4-goog
b=HwoBS7htiReNWX/MHTfXNG/jDUzsdPfh7oqak/eim7Vvnmg0okOi/eQ1OBoKhA3NVJGwOZr20Q/y4OQ1TFezyjgkxyG6Ae7CXM6ufqKxGJqzJZzsTOHC8/YP654HsVFhmlNJN3nPgyYpeY/no2lq2UUhx5NjtO0hfDKOkLenpYw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676865; c=relaxed/simple; bh=RowRt03ThgQKuNhhFZ3YMhl1qn0omKfjfEtkXkjMK7g=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=G1n+PrzZ+RN6Flqtcni06ea/PIJx/qyfaZ3c2PVaxCfRLjfLZHvdkVlPbxrWzIHL27jubpfoYbh6Q3E7JJB7SLlBXl/ueyzW/2Fcckb4Hm5GGemahLJAx5X3TJ51dV05+vVmnsNLOSEpvnt7KRNhYwpBPFwsiFcNiB7UUDJCvik= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=RlXkr1NW; arc=none smtp.client-ip=209.85.167.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="RlXkr1NW" Received: by mail-oi1-f201.google.com with SMTP id 5614622812f47-45c8d5caf62so10858138b6e.3 for ; Mon, 09 Feb 2026 14:41:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1770676860; x=1771281660; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=xtvZ/CVAyvlBIyi+KC1xnXowl3JGTClEkFZx5dHECQw=; b=RlXkr1NW/O273SG/7upIZZrnT18xA+ix6n4aWWYhmSbfC+UULhlRrVrhPHTNKHMlcr Gn05MalaYcdtOSHCzcunf5t6rPuKAJoVewBRmj4fz98cfpqMafwFFz/cV2JixANpRxXy zKh3xyad41cwzbyedkkII3q9M8nC/OetiCDj0lYvroQiqrtGw7SxDQdQLjAYqU6UjN6D BDZDIjFcNg4FtCDsSxp4qGEIX1ZAcJinuAy5TSUzmykMV1LNc+Ae/AtulilSTSLxenqL xeVL3ZMcHxzZ9/lH+aLgK6sU+RvPQ7MyM16pcDW+X2ma7ppXitXORxvJ8ni9odsU2BLn peMA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676860; x=1771281660; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=xtvZ/CVAyvlBIyi+KC1xnXowl3JGTClEkFZx5dHECQw=; b=WBh/6lDb+jijcZ8BfegL0q5y1epEw5lHlIZ9vgZ+gIZfSkO3k4ip6icIpAXHOslucb 1DomhUBclq4wGlY05bzFY6T8Bib+PKhgM72YbZHCOt1kHm8CoHW2J6EFkUwI37Y4A5Zm 2LnquG8csB2hLkC58Fa4Ynxo/G/COewIPM4vp3d/iU5EGVZSCiDpp5ngSTpD1wtlZVni jbk5noGWjpvT5OFB/UUDQaZFRYspKk/I0sECDWyPZIRRuL7YzHRMgmW6Fq8XqX/OD2Xg gtW3FqYAvurvNXa2mgd1MspmmkwdseViZJm8ykAf7swjcz5X7gNVRJBXs8GwYUIvfPAl ywNQ== X-Forwarded-Encrypted: i=1; AJvYcCWTPUMdGf+0WpP32Pn051QRYcfJ47kKTCB29AXJiTWG66ZbPZBrDRmE7jStO8PUFA79nvObPSAC++fg4cs=@vger.kernel.org X-Gm-Message-State: AOJu0Yy43kytvjNPQQiPPKLU9iNsE9TATdyD9TDrqm1V0NZaEtxp4BHr QE2ZmbuTkuITQf+g8/nKV+1acdnl0MrJ4APzlR/j4YrYAxUXR06MD1d69dqw7Cxkui87Kx77rZu 6nEiN9vnChbw0qae7Y56uKHATag== X-Received: from ilbbg8.prod.google.com ([2002:a05:6e02:3108:b0:482:7936:419f]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6820:4913:b0:662:fa75:d6df with SMTP id 006d021491bc7-672fdc0a914mr57705eaf.12.1770676860543; Mon, 09 Feb 2026 14:41:00 -0800 (PST) Date: Mon, 9 Feb 2026 22:14:10 +0000 In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: 
<20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-16-coltonlewis@google.com> Subject: [PATCH v6 15/19] KVM: arm64: Detect overflows for the Partitioned PMU From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When we re-enter the VM after handling a PMU interrupt, calculate whether any of the guest counters overflowed and, if so, inject an interrupt into the guest. Signed-off-by: Colton Lewis --- arch/arm64/kvm/pmu-direct.c | 30 ++++++++++++++++++++++++++++++ arch/arm64/kvm/pmu-emul.c | 4 ++-- arch/arm64/kvm/pmu.c | 6 +++++- include/kvm/arm_pmu.h | 2 ++ 4 files changed, 39 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c index 79d13a0aa2fd6..6ebb59d2aa0e7 100644 --- a/arch/arm64/kvm/pmu-direct.c +++ b/arch/arm64/kvm/pmu-direct.c @@ -378,3 +378,33 @@ void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64= pmovsr) =20 __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=3D, govf); } + +/** + * kvm_pmu_part_overflow_status() - Determine if any guest counters have o= verflowed + * @vcpu: Pointer to struct kvm_vcpu + * + * Determine if any guest counters have overflowed and therefore an + * IRQ needs to be injected into the guest. If access is still free, + * then the guest hasn't accessed the PMU yet, so we know the guest + * context is not loaded onto the pCPU and an overflow is impossible. + * + * Return: True if there was an overflow, false otherwise + */ +bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu) +{ + struct arm_pmu *pmu; + u64 mask, pmovs, pmint, pmcr; + bool overflow; + + if (vcpu->arch.pmu.access =3D=3D VCPU_PMU_ACCESS_FREE) + return false; + + pmu =3D vcpu->kvm->arch.arm_pmu; + mask =3D kvm_pmu_guest_counter_mask(pmu); + pmovs =3D __vcpu_sys_reg(vcpu, PMOVSSET_EL0); + pmint =3D read_pmintenset(); + pmcr =3D read_pmcr(); + overflow =3D (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint); + + return overflow; +} diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index a40db0d5120ff..c5438de3e5a74 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vc= pu, u64 val) * counter where the values of the global enable control, PMOVSSET_EL0[n],= and * PMINTENSET_EL1[n] are all 1.
*/ -bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu) +bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu) { u64 reg =3D __vcpu_sys_reg(vcpu, PMOVSSET_EL0); =20 @@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *pe= rf_event, kvm_pmu_counter_increment(vcpu, BIT(idx + 1), ARMV8_PMUV3_PERFCTR_CHAIN); =20 - if (kvm_pmu_overflow_status(vcpu)) { + if (kvm_pmu_emul_overflow_status(vcpu)) { kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu); =20 if (!in_nmi()) diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index b198356d772ca..72d5b7cb3d93e 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -408,7 +408,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) struct kvm_pmu *pmu =3D &vcpu->arch.pmu; bool overflow; =20 - overflow =3D kvm_pmu_overflow_status(vcpu); + if (kvm_vcpu_pmu_is_partitioned(vcpu)) + overflow =3D kvm_pmu_part_overflow_status(vcpu); + else + overflow =3D kvm_pmu_emul_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) return; =20 diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 3d922bd145d4e..93586691a2790 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -90,6 +90,8 @@ bool kvm_set_pmuserenr(u64 val); void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_resync_el0(void); +bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu); +bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu); =20 #define kvm_vcpu_has_pmu(vcpu) \ (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3)) --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oo1-f73.google.com (mail-oo1-f73.google.com [209.85.161.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CF5CC329368 for ; Mon, 9 Feb 2026 22:41:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676868; cv=none; b=mdojoBrfS3eLfXNah4dpl+VmcFLPjLTKJhQDWMz+HaRNKB6Yd1htL9yefEjyJD7NpuYNyAgkf1ieWqC9ZxbUNeihmEh9b8wkShDyTRtUruV3yRy5uY4pL7aB/0N+FnxUCLeLv1FY84AftYEcBfpxEYbBj1XlEcvbtcTd1TT80qk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676868; c=relaxed/simple; bh=IGbjYf4aDEdJzYqyHxQ48Lwc7zrh6q6MUfA91xc3mH0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Sf4oeom1UY2uCB82NvjtkTALvjoKjltxGg2o8fqhI/oSNTwJiH57sbgnBhObl92FuKKv7gux3xROPgmMVZIhRFRFgUQkvxnB5usiqiKNVqeRPxZBrMXel/3/bn6TyLbpHW7ZMKw4OZdW1+9rnnG6r4TlCE0Ywro4WEVI8zu97Ao= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=FCnRNsjs; arc=none smtp.client-ip=209.85.161.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="FCnRNsjs" Received: by mail-oo1-f73.google.com with SMTP id 006d021491bc7-6630b0a016aso1063581eaf.0 for ; Mon, 09 Feb 2026 14:41:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=google.com; s=20230601; t=1770676862; x=1771281662; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=I8vsvPCwrm4qK2tJSRhYymVMX8OUXdkNYkDGMZ9conM=; b=FCnRNsjsHKpi4R/Z7BA/JIekd/KAfoQtmFDSrv8v5si1BWF6MMUVNBl2FzT/GImE/M LgJJiXquTh4TlPkpWeLD66dJtESVNQHV4Gj5Qyz5lqcfPbi/tJNYq0XrSYttmzg3IqXA IV4eByYjAhn2icP6a2aPmoj4AuF22iTPl6B0DI5o8lOhILYmcDsWaNKHc5HVuej1F2qz hbflSOwydlkvf3j5Ip2XCVnN5oFbbVshcy62O7vZXYySJY8OuhUHrDBM6NaIesAEhEFU 9jCF002ElO1Tu079fNDi/fHk+H/tDw0A7TmuXfp8TpdGSLLssme/FNhxSTfUiTuuv6TB qDNg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676862; x=1771281662; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=I8vsvPCwrm4qK2tJSRhYymVMX8OUXdkNYkDGMZ9conM=; b=EBBugRATPu0jrTDTkw6d66oyqwPaB2v9CJq8GbxZgF2Rn4Jonu8XzG5pLDYHeVS3G0 vFIpNjpmWkCPVsl4Z9QCtzCx9ztAdFQIDEbxtcXpy0Mxomh3q+FzvIDOtxLkLKb9Ajto UhkHkyrzy4JfBEmm8C/QT2h36O2guhHf740/+56S4+c+V8qL6M3oWzOwigT0IwtEKNMp P0/n8u03ppwygHs6zrwRMtUOwFNsS+2yTwjmwCnBcn2Os316ZBL+2xQbkAVMc22Fe0Sv yWFhopCc1sRKh4ZE1aCLyvHrS378RzzgaT+fkeSZgPVrSHdUtQMMTtjL8rkbSdh8CG9X Ytew== X-Forwarded-Encrypted: i=1; AJvYcCXULxUxCDfTybaynf0wbYKhgFVkByfa8OLdXy1FvDHhHTOiiK2i+5ax1U6qVKvrNoJuzajAj2mPEDy0x8U=@vger.kernel.org X-Gm-Message-State: AOJu0YwELnb4x/VJJSLKhD/nUmXUqcbQETe6wG7vvlEGe61CvnkH6MvP ehQQ5EvWeQIFHZRDggU/YyI0boW3NnegMSBxqeo0oTtQXBq2yOcRFlBuy9YTtjfEY8CO23F8ffl Ur1SGbg7XhAcw4y69rPchz2nymg== X-Received: from iomv22.prod.google.com ([2002:a5e:d716:0:b0:957:5d25:584]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6820:1622:b0:662:f8f6:e8d1 with SMTP id 006d021491bc7-66d09ac24cdmr6817429eaf.6.1770676861761; Mon, 09 Feb 2026 14:41:01 -0800 (PST) Date: Mon, 9 Feb 2026 22:14:11 +0000 In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-17-coltonlewis@google.com> Subject: [PATCH v6 16/19] KVM: arm64: Add vCPU device attr to partition the PMU From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a new PMU device attr to enable the partitioned PMU for a given VM. This capability can be set while the PMU is initially configured, before the vCPU starts running, and is allowed where PMUv3 and VHE are supported and the host driver was configured with arm_pmuv3.reserved_host_counters. The enabled capability is tracked by the new flag KVM_ARCH_FLAG_PARTITION_PMU_ENABLED.
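For illustration, here is a minimal userspace sketch of how a VMM might probe for and set this attribute, modeled on the selftest later in this series; the vcpu_fd descriptor, the bool-sized payload, and the error handling are assumptions of the sketch, not part of this patch, and the new define requires headers from this series:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: enable the partitioned PMU for one vCPU. Assumes
 * vcpu_fd is an existing KVM vCPU fd and that this runs during PMU
 * configuration, before the vCPU first runs.
 */
static int enable_partitioned_pmu(int vcpu_fd)
{
	bool enable = true;
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr = KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION,
		.addr = (__u64)(unsigned long)&enable,
	};

	/* Returns 0 only if this kernel knows the attribute. */
	if (ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr))
		return -1;

	/* Fails with EPERM when partition support is unavailable. */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}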
Signed-off-by: Colton Lewis --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/uapi/asm/kvm.h | 2 ++ arch/arm64/kvm/pmu-direct.c | 35 ++++++++++++++++++++++++++++--- arch/arm64/kvm/pmu.c | 14 +++++++++++++ include/kvm/arm_pmu.h | 9 ++++++++ 5 files changed, 59 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 41577ede0254f..f0b0a5edc7252 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -353,6 +353,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS 10 /* Unhandled SEAs are taken to userspace */ #define KVM_ARCH_FLAG_EXIT_SEA 11 + /* Partitioned PMU Enabled */ +#define KVM_ARCH_FLAG_PARTITION_PMU_ENABLED 12 unsigned long flags; =20 /* VM-wide vCPU feature set */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/as= m/kvm.h index a792a599b9d68..3e0b7619f781d 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -436,6 +436,8 @@ enum { #define KVM_ARM_VCPU_PMU_V3_FILTER 2 #define KVM_ARM_VCPU_PMU_V3_SET_PMU 3 #define KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS 4 +#define KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION 5 + #define KVM_ARM_VCPU_TIMER_CTRL 1 #define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0 #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1 diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c index 6ebb59d2aa0e7..1dbf50b8891f6 100644 --- a/arch/arm64/kvm/pmu-direct.c +++ b/arch/arm64/kvm/pmu-direct.c @@ -44,8 +44,8 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) } =20 /** - * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partition= ed PMU - * @vcpu: Pointer to kvm_vcpu struct + * kvm_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU + * @kvm: Pointer to kvm_vcpu struct * * Determine if given VCPU has a partitioned PMU by extracting that * field and passing it to :c:func:`kvm_pmu_is_partitioned` @@ -55,7 +55,36 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) { return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) && - false; + test_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &vcpu->kvm->arch.flags); +} + +/** + * has_kvm_pmu_partition_support() - Whether we can enable/disable the par= tition + * + * Return: true if allowed, false otherwise. + */ +bool has_kvm_pmu_partition_support(void) +{ + return has_host_pmu_partition_support() && + kvm_supports_guest_pmuv3() && + armv8pmu_max_guest_counters > -1; +} + +/** + * kvm_pmu_partition_enable() - Enable/disable partition flag + * @kvm: Pointer to struct kvm + * @enable: Whether to enable or disable + * + * If we enable the partition, the guest is free to grab the + * hardware by accessing PMU registers. Otherwise, the host maintains + * control.
+ */ +void kvm_pmu_partition_enable(struct kvm *kvm, bool enable) +{ + if (enable) + set_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &kvm->arch.flags); + else + clear_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &kvm->arch.flags); } =20 /** diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 72d5b7cb3d93e..cdf51f24fdaf3 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -759,6 +759,19 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, str= uct kvm_device_attr *attr) =20 return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); } + case KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + bool enable; + + if (get_user(enable, uaddr)) + return -EFAULT; + + if (!has_kvm_pmu_partition_support()) + return -EPERM; + + kvm_pmu_partition_enable(kvm, enable); + return 0; + } case KVM_ARM_VCPU_PMU_V3_INIT: return kvm_arm_pmu_v3_init(vcpu); } @@ -798,6 +811,7 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, stru= ct kvm_device_attr *attr) case KVM_ARM_VCPU_PMU_V3_FILTER: case KVM_ARM_VCPU_PMU_V3_SET_PMU: case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + case KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION: if (kvm_vcpu_has_pmu(vcpu)) return 0; } diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 93586691a2790..ff898370fa63f 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -109,6 +109,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu); void kvm_pmu_put(struct kvm_vcpu *vcpu); =20 void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu); +bool has_kvm_pmu_partition_support(void); +void kvm_pmu_partition_enable(struct kvm *kvm, bool enable); =20 #if !defined(__KVM_NVHE_HYPERVISOR__) bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu); @@ -311,6 +313,13 @@ static inline void kvm_pmu_host_counters_enable(void) = {} static inline void kvm_pmu_host_counters_disable(void) {} static inline void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovs= r) {} =20 +static inline bool has_kvm_pmu_partition_support(void) +{ + return false; +} + +static inline void kvm_pmu_partition_enable(struct kvm *kvm, bool enable) = {} + #endif =20 #endif --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oa1-f74.google.com (mail-oa1-f74.google.com [209.85.160.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0FC0E329E43 for ; Mon, 9 Feb 2026 22:41:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.160.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676867; cv=none; b=W9u6uuZyK86pukQP9HbHT/svxjDG9Yt0GxBUfzgqeggS6rzaUBRFb3tm7do6pkEM1d/+uHYP/HDsAE7k9cSSRH5tomHuer6bKueMDhIyBkILaNsBIOpAOxTnwjFvI0lQKVA8CcP3qJHSsqxjg7NWV+qzRFYsSls9LR/U2v1br1M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676867; c=relaxed/simple; bh=pKWQ8eiEG2mrdGJZiqdIDMjf5MzF6cSVZ2+ozjk63zY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=eh3SraY7Z0GomCTxKCc2HaunfsuCz19mLiPCI9IMTGze7ThGO25sREpKXYynePzLkChXh27kH3tikZezcKAxKLGt0tcqHON+xVr/oqNJUDwMF959uDhNRnizAYwV0M0ltcjyF5huqsC2u8FHeam0wlb27yHX+t9RpDAH2YdnbeE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=F3rWxAhy; 
arc=none smtp.client-ip=209.85.160.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="F3rWxAhy" Received: by mail-oa1-f74.google.com with SMTP id 586e51a60fabf-40951019a1dso15411588fac.1 for ; Mon, 09 Feb 2026 14:41:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1770676863; x=1771281663; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=N13aFr3ll/bgxK3rlveFMtlg5cOY53+sxi39a+VtEUU=; b=F3rWxAhyu4/fpcC5n73l7lwudmeJlqrVt4V5wMoVxWD0SYq4ffjrkiSfRnFXsxHnY0 tmjWL11DgLX04eUPWDCmAMDVQKJwzbs+zHadBh+sm4yrYqs6aRJDkMlTYjqU2Z9BoPau Q9+tZYpERgaZ5uusWrAdaRkr6WwKHhvEZ7woRYMvOpH/LaNrrbeA7GgmbMfKri7a/DUg 5rC9FeHh8xL7wODmaLQfTQtg5bltYkq5ozqVXz0ms645Z91AqkJkxnQVMO3cv/naVqDl G/KIm9i9BpAeDADIfb7kae64uDEszdQ1Eu8FyPBYoUueepSNJ4jaA/iCYGR1x3ECbJNV tRzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676863; x=1771281663; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=N13aFr3ll/bgxK3rlveFMtlg5cOY53+sxi39a+VtEUU=; b=gu/8kjoxbBLgM6jqzCJrZ1044x1WSzBHBWqusBmb6HSClnMLjivcrDZdR7IquJKxqp RTbeqypkAZyjYR8jAZ/iREZsZ7CitBtNGrPG3MMvewxkGSqEH4tqUCeTbVkTg1+RVd/Z O0AFad17sL5WdbxzaXEZGaQ3vog9iTNZYiVFVVlP6CA73U8TwVTtG1F7DQvHpQznzo/j df5yBTIyhJGYsj4biAqoc7TX+UG9SdewDCmucDmIT74X8qI0KUYUYo43uD20XCSR4Qlw nQM1kmjEnTFIor8/dN30NJYlEW5JoMEksHsO6Oyx2VNdWZnBA2cwjIH8fJXWdbN2Gh/o 11IQ== X-Forwarded-Encrypted: i=1; AJvYcCVNOBP5OMMHlKVpzgVFYDQRcKiFPrTuKiIMxPlM3DgwJa7Yk56Lt8e2ePaIjBpTS8FOaPLdMRPTlaJbavs=@vger.kernel.org X-Gm-Message-State: AOJu0Yw+XuDQVUwPQirLBBlnd2js6pTT52922xjodGELEo/mFlnYZBQE MrLzWQ2cMF8gDiNIOvVQXfhiU02B+hJJfyQTrz9FT9WHbZHHdMimiN8ot8/gyhwwLvXuIIgHXfc n9xxuZEmE6PpmUrnYFNt5RY1P8A== X-Received: from iobfj4.prod.google.com ([2002:a05:6602:644:b0:957:6f27:bbe0]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6820:4b94:b0:662:f452:648f with SMTP id 006d021491bc7-66d09dacae7mr6166360eaf.17.1770676862961; Mon, 09 Feb 2026 14:41:02 -0800 (PST) Date: Mon, 9 Feb 2026 22:14:12 +0000 In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-18-coltonlewis@google.com> Subject: [PATCH v6 17/19] KVM: selftests: Add find_bit to KVM library From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Some selftests have a dependency on find_bit and weren't compiling separately 
without it, so I've added it to the KVM library here using the same method as files like rbtree.c. Signed-off-by: Colton Lewis --- tools/testing/selftests/kvm/Makefile.kvm | 1 + tools/testing/selftests/kvm/lib/find_bit.c | 1 + 2 files changed, 2 insertions(+) create mode 100644 tools/testing/selftests/kvm/lib/find_bit.c diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selft= ests/kvm/Makefile.kvm index ba5c2b643efaa..1f7465348e545 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -5,6 +5,7 @@ all: =20 LIBKVM +=3D lib/assert.c LIBKVM +=3D lib/elf.c +LIBKVM +=3D lib/find_bit.c LIBKVM +=3D lib/guest_modes.c LIBKVM +=3D lib/io.c LIBKVM +=3D lib/kvm_util.c diff --git a/tools/testing/selftests/kvm/lib/find_bit.c b/tools/testing/sel= ftests/kvm/lib/find_bit.c new file mode 100644 index 0000000000000..67d9d9cbca85c --- /dev/null +++ b/tools/testing/selftests/kvm/lib/find_bit.c @@ -0,0 +1 @@ +#include "../../../../lib/find_bit.c" --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oa1-f73.google.com (mail-oa1-f73.google.com [209.85.160.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4914432AAD1 for ; Mon, 9 Feb 2026 22:41:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.160.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676869; cv=none; b=NIo6Y00fx8f5EnvZmxREl5z2k6FupKI5CBcyq9DofCoxuCPjJKRovXp6frA6QWNdL2AiUqbJhIUyK6SQKNmFKw44L+ll/qFBJ/JxNEJ5vxyL3J7k7kNv8vYKD/sXj+QRKMrIg6hupYFgLVsRSyT4f2DPVtwhDswcnEX/9yGcxrE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676869; c=relaxed/simple; bh=H04nlaZglKH5c2S6TSbb16Bnz8/pDkIa7tY4/oH6tUA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=tw1FjRVlZDdFIQWO6TOTYJ/hEQYiE05tDRO3M5yu3TDYOoGHjgGj3Qcnic6RG1KUf0hdCtE/SQcv89/NqSO/hVqD3YAp1iJDTPdrZ3WytuvJXg5cbCjYzQGii7e9QCLqiF/FuzE6J8rH+QKb0sEfZnpn02ft5lvvs5IgjjqGdOc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=f5cTlFOw; arc=none smtp.client-ip=209.85.160.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="f5cTlFOw" Received: by mail-oa1-f73.google.com with SMTP id 586e51a60fabf-4094c2f62f3so15450689fac.0 for ; Mon, 09 Feb 2026 14:41:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1770676864; x=1771281664; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=2S8QK1DUpnxCvBG6dnxfgC3sjGb7yD/nYkqrFgFEa7w=; b=f5cTlFOwPJ4YJKThIn27IVUjeeTfH1jEfF4msDA/ayFtawEWe7IyjCHYb9/ljFt0PB rF3we89/ORL9R+WSGhxtPJLEKMtSKj+Iw4gZsxJGGVkPu9vb2sf36I142veSMrtOpoE0 DJMGMsfEvidC/07NyxHR220HNwKMo6Hy9UQrnt5wMmHALe1TUwl/ql7ufF/HcaRFgvVT /ntp4htKprmEcWF8gl6CmRE7LJDe1VTrfy/Z2VaCHZovDyXItQF9Ahd8Azv9VDNA2oJl 
rVhyRFCm++ujnk6Kl/fuyI0CsBPSEyVThO42v4N3Ol3c2zxZQvzUmW06qkdidDBBb4P6 r0VQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676864; x=1771281664; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=2S8QK1DUpnxCvBG6dnxfgC3sjGb7yD/nYkqrFgFEa7w=; b=ZJcB2vfcwA7VcXDTtJmpKRecEylvBxJ4Tra2zyGxU8OSP6jw/9ADCBWIgjriDycCld XZZuK7ICcNPIQKVEOYfrfS295AVXfxQyoEwqm/7fG/qJaOiHy5G7StetRnrtwWYL5FWk 3DRbT1bNXSd3Y9qX1/LXpioH7zDi6w1vuXgAgWU0+SmaH4fHUoiTTM2uTCro+LPn1h2x OfDr7jQFBLjDvApg6kzow0fhmOqZaOeCEzgMI17TWaS5jqkPQiqdAO7Xuz5GFaZjJoyA pTuYXNJc6O6CQ5kc9Vv4PG2qk8uASo115o7YFpIaXrHE9/1rlyxZJejv8zfteavf3+8X n/cg== X-Forwarded-Encrypted: i=1; AJvYcCWMb9jkwzsH7TZUwSTGxiPm0GV7+s6cKSNpHXj+QTwxZsV5azp56mRkV6CNqQzVQOdsERw+QsjIcSaciO8=@vger.kernel.org X-Gm-Message-State: AOJu0YxwZmW3CglaqIXVrpwd+CEZ4mN9ViSZt4gdnrejP/Ad7ANG/LbD 1Qfueq2uQaMx8HX0Fk08S0w7kvLBYI3O3qnZ3BvtZYkMdqwMJ7UdazGtw8m3Ei9v41T4fpYVVU8 t4C7A0p7kFZqxDKAz+2ODoGF+Zg== X-Received: from oapz39.prod.google.com ([2002:a05:6870:d6a7:b0:409:b837:9227]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6871:d083:b0:404:3635:c885 with SMTP id 586e51a60fabf-40a96fccba3mr6975342fac.32.1770676864078; Mon, 09 Feb 2026 14:41:04 -0800 (PST) Date: Mon, 9 Feb 2026 22:14:13 +0000 In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-19-coltonlewis@google.com> Subject: [PATCH v6 18/19] KVM: arm64: selftests: Add test case for partitioned PMU From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Rerun all tests for a partitioned PMU in vpmu_counter_access. Create an enum specifying whether we are testing the emulated or partitioned PMU, modify all the test functions to take the implementation as an argument, and have them adjust their setup appropriately.
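In brief, the resulting top-level flow looks like this (abridged from the diff below): the full suite always runs against the emulated PMU, and runs a second time against the partitioned PMU when the new attribute is supported.

int main(void)
{
	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
	TEST_REQUIRE(kvm_supports_vgic_v3());
	TEST_REQUIRE(kvm_supports_nr_counters_attr());

	/* Always exercise the emulated PMU... */
	test_pmu(EMULATED);

	/* ...and the partitioned PMU only where KVM supports it. */
	if (kvm_supports_partition_attr())
		test_pmu(PARTITIONED);

	return 0;
}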
Signed-off-by: Colton Lewis --- .../selftests/kvm/arm64/vpmu_counter_access.c | 94 ++++++++++++++----- 1 file changed, 73 insertions(+), 21 deletions(-) diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tool= s/testing/selftests/kvm/arm64/vpmu_counter_access.c index ae36325c022fb..9702f1d43b832 100644 --- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c @@ -25,9 +25,20 @@ /* The cycle counter bit position that's common among the PMU registers */ #define ARMV8_PMU_CYCLE_IDX 31 =20 +enum pmu_impl { + EMULATED, + PARTITIONED +}; + +const char *pmu_impl_str[] =3D { + "Emulated", + "Partitioned" +}; + struct vpmu_vm { struct kvm_vm *vm; struct kvm_vcpu *vcpu; + bool pmu_partitioned; }; =20 static struct vpmu_vm vpmu_vm; @@ -399,7 +410,7 @@ static void guest_code(uint64_t expected_pmcr_n) } =20 /* Create a VM that has one vCPU with PMUv3 configured. */ -static void create_vpmu_vm(void *guest_code) +static void create_vpmu_vm(void *guest_code, enum pmu_impl impl) { struct kvm_vcpu_init init; uint8_t pmuver, ec; @@ -409,6 +420,13 @@ static void create_vpmu_vm(void *guest_code) .attr =3D KVM_ARM_VCPU_PMU_V3_IRQ, .addr =3D (uint64_t)&irq, }; + bool partition =3D (impl =3D=3D PARTITIONED); + struct kvm_device_attr part_attr =3D { + .group =3D KVM_ARM_VCPU_PMU_V3_CTRL, + .attr =3D KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION, + .addr =3D (uint64_t)&partition + }; + int ret; =20 /* The test creates the vpmu_vm multiple times. Ensure a clean state */ memset(&vpmu_vm, 0, sizeof(vpmu_vm)); @@ -436,6 +454,15 @@ static void create_vpmu_vm(void *guest_code) "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver); =20 vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); + + ret =3D __vcpu_has_device_attr( + vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_CTRL, KVM_ARM_VCPU_PMU_V3_ENABLE_PARTI= TION); + if (!ret) { + vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &part_attr); + vpmu_vm.pmu_partitioned =3D partition; + pr_debug("Set PMU partitioning: %d\n", partition); + } + } =20 static void destroy_vpmu_vm(void) @@ -461,13 +488,14 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t = pmcr_n) } } =20 -static void test_create_vpmu_vm_with_nr_counters(unsigned int nr_counters,= bool expect_fail) +static void test_create_vpmu_vm_with_nr_counters( + unsigned int nr_counters, enum pmu_impl impl, bool expect_fail) { struct kvm_vcpu *vcpu; unsigned int prev; int ret; =20 - create_vpmu_vm(guest_code); + create_vpmu_vm(guest_code, impl); vcpu =3D vpmu_vm.vcpu; =20 prev =3D get_pmcr_n(vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0))); @@ -489,7 +517,7 @@ static void test_create_vpmu_vm_with_nr_counters(unsign= ed int nr_counters, bool * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_= n, * and run the test. */ -static void run_access_test(uint64_t pmcr_n) +static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl) { uint64_t sp; struct kvm_vcpu *vcpu; @@ -497,7 +525,7 @@ static void run_access_test(uint64_t pmcr_n) =20 pr_debug("Test with pmcr_n %lu\n", pmcr_n); =20 - test_create_vpmu_vm_with_nr_counters(pmcr_n, false); + test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false); vcpu =3D vpmu_vm.vcpu; =20 /* Save the initial sp to restore them later to run the guest again */ @@ -531,14 +559,14 @@ static struct pmreg_sets validity_check_reg_sets[] = =3D { * Create a VM, and check if KVM handles the userspace accesses of * the PMU register sets in @validity_check_reg_sets[] correctly. 
*/ -static void run_pmregs_validity_test(uint64_t pmcr_n) +static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl) { int i; struct kvm_vcpu *vcpu; uint64_t set_reg_id, clr_reg_id, reg_val; uint64_t valid_counters_mask, max_counters_mask; =20 - test_create_vpmu_vm_with_nr_counters(pmcr_n, false); + test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false); vcpu =3D vpmu_vm.vcpu; =20 valid_counters_mask =3D get_counters_mask(pmcr_n); @@ -588,11 +616,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n) * the vCPU to @pmcr_n, which is larger than the host value. * The attempt should fail as @pmcr_n is too big to set for the vCPU. */ -static void run_error_test(uint64_t pmcr_n) +static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl) { - pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); + pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pm= cr_n); =20 - test_create_vpmu_vm_with_nr_counters(pmcr_n, true); + test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, true); destroy_vpmu_vm(); } =20 @@ -600,11 +628,11 @@ static void run_error_test(uint64_t pmcr_n) * Return the default number of implemented PMU event counters excluding * the cycle counter (i.e. PMCR_EL0.N value) for the guest. */ -static uint64_t get_pmcr_n_limit(void) +static uint64_t get_pmcr_n_limit(enum pmu_impl impl) { uint64_t pmcr; =20 - create_vpmu_vm(guest_code); + create_vpmu_vm(guest_code, impl); pmcr =3D vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0)); destroy_vpmu_vm(); return get_pmcr_n(pmcr); @@ -614,7 +642,7 @@ static bool kvm_supports_nr_counters_attr(void) { bool supported; =20 - create_vpmu_vm(NULL); + create_vpmu_vm(NULL, EMULATED); supported =3D !__vcpu_has_device_attr(vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_C= TRL, KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS); destroy_vpmu_vm(); @@ -622,22 +650,46 @@ static bool kvm_supports_nr_counters_attr(void) return supported; } =20 -int main(void) +static bool kvm_supports_partition_attr(void) +{ + bool supported; + + create_vpmu_vm(NULL, EMULATED); + supported =3D !__vcpu_has_device_attr(vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_C= TRL, + KVM_ARM_VCPU_PMU_V3_ENABLE_PARTITION); + destroy_vpmu_vm(); + + return supported; +} + +void test_pmu(enum pmu_impl impl) { uint64_t i, pmcr_n; =20 - TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); - TEST_REQUIRE(kvm_supports_vgic_v3()); - TEST_REQUIRE(kvm_supports_nr_counters_attr()); + pr_info("Testing PMU: Implementation =3D %s\n", pmu_impl_str[impl]); + + pmcr_n =3D get_pmcr_n_limit(impl); + pr_debug("PMCR_EL0.N: Limit =3D %lu\n", pmcr_n); =20 - pmcr_n =3D get_pmcr_n_limit(); for (i =3D 0; i <=3D pmcr_n; i++) { - run_access_test(i); - run_pmregs_validity_test(i); + run_access_test(i, impl); + run_pmregs_validity_test(i, impl); } =20 for (i =3D pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) - run_error_test(i); + run_error_test(i, impl); +} + +int main(void) +{ + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); + TEST_REQUIRE(kvm_supports_vgic_v3()); + TEST_REQUIRE(kvm_supports_nr_counters_attr()); + + test_pmu(EMULATED); + + if (kvm_supports_partition_attr()) + test_pmu(PARTITIONED); =20 return 0; } --=20 2.53.0.rc2.204.g2597b5adb4-goog From nobody Tue Feb 10 20:28:56 2026 Received: from mail-oo1-f74.google.com (mail-oo1-f74.google.com [209.85.161.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E96732B9A4 for ; Mon, 9 Feb 2026 22:41:06 +0000 (UTC) 
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676870; cv=none; b=T+q5GgKAdpLL/OAWcJq1/mQjeIHzJSzToM6FcnzRZCEVayBcVRiZRMEJv30BNmgdo9UpQSUYLF1JjABnSbSrztRapthOWT+RWbAEW5ifoa0Yb/kF8qoj2ktKKl9hlrHn7WWYMzXLWBgDZoKllg2TaR/GeQLC0NbVRvqN4DoLtq0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1770676870; c=relaxed/simple; bh=SHVJoQZ+YflPOx5JrxGn/C/DEC5GjFnR/d7ZsgewPiA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=K62Mwrgxr8a8yBElxOTLSOu48aCMpLh+CrhfuLaytkYGPRq2APjdOL6GjgF121ekImLdHlAY6LqW6Kev1Vh42c06WSRpsOtoDD66OE1NhtTgdOLdo7WdaxW9k7D2B1xZwvFo4OxE1+sxkJIJlbnVVFpv1kTVskrGnVrfB7vlwLo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Pw/KYaU+; arc=none smtp.client-ip=209.85.161.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Pw/KYaU+" Received: by mail-oo1-f74.google.com with SMTP id 006d021491bc7-662f839d680so9455161eaf.2 for ; Mon, 09 Feb 2026 14:41:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1770676865; x=1771281665; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=/TKm1gU8lTLqEcn/8yRmMV0Zp5pYjSJjWs0bsDB+GUI=; b=Pw/KYaU+p47J7laFb8fSu+8hF3BsYRID8xNIgkrXkcmy+RRBqixWMeT9jZhtrlVw+Y ixbrgc00LemqRbOGxA1onA2IiqPtldaMvXiEKm9lnSZn5RLSjy8mLzm/vPyfOEh3MCMU EZErxxS7e1QLWSLAU60HhwxYSSTKl45QBBi93y03k32/SpD30eDXeJaapGZKwIEm4k++ uKSFGpifrk7ibEsUghklcJ269upu9a7BM1PxHGzSZhq1QBbmoDgFJPOnIcjrkErti90E 2LkjWOCYZWrtBWOPRefkt54hopSRjAN95dx17iR1rVb8maf6uaUOxIxLACkJwLcs17p4 H8ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1770676865; x=1771281665; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=/TKm1gU8lTLqEcn/8yRmMV0Zp5pYjSJjWs0bsDB+GUI=; b=AJRHxVavkzjzvorgNUL1CQJEQHdDLclYKD0iBxITYkYSC5xqLuEpoXHPEbhLzzlBUY no2mFdtsFArDy68r/aVVGzysPASkiMu4u3ZmjjQ8Ia35aqBS+OhqzSFNba+oZAySBltp ddA2LxWbMiGqMRUZGHhNj+IK/0mdMuRaSSxBMGAQFV4T4gs30mvicGwuQi7Oal3MgMkW Pp9eOkynOzkEtWAG1LLseSbgQ41COUjqLvXeZyWtbT3KX++LOszMTI6DL31nITmkcO4J MPu+8Oxjul2fwmqw9yluGJqh7G3GomtHNsu+ZLENY7/qM6YCJnF5h3cr2fOOsXu6jiDY svXA== X-Forwarded-Encrypted: i=1; AJvYcCV8t4aChjR7w5cYaIJ5k6D7d0bpvtpXkxstrINxhjNdpZEhFMiT7DFucuHPy1hrKmkU8cQv/bXC8uerlL0=@vger.kernel.org X-Gm-Message-State: AOJu0YzgQfLYpc5UUaE+5UDgr86iRzpV3VYy0kPNx+5H4XT4sCsRtQQK M5ryWRs8KNp5QgNandxV8Nj6M6zakkcC4nTfvS+69tHZfQEFrNFIvyaMq9pkxGq/dFDe+ziKOi3 bmFOnWdBZHRuQ5MyBYaOqrQEdRw== X-Received: from jaox17.prod.google.com ([2002:a05:6638:111:b0:5ca:fdfb:2007]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6820:f015:b0:663:610:cb67 with SMTP id 006d021491bc7-66d0a477b40mr5663839eaf.28.1770676865086; Mon, 09 Feb 2026 14:41:05 -0800 (PST) Date: Mon, 9 Feb 2026 22:14:14 +0000 In-Reply-To: 
<20260209221414.2169465-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260209221414.2169465-1-coltonlewis@google.com> X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog Message-ID: <20260209221414.2169465-20-coltonlewis@google.com> Subject: [PATCH v6 19/19] KVM: arm64: selftests: Relax testing for exceptions when partitioned From: Colton Lewis To: kvm@vger.kernel.org Cc: Alexandru Elisei , Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , Ganapatrao Kulkarni , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Because the Partitioned PMU must lean heavily on underlying hardware support, it can't guarantee an exception occurs when accessing an invalid PMC index. The ARM manual specifies that accessing PMEVCNTR<n>_EL0 where n is greater than the number of counters on the system is constrained unpredictable when FEAT_FGT is not implemented, and it is desired that the Partitioned PMU still work without FEAT_FGT. Though KVM could enforce exceptions here, since all PMU accesses without FEAT_FGT are trapped, that creates further difficulties. For example, the manual also says that after writing a value to PMSELR_EL0 greater than the number of counters on a system, direct reads will return an unknown value, meaning KVM could not rely on the hardware register to hold the correct value. Signed-off-by: Colton Lewis --- .../selftests/kvm/arm64/vpmu_counter_access.c | 20 ++++++++++++++----- 1 file changed, 15 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tool= s/testing/selftests/kvm/arm64/vpmu_counter_access.c index 9702f1d43b832..27b7d7b2a059a 100644 --- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c @@ -38,10 +38,14 @@ const char *pmu_impl_str[] =3D { struct vpmu_vm { struct kvm_vm *vm; struct kvm_vcpu *vcpu; +}; + +struct guest_context { bool pmu_partitioned; }; =20 static struct vpmu_vm vpmu_vm; +static struct guest_context guest_context; =20 struct pmreg_sets { uint64_t set_reg_id; @@ -342,11 +346,16 @@ static void test_access_invalid_pmc_regs(struct pmc_a= ccessor *acc, int pmc_idx) /* * Reading/writing the event count/type registers should cause * an UNDEFINED exception. + * + * If the PMU is partitioned, we can't guarantee an exception + * because the hardware doesn't guarantee one. */ - TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->read_cntr(pmc_idx)); - TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0)); - TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->read_typer(pmc_idx)); - TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->write_typer(pmc_idx, 0)); + if (!guest_context.pmu_partitioned) { + TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->read_cntr(pmc_idx)); + TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0)); + TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->read_typer(pmc_idx)); + TEST_EXCEPTION(ESR_ELx_EC_UNKNOWN, acc->write_typer(pmc_idx, 0)); + } /* * The bit corresponding to the (unimplemented) counter in * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ.
@@ -459,7 +468,7 @@ static void create_vpmu_vm(void *guest_code, enum pmu_i= mpl impl) vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_CTRL, KVM_ARM_VCPU_PMU_V3_ENABLE_PARTI= TION); if (!ret) { vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &part_attr); - vpmu_vm.pmu_partitioned =3D partition; + guest_context.pmu_partitioned =3D partition; pr_debug("Set PMU partitioning: %d\n", partition); } =20 @@ -511,6 +520,7 @@ static void test_create_vpmu_vm_with_nr_counters( TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret)); =20 vcpu_device_attr_set(vcpu, KVM_ARM_VCPU_PMU_V3_CTRL, KVM_ARM_VCPU_PMU_V3_= INIT, NULL); + sync_global_to_guest(vpmu_vm.vm, guest_context); } =20 /* --=20 2.53.0.rc2.204.g2597b5adb4-goog