From nobody Tue Feb 10 22:00:05 2026
Date: Mon, 9 Feb 2026 22:14:07 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
References: <20260209221414.2169465-1-coltonlewis@google.com>
Message-ID: <20260209221414.2169465-13-coltonlewis@google.com>
Subject: [PATCH v6 12/19] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
 Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="utf-8"

The KVM API for event filtering guarantees that counters do not count
when the event is blocked by the event filter. To enforce that, the
event filter must be rechecked on every load since it might have
changed since the last time the guest wrote a value. If an event is
filtered, exclude counting at all exception levels before writing the
hardware.

Signed-off-by: Colton Lewis
---
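For reviewers, a minimal userspace sketch of the filter this patch
enforces, using the existing KVM_ARM_VCPU_PMU_V3_FILTER vcpu attribute.
The helper name and the denied event are illustrative only and not part
of this series:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Deny a single PMU event for the whole VM. KVM records the filter in
 * kvm->arch.pmu_filter, which kvm_pmu_apply_event_filter() below now
 * rechecks on every vcpu_load().
 */
static int pmu_deny_event(int vcpu_fd, __u16 event)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = event,	/* first event in the range */
		.nevents = 1,		/* filter exactly one event */
		.action = KVM_PMU_EVENT_DENY,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr = (__u64)&filter,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}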
 arch/arm64/kvm/pmu-direct.c | 48 +++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index b07b521543478..4bcacc55c507f 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -165,6 +165,53 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Restrict loaded counters to unfiltered events
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	unsigned long guest_counters = kvm_pmu_guest_counter_mask(pmu);
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+		ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	bool guest_include_el2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		if (i == ARMV8_PMU_CYCLE_IDX) {
+			val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+			evsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+			evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+		}
+
+		guest_include_el2 = (val & ARMV8_PMU_INCLUDE_EL2);
+		val &= ~evtyper_clr;
+
+		if (unlikely(is_hyp_ctxt(vcpu)) && guest_include_el2)
+			val &= ~ARMV8_PMU_EXCLUDE_EL1;
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		write_sysreg(i, pmselr_el0);
+		write_sysreg(val, pmxevtyper_el0);
+	}
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -192,6 +239,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 
 	pmu = vcpu->kvm->arch.arm_pmu;
 	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+	kvm_pmu_apply_event_filter(vcpu);
 
 	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
-- 
2.53.0.rc2.204.g2597b5adb4-goog