From nobody Thu Dec 18 19:25:18 2025
Date: Tue, 9 Dec 2025 20:51:15 +0000
In-Reply-To:
 <20251209205121.1871534-1-coltonlewis@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.52.0.239.gd5f0c6e74e-goog
Message-ID: <20251209205121.1871534-19-coltonlewis@google.com>
Subject: [PATCH v5 18/24] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

The KVM API for event filtering guarantees that counters do not count
while blocked by the event filter. To enforce that, the event filter
must be rechecked on every load, since it may have changed since the
last time the guest wrote a value. If the event is filtered, exclude
counting at all exception levels before writing the hardware register.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/pmu-direct.c | 44 +++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 71977d24f489a..8d0d6d1a0d851 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -221,6 +221,49 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return nr_host_cnt_max;
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Apply the guest's PMU event filter
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+		ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		val &= ~evtyper_clr;
+		write_pmevtypern(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(ARMV8_PMUV3_PERFCTR_CPU_CYCLES, vcpu->kvm->arch.pmu_filter))
+		val |= evtyper_set;
+
+	val &= ~evtyper_clr;
+	write_pmccfiltr(val);
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -244,6 +287,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 		return;
 
 	pmu = vcpu->kvm->arch.arm_pmu;
+	kvm_pmu_apply_event_filter(vcpu);
 
 	for (i = 0; i < pmu->hpmn_max; i++) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
-- 
2.52.0.239.gd5f0c6e74e-goog