Date: Mon, 9 Feb 2026 22:14:10 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20260209221414.2169465-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog
Message-ID: <20260209221414.2169465-16-coltonlewis@google.com>
Subject: [PATCH v6 15/19] KVM: arm64: Detect overflows for the Partitioned PMU
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
 Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang,
 Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="utf-8"

When re-entering the VM after handling a PMU interrupt, determine
whether any of the guest counters overflowed and, if so, inject an
interrupt into the guest.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/pmu-direct.c | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/pmu-emul.c   |  4 ++--
 arch/arm64/kvm/pmu.c        |  6 +++++-
 include/kvm/arm_pmu.h       |  2 ++
 4 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 79d13a0aa2fd6..6ebb59d2aa0e7 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -378,3 +378,33 @@ void kvm_pmu_handle_guest_irq(struct arm_pmu *pmu, u64 pmovsr)
 
 	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
 }
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest. If access is still free,
+ * then the guest hasn't accessed the PMU yet, so we know the guest
+ * context is not loaded onto the pCPU and an overflow is impossible.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	u64 mask, pmovs, pmint, pmcr;
+	bool overflow;
+
+	if (vcpu->arch.pmu.access == VCPU_PMU_ACCESS_FREE)
+		return false;
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+	mask = kvm_pmu_guest_counter_mask(pmu);
+	pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	pmint = read_pmintenset();
+	pmcr = read_pmcr();
+	overflow = (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+
+	return overflow;
+}
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a40db0d5120ff..c5438de3e5a74 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 		kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
 					  ARMV8_PMUV3_PERFCTR_CHAIN);
 
-	if (kvm_pmu_overflow_status(vcpu)) {
+	if (kvm_pmu_emul_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 
 		if (!in_nmi())
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index b198356d772ca..72d5b7cb3d93e 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -408,7 +408,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	bool overflow;
 
-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
 	if (pmu->irq_level == overflow)
 		return;
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 3d922bd145d4e..93586691a2790 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -90,6 +90,8 @@ bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu);
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu);
 
 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
-- 
2.53.0.rc2.204.g2597b5adb4-goog
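
For readers skimming the series, below is a minimal, self-contained sketch in
plain userspace C (not kernel code) of the injection condition that
kvm_pmu_part_overflow_status() evaluates on VM re-entry: an interrupt is
injected only when PMCR_EL0.E is set and at least one guest-owned counter has
both its overflow flag and its interrupt enable bit set. The helper name, the
PMCR_E constant, and the register values are illustrative stand-ins, not real
system register reads.

/*
 * Hypothetical stand-alone illustration of the overflow check. The values
 * below stand in for PMCR_EL0, the guest counter mask, PMOVSSET_EL0, and
 * PMINTENSET_EL1; they are made up for the example.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PMCR_E	(1u << 0)	/* PMCR_EL0.E: global counter enable */

/*
 * Same logical test as the patch: global enable set AND some guest-owned
 * counter has both its overflow flag and its interrupt enable bit set.
 */
static bool guest_overflow_pending(uint64_t pmcr, uint64_t guest_mask,
				   uint64_t pmovsset, uint64_t pmintenset)
{
	return (pmcr & PMCR_E) && (guest_mask & pmovsset & pmintenset);
}

int main(void)
{
	uint64_t guest_mask = 0xf;	/* guest owns counters 0-3 */
	uint64_t pmovsset = 1u << 2;	/* counter 2 overflowed */
	uint64_t pmintenset = 1u << 2;	/* counter 2 IRQ enabled */

	printf("inject IRQ: %d\n",
	       guest_overflow_pending(PMCR_E, guest_mask, pmovsset,
				      pmintenset));
	return 0;
}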