From nobody Tue Feb 10 22:00:04 2026
Date: Mon, 9 Feb 2026 22:14:06 +0000
In-Reply-To: <20260209221414.2169465-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20260209221414.2169465-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog
Message-ID: <20260209221414.2169465-12-coltonlewis@google.com>
Subject: [PATCH v6 11/19] KVM: arm64: Context swap Partitioned PMU guest registers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis
Content-Type: text/plain; charset="utf-8"

Save and restore newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned.

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If we know we are not partitioned (that is, using the emulated vPMU),
then return immediately. A later patch will make this lazy so the
context swaps don't happen unless the guest has accessed the PMU.

PMEVTYPER is handled in a following patch since we must apply the KVM
event filter before writing values to hardware.

PMOVS guest counters are cleared to avoid the possibility of
generating spurious interrupts when PMINTEN is written. This is fine
because the virtual register for PMOVS is always the canonical value.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/arm.c        |   2 +
 arch/arm64/kvm/pmu-direct.c | 123 ++++++++++++++++++++++++++++++++++++
 include/kvm/arm_pmu.h       |   4 ++
 3 files changed, 129 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 620a465248d1b..adbe79264c032 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -635,6 +635,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -676,6 +677,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index f2e6b1eea8bd6..b07b521543478 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -9,6 +9,7 @@
 #include 
 
 #include 
+#include 
 
 /**
  * has_host_pmu_partition_support() - Determine if partitioning is possible
@@ -163,3 +164,125 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return *host_data_ptr(nr_event_counters);
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU.
+ * Mask to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	unsigned long guest_counters;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't guest-owned then we know the guest isn't using
+	 * the PMU anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return;
+
+	preempt_disable();
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+
+		write_sysreg(i, pmselr_el0);
+		write_sysreg(val, pmxevcntr_el0);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_sysreg(val, pmselr_el0);
+
+	/* Restore only the stateful writable bits. */
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	mask = ARMV8_PMU_PMCR_MASK &
+		~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
+	write_sysreg(val & mask, pmcr_el0);
+
+	/*
+	 * When handling these:
+	 * 1. Apply only the bits for guest counters (indicated by mask)
+	 * 2. Use the different registers for set and clear
+	 */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	/*
+	 * Clear the hardware overflow flags so there is no chance of
+	 * creating spurious interrupts. The hardware here is never
+	 * the canonical version anyway.
+	 */
+	write_sysreg(mask, pmovsclr_el0);
+
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_sysreg(val & mask, pmcntenset_el0);
+	write_sysreg(~val & mask, pmcntenclr_el0);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_sysreg(val & mask, pmintenset_el1);
+	write_sysreg(~val & mask, pmintenclr_el1);
+
+	preempt_enable();
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Put all untrapped PMU registers from the PCPU back into the
+ * VCPU. Mask to only bits belonging to guest-reserved counters and
+ * leave host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	unsigned long guest_counters;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't guest-owned then we know the guest is not
+	 * accessing the PMU anyway, so no need to bother with the
+	 * swap.
+	 */
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return;
+
+	preempt_disable();
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		write_sysreg(i, pmselr_el0);
+		val = read_sysreg(pmxevcntr_el0);
+
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_sysreg(pmselr_el0);
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_sysreg(pmcr_el0);
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	val = read_sysreg(pmcntenset_el0);
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_sysreg(pmintenset_el1);
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+
+	preempt_enable();
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8fab533fa3ebc..93ccda941aa46 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -100,6 +100,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -173,6 +175,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
-- 
2.53.0.rc2.204.g2597b5adb4-goog