Subject: [PATCH v5 01/24] arm64: cpufeature: Add cpucap for HPMN0
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:50:58 +0000
Message-ID: <20251209205121.1871534-2-coltonlewis@google.com>
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
is allowed to specify that 0 counters are reserved for the guest.

This required changing HPMN0 to an UnsignedEnum in tools/sysreg, because
otherwise not all of the macros needed to add the field to the
arm64_features table of struct arm64_cpu_capabilities are generated.

Acked-by: Mark Rutland
Signed-off-by: Colton Lewis
Reviewed-by: Suzuki K Poulose
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/kvm/sys_regs.c      | 3 ++-
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e25b0f84a22da..ceddc55eb30a0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -555,6 +555,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2898,6 +2899,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "HPMN0",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ec3fbe0b8d525..c636840b1f6f9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3214,7 +3214,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 		   ID_AA64DFR0_EL1_DoubleLock_MASK |
 		   ID_AA64DFR0_EL1_WRPs_MASK |
 		   ID_AA64DFR0_EL1_PMUVer_MASK |
-		   ID_AA64DFR0_EL1_DebugVer_MASK),
+		   ID_AA64DFR0_EL1_DebugVer_MASK |
+		   ID_AA64DFR0_EL1_HPMN0_MASK),
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 1b32c1232d28d..8efa6a437515d 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -41,6 +41,7 @@ HAS_GICV5_LEGACY
 HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 1c6cdf9d54bba..24d20138ea664 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1666,9 +1666,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.52.0.239.gd5f0c6e74e-goog
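
As a point of reference, a system capability registered in arm64_features
is consumed through the existing arm64 cpufeature helpers. A minimal
sketch, assuming only the ARM64_HAS_HPMN0 cpucap added above; the caller
itself is hypothetical:

#include <asm/cpufeature.h>

/* Hypothetical caller: treat HPMN == 0 as a valid guest configuration
 * only when FEAT_HPMN0 was detected system-wide via the new cpucap. */
static bool hpmn_zero_allowed(void)
{
	return cpus_have_final_cap(ARM64_HAS_HPMN0);
}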
Subject: [PATCH v5 02/24] KVM: arm64: Move arm_{psci,hypercalls}.h to an internal KVM path
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:50:59 +0000
Message-ID: <20251209205121.1871534-3-coltonlewis@google.com>

From: Anish Ghulati

Move arm_hypercalls.h and arm_psci.h into arch/arm64/kvm now that KVM
no longer supports 32-bit ARM, i.e. now that there is no reason to make
the hypercall and PSCI APIs "public".
Signed-off-by: Anish Ghulati
[sean: squash into one patch, write changelog]
Signed-off-by: Sean Christopherson
Message-ID: <20250611001042.170501-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini
---
 arch/arm64/kvm/arm.c                         | 5 +++--
 {include => arch/arm64}/kvm/arm_hypercalls.h | 0
 {include => arch/arm64}/kvm/arm_psci.h       | 0
 arch/arm64/kvm/guest.c                       | 2 +-
 arch/arm64/kvm/handle_exit.c                 | 2 +-
 arch/arm64/kvm/hyp/Makefile                  | 6 +++---
 arch/arm64/kvm/hyp/include/hyp/switch.h      | 4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c             | 4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c              | 4 ++--
 arch/arm64/kvm/hypercalls.c                  | 4 ++--
 arch/arm64/kvm/psci.c                        | 4 ++--
 arch/arm64/kvm/pvtime.c                      | 2 +-
 arch/arm64/kvm/trng.c                        | 2 +-
 13 files changed, 20 insertions(+), 19 deletions(-)
 rename {include => arch/arm64}/kvm/arm_hypercalls.h (100%)
 rename {include => arch/arm64}/kvm/arm_psci.h (100%)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 052bf0d4d0b03..d1750d6058dfd 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -41,9 +41,10 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
 #include <kvm/arm_pmu.h>
-#include <kvm/arm_psci.h>
+
+#include "arm_hypercalls.h"
+#include "arm_psci.h"
 
 #include "sys_regs.h"
 
diff --git a/include/kvm/arm_hypercalls.h b/arch/arm64/kvm/arm_hypercalls.h
similarity index 100%
rename from include/kvm/arm_hypercalls.h
rename to arch/arm64/kvm/arm_hypercalls.h
diff --git a/include/kvm/arm_psci.h b/arch/arm64/kvm/arm_psci.h
similarity index 100%
rename from include/kvm/arm_psci.h
rename to arch/arm64/kvm/arm_psci.h
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 1c87699fd886e..863b351ae1221 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -18,7 +18,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_hypercalls.h>
 #include <...>
 #include <...>
 #include <...>
@@ -27,6 +26,7 @@
 #include <...>
 #include <...>
 
+#include "arm_hypercalls.h"
 #include "trace.h"
 
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index cc7d5d1709cb8..66740520f2166 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -22,7 +22,7 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
+#include "arm_hypercalls.h"
 
 #define CREATE_TRACE_POINTS
 #include "trace_handle_exit.h"
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index d61e44642f980..b1a4884446c69 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -3,8 +3,8 @@
 # Makefile for Kernel-based Virtual Machine module, HYP part
 #
 
-incdir := $(src)/include
-subdir-asflags-y := -I$(incdir)
-subdir-ccflags-y := -I$(incdir)
+hyp_includes := -I$(src)/include -I$(srctree)/arch/arm64/kvm
+subdir-asflags-y := $(hyp_includes)
+subdir-ccflags-y := $(hyp_includes)
 
 obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index c5d5e5b86eaf0..6e8050f260f34 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -16,8 +16,6 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_psci.h>
-
 #include <...>
 #include <...>
 #include <...>
@@ -32,6 +30,8 @@
 #include <...>
 #include <...>
 
+#include "arm_psci.h"
+
 struct kvm_exception_table_entry {
 	int insn, fixup;
 };
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d3b9ec8a7c283..5d626308952ac 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -13,8 +13,6 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_psci.h>
-
 #include <...>
 #include <...>
 #include <...>
@@ -28,6 +26,8 @@
 
 #include <...>
 
+#include "arm_psci.h"
+
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 9984c492305a8..0039e501a3cb7 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -13,8 +13,6 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_psci.h>
-
 #include <...>
 #include <...>
 #include <...>
@@ -28,6 +26,8 @@
 #include <...>
 #include <...>
 
+#include "arm_psci.h"
+
 /* VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 58c5fe7d75727..05331389081f8 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -6,8 +6,8 @@
 
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
-#include <kvm/arm_psci.h>
+#include "arm_hypercalls.h"
+#include "arm_psci.h"
 
 #define KVM_ARM_SMCCC_STD_FEATURES \
 	GENMASK(KVM_REG_ARM_STD_BMAP_BIT_COUNT - 1, 0)
diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index 3b5dbe9a0a0ea..0566b59074978 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -13,8 +13,8 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
-#include <kvm/arm_psci.h>
+#include "arm_hypercalls.h"
+#include "arm_psci.h"
 
 /*
  * This is an implementation of the Power State Coordination Interface
diff --git a/arch/arm64/kvm/pvtime.c b/arch/arm64/kvm/pvtime.c
index 4ceabaa4c30bd..b07d250d223c0 100644
--- a/arch/arm64/kvm/pvtime.c
+++ b/arch/arm64/kvm/pvtime.c
@@ -8,7 +8,7 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
+#include "arm_hypercalls.h"
 
 void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/trng.c b/arch/arm64/kvm/trng.c
index 99bdd7103c9c1..b5dc0f09797a3 100644
--- a/arch/arm64/kvm/trng.c
+++ b/arch/arm64/kvm/trng.c
@@ -6,7 +6,7 @@
 
 #include <...>
 
-#include <kvm/arm_hypercalls.h>
+#include "arm_hypercalls.h"
 
 #define ARM_SMCCC_TRNG_VERSION_1_0	0x10000UL
 
-- 
2.52.0.239.gd5f0c6e74e-goog
Subject: [PATCH v5 03/24] KVM: arm64: Include KVM headers to get forward declarations
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:51:00 +0000
Message-ID: <20251209205121.1871534-4-coltonlewis@google.com>

From: Sean Christopherson

Include include/uapi/linux/kvm.h and include/linux/kvm_types.h in ARM's
public arm_arch_timer.h and arm_pmu.h headers to get
forward declarations of things like "struct kvm_vcpu" and
"struct kvm_device_attr", which are referenced but never declared
(neither file includes *any* KVM headers).

The missing includes don't currently cause problems because of the
order of includes in parent files, but that order is largely arbitrary
and subject to change; e.g. a future commit will move the ARM specific
headers to arch/arm64/include/asm and reorder the parent includes to
maintain alphabetical ordering.

Reported-by: kernel test robot
Signed-off-by: Sean Christopherson
Message-ID: <20250611001042.170501-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini
---
 include/kvm/arm_arch_timer.h | 2 ++
 include/kvm/arm_pmu.h        | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 7310841f45121..d55359e67c22c 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -7,6 +7,8 @@
 #ifndef __ASM_ARM_KVM_ARCH_TIMER_H
 #define __ASM_ARM_KVM_ARCH_TIMER_H
 
+#include <linux/kvm.h>
+#include <linux/kvm_types.h>
 #include <...>
 #include <...>
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 96754b51b4116..baf028d19dfc9 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -7,6 +7,8 @@
 #ifndef __ASM_ARM_KVM_PMU_H
 #define __ASM_ARM_KVM_PMU_H
 
+#include <linux/kvm.h>
+#include <linux/kvm_types.h>
 #include <...>
 #include <...>
 
-- 
2.52.0.239.gd5f0c6e74e-goog
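
To make the failure mode concrete: in C, naming a struct for the first
time inside a parameter list is legal, but it declares a type whose
scope is just that prototype, so a header that uses "struct kvm_vcpu"
without declaring it works only by accident of include order. A
stand-alone sketch (illustrative header names, not the kernel's):

/* fragile.h: relies on the includer having declared struct kvm_vcpu */
void touch_vcpu(struct kvm_vcpu *vcpu);	/* compiles alone, but GCC warns
					 * "declared inside parameter list"
					 * and the type is prototype-local */

/* robust.h: self-contained, matching what this patch does */
#include <linux/kvm_types.h>	/* forward-declares struct kvm_vcpu */
void touch_vcpu(struct kvm_vcpu *vcpu);	/* refers to the real type */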
Subject: [PATCH v5 04/24] KVM: arm64: Move ARM specific headers in include/kvm to arch directory
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:51:01 +0000
Message-ID: <20251209205121.1871534-5-coltonlewis@google.com>

From: Sean Christopherson

Move kvm/arm_{arch_timer,pmu,vgic}.h to arch/arm64/include/asm and drop
the "arm" prefix from all file names. Now that KVM no longer supports
32-bit ARM, there is no reason to expose ARM specific headers to other
architectures beyond arm64.
Cc: Colton Lewis
Signed-off-by: Sean Christopherson
Message-ID: <20250611001042.170501-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini
[Colton: applied header change to vgic-v5.c]
Signed-off-by: Colton Lewis
---
 .../arm64/include/asm/kvm_arch_timer.h                    |  0
 arch/arm64/include/asm/kvm_host.h                         |  7 +++----
 include/kvm/arm_pmu.h => arch/arm64/include/asm/kvm_pmu.h |  0
 .../kvm/arm_vgic.h => arch/arm64/include/asm/kvm_vgic.h   |  0
 arch/arm64/kvm/arch_timer.c                               |  5 ++---
 arch/arm64/kvm/arm.c                                      |  3 +--
 arch/arm64/kvm/pmu-emul.c                                 |  4 ++--
 arch/arm64/kvm/reset.c                                    |  3 +--
 arch/arm64/kvm/trace_arm.h                                |  2 +-
 arch/arm64/kvm/vgic/vgic-debug.c                          |  2 +-
 arch/arm64/kvm/vgic/vgic-init.c                           |  2 +-
 arch/arm64/kvm/vgic/vgic-irqfd.c                          |  2 +-
 arch/arm64/kvm/vgic/vgic-kvm-device.c                     |  2 +-
 arch/arm64/kvm/vgic/vgic-mmio-v2.c                        |  2 +-
 arch/arm64/kvm/vgic/vgic-mmio-v3.c                        |  2 +-
 arch/arm64/kvm/vgic/vgic-mmio.c                           |  4 ++--
 arch/arm64/kvm/vgic/vgic-v2.c                             |  2 +-
 arch/arm64/kvm/vgic/vgic-v3-nested.c                      |  3 +--
 arch/arm64/kvm/vgic/vgic-v3.c                             |  2 +-
 arch/arm64/kvm/vgic/vgic-v5.c                             |  2 +-
 20 files changed, 22 insertions(+), 27 deletions(-)
 rename include/kvm/arm_arch_timer.h => arch/arm64/include/asm/kvm_arch_timer.h (100%)
 rename include/kvm/arm_pmu.h => arch/arm64/include/asm/kvm_pmu.h (100%)
 rename include/kvm/arm_vgic.h => arch/arm64/include/asm/kvm_vgic.h (100%)

diff --git a/include/kvm/arm_arch_timer.h b/arch/arm64/include/asm/kvm_arch_timer.h
similarity index 100%
rename from include/kvm/arm_arch_timer.h
rename to arch/arm64/include/asm/kvm_arch_timer.h
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 64302c438355c..7f19702eac2b9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -26,17 +26,16 @@
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_arch_timer.h>
 #include <...>
+#include <asm/kvm_pmu.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
 #define KVM_HALT_POLL_NS_DEFAULT 500000
 
-#include <kvm/arm_arch_timer.h>
-#include <kvm/arm_pmu.h>
-#include <kvm/arm_vgic.h>
-
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
 #define KVM_VCPU_MAX_FEATURES 9
diff --git a/include/kvm/arm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
similarity index 100%
rename from include/kvm/arm_pmu.h
rename to arch/arm64/include/asm/kvm_pmu.h
diff --git a/include/kvm/arm_vgic.h b/arch/arm64/include/asm/kvm_vgic.h
similarity index 100%
rename from include/kvm/arm_vgic.h
rename to arch/arm64/include/asm/kvm_vgic.h
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 3f675875abea2..ce62a12cf0e5c 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -14,12 +14,11 @@
 
 #include <...>
 #include <...>
+#include <asm/kvm_arch_timer.h>
 #include <...>
 #include <...>
 #include <...>
-
-#include <kvm/arm_arch_timer.h>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 
 #include "trace.h"
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d1750d6058dfd..43e92f35f56ab 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -38,11 +38,10 @@
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_pmu.h>
 #include <...>
 #include <...>
 
-#include <kvm/arm_pmu.h>
-
 #include "arm_hypercalls.h"
 #include "arm_psci.h"
 
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index b03dbda7f1ab9..dcdd80ffd49d5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -12,8 +12,8 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_pmu.h>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_pmu.h>
+#include <asm/kvm_vgic.h>
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 959532422d3a3..bae3676387419 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -17,12 +17,11 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_vgic.h>
-
 #include <...>
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_vgic.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index 9c60f6465c787..8fc8178e21a70 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -3,7 +3,7 @@
 #define _TRACE_ARM_ARM64_KVM_H
 
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 
 #undef TRACE_SYSTEM
diff --git a/arch/arm64/kvm/vgic/vgic-debug.c b/arch/arm64/kvm/vgic/vgic-debug.c
index bb92853d1fd3a..a67e5e5f44871 100644
--- a/arch/arm64/kvm/vgic/vgic-debug.c
+++ b/arch/arm64/kvm/vgic/vgic-debug.c
@@ -9,7 +9,7 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 #include "vgic.h"
 
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index da62edbc1205a..39ead7ec1b43a 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -7,7 +7,7 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 #include <...>
 #include "vgic.h"
diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index c314c016659ab..b73401c34f298 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
@@ -6,7 +6,7 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include "vgic.h"
 
 /*
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 3d1a776b716d7..39d96b52f773d 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -7,7 +7,7 @@
  */
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v2.c b/arch/arm64/kvm/vgic/vgic-mmio-v2.c
index f25fccb1f8e63..d00c8a74fad63 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v2.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v2.c
@@ -9,7 +9,7 @@
 #include <...>
 
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 
 #include "vgic.h"
 #include "vgic-mmio.h"
diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index 70d50c77e5dc7..5191ad3b74b7e 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -9,11 +9,11 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
 
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_vgic.h>
 
 #include "vgic.h"
 #include "vgic-mmio.h"
diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
index a573b1f0c6cbe..45876b5ef9fc8 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio.c
@@ -10,8 +10,8 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_arch_timer.h>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_arch_timer.h>
+#include <asm/kvm_vgic.h>
 
 #include "vgic.h"
 #include "vgic-mmio.h"
diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
index 381673f03c395..780afb7aad06e 100644
--- a/arch/arm64/kvm/vgic/vgic-v2.c
+++ b/arch/arm64/kvm/vgic/vgic-v2.c
@@ -6,7 +6,7 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 
 #include "vgic.h"
diff --git a/arch/arm64/kvm/vgic/vgic-v3-nested.c b/arch/arm64/kvm/vgic/vgic-v3-nested.c
index 7f1259b49c505..f3f21d8fa8335 100644
--- a/arch/arm64/kvm/vgic/vgic-v3-nested.c
+++ b/arch/arm64/kvm/vgic/vgic-v3-nested.c
@@ -7,11 +7,10 @@
 #include <...>
 #include <...>
 
-#include <kvm/arm_vgic.h>
-
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_vgic.h>
 
 #include "vgic.h"
 
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 2f75ef14d3399..f345501016e2c 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -7,10 +7,10 @@
 #include <...>
 #include <...>
 #include <...>
-#include <kvm/arm_vgic.h>
 #include <...>
 #include <...>
 #include <...>
+#include <asm/kvm_vgic.h>
 
 #include "vgic.h"
 
diff --git a/arch/arm64/kvm/vgic/vgic-v5.c b/arch/arm64/kvm/vgic/vgic-v5.c
index 2d3811f4e1174..601d7b376deef 100644
--- a/arch/arm64/kvm/vgic/vgic-v5.c
+++ b/arch/arm64/kvm/vgic/vgic-v5.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
-#include <kvm/arm_vgic.h>
+#include <asm/kvm_vgic.h>
 #include <...>
 
 #include "vgic.h"
-- 
2.52.0.239.gd5f0c6e74e-goog
Subject: [PATCH v5 05/24] KVM: arm64: Reorganize PMU includes
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:51:02 +0000
Message-ID: <20251209205121.1871534-6-coltonlewis@google.com>

From: Marc Zyngier

Including *all* of asm/kvm_host.h in asm/arm_pmuv3.h is a bad idea,
because that is much more than arm_pmuv3.h logically needs, and it
creates a circular dependency that makes it easy to introduce compiler
errors when editing this code:

  asm/kvm_host.h
    includes asm/kvm_pmu.h
    includes perf/arm_pmuv3.h
    includes asm/arm_pmuv3.h
    includes asm/kvm_host.h

Reorganize the PMU includes to be more sane. In particular:

* Remove the circular dependency by removing the kvm_host.h include
  from asm/arm_pmuv3.h, since 99% of it isn't needed.

* Move the remaining tiny bit of the KVM/PMU interface from kvm_host.h
  into kvm_pmu.h.

* Conditionally on ARM64, include the more targeted kvm_pmu.h directly
  in the arm_pmuv3.c driver.
Signed-off-by: Marc Zyngier
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h |  2 --
 arch/arm64/include/asm/kvm_host.h  | 14 --------------
 arch/arm64/include/asm/kvm_pmu.h   | 15 +++++++++++++++
 drivers/perf/arm_pmuv3.c           |  5 +++++
 4 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88a..cf2b2212e00a2 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -6,8 +6,6 @@
 #ifndef __ASM_PMUV3_H
 #define __ASM_PMUV3_H
 
-#include <asm/kvm_host.h>
-
 #include <...>
 #include <...>
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7f19702eac2b9..c7e52aaf469dc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1410,25 +1410,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
-static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
-{
-	return (!has_vhe() && attr->exclude_host);
-}
-
 #ifdef CONFIG_KVM
-void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
-void kvm_clr_pmu_events(u64 clr);
-bool kvm_set_pmuserenr(u64 val);
 void kvm_enable_trbe(void);
 void kvm_disable_trbe(void);
 void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
 #else
-static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u64 clr) {}
-static inline bool kvm_set_pmuserenr(u64 val)
-{
-	return false;
-}
 static inline void kvm_enable_trbe(void) {}
 static inline void kvm_disable_trbe(void) {}
 static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index baf028d19dfc9..ad3247b468388 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -11,9 +11,15 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 
 #define KVM_ARMV8_PMU_MAX_COUNTERS	32
 
+#define kvm_pmu_counter_deferred(attr)			\
+	({						\
+		!has_vhe() && (attr)->exclude_host;	\
+	})
+
 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
 struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
@@ -68,6 +74,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 
 struct kvm_pmu_events *kvm_get_pmu_events(void);
+void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
+void kvm_clr_pmu_events(u64 clr);
+bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -161,6 +170,12 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 
 #define kvm_vcpu_has_pmu(vcpu)	({ false; })
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
+static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
+static inline void kvm_clr_pmu_events(u64 clr) {}
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 69c5cc8f56067..513122388b9da 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -9,6 +9,11 @@
  */
 
 #include <...>
+
+#if defined(CONFIG_ARM64)
+#include <asm/kvm_pmu.h>
+#endif
+
 #include <...>
 #include <...>
 
-- 
2.52.0.239.gd5f0c6e74e-goog
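
To see why such a cycle is fragile rather than an immediate error,
consider a stand-alone reproduction (illustrative file names, not the
kernel's). Include guards short-circuit the recursion, so which header
gets a complete view depends on which one a translation unit happens to
include first:

/* a.h */
#ifndef A_H
#define A_H
#include "b.h"
struct a { int x; };
#endif

/* b.h */
#ifndef B_H
#define B_H
#include "a.h"
struct b { struct a inner; };	/* needs the full definition of struct a */
#endif

/* A file that includes b.h first compiles: a.h's nested include of b.h
 * is skipped by the guard, struct a is defined, then struct b follows.
 * A file that includes a.h first fails: the guard skips the nested
 * include of a.h, so b.h defines struct b before struct a exists.
 * Cutting the cycle, as done here, removes that order dependence. */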
Subject: [PATCH v5 06/24] KVM: arm64: Reorganize PMU functions
From: Colton Lewis
Date: Tue, 9 Dec 2025 20:51:03 +0000
Message-ID: <20251209205121.1871534-7-coltonlewis@google.com>

A lot of functions in pmu-emul.c aren't specific to the emulated PMU
implementation. Move them to the more appropriate pmu.c file, where
shared PMU functions should live.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |   3 +
 arch/arm64/kvm/pmu-emul.c        | 672 +-----------------------------
 arch/arm64/kvm/pmu.c             | 675 +++++++++++++++++++++++++++++++
 3 files changed, 679 insertions(+), 671 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index ad3247b468388..6c961e8778047 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -51,13 +51,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu);
+u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu);
 u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
+u32 kvm_pmu_event_mask(struct kvm *kvm);
 u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
 void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index dcdd80ffd49d5..bcaa9f7a8ca28 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -17,19 +17,10 @@
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
-static LIST_HEAD(arm_pmus);
-static DEFINE_MUTEX(arm_pmus_lock);
-
 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
-bool kvm_supports_guest_pmuv3(void)
-{
-	guard(mutex)(&arm_pmus_lock);
-	return !list_empty(&arm_pmus);
-}
-
 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
 {
 	return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@@ -40,46 +31,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 __kvm_pmu_event_mask(unsigned int pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return GENMASK(9, 0);
-	case ID_AA64DFR0_EL1_PMUVer_V3P1:
-	case ID_AA64DFR0_EL1_PMUVer_V3P4:
-	case ID_AA64DFR0_EL1_PMUVer_V3P5:
-	case ID_AA64DFR0_EL1_PMUVer_V3P7:
-		return GENMASK(15, 0);
-	default:		/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
-		return 0;
-	}
-}
-
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
-{
-	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
-
-	return __kvm_pmu_event_mask(pmuver);
-}
-
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
-		   kvm_pmu_event_mask(kvm);
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
-		mask |= ARMV8_PMU_INCLUDE_EL2;
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
-		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
-			ARMV8_PMU_EXCLUDE_NS_EL1 |
-			ARMV8_PMU_EXCLUDE_EL3;
-
-	return mask;
-}
-
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -272,59 +223,6 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
-static u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu)
-{
-	unsigned int hpmn, n;
-
-	if (!vcpu_has_nv(vcpu))
-		return 0;
-
-	hpmn = SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-	n = vcpu->kvm->arch.nr_pmu_counters;
-
-	/*
-	 * Programming HPMN to a value greater than PMCR_EL0.N is
-	 * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an
-	 * UNKNOWN number of counters (in our case, zero) are reserved for EL2.
-	 */
-	if (hpmn >= n)
-		return 0;
-
-	/*
-	 * Programming HPMN=0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't
-	 * implemented. Since KVM's ability to emulate HPMN=0 does not directly
-	 * depend on hardware (all PMU registers are trapped), make the
-	 * implementation choice that all counters are included in the second
-	 * range reserved for EL2/EL3.
-	 */
-	return GENMASK(n - 1, hpmn);
-}
-
-bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
-}
-
-u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
-
-	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
-		return mask;
-
-	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
-}
-
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
-
-	if (val == 0)
-		return BIT(ARMV8_PMU_CYCLE_IDX);
-	else
-		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
-}
-
 static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
 {
 	if (!pmc->perf_event) {
@@ -370,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -393,24 +291,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 	return reg;
 }
 
-static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	bool overflow;
-
-	overflow = kvm_pmu_overflow_status(vcpu);
-	if (pmu->irq_level == overflow)
-		return;
-
-	pmu->irq_level = overflow;
-
-	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
-					      pmu->irq_num, overflow, pmu);
-		WARN_ON(ret);
-	}
-}
-
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -436,43 +316,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
 	regs->device_irq_level |= KVM_ARM_DEV_PMU;
 }
 
-/**
- * kvm_pmu_flush_hwstate - flush pmu state to cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the host, and inject
- * an interrupt if that was the case.
- */
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/**
- * kvm_pmu_sync_hwstate - sync pmu state from cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the guest, and
- * inject an interrupt if that was the case.
- */
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/*
- * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
- * to the event.
- * This is why we need a callback to do it once outside of the NMI context.
- */
-static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
-	kvm_vcpu_kick(vcpu);
-}
-
 /*
  * Perform an increment on any of the counters described in @mask,
  * generating the overflow if required, and propagate it as a chained
@@ -784,132 +627,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
-void kvm_host_pmu_init(struct arm_pmu *pmu)
-{
-	struct arm_pmu_entry *entry;
-
-	/*
-	 * Check the sanitised PMU version for the system, as KVM does not
-	 * support implementations where PMUv3 exists on a subset of CPUs.
- */ - if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) - return; - - guard(mutex)(&arm_pmus_lock); - - entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); - if (!entry) - return; - - entry->arm_pmu =3D pmu; - list_add_tail(&entry->entry, &arm_pmus); -} - -static struct arm_pmu *kvm_pmu_probe_armpmu(void) -{ - struct arm_pmu_entry *entry; - struct arm_pmu *pmu; - int cpu; - - guard(mutex)(&arm_pmus_lock); - - /* - * It is safe to use a stale cpu to iterate the list of PMUs so long as - * the same value is used for the entirety of the loop. Given this, and - * the fact that no percpu data is used for the lookup there is no need - * to disable preemption. - * - * It is still necessary to get a valid cpu, though, to probe for the - * default PMU instance as userspace is not required to specify a PMU - * type. In order to uphold the preexisting behavior KVM selects the - * PMU instance for the core during vcpu init. A dependent use - * case would be a user with disdain of all things big.LITTLE that - * affines the VMM to a particular cluster of cores. - * - * In any case, userspace should just do the sane thing and use the UAPI - * to select a PMU type directly. But, be wary of the baggage being - * carried here. - */ - cpu =3D raw_smp_processor_id(); - list_for_each_entry(entry, &arm_pmus, entry) { - pmu =3D entry->arm_pmu; - - if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) - return pmu; - } - - return NULL; -} - -static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) -{ - u32 hi[2], lo[2]; - - bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); - bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); - - return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; -} - -static u64 compute_pmceid0(struct arm_pmu *pmu) -{ - u64 val =3D __compute_pmceid(pmu, 0); - - /* always support SW_INCR */ - val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); - /* always support CHAIN */ - val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); - return val; -} - -static u64 compute_pmceid1(struct arm_pmu *pmu) -{ - u64 val =3D __compute_pmceid(pmu, 1); - - /* - * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled - * as RAZ - */ - val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | - BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | - BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); - return val; -} - -u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) -{ - struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; - unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; - u64 val, mask =3D 0; - int base, i, nr_events; - - if (!pmceid1) { - val =3D compute_pmceid0(cpu_pmu); - base =3D 0; - } else { - val =3D compute_pmceid1(cpu_pmu); - base =3D 32; - } - - if (!bmap) - return val; - - nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; - - for (i =3D 0; i < 32; i +=3D 8) { - u64 byte; - - byte =3D bitmap_get_value8(bmap, base + i); - mask |=3D byte << i; - if (nr_events >=3D (0x4000 + base + 32)) { - byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); - mask |=3D byte << (32 + i); - } - } - - return val & mask; -} - void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) { u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu); @@ -921,393 +638,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) kvm_pmu_reprogram_counter_mask(vcpu, mask); } =20 -int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) -{ - if (!vcpu->arch.pmu.created) - return -EINVAL; - - /* - * A valid interrupt configuration for the PMU is either to have a - * properly configured interrupt number and using an in-kernel 
- * irqchip, or to not have an in-kernel GIC and not set an IRQ. - */ - if (irqchip_in_kernel(vcpu->kvm)) { - int irq =3D vcpu->arch.pmu.irq_num; - /* - * If we are using an in-kernel vgic, at this point we know - * the vgic will be initialized, so we can check the PMU irq - * number against the dimensions of the vgic and make sure - * it's valid. - */ - if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) - return -EINVAL; - } else if (kvm_arm_pmu_irq_initialized(vcpu)) { - return -EINVAL; - } - - return 0; -} - -static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) -{ - if (irqchip_in_kernel(vcpu->kvm)) { - int ret; - - /* - * If using the PMU with an in-kernel virtual GIC - * implementation, we require the GIC to be already - * initialized when initializing the PMU. - */ - if (!vgic_initialized(vcpu->kvm)) - return -ENODEV; - - if (!kvm_arm_pmu_irq_initialized(vcpu)) - return -ENXIO; - - ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, - &vcpu->arch.pmu); - if (ret) - return ret; - } - - init_irq_work(&vcpu->arch.pmu.overflow_work, - kvm_pmu_perf_overflow_notify_vcpu); - - vcpu->arch.pmu.created =3D true; - return 0; -} - -/* - * For one VM the interrupt type must be same for each vcpu. - * As a PPI, the interrupt number is the same for all vcpus, - * while as an SPI it must be a separate number per vcpu. - */ -static bool pmu_irq_is_valid(struct kvm *kvm, int irq) -{ - unsigned long i; - struct kvm_vcpu *vcpu; - - kvm_for_each_vcpu(i, vcpu, kvm) { - if (!kvm_arm_pmu_irq_initialized(vcpu)) - continue; - - if (irq_is_ppi(irq)) { - if (vcpu->arch.pmu.irq_num !=3D irq) - return false; - } else { - if (vcpu->arch.pmu.irq_num =3D=3D irq) - return false; - } - } - - return true; -} - -/** - * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. - * @kvm: The kvm pointer - */ -u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) -{ - struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; - - /* - * PMUv3 requires that all event counters are capable of counting any - * event, though the same may not be true of non-PMUv3 hardware. - */ - if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) - return 1; - - /* - * The arm_pmu->cntr_mask considers the fixed counter(s) as well. - * Ignore those and return only the general-purpose counters. - */ - return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); -} - -static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) -{ - kvm->arch.nr_pmu_counters =3D nr; - - /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ - if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { - struct kvm_vcpu *vcpu; - unsigned long i; - - kvm_for_each_vcpu(i, vcpu, kvm) { - u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); - val &=3D ~MDCR_EL2_HPMN; - val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); - __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); - } - } -} - -static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) -{ - lockdep_assert_held(&kvm->arch.config_lock); - - kvm->arch.arm_pmu =3D arm_pmu; - kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); -} - -/** - * kvm_arm_set_default_pmu - No PMU set, get the default one. - * @kvm: The kvm pointer - * - * The observant among you will notice that the supported_cpus - * mask does not get updated for the default PMU even though it - * is quite possible the selected instance supports only a - * subset of cores in the system. 
This is intentional, and - * upholds the preexisting behavior on heterogeneous systems - * where vCPUs can be scheduled on any core but the guest - * counters could stop working. - */ -int kvm_arm_set_default_pmu(struct kvm *kvm) -{ - struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); - - if (!arm_pmu) - return -ENODEV; - - kvm_arm_set_pmu(kvm, arm_pmu); - return 0; -} - -static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) -{ - struct kvm *kvm =3D vcpu->kvm; - struct arm_pmu_entry *entry; - struct arm_pmu *arm_pmu; - int ret =3D -ENXIO; - - lockdep_assert_held(&kvm->arch.config_lock); - mutex_lock(&arm_pmus_lock); - - list_for_each_entry(entry, &arm_pmus, entry) { - arm_pmu =3D entry->arm_pmu; - if (arm_pmu->pmu.type =3D=3D pmu_id) { - if (kvm_vm_has_ran_once(kvm) || - (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { - ret =3D -EBUSY; - break; - } - - kvm_arm_set_pmu(kvm, arm_pmu); - cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); - ret =3D 0; - break; - } - } - - mutex_unlock(&arm_pmus_lock); - return ret; -} - -static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) -{ - struct kvm *kvm =3D vcpu->kvm; - - if (!kvm->arch.arm_pmu) - return -EINVAL; - - if (n > kvm_arm_pmu_get_max_counters(kvm)) - return -EINVAL; - - kvm_arm_set_nr_counters(kvm, n); - return 0; -} - -int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - struct kvm *kvm =3D vcpu->kvm; - - lockdep_assert_held(&kvm->arch.config_lock); - - if (!kvm_vcpu_has_pmu(vcpu)) - return -ENODEV; - - if (vcpu->arch.pmu.created) - return -EBUSY; - - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int irq; - - if (!irqchip_in_kernel(kvm)) - return -EINVAL; - - if (get_user(irq, uaddr)) - return -EFAULT; - - /* The PMU overflow interrupt can be a PPI or a valid SPI. */ - if (!(irq_is_ppi(irq) || irq_is_spi(irq))) - return -EINVAL; - - if (!pmu_irq_is_valid(kvm, irq)) - return -EINVAL; - - if (kvm_arm_pmu_irq_initialized(vcpu)) - return -EBUSY; - - kvm_debug("Set kvm ARM PMU irq: %d\n", irq); - vcpu->arch.pmu.irq_num =3D irq; - return 0; - } - case KVM_ARM_VCPU_PMU_V3_FILTER: { - u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); - struct kvm_pmu_event_filter __user *uaddr; - struct kvm_pmu_event_filter filter; - int nr_events; - - /* - * Allow userspace to specify an event filter for the entire - * event range supported by PMUVer of the hardware, rather - * than the guest's PMUVer for KVM backward compatibility. - */ - nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; - - uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; - - if (copy_from_user(&filter, uaddr, sizeof(filter))) - return -EFAULT; - - if (((u32)filter.base_event + filter.nevents) > nr_events || - (filter.action !=3D KVM_PMU_EVENT_ALLOW && - filter.action !=3D KVM_PMU_EVENT_DENY)) - return -EINVAL; - - if (kvm_vm_has_ran_once(kvm)) - return -EBUSY; - - if (!kvm->arch.pmu_filter) { - kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); - if (!kvm->arch.pmu_filter) - return -ENOMEM; - - /* - * The default depends on the first applied filter. - * If it allows events, the default is to deny. - * Conversely, if the first filter denies a set of - * events, the default is to allow. 
- */ - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_zero(kvm->arch.pmu_filter, nr_events); - else - bitmap_fill(kvm->arch.pmu_filter, nr_events); - } - - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - else - bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - - return 0; - } - case KVM_ARM_VCPU_PMU_V3_SET_PMU: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int pmu_id; - - if (get_user(pmu_id, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); - } - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { - unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; - unsigned int n; - - if (get_user(n, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); - } - case KVM_ARM_VCPU_PMU_V3_INIT: - return kvm_arm_pmu_v3_init(vcpu); - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int irq; - - if (!irqchip_in_kernel(vcpu->kvm)) - return -EINVAL; - - if (!kvm_vcpu_has_pmu(vcpu)) - return -ENODEV; - - if (!kvm_arm_pmu_irq_initialized(vcpu)) - return -ENXIO; - - irq =3D vcpu->arch.pmu.irq_num; - return put_user(irq, uaddr); - } - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: - case KVM_ARM_VCPU_PMU_V3_INIT: - case KVM_ARM_VCPU_PMU_V3_FILTER: - case KVM_ARM_VCPU_PMU_V3_SET_PMU: - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: - if (kvm_vcpu_has_pmu(vcpu)) - return 0; - } - - return -ENXIO; -} - -u8 kvm_arm_pmu_get_pmuver_limit(void) -{ - unsigned int pmuver; - - pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, - read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); - - /* - * Spoof a barebones PMUv3 implementation if the system supports IMPDEF - * traps of the PMUv3 sysregs - */ - if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) - return ID_AA64DFR0_EL1_PMUVer_IMP; - - /* - * Otherwise, treat IMPLEMENTATION DEFINED functionality as - * unimplemented - */ - if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) - return 0; - - return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); -} - -/** - * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU - * @vcpu: The vcpu pointer - */ -u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) -{ - u64 pmcr =3D __vcpu_sys_reg(vcpu, PMCR_EL0); - u64 n =3D vcpu->kvm->arch.nr_pmu_counters; - - if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu)) - n =3D FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); - - return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N); -} - void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) { bool reprogrammed =3D false; diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 6b48a3d16d0d5..79b7ea037153a 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -8,8 +8,21 @@ #include #include =20 +#include +#include + +static LIST_HEAD(arm_pmus); +static DEFINE_MUTEX(arm_pmus_lock); static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events); =20 +#define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >=3D VGIC_NR= _SGIS) + +bool kvm_supports_guest_pmuv3(void) +{ + guard(mutex)(&arm_pmus_lock); + return !list_empty(&arm_pmus); +} + /* * Given the perf event attributes and system type, determine * if we are going to need to switch counters at guest entry/exit. 
@@ -209,3 +222,665 @@ void kvm_vcpu_pmu_resync_el0(void) =20 kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu); } + +void kvm_host_pmu_init(struct arm_pmu *pmu) +{ + struct arm_pmu_entry *entry; + + /* + * Check the sanitised PMU version for the system, as KVM does not + * support implementations where PMUv3 exists on a subset of CPUs. + */ + if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) + return; + + guard(mutex)(&arm_pmus_lock); + + entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return; + + entry->arm_pmu =3D pmu; + list_add_tail(&entry->entry, &arm_pmus); +} + +static struct arm_pmu *kvm_pmu_probe_armpmu(void) +{ + struct arm_pmu_entry *entry; + struct arm_pmu *pmu; + int cpu; + + guard(mutex)(&arm_pmus_lock); + + /* + * It is safe to use a stale cpu to iterate the list of PMUs so long as + * the same value is used for the entirety of the loop. Given this, and + * the fact that no percpu data is used for the lookup there is no need + * to disable preemption. + * + * It is still necessary to get a valid cpu, though, to probe for the + * default PMU instance as userspace is not required to specify a PMU + * type. In order to uphold the preexisting behavior KVM selects the + * PMU instance for the core during vcpu init. A dependent use + * case would be a user with disdain of all things big.LITTLE that + * affines the VMM to a particular cluster of cores. + * + * In any case, userspace should just do the sane thing and use the UAPI + * to select a PMU type directly. But, be wary of the baggage being + * carried here. + */ + cpu =3D raw_smp_processor_id(); + list_for_each_entry(entry, &arm_pmus, entry) { + pmu =3D entry->arm_pmu; + + if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) + return pmu; + } + + return NULL; +} + +static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) +{ + u32 hi[2], lo[2]; + + bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); + bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); + + return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; +} + +static u64 compute_pmceid0(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 0); + + /* always support SW_INCR */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); + /* always support CHAIN */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); + return val; +} + +static u64 compute_pmceid1(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 1); + + /* + * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled + * as RAZ + */ + val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); + return val; +} + +u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) +{ + struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; + unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; + u64 val, mask =3D 0; + int base, i, nr_events; + + if (!pmceid1) { + val =3D compute_pmceid0(cpu_pmu); + base =3D 0; + } else { + val =3D compute_pmceid1(cpu_pmu); + base =3D 32; + } + + if (!bmap) + return val; + + nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; + + for (i =3D 0; i < 32; i +=3D 8) { + u64 byte; + + byte =3D bitmap_get_value8(bmap, base + i); + mask |=3D byte << i; + if (nr_events >=3D (0x4000 + base + 32)) { + byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); + mask |=3D byte << (32 + i); + } + } + + return val & mask; +} + +/* + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresp= onding + * to the event. 
+ * This is why we need a callback to do it once outside of the NMI context. + */ +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) +{ + struct kvm_vcpu *vcpu; + + vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); + kvm_vcpu_kick(vcpu); +} + +static u32 __kvm_pmu_event_mask(unsigned int pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return GENMASK(9, 0); + case ID_AA64DFR0_EL1_PMUVer_V3P1: + case ID_AA64DFR0_EL1_PMUVer_V3P4: + case ID_AA64DFR0_EL1_PMUVer_V3P5: + case ID_AA64DFR0_EL1_PMUVer_V3P7: + return GENMASK(15, 0); + default: /* Shouldn't be here, just for sanity */ + WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); + return 0; + } +} + +u32 kvm_pmu_event_mask(struct kvm *kvm) +{ + u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); + u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); + + return __kvm_pmu_event_mask(pmuver); +} + +u64 kvm_pmu_evtyper_mask(struct kvm *kvm) +{ + u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | + kvm_pmu_event_mask(kvm); + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) + mask |=3D ARMV8_PMU_INCLUDE_EL2; + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) + mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | + ARMV8_PMU_EXCLUDE_NS_EL1 | + ARMV8_PMU_EXCLUDE_EL3; + + return mask; +} + +static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D &vcpu->arch.pmu; + bool overflow; + + overflow =3D kvm_pmu_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) + return; + + pmu->irq_level =3D overflow; + + if (likely(irqchip_in_kernel(vcpu->kvm))) { + int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, + pmu->irq_num, overflow, pmu); + WARN_ON(ret); + } +} + +/** + * kvm_pmu_flush_hwstate - flush pmu state to cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the host, and = inject + * an interrupt if that was the case. + */ +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +/** + * kvm_pmu_sync_hwstate - sync pmu state from cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the guest, and + * inject an interrupt if that was the case. + */ +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.pmu.created) + return -EINVAL; + + /* + * A valid interrupt configuration for the PMU is either to have a + * properly configured interrupt number and using an in-kernel + * irqchip, or to not have an in-kernel GIC and not set an IRQ. + */ + if (irqchip_in_kernel(vcpu->kvm)) { + int irq =3D vcpu->arch.pmu.irq_num; + /* + * If we are using an in-kernel vgic, at this point we know + * the vgic will be initialized, so we can check the PMU irq + * number against the dimensions of the vgic and make sure + * it's valid. + */ + if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) + return -EINVAL; + } else if (kvm_arm_pmu_irq_initialized(vcpu)) { + return -EINVAL; + } + + return 0; +} + +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) +{ + if (irqchip_in_kernel(vcpu->kvm)) { + int ret; + + /* + * If using the PMU with an in-kernel virtual GIC + * implementation, we require the GIC to be already + * initialized when initializing the PMU. 
+ */ + if (!vgic_initialized(vcpu->kvm)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, + &vcpu->arch.pmu); + if (ret) + return ret; + } + + init_irq_work(&vcpu->arch.pmu.overflow_work, + kvm_pmu_perf_overflow_notify_vcpu); + + vcpu->arch.pmu.created =3D true; + return 0; +} + +/* + * For one VM the interrupt type must be same for each vcpu. + * As a PPI, the interrupt number is the same for all vcpus, + * while as an SPI it must be a separate number per vcpu. + */ +static bool pmu_irq_is_valid(struct kvm *kvm, int irq) +{ + unsigned long i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (!kvm_arm_pmu_irq_initialized(vcpu)) + continue; + + if (irq_is_ppi(irq)) { + if (vcpu->arch.pmu.irq_num !=3D irq) + return false; + } else { + if (vcpu->arch.pmu.irq_num =3D=3D irq) + return false; + } + } + + return true; +} + +/** + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. + * @kvm: The kvm pointer + */ +u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; + + /* + * PMUv3 requires that all event counters are capable of counting any + * event, though the same may not be true of non-PMUv3 hardware. + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return 1; + + /* + * The arm_pmu->cntr_mask considers the fixed counter(s) as well. + * Ignore those and return only the general-purpose counters. + */ + return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); +} + +static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) +{ + kvm->arch.nr_pmu_counters =3D nr; + + /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ + if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { + struct kvm_vcpu *vcpu; + unsigned long i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); + + val &=3D ~MDCR_EL2_HPMN; + val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); + } + } +} + +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) +{ + lockdep_assert_held(&kvm->arch.config_lock); + + kvm->arch.arm_pmu =3D arm_pmu; + kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); +} + +/** + * kvm_arm_set_default_pmu - No PMU set, get the default one. + * @kvm: The kvm pointer + * + * The observant among you will notice that the supported_cpus + * mask does not get updated for the default PMU even though it + * is quite possible the selected instance supports only a + * subset of cores in the system. This is intentional, and + * upholds the preexisting behavior on heterogeneous systems + * where vCPUs can be scheduled on any core but the guest + * counters could stop working. 
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} + +u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) +{ + u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); + + if (val =3D=3D 0) + return BIT(ARMV8_PMU_CYCLE_IDX); + else + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); +} + +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) +{ + unsigned int hpmn, n; + + if (!vcpu_has_nv(vcpu)) + return 0; + + hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); + n =3D vcpu->kvm->arch.nr_pmu_counters; + + /* + * Programming HPMN to a value greater than PMCR_EL0.N is + * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an + * UNKNOWN number of counters (in our case, zero) are reserved for EL2. + */ + if (hpmn >=3D n) + return 0; + + /* + * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't + * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly + * depend on hardware (all PMU registers are trapped), make the + * implementation choice that all counters are included in the second + * range reserved for EL2/EL3. 
+ */
+	return GENMASK(n - 1, hpmn);
+}
+
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
+}
+
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
+
+	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+		return mask;
+
+	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
+}
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+
+	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
+		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
+
+	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
+}
--
2.52.0.239.gd5f0c6e74e-goog
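For illustration only: a standalone user-space sketch of the HPMN mask
arithmetic in kvm_pmu_hyp_counter_mask() above. The helpers genmask() and
hyp_counter_mask() below are hypothetical stand-ins, not kernel code. With
n = 8 counters and HPMN = 6, the EL2-reserved mask works out to
GENMASK(7, 6) = 0xc0, i.e. counters 6 and 7.

	/* hyp_mask_demo.c - user-space model of the HPMN split. Counters
	 * [hpmn..n-1] are reserved for EL2; hpmn >= n yields an empty
	 * EL2 range, matching KVM's CONSTRAINED UNPREDICTABLE choice.
	 */
	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for the kernel's GENMASK(h, l). */
	static uint64_t genmask(unsigned int h, unsigned int l)
	{
		return (~0ULL >> (63 - h)) & (~0ULL << l);
	}

	static uint64_t hyp_counter_mask(unsigned int hpmn, unsigned int n)
	{
		if (hpmn >= n)	/* reserve nothing for EL2 */
			return 0;
		return genmask(n - 1, hpmn);
	}

	int main(void)
	{
		/* 8 implemented counters, HPMN = 6: counters 6..7 are EL2-only. */
		printf("mask = %#llx\n",
		       (unsigned long long)hyp_counter_mask(6, 8)); /* 0xc0 */
		return 0;
	}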
From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:04 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.52.0.239.gd5f0c6e74e-goog
Message-ID: <20251209205121.1871534-8-coltonlewis@google.com>
Subject: [PATCH v5 07/24] perf: arm_pmuv3: Introduce method to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU
counters into two ranges: counters 0..HPMN-1 are accessible by EL1
and, if allowed, EL0, while counters HPMN..N-1 are accessible only by
EL2.

Create the module parameter reserved_host_counters to reserve a
number of counters for the host. This number is set at boot because
the perf subsystem assumes the number of counters will not change
after the PMU is probed.

Introduce the function armv8pmu_partition() to modify the PMU
driver's cntr_mask of available counters to exclude the counters
being reserved for the guest, and to record the resulting guest
counter count as the maximum allowable value for HPMN.

Because of the difficulty this feature would create for the driver
running in nVHE mode, partitioning is only allowed in VHE mode. To
support partitioning on nVHE, we would need to explicitly disable
guest counters on every exit and reset HPMN to place all counters in
the first range.
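For illustration (not part of the patch): a minimal sketch of the counter
split described above, under the assumption of N = 10 counters with 3
reserved for the host, giving HPMN = 7.

	/* hpmn_ranges_demo.c - prints the two counter ranges created by
	 * MDCR_EL2.HPMN. Illustrative only; nothing here is kernel code.
	 */
	#include <stdio.h>

	int main(void)
	{
		const unsigned int n = 10;		/* PMCR_EL0.N */
		const unsigned int reserved_host = 3;	/* reserved_host_counters */
		const unsigned int hpmn = n - reserved_host;

		printf("EL1/EL0 accessible: counters 0..%u\n", hpmn - 1);
		printf("EL2 only:           counters %u..%u\n", hpmn, n - 1);
		/* EL1/EL0 accessible: counters 0..6; EL2 only: counters 7..9 */
		return 0;
	}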
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   |  4 ++
 arch/arm64/include/asm/arm_pmuv3.h |  5 ++
 arch/arm64/include/asm/kvm_pmu.h   |  8 +++
 arch/arm64/kvm/Makefile            |  2 +-
 arch/arm64/kvm/pmu-direct.c        | 22 +++++++++
 drivers/perf/arm_pmuv3.c           | 78 +++++++++++++++++++++++++++++-
 include/linux/perf/arm_pmu.h       |  1 +
 7 files changed, 117 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kvm/pmu-direct.c

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc98..636b1aab9e8d2 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -221,6 +221,10 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 	return false;
 }
 
+static inline bool kvm_pmu_partition_supported(void)
+{
+	return false;
+}
 static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index cf2b2212e00a2..27c4d6d47da31 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -171,6 +171,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		 pmuver == ID_AA64DFR0_EL1_PMUVer_NI);
 }
 
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4;
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 6c961e8778047..63bff75e4f8dd 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -45,7 +45,10 @@ struct arm_pmu_entry {
 	struct arm_pmu *arm_pmu;
 };
 
+extern int armv8pmu_hpmn_max;
+
 bool kvm_supports_guest_pmuv3(void);
+bool kvm_pmu_partition_supported(void);
 #define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
@@ -115,6 +118,11 @@ static inline bool kvm_supports_guest_pmuv3(void)
 	return false;
 }
 
+static inline bool kvm_pmu_partition_supported(void)
+{
+	return false;
+}
+
 #define kvm_arm_pmu_irq_initialized(v)	(false)
 static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 					    u64 select_idx)
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 3ebc0570345cc..baf0f296c0e53 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -26,7 +26,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o \
	 vgic/vgic-v5.o
 
-kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
+kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu-direct.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) += ptdump.o
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
new file mode 100644
index 0000000000000..0d38265b6f290
--- /dev/null
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Colton Lewis
+ */
+
+#include
+
+#include
+
+/**
+ * kvm_pmu_partition_supported() - Determine if partitioning is possible
+ *
+ * Partitioning is only supported in VHE mode with PMUv3
+ *
+ * Return: True if partitioning is possible, false otherwise
+ */
+bool kvm_pmu_partition_supported(void)
+{
+	return has_vhe() &&
+	       system_supports_pmuv3();
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 513122388b9da..379d1877a61ba 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -42,6 +42,13 @@
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS	0xEC
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS	0xED
 
+static int reserved_host_counters __read_mostly = -1;
+int armv8pmu_hpmn_max = -1;
+
+module_param(reserved_host_counters, int, 0);
+MODULE_PARM_DESC(reserved_host_counters,
+		 "PMU Partition: -1 = No partition; +N = Reserve N counters for the host");
+
 /*
  * ARMv8 Architectural defined events, not all of these may
  * be supported on any given implementation. Unsupported events will
@@ -532,6 +539,11 @@ static void armv8pmu_pmcr_write(u64 val)
 	write_pmcr(val);
 }
 
+static u64 armv8pmu_pmcr_n_read(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
+}
+
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
 	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
@@ -1299,6 +1311,61 @@ struct armv8pmu_probe_info {
 	bool present;
 };
 
+/**
+ * armv8pmu_reservation_is_valid() - Determine if reservation is allowed
+ * @host_counters: Number of host counters to reserve
+ *
+ * Determine if the number of host counters in the argument is an
+ * allowed reservation, 0 to NR_COUNTERS inclusive.
+ *
+ * Return: True if reservation allowed, false otherwise
+ */
+static bool armv8pmu_reservation_is_valid(int host_counters)
+{
+	return host_counters >= 0 &&
+	       host_counters <= armv8pmu_pmcr_n_read();
+}
+
+/**
+ * armv8pmu_partition() - Partition the PMU
+ * @pmu: Pointer to pmu being partitioned
+ * @host_counters: Number of host counters to reserve
+ *
+ * Partition the given PMU by taking a number of host counters to
+ * reserve and, if it is a valid reservation, recording the
+ * corresponding HPMN value in the hpmn_max field of the PMU and
+ * clearing the guest-reserved counters from the counter mask.
+ *
+ * Return: 0 on success, -ERROR otherwise
+ */
+static int armv8pmu_partition(struct arm_pmu *pmu, int host_counters)
+{
+	u8 nr_counters;
+	u8 hpmn;
+
+	if (!armv8pmu_reservation_is_valid(host_counters)) {
+		pr_err("PMU partition reservation of %d host counters is not valid", host_counters);
+		return -EINVAL;
+	}
+
+	nr_counters = armv8pmu_pmcr_n_read();
+	hpmn = nr_counters - host_counters;
+
+	pmu->hpmn_max = hpmn;
+	armv8pmu_hpmn_max = hpmn;
+
+	bitmap_clear(pmu->cntr_mask, 0, hpmn);
+	bitmap_set(pmu->cntr_mask, hpmn, host_counters);
+	clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+
+	if (pmuv3_has_icntr())
+		clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+	pr_info("Partitioned PMU with %d host counters -> %u guest counters", host_counters, hpmn);
+
+	return 0;
+}
+
 static void __armv8pmu_probe_pmu(void *info)
 {
 	struct armv8pmu_probe_info *probe = info;
@@ -1313,10 +1380,10 @@ static void __armv8pmu_probe_pmu(void *info)
 
 	cpu_pmu->pmuver = pmuver;
 	probe->present = true;
+	cpu_pmu->hpmn_max = -1;
 
 	/* Read the nb of CNTx counters supported from PMNC */
-	bitmap_set(cpu_pmu->cntr_mask,
-		   0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+	bitmap_set(cpu_pmu->cntr_mask, 0, armv8pmu_pmcr_n_read());
 
 	/* Add the CPU cycles counter */
 	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
@@ -1325,6 +1392,13 @@ static void __armv8pmu_probe_pmu(void *info)
 	if (pmuv3_has_icntr())
 		set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask);
 
+	if (reserved_host_counters >= 0) {
+		if (kvm_pmu_partition_supported())
+			armv8pmu_partition(cpu_pmu, reserved_host_counters);
+		else
+			pr_err("PMU partition is not supported");
+	}
+
 	pmceid[0] = pmceid_raw[0] = read_pmceid0();
 	pmceid[1] = pmceid_raw[1] = read_pmceid1();
 
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 93c9a26492fcf..69071e887f98f 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -128,6 +128,7 @@ struct arm_pmu {
 
 	/* Only to be used by ACPI probing code */
 	unsigned long acpi_cpuid;
+	int hpmn_max;	/* MDCR_EL2.HPMN: counter partition pivot */
 };
 
 #define to_arm_pmu(p)	(container_of(p, struct arm_pmu, pmu))
--
2.52.0.239.gd5f0c6e74e-goog
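A usage note, hedged: with the driver built in, the reservation would
presumably be requested on the kernel command line as
arm_pmuv3.reserved_host_counters=3 (the parameter prefix depends on the
driver's KBUILD_MODNAME, so treat that spelling as an assumption). The
sketch below models only the validity rule armv8pmu_reservation_is_valid()
enforces; reservation_is_valid() is a hypothetical stand-in.

	/* reservation_demo.c - a reservation of r host counters is allowed
	 * for 0 <= r <= N; a negative value leaves the PMU unpartitioned.
	 * Illustrative only.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool reservation_is_valid(int host_counters, unsigned int n)
	{
		return host_counters >= 0 && (unsigned int)host_counters <= n;
	}

	int main(void)
	{
		const unsigned int n = 10;	/* PMCR_EL0.N */
		int cases[] = { -1, 0, 3, 10, 11 };

		for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
			printf("reserve %2d -> %s\n", cases[i],
			       reservation_is_valid(cases[i], n) ? "valid" : "invalid");
		return 0;
	}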
From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:05 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.52.0.239.gd5f0c6e74e-goog
Message-ID: <20251209205121.1871534-9-coltonlewis@google.com>
Subject: [PATCH v5 08/24] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

The OVSR bitmasks are valid for enable and interrupt registers as
well as overflow registers. Generalize the names.

Acked-by: Mark Rutland
Signed-off-by: Colton Lewis
---
 drivers/perf/arm_pmuv3.c       |  4 ++--
 include/linux/perf/arm_pmuv3.h | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 379d1877a61ba..3e6eb4be4ac43 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -546,7 +546,7 @@ static u64 armv8pmu_pmcr_n_read(void)
 
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
-	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
+	return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL);
 }
 
 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx)
@@ -782,7 +782,7 @@ static u64 armv8pmu_getreset_flags(void)
 	value = read_pmovsclr();
 
 	/* Write to clear flags */
-	value &= ARMV8_PMU_OVERFLOWED_MASK;
+	value &= ARMV8_PMU_CNT_MASK_ALL;
 	write_pmovsclr(value);
 
 	return value;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a27..fd2a34b4a64d1 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -224,14 +224,14 @@
	 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
 
 /*
- * PMOVSR: counters overflow flag status reg
+ * Counter bitmask layouts for overflow, enable, and interrupts
  */
-#define ARMV8_PMU_OVSR_P	GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C	BIT(31)
-#define ARMV8_PMU_OVSR_F	BIT_ULL(32) /* arm64 only */
-/* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK (ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
-				   ARMV8_PMU_OVSR_F)
+#define ARMV8_PMU_CNT_MASK_P	GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C	BIT(31)
+#define ARMV8_PMU_CNT_MASK_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL	(ARMV8_PMU_CNT_MASK_P | \
+				 ARMV8_PMU_CNT_MASK_C | \
+				 ARMV8_PMU_CNT_MASK_F)
 
 /*
  * PMXEVTYPER: Event selection reg
--
2.52.0.239.gd5f0c6e74e-goog
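For illustration: the generalized mask layout above can be checked with a
few lines of ordinary C. The macro names below are local stand-ins for the
ARMV8_PMU_CNT_MASK_* definitions; CNT_MASK_ALL evaluates to 0x1ffffffff,
covering event counters 0..30, the cycle counter (bit 31), and the
instruction counter (bit 32).

	/* cnt_mask_demo.c - user-space check of the counter bitmask layout:
	 * P = event counters 0..30, C = cycle counter (bit 31),
	 * F = fixed instruction counter (bit 32, arm64 only).
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define CNT_MASK_P	((1ULL << 31) - 1)	/* GENMASK(30, 0) */
	#define CNT_MASK_C	(1ULL << 31)
	#define CNT_MASK_F	(1ULL << 32)
	#define CNT_MASK_ALL	(CNT_MASK_P | CNT_MASK_C | CNT_MASK_F)

	int main(void)
	{
		/* Prints 0x1ffffffff: all 33 counter bits. */
		printf("CNT_MASK_ALL = %#llx\n", (unsigned long long)CNT_MASK_ALL);
		return 0;
	}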
From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:06 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.52.0.239.gd5f0c6e74e-goog
Message-ID: <20251209205121.1871534-10-coltonlewis@google.com>
Subject: [PATCH v5 09/24] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

If the PMU is partitioned, keep the driver out of the guest counter
partition and use only the host counter partition. Define some
functions that determine whether the PMU is partitioned and construct
mutually exclusive bitmaps for testing which partition a particular
counter is in.

Note that despite their separate position in the bitmap, the cycle
and instruction counters are always in the guest partition.
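For illustration: a user-space model of the mutually exclusive masks this
patch introduces. host_counter_mask() and guest_counter_mask() below are
hypothetical stand-ins for the kvm_pmu_*_counter_mask() helpers; note how
the cycle (bit 31) and instruction (bit 32) counters always land in the
guest mask.

	/* partition_masks_demo.c - with 10 general counters and
	 * hpmn_max = 7, the host owns counters 7..9 and the guest owns
	 * 0..6 plus the cycle and instruction counters. Illustrative only.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define CNT_MASK_ALL	((1ULL << 33) - 1)	/* P | C | F */

	static uint64_t host_counter_mask(unsigned int nr_counters, int hpmn_max)
	{
		if (hpmn_max < 0)	/* not partitioned: host owns everything */
			return CNT_MASK_ALL;
		return ((1ULL << nr_counters) - 1) & ~((1ULL << hpmn_max) - 1);
	}

	static uint64_t guest_counter_mask(unsigned int nr_counters, int hpmn_max)
	{
		return CNT_MASK_ALL & ~host_counter_mask(nr_counters, hpmn_max);
	}

	int main(void)
	{
		printf("host  = %#llx\n",
		       (unsigned long long)host_counter_mask(10, 7));  /* 0x380 */
		printf("guest = %#llx\n",
		       (unsigned long long)guest_counter_mask(10, 7)); /* 0x1fffffc7f */
		return 0;
	}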
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h | 18 +++++++
 arch/arm64/include/asm/kvm_pmu.h | 24 +++++++++
 arch/arm64/kvm/pmu-direct.c      | 86 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 41 +++++++++++++--
 4 files changed, 165 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 636b1aab9e8d2..3ea5741d213d8 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI	0
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 63bff75e4f8dd..8887f39c25e60 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -90,6 +90,12 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu)				\
	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -222,6 +228,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_pmu_is_partitioned(void *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(void *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(void *pmu)
+{
+	return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 0d38265b6f290..d5de7fdd059f4 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -5,7 +5,10 @@
  */
 
 #include
+#include
+#include
 
+#include
 #include
 
 /**
@@ -20,3 +23,86 @@ bool kvm_pmu_partition_supported(void)
	return has_vhe() &&
	       system_supports_pmuv3();
 }
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if given PMU is partitioned by looking at the hpmn_max
+ * field. The PMU is partitioned if this field is non-negative and
+ * does not exceed the number of counters in the system.
+ * + * Return: True if the PMU is partitioned, false otherwise + */ +bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + if (!pmu) + return false; + + return pmu->hpmn_max >=3D 0 && + pmu->hpmn_max <=3D *host_data_ptr(nr_event_counters); +} + +/** + * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters + * @pmu: Pointer to arm_pmu struct + * + * Compute the bitmask that selects the host-reserved counters in the + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters + * in HPMN..N + * + * Return: Bitmask + */ +u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu) +{ + u8 nr_counters =3D *host_data_ptr(nr_event_counters); + + if (!kvm_pmu_is_partitioned(pmu)) + return ARMV8_PMU_CNT_MASK_ALL; + + return GENMASK(nr_counters - 1, pmu->hpmn_max); +} + +/** + * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counte= rs + * + * Compute the bitmask that selects the guest-reserved counters in the + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters + * in 0..HPMN and the cycle and instruction counters. + * + * Return: Bitmask + */ +u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu) +{ + return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu); +} + +/** + * kvm_pmu_host_counters_enable() - Enable host-reserved counters + * + * When partitioned the enable bit for host-reserved counters is + * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now + * exclusively controls the guest-reserved counters. Enable that bit. + */ +void kvm_pmu_host_counters_enable(void) +{ + u64 mdcr =3D read_sysreg(mdcr_el2); + + mdcr |=3D MDCR_EL2_HPME; + write_sysreg(mdcr, mdcr_el2); +} + +/** + * kvm_pmu_host_counters_disable() - Disable host-reserved counters + * + * When partitioned the disable bit for host-reserved counters is + * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now + * exclusively controls the guest-reserved counters. Disable that bit. + */ +void kvm_pmu_host_counters_disable(void) +{ + u64 mdcr =3D read_sysreg(mdcr_el2); + + mdcr &=3D ~MDCR_EL2_HPME; + write_sysreg(mdcr, mdcr_el2); +} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 3e6eb4be4ac43..2bed99ba992d7 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -871,6 +871,9 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu) brbe_enable(cpu_pmu); =20 /* Enable all counters */ + if (kvm_pmu_is_partitioned(cpu_pmu)) + kvm_pmu_host_counters_enable(); + armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E); } =20 @@ -882,6 +885,9 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu) brbe_disable(); =20 /* Disable all counters */ + if (kvm_pmu_is_partitioned(cpu_pmu)) + kvm_pmu_host_counters_disable(); + armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E); } =20 @@ -998,6 +1004,7 @@ static int armv8pmu_get_chain_idx(struct pmu_hw_events= *cpuc, static bool armv8pmu_can_use_pmccntr(struct pmu_hw_events *cpuc, struct perf_event *event) { + struct arm_pmu *cpu_pmu =3D to_arm_pmu(event->pmu); struct hw_perf_event *hwc =3D &event->hw; unsigned long evtype =3D hwc->config_base & ARMV8_PMU_EVTYPE_EVENT; =20 @@ -1018,6 +1025,12 @@ static bool armv8pmu_can_use_pmccntr(struct pmu_hw_e= vents *cpuc, if (has_branch_stack(event)) return false; =20 + /* + * If partitioned at all, pmccntr belongs to the guest. + */ + if (kvm_pmu_is_partitioned(cpu_pmu)) + return false; + return true; } =20 @@ -1044,6 +1057,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_event= s *cpuc, * may not know how to handle it. 
 	 */
 	if ((evtype =3D=3D ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -1055,7 +1069,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_event=
s *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1167,6 +1181,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_=
event *event,
 	return 0;
 }
=20
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu =3D (struct arm_pmu *)info;
@@ -1174,6 +1196,9 @@ static void armv8pmu_reset(void *info)
=20
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
=20
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &=3D kvm_pmu_host_counter_mask(cpu_pmu);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1186,11 +1211,19 @@ static void armv8pmu_reset(void *info)
 		brbe_invalidate();
 	}
=20
+	pmcr =3D ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
 	 */
-	pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
=20
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
--=20
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:07 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-11-coltonlewis@google.com>
Subject: [PATCH v5 10/24] KVM: arm64: Set up FGT for Partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas ,
 Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly ,
 Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan ,
 Ganapatrao Kulkarni , linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

In order to gain the best performance benefit from partitioning the
PMU, utilize fine grain traps (FEAT_FGT and FEAT_FGT2) to avoid
trapping common PMU register accesses by the guest and remove that
overhead.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1
* PMEVCNTRn_EL0

These are safe to untrap because writing MDCR_EL2.HPMN, as this
series will do, limits the effect of writes to any of these registers
to the partition of counters 0..HPMN-1. Reads from these registers
will not leak information between guests because all these registers
are context swapped by a later patch in this series. Reads from these
registers also do not leak any information about the host's hardware
beyond what is promised by PMUv3.

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMCCFILTR_EL0
* PMICNTR_EL0
* PMICFILTR_EL0
* PMCEIDn_EL0
* PMMIR_EL1

PMOVS remains trapped so KVM can track overflow IRQs that will need
to be injected into the guest. PMICNTR and PMICFILTR remain trapped
because KVM is not handling them yet. PMEVTYPERn remains trapped so
KVM can limit which events guests can count, such as disallowing
counting at EL2; PMCCFILTR and PMICFILTR are special cases of the
same. PMCEIDn and PMMIR remain trapped because they can leak
information specific to the host hardware implementation.

NOTE: This patch temporarily forces kvm_vcpu_pmu_is_partitioned() to
be false to prevent partial feature activation for easier debugging.
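As a minimal illustration of the gating applied before relying on FGT,
here is a self-contained userspace C sketch; the flag parameters are
stand-ins for the real capability checks, not kernel symbols:

#include <stdbool.h>
#include <stdio.h>

/* Sketch: when can the coarse PMU traps be replaced by FGT? */
static bool pmu_use_fgt(bool partitioned, bool has_fgt,
			bool has_hpmn0, unsigned int hpmn)
{
	if (!partitioned || !has_fgt)
		return false;
	/* An HPMN of 0 is only architecturally valid with FEAT_HPMN0. */
	return hpmn != 0 || has_hpmn0;
}

int main(void)
{
	printf("%d\n", pmu_use_fgt(true, true, false, 0)); /* 0: needs HPMN0 */
	printf("%d\n", pmu_use_fgt(true, true, false, 4)); /* 1 */
	return 0;
}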
Signed-off-by: Colton Lewis --- arch/arm64/include/asm/kvm_pmu.h | 33 ++++++++++++++++++++++ arch/arm64/kvm/config.c | 34 ++++++++++++++++++++-- arch/arm64/kvm/pmu-direct.c | 48 ++++++++++++++++++++++++++++++++ 3 files changed, 112 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index 8887f39c25e60..7297a697a4a62 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -96,6 +96,23 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu); void kvm_pmu_host_counters_enable(void); void kvm_pmu_host_counters_disable(void); =20 +#if !defined(__KVM_NVHE_HYPERVISOR__) +bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu); +bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu); +#else +static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) +{ + return false; +} + +static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu) +{ + return false; +} +#endif +u64 kvm_pmu_fgt_bits(void); +u64 kvm_pmu_fgt2_bits(void); + /* * Updates the vcpu's view of the pmu events for this cpu. * Must be called before every vcpu run after disabling interrupts, to ens= ure @@ -135,6 +152,22 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm= _vcpu *vcpu, { return 0; } +static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu) +{ + return false; +} +static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu) +{ + return false; +} +static inline u64 kvm_pmu_fgt_bits(void) +{ + return 0; +} +static inline u64 kvm_pmu_fgt2_bits(void) +{ + return 0; +} static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) {} static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c index 24bb3f36e9d59..064dc6aa06f76 100644 --- a/arch/arm64/kvm/config.c +++ b/arch/arm64/kvm/config.c @@ -6,6 +6,7 @@ =20 #include #include +#include #include #include =20 @@ -1489,12 +1490,39 @@ static void __compute_hfgwtr(struct kvm_vcpu *vcpu) *vcpu_fgt(vcpu, HFGWTR_EL2) |=3D HFGWTR_EL2_TCR_EL1; } =20 +static void __compute_hdfgrtr(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGRTR_EL2); + + if (kvm_vcpu_pmu_use_fgt(vcpu)) + *vcpu_fgt(vcpu, HDFGRTR_EL2) |=3D kvm_pmu_fgt_bits(); +} + static void __compute_hdfgwtr(struct kvm_vcpu *vcpu) { __compute_fgt(vcpu, HDFGWTR_EL2); =20 if (is_hyp_ctxt(vcpu)) *vcpu_fgt(vcpu, HDFGWTR_EL2) |=3D HDFGWTR_EL2_MDSCR_EL1; + + if (kvm_vcpu_pmu_use_fgt(vcpu)) + *vcpu_fgt(vcpu, HDFGWTR_EL2) |=3D kvm_pmu_fgt_bits(); +} + +static void __compute_hdfgrtr2(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGRTR2_EL2); + + if (kvm_vcpu_pmu_use_fgt(vcpu)) + *vcpu_fgt(vcpu, HDFGRTR2_EL2) |=3D kvm_pmu_fgt2_bits(); +} + +static void __compute_hdfgwtr2(struct kvm_vcpu *vcpu) +{ + __compute_fgt(vcpu, HDFGWTR2_EL2); + + if (kvm_vcpu_pmu_use_fgt(vcpu)) + *vcpu_fgt(vcpu, HDFGWTR2_EL2) |=3D kvm_pmu_fgt2_bits(); } =20 void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) @@ -1505,7 +1533,7 @@ void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) __compute_fgt(vcpu, HFGRTR_EL2); __compute_hfgwtr(vcpu); __compute_fgt(vcpu, HFGITR_EL2); - __compute_fgt(vcpu, HDFGRTR_EL2); + __compute_hdfgrtr(vcpu); __compute_hdfgwtr(vcpu); __compute_fgt(vcpu, HAFGRTR_EL2); =20 @@ -1515,6 +1543,6 @@ void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) __compute_fgt(vcpu, HFGRTR2_EL2); __compute_fgt(vcpu, HFGWTR2_EL2); __compute_fgt(vcpu, HFGITR2_EL2); - __compute_fgt(vcpu, HDFGRTR2_EL2); - __compute_fgt(vcpu, HDFGWTR2_EL2); + 
__compute_hdfgrtr2(vcpu);
+	__compute_hdfgwtr2(vcpu);
 }
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index d5de7fdd059f4..4dd160c878862 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -43,6 +43,54 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 		pmu->hpmn_max <=3D *host_data_ptr(nr_event_counters);
 }
=20
+/**
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partition=
ed PMU
+ * @vcpu: Pointer to kvm_vcpu struct
+ *
+ * Determine if given VCPU has a partitioned PMU by extracting that
+ * field and passing it to :c:func:`kvm_pmu_is_partitioned`.
+ *
+ * Return: True if the VCPU PMU is partitioned, false otherwise
+ */
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) &&
+		false;
+}
+
+/**
+ * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if we can use FGT for direct access to registers. We can
+ * if capabilities permit the number of guest counters requested.
+ *
+ * Return: True if we can use FGT, false otherwise
+ */
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn =3D vcpu->kvm->arch.nr_pmu_counters;
+
+	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+		cpus_have_final_cap(ARM64_HAS_FGT) &&
+		(hpmn !=3D 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
+}
+
+u64 kvm_pmu_fgt_bits(void)
+{
+	return HDFGRTR_EL2_PMOVS
+		| HDFGRTR_EL2_PMCCFILTR_EL0
+		| HDFGRTR_EL2_PMEVTYPERn_EL0
+		| HDFGRTR_EL2_PMCEIDn_EL0
+		| HDFGRTR_EL2_PMMIR_EL1;
+}
+
+u64 kvm_pmu_fgt2_bits(void)
+{
+	return HDFGRTR2_EL2_nPMICFILTR_EL0
+		| HDFGRTR2_EL2_nPMICNTR_EL0;
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
--=20
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:08 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-12-coltonlewis@google.com>
Subject: [PATCH v5 11/24] KVM: arm64: Writethrough trapped PMEVTYPER register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas ,
 Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly ,
 Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan ,
 Ganapatrao Kulkarni , linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means guest writes will not take effect when
expected.

For the PMEVTYPER register, take care to enforce KVM's PMU event
filter.
Do that by setting the bits to exclude EL1 and EL0 when an event is
not present in the filter, and by always clearing the bit to include
EL2. Note the virtual register is always assigned the value specified
by the guest to hide the setting of those bits.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/sys_regs.c | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c636840b1f6f9..0c9596325519b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1166,6 +1166,36 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	return true;
 }
=20
+static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_p=
arams *p,
+				   u64 reg, u64 idx)
+{
+	u64 eventsel;
+	u64 val =3D p->regval;
+	u64 evtyper_set =3D ARMV8_PMU_EXCLUDE_EL0 |
+		ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr =3D ARMV8_PMU_INCLUDE_EL2;
+
+	__vcpu_assign_sys_reg(vcpu, reg, val);
+
+	if (idx =3D=3D ARMV8_PMU_CYCLE_IDX)
+		eventsel =3D ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+	else
+		eventsel =3D val & kvm_pmu_event_mask(vcpu->kvm);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+		val |=3D evtyper_set;
+
+	val &=3D ~evtyper_clr;
+
+	if (idx =3D=3D ARMV8_PMU_CYCLE_IDX)
+		write_pmccfiltr(val);
+	else
+		write_pmevtypern(idx, val);
+
+	return true;
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_param=
s *p,
 			       const struct sys_reg_desc *r)
 {
@@ -1192,7 +1222,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu,=
 struct sys_reg_params *p,
 	if (!pmu_counter_idx_valid(vcpu, idx))
 		return false;
=20
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmevtyper(vcpu, p, reg, idx);
+	} else if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
--=20
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:09 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-13-coltonlewis@google.com>
Subject: [PATCH v5 12/24] KVM: arm64: Use physical PMSELR for PMXEVTYPER if
 partitioned
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas ,
 Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly ,
 Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan ,
 Ganapatrao Kulkarni , linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Because PMXEVTYPER is trapped and PMSELR is not, it is not
appropriate to use the virtual PMSELR register when it could be
outdated and lead to an invalid write.
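To see why the stale virtual copy is a hazard, consider this minimal
userspace model; hw_pmselr and vpmselr are illustrative stand-ins for
the physical register and KVM's virtual copy, not kernel symbols:

#include <stdint.h>
#include <stdio.h>

static uint64_t hw_pmselr; /* physical PMSELR_EL0: untrapped guest writes land here */
static uint64_t vpmselr;   /* virtual copy: only updated when a trap is taken */

static uint64_t pmxevtyper_index(int partitioned)
{
	/* The trapped PMXEVTYPER access must use the selector the
	 * guest most recently wrote, which only the hardware has. */
	return (partitioned ? hw_pmselr : vpmselr) & 0x1f;
}

int main(void)
{
	hw_pmselr = 3; /* guest selected counter 3 without trapping */
	printf("%llu\n", (unsigned long long)pmxevtyper_index(1)); /* 3 */
	printf("%llu\n", (unsigned long long)pmxevtyper_index(0)); /* 0: stale */
	return 0;
}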
Use the physical register when partitioned.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 7 ++++++-
 arch/arm64/kvm/sys_regs.c          | 9 +++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/ar=
m_pmuv3.h
index 27c4d6d47da31..60600f04b5902 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -70,11 +70,16 @@ static inline u64 read_pmcr(void)
 	return read_sysreg(pmcr_el0);
 }
=20
-static inline void write_pmselr(u32 val)
+static inline void write_pmselr(u64 val)
 {
 	write_sysreg(val, pmselr_el0);
 }
=20
+static inline u64 read_pmselr(void)
+{
+	return read_sysreg(pmselr_el0);
+}
+
 static inline void write_pmccntr(u64 val)
 {
 	write_sysreg(val, pmccntr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0c9596325519b..2e6d907fa8af2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1199,14 +1199,19 @@ static bool writethrough_pmevtyper(struct kvm_vcpu =
*vcpu, struct sys_reg_params
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_param=
s *p,
 			       const struct sys_reg_desc *r)
 {
-	u64 idx, reg;
+	u64 idx, reg, pmselr;
=20
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
=20
 	if (r->CRn =3D=3D 9 && r->CRm =3D=3D 13 && r->Op2 =3D=3D 1) {
 		/* PMXEVTYPER_EL0 */
-		idx =3D SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			pmselr =3D read_pmselr();
+		else
+			pmselr =3D __vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+		idx =3D SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
 		reg =3D PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn =3D=3D 14 && (r->CRm & 12) =3D=3D 12) {
 		idx =3D ((r->CRm & 3) << 3) | (r->Op2 & 7);
--=20
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:10 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-14-coltonlewis@google.com>
Subject: [PATCH v5 13/24] KVM: arm64: Writethrough trapped PMOVS register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas ,
 Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly ,
 Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan ,
 Ganapatrao Kulkarni , linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Because PMOVS remains trapped, it needs to be written through when
partitioned so that writes affect the PMU hardware when expected.
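A minimal userspace sketch of the write-through pattern for a
{SET,CLR} register pair; the names here are illustrative only, not
kernel symbols:

#include <stdint.h>
#include <stdio.h>

static uint64_t hw_pmovs; /* models the physical overflow flags */
static uint64_t vpmovs;   /* models the vcpu's PMOVSSET_EL0 state */

static void pmovs_write(uint64_t val, uint64_t mask, int set)
{
	val &= mask; /* limit to guest-accessible counters */
	if (set) {        /* SET view: 1 bits set overflow flags */
		vpmovs |= val;
		hw_pmovs |= val;
	} else {          /* CLR view: 1 bits clear overflow flags */
		vpmovs &= ~val;
		hw_pmovs &= ~val;
	}
}

int main(void)
{
	pmovs_write(0x5, 0xf, 1);
	pmovs_write(0x1, 0xf, 0);
	/* both views must agree: prints hw=4 virt=4 */
	printf("hw=%llx virt=%llx\n",
	       (unsigned long long)hw_pmovs, (unsigned long long)vpmovs);
	return 0;
}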
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 10 ++++++++++
 arch/arm64/kvm/sys_regs.c          | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/ar=
m_pmuv3.h
index 60600f04b5902..3e25c0313263c 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -140,6 +140,16 @@ static inline u64 read_pmicfiltr(void)
 	return read_sysreg_s(SYS_PMICFILTR_EL0);
 }
=20
+static inline void write_pmovsset(u64 val)
+{
+	write_sysreg(val, pmovsset_el0);
+}
+
+static inline u64 read_pmovsset(void)
+{
+	return read_sysreg(pmovsset_el0);
+}
+
 static inline void write_pmovsclr(u64 val)
 {
 	write_sysreg(val, pmovsclr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e6d907fa8af2..bee892db9ca8b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1307,6 +1307,19 @@ static bool access_pminten(struct kvm_vcpu *vcpu, st=
ruct sys_reg_params *p,
 	return true;
 }
=20
+static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_param=
s *p, bool set)
+{
+	u64 mask =3D kvm_pmu_accessible_counter_mask(vcpu);
+
+	if (set) {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=3D, (p->regval & mask));
+		write_pmovsset(p->regval & mask);
+	} else {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=3D, ~(p->regval & mask));
+		write_pmovsclr(p->regval & mask);
+	}
+}
+
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
@@ -1315,7 +1328,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struc=
t sys_reg_params *p,
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
=20
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmovs(vcpu, p, r->CRm & 0x2);
+	} else if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
 			__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=3D, (p->regval & mask));
--=20
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:11 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-15-coltonlewis@google.com>
Subject: [PATCH v5 14/24] KVM: arm64: Write fast path PMU register handlers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas ,
 Will Deacon , Marc Zyngier , Oliver Upton , Mingwei Zhang , Joey Gouly ,
 Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan ,
 Ganapatrao Kulkarni , linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

We may want a partitioned PMU but not have FEAT_FGT to untrap the
specific registers that would normally be untrapped.
Add a handler for those registers in the fast path so we can still
get a performance boost from partitioning. The idea is to handle
traps for all the PMU registers quickly by writing through to the
hardware where possible instead of hooking into the emulated vPMU as
the standard handlers in sys_regs.c do.

Since context switching will not happen without FGT, make sure to
write both physical and virtual registers so they stay in sync. To
assist with that, fill in the gaps in arm_pmuv3.h with helper
functions to read and write PMU registers, and lift some of the
access checking functions from sys_regs.c to pmu.c.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h      |  37 ++++-
 arch/arm64/include/asm/kvm_pmu.h        |  25 +++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 201 ++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c                    |  40 +++++
 arch/arm64/kvm/sys_regs.c               |  41 +----
 5 files changed, 303 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/ar=
m_pmuv3.h
index 3e25c0313263c..41ec6730ebc62 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -39,6 +39,16 @@ static inline unsigned long read_pmevtypern(int n)
 	return 0;
 }
=20
+static inline void write_pmxevcntr(u64 val)
+{
+	write_sysreg(val, pmxevcntr_el0);
+}
+
+static inline u64 read_pmxevcntr(void)
+{
+	return read_sysreg(pmxevcntr_el0);
+}
+
 static inline unsigned long read_pmmir(void)
 {
 	return read_cpuid(PMMIR_EL1);
@@ -105,21 +115,41 @@ static inline void write_pmcntenset(u64 val)
 	write_sysreg(val, pmcntenset_el0);
 }
=20
+static inline u64 read_pmcntenset(void)
+{
+	return read_sysreg(pmcntenset_el0);
+}
+
 static inline void write_pmcntenclr(u64 val)
 {
 	write_sysreg(val, pmcntenclr_el0);
 }
=20
+static inline u64 read_pmcntenclr(void)
+{
+	return read_sysreg(pmcntenclr_el0);
+}
+
 static inline void write_pmintenset(u64 val)
 {
 	write_sysreg(val, pmintenset_el1);
 }
=20
+static inline u64 read_pmintenset(void)
+{
+	return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
 	write_sysreg(val, pmintenclr_el1);
 }
=20
+static inline u64 read_pmintenclr(void)
+{
+	return read_sysreg(pmintenclr_el1);
+}
+
 static inline void write_pmccfiltr(u64 val)
 {
 	write_sysreg(val, pmccfiltr_el0);
@@ -160,11 +190,16 @@ static inline u64 read_pmovsclr(void)
 	return read_sysreg(pmovsclr_el0);
 }
=20
-static inline void write_pmuserenr(u32 val)
+static inline void write_pmuserenr(u64 val)
 {
 	write_sysreg(val, pmuserenr_el0);
 }
=20
+static inline u64 read_pmuserenr(void)
+{
+	return read_sysreg(pmuserenr_el0);
+}
+
 static inline void write_pmuacr(u64 val)
 {
 	write_sysreg_s(val, SYS_PMUACR_EL1);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_=
pmu.h
index 7297a697a4a62..60b8a48cad456 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -83,6 +83,11 @@ struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
+bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags);
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
+bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu);
+bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu);
+bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -226,6 +231,26 @@ static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
 }
+static inline bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 fl=
ags)
+{
+	return false;
+}
+static inline bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *=
vcpu)
+{
+	return false;
+}
+static inline bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *=
vcpu)
+{
+	return false;
+}
+static inline bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i=
nclude/hyp/switch.h
index 6e8050f260f34..40bd00df6c58f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -24,12 +24,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
=20
+#include <../../sys_regs.h>
 #include "arm_psci.h"
=20
 struct kvm_exception_table_entry {
@@ -768,6 +770,202 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
 	return true;
 }
=20
+/**
+ * handle_pmu_reg() - Handle fast access to most PMU regs
+ * @vcpu: Pointer to kvm_vcpu struct
+ * @p: System register parameters (read/write, Op0, Op1, CRm, CRn, Op2)
+ * @reg: VCPU register identifier
+ * @rt: Target general register
+ * @val: Value to write
+ * @readfn: Sysreg read function
+ * @writefn: Sysreg write function
+ *
+ * Handle fast access to most PMU regs. Write through to the physical
+ * register. This function is a wrapper for the simplest case, but
+ * sadly there aren't many of those.
+ *
+ * Always return true. The boolean makes usage more consistent with
+ * similar functions.
+ *
+ * Return: True
+ */
+static bool handle_pmu_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   enum vcpu_sysreg reg, u8 rt, u64 val,
+			   u64 (*readfn)(void), void (*writefn)(u64))
+{
+	if (p->is_write) {
+		__vcpu_assign_sys_reg(vcpu, reg, val);
+		writefn(val);
+	} else {
+		vcpu_set_reg(vcpu, rt, readfn());
+	}
+
+	return true;
+}
+
+/**
+ * kvm_hyp_handle_pmu_regs() - Fast handler for PMU registers
+ * @vcpu: Pointer to vcpu struct
+ *
+ * This handler immediately writes through certain PMU registers when
+ * we have a partitioned PMU (that is, MDCR_EL2.HPMN is set to reserve
+ * a range of counters for the guest) but the machine does not have
+ * FEAT_FGT to selectively untrap the registers we want.
+ * + * Return: True if the exception was successfully handled, false otherwise + */ +static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu) +{ + struct sys_reg_params p; + u64 esr; + u32 sysreg; + u8 rt; + u64 val; + u64 mask; + u8 idx; + bool ret; + + if (!kvm_vcpu_pmu_is_partitioned(vcpu) + || pmu_access_el0_disabled(vcpu)) + return false; + + esr =3D kvm_vcpu_get_esr(vcpu); + p =3D esr_sys64_to_params(esr); + sysreg =3D esr_sys64_to_sysreg(esr); + rt =3D kvm_vcpu_sys_get_rt(vcpu); + val =3D vcpu_get_reg(vcpu, rt); + + switch (sysreg) { + case SYS_PMCR_EL0: + mask =3D ARMV8_PMU_PMCR_MASK; + + if (p.is_write) { + write_pmcr(val); + mask &=3D ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C); + __vcpu_assign_sys_reg(vcpu, PMCR_EL0, val & mask); + } else { + val =3D u64_replace_bits( + read_pmcr(), + vcpu->kvm->arch.nr_pmu_counters, + ARMV8_PMU_PMCR_N); + vcpu_set_reg(vcpu, rt, val); + } + + ret =3D true; + break; + case SYS_PMUSERENR_EL0: + mask =3D ARMV8_PMU_USERENR_MASK; + ret =3D handle_pmu_reg(vcpu, &p, PMUSERENR_EL0, rt, val & mask, + &read_pmuserenr, &write_pmuserenr); + break; + case SYS_PMSELR_EL0: + mask =3D PMSELR_EL0_SEL_MASK; + + if (pmu_access_event_counter_el0_disabled(vcpu)) + return false; + + ret =3D handle_pmu_reg(vcpu, &p, PMSELR_EL0, rt, val & mask, + &read_pmselr, &write_pmselr); + break; + case SYS_PMINTENCLR_EL1: + mask =3D kvm_pmu_accessible_counter_mask(vcpu); + + if (p.is_write) { + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=3D, ~(val & mask)); + write_pmintenclr(val); + } else { + val =3D read_pmintenclr(); + vcpu_set_reg(vcpu, rt, val & mask); + } + ret =3D true; + + break; + case SYS_PMINTENSET_EL1: + mask =3D kvm_pmu_accessible_counter_mask(vcpu); + + if (p.is_write) { + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=3D, val & mask); + write_pmintenset(val); + } else { + val =3D read_pmintenset(); + vcpu_set_reg(vcpu, rt, val & mask); + } + + ret =3D true; + break; + case SYS_PMCNTENCLR_EL0: + mask =3D kvm_pmu_accessible_counter_mask(vcpu); + + if (p.is_write) { + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=3D, ~(val & mask)); + write_pmcntenclr(val); + } else { + val =3D read_pmcntenclr(); + vcpu_set_reg(vcpu, rt, val & mask); + } + + ret =3D true; + break; + case SYS_PMCNTENSET_EL0: + mask =3D kvm_pmu_accessible_counter_mask(vcpu); + + if (p.is_write) { + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=3D, val & mask); + write_pmcntenset(val); + } else { + val =3D read_pmcntenset(); + vcpu_set_reg(vcpu, rt, val & mask); + } + + ret =3D true; + break; + case SYS_PMCCNTR_EL0: + if (pmu_access_cycle_counter_el0_disabled(vcpu)) + return false; + + ret =3D handle_pmu_reg(vcpu, &p, PMCCNTR_EL0, rt, val, + &read_pmccntr, &write_pmccntr); + break; + case SYS_PMXEVCNTR_EL0: + idx =3D FIELD_GET(PMSELR_EL0_SEL, read_pmselr()); + + if (pmu_access_event_counter_el0_disabled(vcpu)) + return false; + + if (!pmu_counter_idx_valid(vcpu, idx)) + return false; + + ret =3D handle_pmu_reg(vcpu, &p, PMEVCNTR0_EL0 + idx, rt, val, + &read_pmxevcntr, &write_pmxevcntr); + break; + case SYS_PMEVCNTRn_EL0(0) ... 
SYS_PMEVCNTRn_EL0(30): + idx =3D ((p.CRm & 3) << 3) | (p.Op2 & 7); + + if (pmu_access_event_counter_el0_disabled(vcpu)) + return false; + + if (!pmu_counter_idx_valid(vcpu, idx)) + return false; + + if (p.is_write) { + write_pmevcntrn(idx, val); + __vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + idx, val); + } else { + vcpu_set_reg(vcpu, rt, read_pmevcntrn(idx)); + } + + ret =3D true; + break; + default: + ret =3D false; + } + + if (ret) + __kvm_skip_instr(vcpu); + + return ret; +} + static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_= code) { if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && @@ -785,6 +983,9 @@ static inline bool kvm_hyp_handle_sysreg(struct kvm_vcp= u *vcpu, u64 *exit_code) if (kvm_handle_cntxct(vcpu)) return true; =20 + if (kvm_hyp_handle_pmu_regs(vcpu)) + return true; + return false; } =20 diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 79b7ea037153a..1fd012f8ff4a9 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -884,3 +884,43 @@ u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) =20 return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N); } + +bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) +{ + u64 reg =3D __vcpu_sys_reg(vcpu, PMUSERENR_EL0); + bool enabled =3D (reg & flags) || vcpu_mode_priv(vcpu); + + if (!enabled) + kvm_inject_undefined(vcpu); + + return !enabled; +} + +bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu) +{ + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN); +} + +bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx) +{ + u64 pmcr, val; + + pmcr =3D kvm_vcpu_read_pmcr(vcpu); + val =3D FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); + if (idx >=3D val && idx !=3D ARMV8_PMU_CYCLE_IDX) { + kvm_inject_undefined(vcpu); + return false; + } + + return true; +} + +bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu) +{ + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_CR | ARMV8_PMU_U= SERENR_EN); +} + +bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu) +{ + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_U= SERENR_EN); +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index bee892db9ca8b..70104087b6c7b 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include =20 @@ -970,37 +971,11 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const st= ruct sys_reg_desc *r) return __vcpu_sys_reg(vcpu, r->reg); } =20 -static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) -{ - u64 reg =3D __vcpu_sys_reg(vcpu, PMUSERENR_EL0); - bool enabled =3D (reg & flags) || vcpu_mode_priv(vcpu); - - if (!enabled) - kvm_inject_undefined(vcpu); - - return !enabled; -} - -static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu) -{ - return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN); -} - static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu) { return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_SW | ARMV8_PMU_U= SERENR_EN); } =20 -static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu) -{ - return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_CR | ARMV8_PMU_U= SERENR_EN); -} - -static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu) -{ - return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_U= SERENR_EN); -} - static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { @@ -1067,20 +1042,6 @@ static bool 
access_pmceid(struct kvm_vcpu *vcpu, str= uct sys_reg_params *p, return true; } =20 -static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx) -{ - u64 pmcr, val; - - pmcr =3D kvm_vcpu_read_pmcr(vcpu); - val =3D FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); - if (idx >=3D val && idx !=3D ARMV8_PMU_CYCLE_IDX) { - kvm_inject_undefined(vcpu); - return false; - } - - return true; -} - static int get_pmu_evcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc= *r, u64 *val) { --=20 2.52.0.239.gd5f0c6e74e-goog From nobody Thu Dec 18 08:38:45 2025 Received: from mail-oo1-f74.google.com (mail-oo1-f74.google.com [209.85.161.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C348930C37B for ; Tue, 9 Dec 2025 20:52:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765313572; cv=none; b=ibZYvGHWBsJ8wwDGOZcUIVIB61damvXt0UCCbWYT/n/X9n/PBNNYqQP7rhOaDJyzaJUMmqdmOAobLvF8D+iTSY4K817FuVbR09/NAcsciyEGZ455FEC1FEM9IgrKyPbalb82v9w/aSZJM+gisKTYcP32sauuUcWMOuDUFxDazpw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765313572; c=relaxed/simple; bh=vH4vToO1Kn0jUWaD9r2NByPEBh2j2Tq7llj3z+1eFKA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=nhD9zt2+cOQ18BIjfPS1995adoIH7XC6X3FCxeea168RwH1JRZDC9Ot+hAL4KdyaPT2UDl0HQsXRQ1iLajq0fORdHMWjYXi5EKvI7bsbat30ThhiwyVMbwbpnYy5B7QWA019kkINOwKyyjmal4FbtoW08L7qAGzWqvsxaBS5DRM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=UfLM6Osr; arc=none smtp.client-ip=209.85.161.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="UfLM6Osr" Received: by mail-oo1-f74.google.com with SMTP id 006d021491bc7-65b26eca9c7so845998eaf.0 for ; Tue, 09 Dec 2025 12:52:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1765313561; x=1765918361; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=I7rslOWyfBKryoyVeAdT+vyRO2ASxcyyBGT1wwSBsKE=; b=UfLM6Osr3WuEvc2/MDRNGwnEq1XzwvHFX5sTN01PC4xr+LxWC17AzYjhdMY66YviCo ei+sCUpzO9xAEvBQKqbVMfQmGUjT6SrNuVEXogoVZPS6whGc+8teRd2fxq/1Jwxewd+v NyB7AHV3b6yCkONtYQ5xR8L8qkaPI0B6Pa5GTZxdvswOm/O0lxi5zR3V5ANFrQk5uNN/ zzwKsRR8M7e00wP4JuIaUb9g1YA3pBA5cyjjk89aNU6FtFUzjbo734EPAW4p9XXZbuKl 9cCxVSfqZ+CyFo7YtgUU8He+blC1+rF9Flw+W5nH2lPR3k8LOzPz7JdwrcrSNFj6nUvo AxUg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1765313561; x=1765918361; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=I7rslOWyfBKryoyVeAdT+vyRO2ASxcyyBGT1wwSBsKE=; b=HW2sr2TGZuIDIupy9NGEb1SUX2+wtgMadXLoy873v/Sy9ziavs+cOERW/xoMMA742L YrddVqLrvxhG+uJA+IWNHzT68bFNFy5dhZwwaY+de4N98/DA8rUCHEKiO8zdauDX9m29 
Date: Tue, 9 Dec 2025 20:51:12 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-16-coltonlewis@google.com>
Subject: [PATCH v5 15/24] KVM: arm64: Setup MDCR_EL2 to handle a partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org

Set up MDCR_EL2 to handle a partitioned PMU. That means calculating an
appropriate value for HPMN instead of the default maximum setting the
host allows (which implies no partition), so hardware enforces that a
guest will only see the counters in the guest partition.

Setting HPMN to a non-default value means the global enable bit for
the host counters is now MDCR_EL2.HPME instead of the usual
PMCR_EL0.E. Enable the HPME bit to allow the host to count guest
events. Since HPME only has an effect when HPMN is set, which we only
do for the guest, it is correct to enable it unconditionally here.

Unset the TPM and TPMCR bits, which trap all PMU accesses, if FGT
(fine-grained trapping) is in use. If available, also set the
filtering bits HPMD and HCCD to be extra sure nothing in the guest
counts at EL2.
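To make the split concrete, here is a minimal standalone sketch
(illustrative only, not part of the patch; the counter counts are
made up): counters below HPMN belong to the guest, the rest stay with
the host and are gated by HPME.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int nr_counters = 10;	/* PMCR_EL0.N on this CPU (assumed) */
	unsigned int hpmn = 6;		/* guest keeps counters 0..5 */

	uint64_t guest_mask = (1ULL << hpmn) - 1;
	uint64_t host_mask = ((1ULL << nr_counters) - 1) & ~guest_mask;

	/* prints guest counters: 0x3f, host counters: 0x3c0 */
	printf("guest counters: %#llx, host counters: %#llx\n",
	       (unsigned long long)guest_mask,
	       (unsigned long long)host_mask);
	return 0;
}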
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h | 11 ++++++
 arch/arm64/kvm/debug.c           | 29 ++++++++++++--
 arch/arm64/kvm/pmu-direct.c      | 65 ++++++++++++++++++++++++++++++++
 3 files changed, 102 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 60b8a48cad456..8b634112eded2 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -101,6 +101,9 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -173,6 +176,14 @@ static inline u64 kvm_pmu_fgt2_bits(void)
 {
 	return 0;
 }
+static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 3ad6b7c6e4ba7..0ab89c91e19cb 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -36,20 +36,43 @@ static int cpu_has_spe(u64 dfr0)
  */
 static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
+	int hpmn = kvm_pmu_hpmn(vcpu);
+
 	preempt_disable();
 
 	/*
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
+
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, hpmn);
 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
-				MDCR_EL2_TDOSA);
+				MDCR_EL2_TDOSA |
+				MDCR_EL2_HPME);
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		/*
+		 * Filtering these should be redundant because we trap
+		 * all the TYPER and FILTR registers anyway and ensure
+		 * they filter EL2, but set the bits if they are here.
+		 */
+		if (is_pmuv3p1(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HPMD;
+		if (is_pmuv3p5(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HCCD;
+
+		/*
+		 * Take out the coarse grain traps if we are using
+		 * fine grain traps.
+		 */
+		if (kvm_vcpu_pmu_use_fgt(vcpu))
+			vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_TPM | MDCR_EL2_TPMCR);
+	}
 
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 4dd160c878862..7fb4fb5c22e2a 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -154,3 +154,68 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_guest_num_counters() - Number of counters to show to guest
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the number of counters to show to the guest via
+ * PMCR_EL0.N, making sure to respect the maximum the host allows,
+ * which is hpmn_max if partitioned and host_max otherwise.
+ *
+ * Return: Valid value for PMCR_EL0.N
+ */
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	u8 nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
+	int hpmn_max = armv8pmu_hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (vcpu->kvm->arch.arm_pmu)
+		hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (nr_cnt <= hpmn_max && nr_cnt <= host_max)
+			return nr_cnt;
+		if (hpmn_max <= host_max)
+			return hpmn_max;
+	}
+
+	if (nr_cnt <= host_max)
+		return nr_cnt;
+
+	return host_max;
+}
+
+/**
+ * kvm_pmu_hpmn() - Calculate HPMN field value
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the appropriate value to set for MDCR_EL2.HPMN. If
+ * partitioned, this is the number of counters set for the guest if
+ * supported, falling back to hpmn_max if needed. If we are not
+ * partitioned or can't set the implied HPMN value, fall back to the
+ * host value.
+ *
+ * Return: A valid HPMN value
+ */
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	u8 nr_guest_cnt = kvm_pmu_guest_num_counters(vcpu);
+	int nr_guest_cnt_max = armv8pmu_hpmn_max;
+	u8 nr_host_cnt_max = *host_data_ptr(nr_event_counters);
+
+	if (vcpu->kvm->arch.arm_pmu)
+		nr_guest_cnt_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (cpus_have_final_cap(ARM64_HAS_HPMN0))
+			return nr_guest_cnt;
+		else if (nr_guest_cnt > 0)
+			return nr_guest_cnt;
+		else if (nr_guest_cnt_max > 0)
+			return nr_guest_cnt_max;
+	}
+
+	return nr_host_cnt_max;
+}
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:13 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-17-coltonlewis@google.com>
Subject: [PATCH v5 16/24] KVM: arm64: Account for partitioning in PMCR_EL0 access
From: Colton Lewis
To: kvm@vger.kernel.org

Make sure reads and writes to PMCR_EL0 conform to the additional
constraints imposed when the PMU is partitioned.
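For reviewers, a standalone sketch of the clamping this implies for
the guest-visible PMCR_EL0.N (illustrative only, not part of the
patch; function name and values are made up):

#include <stdio.h>

static unsigned int guest_num_counters(unsigned int requested,
				       unsigned int hpmn_max,
				       unsigned int host_max)
{
	if (requested <= hpmn_max && requested <= host_max)
		return requested;	/* request fits in the partition */
	if (hpmn_max <= host_max)
		return hpmn_max;	/* clamp to the partition size */
	return host_max;		/* partition exceeds the hardware */
}

int main(void)
{
	/* 10 hardware counters, 6 reserved for the guest, VMM asked for 8 */
	printf("PMCR_EL0.N = %u\n", guest_num_counters(8, 6, 10)); /* -> 6 */
	return 0;
}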
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c      | 2 +-
 arch/arm64/kvm/sys_regs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 1fd012f8ff4a9..48b39f096fa12 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -877,7 +877,7 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+	u64 n = kvm_pmu_guest_num_counters(vcpu);
 
 	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
 		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 70104087b6c7b..f2ae761625a66 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1360,7 +1360,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	 */
 	if (!kvm_vm_has_ran_once(kvm) &&
 	    !vcpu_has_nv(vcpu) &&
-	    new_n <= kvm_arm_pmu_get_max_counters(kvm))
+	    new_n <= kvm_pmu_hpmn(vcpu))
 		kvm->arch.nr_pmu_counters = new_n;
 
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:14 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-18-coltonlewis@google.com>
Subject: [PATCH v5 17/24] KVM: arm64: Context swap Partitioned PMU guest registers
From: Colton Lewis
To: kvm@vger.kernel.org

Save and restore the newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned.

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If we know we are not using FGT (that is, we are trapping
everything), return immediately. Either the PMU is not partitioned,
or it is but all register writes are being written through the VCPU
fields to hardware, so all values are fresh.

Since we are taking over context switching, avoid the writes to
PMSELR_EL0 and PMUSERENR_EL0 that would normally occur in
__{,de}activate_traps_common().
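A note on the bitmask registers: they are updated through separate
set and clear registers, masked so host-reserved counter bits are
never disturbed. A minimal sketch of that idiom (illustrative only,
not from the patch; the variable stands in for the real sysreg):

#include <stdint.h>

static uint64_t hw_pmcntenset;	/* stands in for the hardware register */

static void load_guest_enables(uint64_t vcpu_pmcnten, uint64_t guest_mask)
{
	/* set the guest bits that are 1 ... */
	hw_pmcntenset |= vcpu_pmcnten & guest_mask;
	/* ... clear the guest bits that are 0, leave host bits alone */
	hw_pmcntenset &= ~(~vcpu_pmcnten & guest_mask);
}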
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h        |   4 +
 arch/arm64/kvm/arm.c                    |   2 +
 arch/arm64/kvm/hyp/include/hyp/switch.h |   4 +-
 arch/arm64/kvm/pmu-direct.c             | 112 ++++++++++++++++++++++++
 4 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 8b634112eded2..25a5eb8c623da 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -103,6 +103,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -184,6 +186,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 43e92f35f56ab..1750df5944f6d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -629,6 +629,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -671,6 +672,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 40bd00df6c58f..bde79ec1a1836 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -311,7 +311,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
 	 * EL1 instead of being trapped to EL2.
 	 */
-	if (system_supports_pmuv3()) {
+	if (system_supports_pmuv3() && !kvm_vcpu_pmu_is_partitioned(vcpu)) {
 		write_sysreg(0, pmselr_el0);
 
 		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);
@@ -340,7 +340,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
 
 	write_sysreg(0, hstr_el2);
-	if (system_supports_pmuv3()) {
+	if (system_supports_pmuv3() && !kvm_vcpu_pmu_is_partitioned(vcpu)) {
 		write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0);
 		vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU);
 	}
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 7fb4fb5c22e2a..71977d24f489a 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -9,6 +9,7 @@
 #include
 
 #include
+#include
 #include
 
 /**
@@ -219,3 +220,114 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return nr_host_cnt_max;
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't using FGT then we are trapping everything
+	 * anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		return;
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	/* Save only the stateful writable bits. */
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	mask = ARMV8_PMU_PMCR_MASK &
+		~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
+	write_pmcr(val & mask);
+
+	/*
+	 * When handling these:
+	 * 1. Apply only the bits for guest counters (indicated by mask)
+	 * 2. Use the different registers for set and clear
+	 */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Save all untrapped PMU registers from the PCPU back into the
+ * VCPU. Mask to only bits belonging to guest-reserved counters and
+ * leave host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu;
+	u64 mask;
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't using FGT then we are trapping everything
+	 * anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		return;
+
+	pmu = vcpu->kvm->arch.arm_pmu;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_pmccntr();
+	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
+
+	val = read_pmuserenr();
+	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+
+	val = read_pmselr();
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_pmcr();
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest-relevant bits. */
+	mask = kvm_pmu_guest_counter_mask(pmu);
+
+	val = read_pmcntenset();
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_pmintenset();
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+}
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:15 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-19-coltonlewis@google.com>
Subject: [PATCH v5 18/24] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis
To: kvm@vger.kernel.org

The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must
be rechecked on every load, since it might have changed since the
last time the guest wrote a value. If the event is filtered, exclude
counting at all exception levels before writing the hardware.
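The trick is that an event silenced at every exception level simply
never counts. A minimal sketch of the bit manipulation (illustrative
only, not from the patch; the bit positions below are my reading of
the ARMv8 PMEVTYPER layout and should be checked against the
architecture manual):

#include <stdint.h>
#include <stdbool.h>

#define EVTYPER_P	(1u << 31)	/* exclude EL1 */
#define EVTYPER_U	(1u << 30)	/* exclude EL0 */
#define EVTYPER_NSH	(1u << 27)	/* count at EL2 when set */

static uint32_t apply_filter(uint32_t evtyper, bool event_allowed)
{
	if (!event_allowed)
		evtyper |= EVTYPER_P | EVTYPER_U;	/* count nowhere */
	return evtyper & ~EVTYPER_NSH;			/* and never at EL2 */
}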
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-direct.c | 44 +++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 71977d24f489a..8d0d6d1a0d851 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -221,6 +221,49 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return nr_host_cnt_max;
 }
 
+/**
+ * kvm_pmu_apply_event_filter()
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+		ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		val &= ~evtyper_clr;
+		write_pmevtypern(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(ARMV8_PMUV3_PERFCTR_CPU_CYCLES, vcpu->kvm->arch.pmu_filter))
+		val |= evtyper_set;
+
+	val &= ~evtyper_clr;
+	write_pmccfiltr(val);
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -244,6 +287,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 		return;
 
 	pmu = vcpu->kvm->arch.arm_pmu;
+	kvm_pmu_apply_event_filter(vcpu);
 
 	for (i = 0; i < pmu->hpmn_max; i++) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:16 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-20-coltonlewis@google.com>
Subject: [PATCH v5 19/24] KVM: arm64: Implement lazy PMU context swaps
From: Colton Lewis
To: kvm@vger.kernel.org

Since many guests will never touch the PMU, they need not pay the
cost of context swapping those registers.

Use an enum to implement a simple state machine for PMU register
access. A vCPU accesses PMU registers either virtually or
physically. Virtual access implies all PMU registers are trapped
coarsely by MDCR_EL2.TPM and therefore do not need to be context
swapped. Physical access implies some registers are untrapped through
FGT and do need to be context swapped.

All vCPUs use virtual access by default and transition to physical
access if the PMU is partitioned and the guest actually attempts a
PMU access.
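A minimal sketch of that state machine (illustrative only, not from
the patch; the struct and names are made up) to make the one-way,
take-at-most-once transition explicit:

enum pmu_access_state {
	PMU_ACCESS_UNSET,	/* vCPU not fully initialized yet */
	PMU_ACCESS_VIRTUAL,	/* everything trapped; nothing to swap */
	PMU_ACCESS_PHYSICAL,	/* FGT untraps regs; must context swap */
};

struct vcpu_pmu_state {
	enum pmu_access_state access;
	int partitioned;
};

static void on_guest_pmu_access(struct vcpu_pmu_state *s)
{
	/* one-way transition, taken at most once per vCPU */
	if (s->partitioned && s->access == PMU_ACCESS_VIRTUAL) {
		s->access = PMU_ACCESS_PHYSICAL;
		/*
		 * The real transition also rewrites MDCR_EL2 and
		 * loads the vCPU PMU state onto the hardware.
		 */
	}
}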
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h       |  1 +
 arch/arm64/include/asm/kvm_pmu.h        |  4 ++++
 arch/arm64/include/asm/kvm_types.h      |  7 ++++++-
 arch/arm64/kvm/debug.c                  |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  2 ++
 arch/arm64/kvm/pmu-direct.c             | 21 +++++++++++++++++++++
 arch/arm64/kvm/pmu.c                    |  7 +++++++
 arch/arm64/kvm/sys_regs.c               |  4 ++++
 8 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c7e52aaf469dc..f92027d8fdfd0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1373,6 +1373,7 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
 	return cpus_have_final_cap(ARM64_SPECTRE_V3A);
 }
 
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu);
 void kvm_init_host_debug_data(void);
 void kvm_debug_init_vhe(void);
 void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 25a5eb8c623da..43aa334dce517 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu {
 	int irq_num;
 	bool created;
 	bool irq_level;
+	enum vcpu_pmu_register_access access;
 };
 
 struct arm_pmu_entry {
@@ -106,6 +107,8 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
 void kvm_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
+void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -188,6 +191,7 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 }
 static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
index 9a126b9e2d7c9..9f67165359f5c 100644
--- a/arch/arm64/include/asm/kvm_types.h
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -4,5 +4,10 @@
 
 #define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
 
-#endif /* _ASM_ARM64_KVM_TYPES_H */
+enum vcpu_pmu_register_access {
+	VCPU_PMU_ACCESS_UNSET,
+	VCPU_PMU_ACCESS_VIRTUAL,
+	VCPU_PMU_ACCESS_PHYSICAL,
+};
 
+#endif /* _ASM_ARM64_KVM_TYPES_H */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 0ab89c91e19cb..c2cf6b308ec60 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -34,7 +34,7 @@ static int cpu_has_spe(u64 dfr0)
  * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
  * - Self-hosted Trace (MDCR_EL2_TTRF/MDCR_EL2_E2TB)
  */
-static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
 	int hpmn = kvm_pmu_hpmn(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index bde79ec1a1836..ea288a712bb5d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -963,6 +963,8 @@ static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
 	if (ret)
 		__kvm_skip_instr(vcpu);
 
+	kvm_pmu_set_physical_access(vcpu);
+
 	return ret;
 }
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 8d0d6d1a0d851..c5767e2ebc651 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -73,6 +73,7 @@ bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
 
 	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+		vcpu->arch.pmu.access == VCPU_PMU_ACCESS_PHYSICAL &&
 		cpus_have_final_cap(ARM64_HAS_FGT) &&
 		(hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
 }
@@ -92,6 +93,26 @@ u64 kvm_pmu_fgt2_bits(void)
 		| HDFGRTR2_EL2_nPMICNTR_EL0;
 }
 
+/**
+ * kvm_pmu_set_physical_access()
+ * @vcpu: Pointer to vcpu struct
+ *
+ * Reconfigure the guest for physical access of PMU hardware if
+ * allowed. This means reconfiguring mdcr_el2 and loading the vCPU
+ * state onto hardware.
+ */
+void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)
+	    && vcpu->arch.pmu.access == VCPU_PMU_ACCESS_VIRTUAL) {
+		vcpu->arch.pmu.access = VCPU_PMU_ACCESS_PHYSICAL;
+		kvm_arm_setup_mdcr_el2(vcpu);
+		kvm_pmu_load(vcpu);
+	}
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 48b39f096fa12..c9862e55a4049 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -471,6 +471,12 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static void kvm_pmu_register_init(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.pmu.access == VCPU_PMU_ACCESS_UNSET)
+		vcpu->arch.pmu.access = VCPU_PMU_ACCESS_VIRTUAL;
+}
+
 static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 {
 	if (irqchip_in_kernel(vcpu->kvm)) {
@@ -496,6 +502,7 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 	init_irq_work(&vcpu->arch.pmu.overflow_work,
 		      kvm_pmu_perf_overflow_notify_vcpu);
 
+	kvm_pmu_register_init(vcpu);
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f2ae761625a66..d73218706b834 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1197,6 +1197,8 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		p->regval = __vcpu_sys_reg(vcpu, reg);
 	}
 
+	kvm_pmu_set_physical_access(vcpu);
+
 	return true;
 }
 
@@ -1302,6 +1304,8 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 	}
 
+	kvm_pmu_set_physical_access(vcpu);
+
 	return true;
 }
 
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:17 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-21-coltonlewis@google.com>
Subject: [PATCH v5 20/24] perf: arm_pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis
To: kvm@vger.kernel.org
Because ARM hardware is not yet capable of direct interrupt injection
into guests, guest counters will still trigger interrupts that need
to be handled by the host PMU interrupt handler. Clear the overflow
flags in hardware to handle the interrupt as normal, but record which
guest overflow flags were set in the virtual overflow register, so
the interrupt can be injected into the guest later.
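The handler's job therefore reduces to splitting one overflow bitmask
two ways. A minimal sketch (illustrative only, not from the patch;
the struct and names are made up, and guest_mask is assumed to cover
the guest-reserved counter bits):

#include <stdint.h>

struct virt_pmu {
	uint64_t pmovsset;	/* virtual overflow reg the guest reads */
};

static void handle_overflows(struct virt_pmu *guest, uint64_t pmovsr,
			     uint64_t guest_mask)
{
	uint64_t guest_ovf = pmovsr & guest_mask;

	/* host counters: handed to perf right away (elided) */

	/* guest counters: latched for injection at next VM entry */
	guest->pmovsset |= guest_ovf;
}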
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h |  6 ++++++
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-direct.c      | 17 +++++++++++++++++
 drivers/perf/arm_pmuv3.c         |  9 +++++++++
 4 files changed, 34 insertions(+)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 3ea5741d213d8..485d2f08ac113 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }
 
+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -249,6 +254,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }
 
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI 0
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 43aa334dce517..e4cbab0fd09cf 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -101,6 +101,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(u64 govf);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
@@ -322,6 +323,7 @@ static inline u64 kvm_pmu_guest_counter_mask(void *pmu)
 
 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 #endif
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index c5767e2ebc651..76d8ed24c8646 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -396,3 +396,20 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	val = read_pmintenset();
 	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @govf: Bitmask of guest overflowed counters
+ *
+ * Record IRQs from overflows in guest-reserved counters in the VCPU
+ * register for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(u64 govf)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (!vcpu)
+		return;
+
+	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 2bed99ba992d7..3c1a69f88b284 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -783,6 +783,8 @@ static u64 armv8pmu_getreset_flags(void)
 
 	/* Write to clear flags */
 	value &= ARMV8_PMU_CNT_MASK_ALL;
+	/* Only reset interrupt-enabled counters. */
+	value &= read_pmintenset();
 	write_pmovsclr(value);
 
 	return value;
@@ -904,6 +906,7 @@ static void read_branch_records(struct pmu_hw_events *cpuc,
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u64 pmovsr;
+	u64 govf;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
@@ -961,6 +964,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		 */
 		perf_event_overflow(event, &data, regs);
 	}
+
+	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
+
+	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
+		kvm_pmu_handle_guest_irq(govf);
+
 	armv8pmu_start(cpu_pmu);
 
 	return IRQ_HANDLED;
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:18 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-22-coltonlewis@google.com>
Subject: [PATCH v5 21/24] KVM: arm64: Inject recorded guest interrupts
From: Colton Lewis
To: kvm@vger.kernel.org

When we re-enter the VM after handling a PMU interrupt, calculate
whether any of the guest counters overflowed and, if so, inject an
interrupt into the guest.
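The partitioned overflow check is a pure predicate over four values,
as this minimal sketch shows (illustrative only, not from the patch;
PMCR_E stands in for the global enable bit PMCR_EL0.E, which is
bit 0):

#include <stdbool.h>
#include <stdint.h>

#define PMCR_E (1ULL << 0)

static bool guest_overflow_pending(uint64_t pmcr, uint64_t guest_mask,
				   uint64_t pmovs, uint64_t pminten)
{
	/*
	 * An IRQ is due only if a guest counter overflowed while both
	 * its interrupt enable and the global enable were set.
	 */
	return (pmcr & PMCR_E) && (guest_mask & pmovs & pminten);
}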
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-direct.c      | 20 ++++++++++++++++++++
 arch/arm64/kvm/pmu-emul.c        |  4 ++--
 arch/arm64/kvm/pmu.c             |  6 +++++-
 4 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index e4cbab0fd09cf..47e6f2a14e381 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -92,6 +92,8 @@ bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu);
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu);
 
 #define kvm_vcpu_has_pmu(vcpu) \
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 76d8ed24c8646..2ee99d6d2b6c1 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -413,3 +413,23 @@ void kvm_pmu_handle_guest_irq(u64 govf)
 
 	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
 }
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	u64 pmint = read_pmintenset();
+	u64 pmcr = read_pmcr();
+
+	return (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+}
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index bcaa9f7a8ca28..6f41fc3e3f74b 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 		kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
 					  ARMV8_PMUV3_PERFCTR_CHAIN);
 
-	if (kvm_pmu_overflow_status(vcpu)) {
+	if (kvm_pmu_emul_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 
 		if (!in_nmi())
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index c9862e55a4049..e1332a158dfc8 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -407,7 +407,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	bool overflow;
 
-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
 	if (pmu->irq_level == overflow)
 		return;
 
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:19 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-23-coltonlewis@google.com>
Subject: [PATCH v5 22/24] KVM: arm64: Add KVM_CAP to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

Add KVM_CAP_ARM_PARTITION_PMU to enable the partitioned PMU for a
given VM. The capability is available where PMUv3 and VHE are
supported and the host driver was configured with
arm_pmuv3.reserved_guest_counters. The capability must be enabled
before any vCPUs are created so the guest can never start out using
one PMU configuration and have it switched out from under it. The
enabled capability is tracked by the new flag
KVM_ARCH_FLAG_PARTITION_PMU_ENABLED.
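For illustration, a minimal userspace sketch of enabling the
capability on a VM before any vCPU exists. The capability number (245)
is the value proposed by this series and could change before merge;
error handling is trimmed to perror() for brevity:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_ARM_PARTITION_PMU
#define KVM_CAP_ARM_PARTITION_PMU 245	/* value proposed in this series */
#endif

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_PARTITION_PMU,
		.args = { 1 },	/* args[0]: enable (boolean) */
	};

	/* Must be called before any vCPU is created, or KVM returns -EBUSY. */
	if (ioctl(vm, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP(ARM_PARTITION_PMU)");
	return 0;
}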
Signed-off-by: Colton Lewis
---
 Documentation/virt/kvm/api.rst    | 24 +++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/include/asm/kvm_pmu.h  |  9 ++++++++
 arch/arm64/kvm/arm.c              | 15 +++++++++++++
 arch/arm64/kvm/pmu-direct.c       | 35 ++++++++++++++++++++++++++++---
 include/uapi/linux/kvm.h          |  1 +
 6 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 57061fa29e6a0..ef1b22f20ee71 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8703,6 +8703,30 @@ This capability indicate to the userspace whether a PFNMAP memory region
 can be safely mapped as cacheable. This relies on the presence of force write
 back (FWB) feature support on the hardware.
 
+
+7.245 KVM_CAP_ARM_PARTITION_PMU
+-------------------------------
+
+:Architectures: arm64
+:Target: VM
+:Parameters: arg[0] is a boolean that enables or disables the capability
+:Returns: 0 on success; -EPERM if the host does not support the
+          partitioned PMU; -EBUSY if vCPUs have already been created
+
+This API controls the PMU implementation used for VMs. The capability
+is only available if the host PMUv3 driver was configured for
+partitioning via the module parameter
+``arm_pmuv3.reserved_guest_counters=[0-$NR_COUNTERS]``. When enabled,
+VMs are configured to have direct hardware access to the most
+frequently used registers for the counters reserved by the
+aforementioned module parameter, bypassing the KVM traps in the
+standard emulated PMU implementation and reducing the overhead of any
+guest software that uses PMU capabilities such as ``perf``.
+
+If the host driver was configured for partitioning but the partitioned
+PMU is disabled through this interface, the VM will use the legacy
+emulated PMU implementation.
+
 8. Other capabilities.
 =======================
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f92027d8fdfd0..8431fdebcac43 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -349,6 +349,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_GUEST_HAS_SVE			9
 	/* MIDR_EL1, REVIDR_EL1, and AIDR_EL1 are writable from userspace */
 #define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS		10
+	/* Partitioned PMU Enabled */
+#define KVM_ARCH_FLAG_PARTITION_PMU_ENABLED		11
 	unsigned long flags;
 
 	/* VM-wide vCPU feature set */
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 47e6f2a14e381..6146120208e39 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -111,6 +111,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 void kvm_pmu_set_physical_access(struct kvm_vcpu *vcpu);
+bool kvm_pmu_partition_ready(void);
+void kvm_pmu_partition_enable(struct kvm *kvm, bool enable);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -327,6 +329,13 @@ static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
 static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
+static inline bool kvm_pmu_partition_ready(void)
+{
+	return false;
+}
+
+static inline void kvm_pmu_partition_enable(struct kvm *kvm, bool enable) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1750df5944f6d..d09f272577277 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define CREATE_TRACE_POINTS
@@ -37,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -132,6 +134,16 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		}
 		mutex_unlock(&kvm->lock);
 		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		if (kvm->created_vcpus) {
+			r = -EBUSY;
+		} else if (!kvm_pmu_partition_ready()) {
+			r = -EPERM;
+		} else {
+			r = 0;
+			kvm_pmu_partition_enable(kvm, cap->args[0]);
+		}
+		break;
 	default:
 		break;
 	}
@@ -388,6 +400,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PMU_V3:
 		r = kvm_supports_guest_pmuv3();
 		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		r = kvm_pmu_partition_ready();
+		break;
 	case KVM_CAP_ARM_INJECT_SERROR_ESR:
 		r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
 		break;
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 2ee99d6d2b6c1..6cfba9caeea0e 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -45,8 +45,8 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 }
 
 /**
- * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU
- * @vcpu: Pointer to kvm_vcpu struct
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given vCPU has a partitioned PMU
+ * @vcpu: Pointer to struct kvm_vcpu
  *
  * Determine if given VCPU has a partitioned PMU by extracting that
  * field and passing it to :c:func:`kvm_pmu_is_partitioned`
@@ -56,7 +56,36 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 {
 	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) &&
-		false;
+		test_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &vcpu->kvm->arch.flags);
 }
+
+/**
+ * kvm_pmu_partition_ready() - Check whether the partition can be enabled or disabled
+ *
+ * Return: true if allowed, false otherwise.
+ */
+bool kvm_pmu_partition_ready(void)
+{
+	return kvm_pmu_partition_supported() &&
+		kvm_supports_guest_pmuv3() &&
+		armv8pmu_hpmn_max > -1;
+}
+
+/**
+ * kvm_pmu_partition_enable() - Enable or disable the partition flag
+ * @kvm: Pointer to struct kvm
+ * @enable: Whether to enable or disable
+ *
+ * When the partition is enabled, the guest is free to take over the
+ * hardware by accessing PMU registers. Otherwise, the host maintains
+ * control.
+ */
+void kvm_pmu_partition_enable(struct kvm *kvm, bool enable)
+{
+	if (enable)
+		set_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &kvm->arch.flags);
+	else
+		clear_bit(KVM_ARCH_FLAG_PARTITION_PMU_ENABLED, &kvm->arch.flags);
+}
 
 /**
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 52f6000ab0208..2bb2f234df0e6 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -963,6 +963,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
 #define KVM_CAP_GUEST_MEMFD_FLAGS 244
+#define KVM_CAP_ARM_PARTITION_PMU 245
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:20 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-24-coltonlewis@google.com>
Subject: [PATCH v5 23/24] KVM: selftests: Add find_bit to KVM library
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

Some selftests depend on find_bit and weren't compiling standalone
without it, so add it to the KVM selftests library using the same
wrapper method as existing files like rbtree.c.
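For context, a hypothetical fragment of the kind of selftest code that
creates this dependency (not taken from this series): for_each_set_bit()
from tools/include expands to the find_first_bit()/find_next_bit()
helpers defined in lib/find_bit.c, so a test using it fails to link
unless that file is built into the library.

#include <stdio.h>
#include "linux/bitmap.h"

/* Print each counter whose overflow bit is set in a PMOVSSET snapshot. */
static void report_overflowed_counters(unsigned long pmovsset)
{
	unsigned int idx;

	/* Expands to find_first_bit()/find_next_bit() under the hood. */
	for_each_set_bit(idx, &pmovsset, BITS_PER_LONG)
		printf("counter %u overflowed\n", idx);
}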
Signed-off-by: Colton Lewis
---
 tools/testing/selftests/kvm/Makefile.kvm   | 1 +
 tools/testing/selftests/kvm/lib/find_bit.c | 1 +
 2 files changed, 2 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/lib/find_bit.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 148d427ff24be..f44822b7d0e20 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -5,6 +5,7 @@ all:
 
 LIBKVM += lib/assert.c
 LIBKVM += lib/elf.c
+LIBKVM += lib/find_bit.c
 LIBKVM += lib/guest_modes.c
 LIBKVM += lib/io.c
 LIBKVM += lib/kvm_util.c
diff --git a/tools/testing/selftests/kvm/lib/find_bit.c b/tools/testing/selftests/kvm/lib/find_bit.c
new file mode 100644
index 0000000000000..67d9d9cbca85c
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/find_bit.c
@@ -0,0 +1 @@
+#include "../../../../lib/find_bit.c"
-- 
2.52.0.239.gd5f0c6e74e-goog

From nobody Thu Dec 18 08:38:45 2025
Date: Tue, 9 Dec 2025 20:51:21 +0000
In-Reply-To: <20251209205121.1871534-1-coltonlewis@google.com>
References: <20251209205121.1871534-1-coltonlewis@google.com>
Message-ID: <20251209205121.1871534-25-coltonlewis@google.com>
Subject: [PATCH v5 24/24] KVM: arm64: selftests: Add test case for partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 Ganapatrao Kulkarni, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

Rerun all vpmu_counter_access tests with a partitioned PMU. Create an
enum specifying whether the emulated or the partitioned PMU is under
test, and modify all test functions to take the implementation as an
argument and adjust their setup accordingly.
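For illustration, a minimal standalone sketch of the parameterization
pattern this patch applies; run_one_test() is a hypothetical stand-in
for the real test functions, which take the implementation as an
argument the same way:

#include <stdio.h>

enum pmu_impl { EMULATED, PARTITIONED };

static const char *pmu_impl_str[] = { "Emulated", "Partitioned" };

static void run_one_test(enum pmu_impl impl)
{
	/*
	 * The real tests create the VM with or without the
	 * KVM_CAP_ARM_PARTITION_PMU capability based on impl.
	 */
	printf("running with %s PMU\n", pmu_impl_str[impl]);
}

int main(void)
{
	run_one_test(EMULATED);
	/* Gated on kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU) in the patch. */
	run_one_test(PARTITIONED);
	return 0;
}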
Signed-off-by: Colton Lewis
---
 tools/include/uapi/linux/kvm.h                |  1 +
 .../selftests/kvm/arm64/vpmu_counter_access.c | 77 ++++++++++++++-----
 2 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 52f6000ab0208..2bb2f234df0e6 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -963,6 +963,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
 #define KVM_CAP_GUEST_MEMFD_FLAGS 244
+#define KVM_CAP_ARM_PARTITION_PMU 245
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index ae36325c022fb..e68072e3e1326 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -25,9 +25,20 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31
 
+enum pmu_impl {
+	EMULATED,
+	PARTITIONED
+};
+
+const char *pmu_impl_str[] = {
+	"Emulated",
+	"Partitioned"
+};
+
 struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
+	bool pmu_partitioned;
 };
 
 static struct vpmu_vm vpmu_vm;
@@ -399,7 +410,7 @@ static void guest_code(uint64_t expected_pmcr_n)
 }
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
+static void create_vpmu_vm(void *guest_code, enum pmu_impl impl)
 {
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
@@ -409,6 +420,11 @@ static void create_vpmu_vm(void *guest_code)
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
 		.addr = (uint64_t)&irq,
 	};
+	bool partition = (impl == PARTITIONED);
+	struct kvm_enable_cap partition_cap = {
+		.cap = KVM_CAP_ARM_PARTITION_PMU,
+		.args[0] = partition,
+	};
 
 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
@@ -420,6 +436,12 @@ static void create_vpmu_vm(void *guest_code)
 					guest_sync_handler);
 	}
 
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU)) {
+		vm_ioctl(vpmu_vm.vm, KVM_ENABLE_CAP, &partition_cap);
+		vpmu_vm.pmu_partitioned = partition;
+		pr_debug("Set PMU partitioning: %d\n", partition);
+	}
+
 	/* Create vCPU with PMUv3 */
 	kvm_get_default_vcpu_target(vpmu_vm.vm, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
@@ -461,13 +483,14 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }
 
-static void test_create_vpmu_vm_with_nr_counters(unsigned int nr_counters, bool expect_fail)
+static void test_create_vpmu_vm_with_nr_counters(
+	unsigned int nr_counters, enum pmu_impl impl, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	unsigned int prev;
 	int ret;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	vcpu = vpmu_vm.vcpu;
 
 	prev = get_pmcr_n(vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0)));
@@ -489,7 +512,7 @@ static void test_create_vpmu_vm_with_nr_counters(unsigned int nr_counters, bool
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	uint64_t sp;
 	struct kvm_vcpu *vcpu;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, false);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -531,14 +554,14 @@ static struct pmreg_sets validity_check_reg_sets[] = {
  * Create a VM, and check if KVM handles the userspace accesses of
  * the PMU register sets in @validity_check_reg_sets[] correctly.
  */
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 	uint64_t set_reg_id, clr_reg_id, reg_val;
 	uint64_t valid_counters_mask, max_counters_mask;
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, false);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	valid_counters_mask = get_counters_mask(pmcr_n);
@@ -588,11 +611,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
-	test_create_vpmu_vm_with_nr_counters(pmcr_n, true);
+	test_create_vpmu_vm_with_nr_counters(pmcr_n, impl, true);
 	destroy_vpmu_vm();
 }
 
 /*
  * Return the default number of implemented PMU event counters excluding
  * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
  */
-static uint64_t get_pmcr_n_limit(void)
+static uint64_t get_pmcr_n_limit(enum pmu_impl impl)
 {
 	uint64_t pmcr;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
 	destroy_vpmu_vm();
 	return get_pmcr_n(pmcr);
@@ -614,7 +637,7 @@ static bool kvm_supports_nr_counters_attr(void)
 {
 	bool supported;
 
-	create_vpmu_vm(NULL);
+	create_vpmu_vm(NULL, EMULATED);
 	supported = !__vcpu_has_device_attr(vpmu_vm.vcpu, KVM_ARM_VCPU_PMU_V3_CTRL,
 					    KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS);
 	destroy_vpmu_vm();
@@ -622,22 +645,34 @@ static bool kvm_supports_nr_counters_attr(void)
 	return supported;
 }
 
-int main(void)
+void test_pmu(enum pmu_impl impl)
 {
 	uint64_t i, pmcr_n;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
-	TEST_REQUIRE(kvm_supports_vgic_v3());
-	TEST_REQUIRE(kvm_supports_nr_counters_attr());
+	pr_info("Testing PMU: Implementation = %s\n", pmu_impl_str[impl]);
+
+	pmcr_n = get_pmcr_n_limit(impl);
+	pr_debug("PMCR_EL0.N: Limit = %lu\n", pmcr_n);
 
-	pmcr_n = get_pmcr_n_limit();
 	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
-		run_pmregs_validity_test(i);
+		run_access_test(i, impl);
+		run_pmregs_validity_test(i, impl);
 	}
 
 	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+		run_error_test(i, impl);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	TEST_REQUIRE(kvm_supports_vgic_v3());
+	TEST_REQUIRE(kvm_supports_nr_counters_attr());
+
+	test_pmu(EMULATED);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_pmu(PARTITIONED);
 
 	return 0;
 }
-- 
2.52.0.239.gd5f0c6e74e-goog