From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:58:55 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-2-coltonlewis@google.com>
Subject: [PATCH v4 01/23] arm64: cpufeature: Add cpucap for HPMN0
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="utf-8"

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
can be set to 0, reserving no counters for the guest. This required
changing HPMN0 to an UnsignedEnum in tools/sysreg because otherwise not
all the appropriate macros are generated to add it to the arm64_features
array of arm64_cpu_capabilities.

Acked-by: Mark Rutland
Signed-off-by: Colton Lewis
Acked-by: Suzuki K Poulose
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b34044e20128..f38d7b5294ec 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -548,6 +548,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2896,6 +2897,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "HPMN0",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 10effd4cff6b..5b196ba21629 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -39,6 +39,7 @@ HAS_GIC_CPUIF_SYSREGS
 HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8a8cf6874298..d29742481754 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1531,9 +1531,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.50.0.727.gbf7dc18ff4-goog
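
For illustration, a consumer of the new capability might gate the
HPMN == 0 case on the cpucap roughly like this (a sketch only;
hpmn_can_be_zero() is a hypothetical helper, not something added by
this patch):

  /* Sketch: MDCR_EL2.HPMN may only be programmed to 0 with FEAT_HPMN0 */
  static bool hpmn_can_be_zero(void)
  {
  	return cpus_have_final_cap(ARM64_HAS_HPMN0);
  }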
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:58:56 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-3-coltonlewis@google.com>
Subject: [PATCH v4 02/23] KVM: arm64: Reorganize PMU includes
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Marc Zyngier

Including *all* of asm/kvm_host.h in asm/arm_pmuv3.h is a bad idea
because that is much more than arm_pmuv3.h logically needs, and it
creates a circular dependency that makes it easy to introduce compiler
errors when editing this code:

  asm/kvm_host.h
    includes asm/kvm_pmu.h
      includes perf/arm_pmuv3.h
        includes asm/arm_pmuv3.h
          includes asm/kvm_host.h

Reorganize the PMU includes to be more sane. In particular:

* Remove the circular dependency by removing the kvm_host.h include,
  since it isn't needed in that header.

* On ARM64, conditionally include the more targeted kvm_pmu.h directly
  in the arm_pmuv3.c driver, where some functions defining the KVM/PMU
  interface are needed.
* Move the last bit of KVM/PMU interface from kvm_host.h into kvm_pmu.h Signed-off-by: Marc Zyngier Signed-off-by: Colton Lewis --- arch/arm64/include/asm/arm_pmuv3.h | 2 -- arch/arm64/include/asm/kvm_host.h | 14 -------------- arch/arm64/include/asm/kvm_pmu.h | 15 +++++++++++++++ drivers/perf/arm_pmuv3.c | 5 +++++ 4 files changed, 20 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/ar= m_pmuv3.h index 8a777dec8d88..cf2b2212e00a 100644 --- a/arch/arm64/include/asm/arm_pmuv3.h +++ b/arch/arm64/include/asm/arm_pmuv3.h @@ -6,8 +6,6 @@ #ifndef __ASM_PMUV3_H #define __ASM_PMUV3_H =20 -#include - #include #include =20 diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 27ed26bd4381..92d672429233 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1487,25 +1487,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcp= u); void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu); =20 -static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr) -{ - return (!has_vhe() && attr->exclude_host); -} - #ifdef CONFIG_KVM -void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr); -void kvm_clr_pmu_events(u64 clr); -bool kvm_set_pmuserenr(u64 val); void kvm_enable_trbe(void); void kvm_disable_trbe(void); void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest); #else -static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *att= r) {} -static inline void kvm_clr_pmu_events(u64 clr) {} -static inline bool kvm_set_pmuserenr(u64 val) -{ - return false; -} static inline void kvm_enable_trbe(void) {} static inline void kvm_disable_trbe(void) {} static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_gu= est) {} diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index baf028d19dfc..ad3247b46838 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -11,9 +11,15 @@ #include #include #include +#include =20 #define KVM_ARMV8_PMU_MAX_COUNTERS 32 =20 +#define kvm_pmu_counter_deferred(attr) \ + ({ \ + !has_vhe() && (attr)->exclude_host; \ + }) + #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM) struct kvm_pmc { u8 idx; /* index into the pmu->pmc array */ @@ -68,6 +74,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu); =20 struct kvm_pmu_events *kvm_get_pmu_events(void); +void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr); +void kvm_clr_pmu_events(u64 clr); +bool kvm_set_pmuserenr(u64 val); void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_resync_el0(void); @@ -161,6 +170,12 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *= vcpu, bool pmceid1) =20 #define kvm_vcpu_has_pmu(vcpu) ({ false; }) static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {} +static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *att= r) {} +static inline void kvm_clr_pmu_events(u64 clr) {} +static inline bool kvm_set_pmuserenr(u64 val) +{ + return false; +} static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {} static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {} static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 
3db9f4ed17e8..c2e3672e1228 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -9,6 +9,11 @@
  */
=20
 #include
+
+#if defined(CONFIG_ARM64)
+#include <asm/kvm_pmu.h>
+#endif
+
 #include
 #include
=20
--=20
2.50.0.727.gbf7dc18ff4-goog
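
The underlying technique generalizes: a header that only deals in
pointers can forward-declare a type instead of including the header
that defines it, which is how this kind of include cycle is usually
broken. A minimal sketch, not code from this series:

  /* Forward-declare instead of pulling in all of asm/kvm_host.h; */
  /* a pointer parameter only needs the struct tag to exist.      */
  struct kvm_vcpu;

  void example_pmu_hook(struct kvm_vcpu *vcpu);	/* hypothetical */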
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:58:57 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-4-coltonlewis@google.com>
Subject: [PATCH v4 03/23] KVM: arm64: Reorganize PMU functions
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

A lot of functions in pmu-emul.c aren't specific to the emulated PMU
implementation. Move them to the more appropriate pmu.c file, where
shared PMU functions should live.
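
To make the split concrete: a helper like kvm_pmu_event_mask() drops
its static qualifier, gains a declaration in kvm_pmu.h, and moves to
pmu.c so pmu-emul.c can keep calling it. Condensed from the diff below:

  /* arch/arm64/include/asm/kvm_pmu.h */
  u32 kvm_pmu_event_mask(struct kvm *kvm);

  /* arch/arm64/kvm/pmu.c (was static in pmu-emul.c) */
  u32 kvm_pmu_event_mask(struct kvm *kvm)
  {
  	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
  	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);

  	return __kvm_pmu_event_mask(pmuver);
  }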
Signed-off-by: Colton Lewis --- arch/arm64/include/asm/kvm_pmu.h | 3 + arch/arm64/kvm/pmu-emul.c | 672 +----------------------------- arch/arm64/kvm/pmu.c | 675 +++++++++++++++++++++++++++++++ 3 files changed, 679 insertions(+), 671 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index ad3247b46838..6c961e877804 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -51,13 +51,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u6= 4 select_idx); void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 = val); void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx,= u64 val); u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu); +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu); u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu); +u32 kvm_pmu_event_mask(struct kvm *kvm); u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1); void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu); void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu); void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val); void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu); void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu); +bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu); bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu); void kvm_pmu_update_run(struct kvm_vcpu *vcpu); void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val); diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index dcdd80ffd49d..bcaa9f7a8ca2 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -17,19 +17,10 @@ =20 #define PERF_ATTR_CFG1_COUNTER_64BIT BIT(0) =20 -static LIST_HEAD(arm_pmus); -static DEFINE_MUTEX(arm_pmus_lock); - static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc); static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc); static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc); =20 -bool kvm_supports_guest_pmuv3(void) -{ - guard(mutex)(&arm_pmus_lock); - return !list_empty(&arm_pmus); -} - static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc) { return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]); @@ -40,46 +31,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vc= pu *vcpu, int cnt_idx) return &vcpu->arch.pmu.pmc[cnt_idx]; } =20 -static u32 __kvm_pmu_event_mask(unsigned int pmuver) -{ - switch (pmuver) { - case ID_AA64DFR0_EL1_PMUVer_IMP: - return GENMASK(9, 0); - case ID_AA64DFR0_EL1_PMUVer_V3P1: - case ID_AA64DFR0_EL1_PMUVer_V3P4: - case ID_AA64DFR0_EL1_PMUVer_V3P5: - case ID_AA64DFR0_EL1_PMUVer_V3P7: - return GENMASK(15, 0); - default: /* Shouldn't be here, just for sanity */ - WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); - return 0; - } -} - -static u32 kvm_pmu_event_mask(struct kvm *kvm) -{ - u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); - u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); - - return __kvm_pmu_event_mask(pmuver); -} - -u64 kvm_pmu_evtyper_mask(struct kvm *kvm) -{ - u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | - kvm_pmu_event_mask(kvm); - - if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) - mask |=3D ARMV8_PMU_INCLUDE_EL2; - - if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) - mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | - ARMV8_PMU_EXCLUDE_NS_EL1 | - ARMV8_PMU_EXCLUDE_EL3; - - return mask; -} - /** * kvm_pmc_is_64bit - determine if counter is 64bit * @pmc: counter context @@ -272,59 +223,6 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) 
irq_work_sync(&vcpu->arch.pmu.overflow_work); } =20 -static u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) -{ - unsigned int hpmn, n; - - if (!vcpu_has_nv(vcpu)) - return 0; - - hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); - n =3D vcpu->kvm->arch.nr_pmu_counters; - - /* - * Programming HPMN to a value greater than PMCR_EL0.N is - * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an - * UNKNOWN number of counters (in our case, zero) are reserved for EL2. - */ - if (hpmn >=3D n) - return 0; - - /* - * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't - * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly - * depend on hardware (all PMU registers are trapped), make the - * implementation choice that all counters are included in the second - * range reserved for EL2/EL3. - */ - return GENMASK(n - 1, hpmn); -} - -bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx) -{ - return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx); -} - -u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu) -{ - u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu); - - if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu)) - return mask; - - return mask & ~kvm_pmu_hyp_counter_mask(vcpu); -} - -u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) -{ - u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); - - if (val =3D=3D 0) - return BIT(ARMV8_PMU_CYCLE_IDX); - else - return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); -} - static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc) { if (!pmc->perf_event) { @@ -370,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vc= pu, u64 val) * counter where the values of the global enable control, PMOVSSET_EL0[n],= and * PMINTENSET_EL1[n] are all 1. */ -static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu) +bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu) { u64 reg =3D __vcpu_sys_reg(vcpu, PMOVSSET_EL0); =20 @@ -393,24 +291,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *v= cpu) return reg; } =20 -static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) -{ - struct kvm_pmu *pmu =3D &vcpu->arch.pmu; - bool overflow; - - overflow =3D kvm_pmu_overflow_status(vcpu); - if (pmu->irq_level =3D=3D overflow) - return; - - pmu->irq_level =3D overflow; - - if (likely(irqchip_in_kernel(vcpu->kvm))) { - int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, - pmu->irq_num, overflow, pmu); - WARN_ON(ret); - } -} - bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu) { struct kvm_pmu *pmu =3D &vcpu->arch.pmu; @@ -436,43 +316,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu) regs->device_irq_level |=3D KVM_ARM_DEV_PMU; } =20 -/** - * kvm_pmu_flush_hwstate - flush pmu state to cpu - * @vcpu: The vcpu pointer - * - * Check if the PMU has overflowed while we were running in the host, and = inject - * an interrupt if that was the case. - */ -void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) -{ - kvm_pmu_update_state(vcpu); -} - -/** - * kvm_pmu_sync_hwstate - sync pmu state from cpu - * @vcpu: The vcpu pointer - * - * Check if the PMU has overflowed while we were running in the guest, and - * inject an interrupt if that was the case. - */ -void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) -{ - kvm_pmu_update_state(vcpu); -} - -/* - * When perf interrupt is an NMI, we cannot safely notify the vcpu corresp= onding - * to the event. - * This is why we need a callback to do it once outside of the NMI context. 
- */ -static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) -{ - struct kvm_vcpu *vcpu; - - vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); - kvm_vcpu_kick(vcpu); -} - /* * Perform an increment on any of the counters described in @mask, * generating the overflow if required, and propagate it as a chained @@ -784,132 +627,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *= vcpu, u64 data, kvm_pmu_create_perf_event(pmc); } =20 -void kvm_host_pmu_init(struct arm_pmu *pmu) -{ - struct arm_pmu_entry *entry; - - /* - * Check the sanitised PMU version for the system, as KVM does not - * support implementations where PMUv3 exists on a subset of CPUs. - */ - if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) - return; - - guard(mutex)(&arm_pmus_lock); - - entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); - if (!entry) - return; - - entry->arm_pmu =3D pmu; - list_add_tail(&entry->entry, &arm_pmus); -} - -static struct arm_pmu *kvm_pmu_probe_armpmu(void) -{ - struct arm_pmu_entry *entry; - struct arm_pmu *pmu; - int cpu; - - guard(mutex)(&arm_pmus_lock); - - /* - * It is safe to use a stale cpu to iterate the list of PMUs so long as - * the same value is used for the entirety of the loop. Given this, and - * the fact that no percpu data is used for the lookup there is no need - * to disable preemption. - * - * It is still necessary to get a valid cpu, though, to probe for the - * default PMU instance as userspace is not required to specify a PMU - * type. In order to uphold the preexisting behavior KVM selects the - * PMU instance for the core during vcpu init. A dependent use - * case would be a user with disdain of all things big.LITTLE that - * affines the VMM to a particular cluster of cores. - * - * In any case, userspace should just do the sane thing and use the UAPI - * to select a PMU type directly. But, be wary of the baggage being - * carried here. 
- */ - cpu =3D raw_smp_processor_id(); - list_for_each_entry(entry, &arm_pmus, entry) { - pmu =3D entry->arm_pmu; - - if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) - return pmu; - } - - return NULL; -} - -static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) -{ - u32 hi[2], lo[2]; - - bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); - bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); - - return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; -} - -static u64 compute_pmceid0(struct arm_pmu *pmu) -{ - u64 val =3D __compute_pmceid(pmu, 0); - - /* always support SW_INCR */ - val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); - /* always support CHAIN */ - val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); - return val; -} - -static u64 compute_pmceid1(struct arm_pmu *pmu) -{ - u64 val =3D __compute_pmceid(pmu, 1); - - /* - * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled - * as RAZ - */ - val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | - BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | - BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); - return val; -} - -u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) -{ - struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; - unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; - u64 val, mask =3D 0; - int base, i, nr_events; - - if (!pmceid1) { - val =3D compute_pmceid0(cpu_pmu); - base =3D 0; - } else { - val =3D compute_pmceid1(cpu_pmu); - base =3D 32; - } - - if (!bmap) - return val; - - nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; - - for (i =3D 0; i < 32; i +=3D 8) { - u64 byte; - - byte =3D bitmap_get_value8(bmap, base + i); - mask |=3D byte << i; - if (nr_events >=3D (0x4000 + base + 32)) { - byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); - mask |=3D byte << (32 + i); - } - } - - return val & mask; -} - void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) { u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu); @@ -921,393 +638,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) kvm_pmu_reprogram_counter_mask(vcpu, mask); } =20 -int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) -{ - if (!vcpu->arch.pmu.created) - return -EINVAL; - - /* - * A valid interrupt configuration for the PMU is either to have a - * properly configured interrupt number and using an in-kernel - * irqchip, or to not have an in-kernel GIC and not set an IRQ. - */ - if (irqchip_in_kernel(vcpu->kvm)) { - int irq =3D vcpu->arch.pmu.irq_num; - /* - * If we are using an in-kernel vgic, at this point we know - * the vgic will be initialized, so we can check the PMU irq - * number against the dimensions of the vgic and make sure - * it's valid. - */ - if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) - return -EINVAL; - } else if (kvm_arm_pmu_irq_initialized(vcpu)) { - return -EINVAL; - } - - return 0; -} - -static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) -{ - if (irqchip_in_kernel(vcpu->kvm)) { - int ret; - - /* - * If using the PMU with an in-kernel virtual GIC - * implementation, we require the GIC to be already - * initialized when initializing the PMU. - */ - if (!vgic_initialized(vcpu->kvm)) - return -ENODEV; - - if (!kvm_arm_pmu_irq_initialized(vcpu)) - return -ENXIO; - - ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, - &vcpu->arch.pmu); - if (ret) - return ret; - } - - init_irq_work(&vcpu->arch.pmu.overflow_work, - kvm_pmu_perf_overflow_notify_vcpu); - - vcpu->arch.pmu.created =3D true; - return 0; -} - -/* - * For one VM the interrupt type must be same for each vcpu. 
- * As a PPI, the interrupt number is the same for all vcpus, - * while as an SPI it must be a separate number per vcpu. - */ -static bool pmu_irq_is_valid(struct kvm *kvm, int irq) -{ - unsigned long i; - struct kvm_vcpu *vcpu; - - kvm_for_each_vcpu(i, vcpu, kvm) { - if (!kvm_arm_pmu_irq_initialized(vcpu)) - continue; - - if (irq_is_ppi(irq)) { - if (vcpu->arch.pmu.irq_num !=3D irq) - return false; - } else { - if (vcpu->arch.pmu.irq_num =3D=3D irq) - return false; - } - } - - return true; -} - -/** - * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. - * @kvm: The kvm pointer - */ -u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) -{ - struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; - - /* - * PMUv3 requires that all event counters are capable of counting any - * event, though the same may not be true of non-PMUv3 hardware. - */ - if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) - return 1; - - /* - * The arm_pmu->cntr_mask considers the fixed counter(s) as well. - * Ignore those and return only the general-purpose counters. - */ - return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); -} - -static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) -{ - kvm->arch.nr_pmu_counters =3D nr; - - /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ - if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { - struct kvm_vcpu *vcpu; - unsigned long i; - - kvm_for_each_vcpu(i, vcpu, kvm) { - u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); - val &=3D ~MDCR_EL2_HPMN; - val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); - __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); - } - } -} - -static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) -{ - lockdep_assert_held(&kvm->arch.config_lock); - - kvm->arch.arm_pmu =3D arm_pmu; - kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); -} - -/** - * kvm_arm_set_default_pmu - No PMU set, get the default one. - * @kvm: The kvm pointer - * - * The observant among you will notice that the supported_cpus - * mask does not get updated for the default PMU even though it - * is quite possible the selected instance supports only a - * subset of cores in the system. This is intentional, and - * upholds the preexisting behavior on heterogeneous systems - * where vCPUs can be scheduled on any core but the guest - * counters could stop working. 
- */ -int kvm_arm_set_default_pmu(struct kvm *kvm) -{ - struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); - - if (!arm_pmu) - return -ENODEV; - - kvm_arm_set_pmu(kvm, arm_pmu); - return 0; -} - -static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) -{ - struct kvm *kvm =3D vcpu->kvm; - struct arm_pmu_entry *entry; - struct arm_pmu *arm_pmu; - int ret =3D -ENXIO; - - lockdep_assert_held(&kvm->arch.config_lock); - mutex_lock(&arm_pmus_lock); - - list_for_each_entry(entry, &arm_pmus, entry) { - arm_pmu =3D entry->arm_pmu; - if (arm_pmu->pmu.type =3D=3D pmu_id) { - if (kvm_vm_has_ran_once(kvm) || - (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { - ret =3D -EBUSY; - break; - } - - kvm_arm_set_pmu(kvm, arm_pmu); - cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); - ret =3D 0; - break; - } - } - - mutex_unlock(&arm_pmus_lock); - return ret; -} - -static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) -{ - struct kvm *kvm =3D vcpu->kvm; - - if (!kvm->arch.arm_pmu) - return -EINVAL; - - if (n > kvm_arm_pmu_get_max_counters(kvm)) - return -EINVAL; - - kvm_arm_set_nr_counters(kvm, n); - return 0; -} - -int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - struct kvm *kvm =3D vcpu->kvm; - - lockdep_assert_held(&kvm->arch.config_lock); - - if (!kvm_vcpu_has_pmu(vcpu)) - return -ENODEV; - - if (vcpu->arch.pmu.created) - return -EBUSY; - - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int irq; - - if (!irqchip_in_kernel(kvm)) - return -EINVAL; - - if (get_user(irq, uaddr)) - return -EFAULT; - - /* The PMU overflow interrupt can be a PPI or a valid SPI. */ - if (!(irq_is_ppi(irq) || irq_is_spi(irq))) - return -EINVAL; - - if (!pmu_irq_is_valid(kvm, irq)) - return -EINVAL; - - if (kvm_arm_pmu_irq_initialized(vcpu)) - return -EBUSY; - - kvm_debug("Set kvm ARM PMU irq: %d\n", irq); - vcpu->arch.pmu.irq_num =3D irq; - return 0; - } - case KVM_ARM_VCPU_PMU_V3_FILTER: { - u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); - struct kvm_pmu_event_filter __user *uaddr; - struct kvm_pmu_event_filter filter; - int nr_events; - - /* - * Allow userspace to specify an event filter for the entire - * event range supported by PMUVer of the hardware, rather - * than the guest's PMUVer for KVM backward compatibility. - */ - nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; - - uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; - - if (copy_from_user(&filter, uaddr, sizeof(filter))) - return -EFAULT; - - if (((u32)filter.base_event + filter.nevents) > nr_events || - (filter.action !=3D KVM_PMU_EVENT_ALLOW && - filter.action !=3D KVM_PMU_EVENT_DENY)) - return -EINVAL; - - if (kvm_vm_has_ran_once(kvm)) - return -EBUSY; - - if (!kvm->arch.pmu_filter) { - kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); - if (!kvm->arch.pmu_filter) - return -ENOMEM; - - /* - * The default depends on the first applied filter. - * If it allows events, the default is to deny. - * Conversely, if the first filter denies a set of - * events, the default is to allow. 
- */ - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_zero(kvm->arch.pmu_filter, nr_events); - else - bitmap_fill(kvm->arch.pmu_filter, nr_events); - } - - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - else - bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - - return 0; - } - case KVM_ARM_VCPU_PMU_V3_SET_PMU: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int pmu_id; - - if (get_user(pmu_id, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); - } - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { - unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; - unsigned int n; - - if (get_user(n, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); - } - case KVM_ARM_VCPU_PMU_V3_INIT: - return kvm_arm_pmu_v3_init(vcpu); - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int irq; - - if (!irqchip_in_kernel(vcpu->kvm)) - return -EINVAL; - - if (!kvm_vcpu_has_pmu(vcpu)) - return -ENODEV; - - if (!kvm_arm_pmu_irq_initialized(vcpu)) - return -ENXIO; - - irq =3D vcpu->arch.pmu.irq_num; - return put_user(irq, uaddr); - } - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: - case KVM_ARM_VCPU_PMU_V3_INIT: - case KVM_ARM_VCPU_PMU_V3_FILTER: - case KVM_ARM_VCPU_PMU_V3_SET_PMU: - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: - if (kvm_vcpu_has_pmu(vcpu)) - return 0; - } - - return -ENXIO; -} - -u8 kvm_arm_pmu_get_pmuver_limit(void) -{ - unsigned int pmuver; - - pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, - read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); - - /* - * Spoof a barebones PMUv3 implementation if the system supports IMPDEF - * traps of the PMUv3 sysregs - */ - if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) - return ID_AA64DFR0_EL1_PMUVer_IMP; - - /* - * Otherwise, treat IMPLEMENTATION DEFINED functionality as - * unimplemented - */ - if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) - return 0; - - return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); -} - -/** - * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU - * @vcpu: The vcpu pointer - */ -u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) -{ - u64 pmcr =3D __vcpu_sys_reg(vcpu, PMCR_EL0); - u64 n =3D vcpu->kvm->arch.nr_pmu_counters; - - if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu)) - n =3D FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); - - return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N); -} - void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) { bool reprogrammed =3D false; diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 6b48a3d16d0d..79b7ea037153 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -8,8 +8,21 @@ #include #include =20 +#include +#include + +static LIST_HEAD(arm_pmus); +static DEFINE_MUTEX(arm_pmus_lock); static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events); =20 +#define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >=3D VGIC_NR= _SGIS) + +bool kvm_supports_guest_pmuv3(void) +{ + guard(mutex)(&arm_pmus_lock); + return !list_empty(&arm_pmus); +} + /* * Given the perf event attributes and system type, determine * if we are going to need to switch counters at guest entry/exit. 
@@ -209,3 +222,665 @@ void kvm_vcpu_pmu_resync_el0(void) =20 kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu); } + +void kvm_host_pmu_init(struct arm_pmu *pmu) +{ + struct arm_pmu_entry *entry; + + /* + * Check the sanitised PMU version for the system, as KVM does not + * support implementations where PMUv3 exists on a subset of CPUs. + */ + if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) + return; + + guard(mutex)(&arm_pmus_lock); + + entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return; + + entry->arm_pmu =3D pmu; + list_add_tail(&entry->entry, &arm_pmus); +} + +static struct arm_pmu *kvm_pmu_probe_armpmu(void) +{ + struct arm_pmu_entry *entry; + struct arm_pmu *pmu; + int cpu; + + guard(mutex)(&arm_pmus_lock); + + /* + * It is safe to use a stale cpu to iterate the list of PMUs so long as + * the same value is used for the entirety of the loop. Given this, and + * the fact that no percpu data is used for the lookup there is no need + * to disable preemption. + * + * It is still necessary to get a valid cpu, though, to probe for the + * default PMU instance as userspace is not required to specify a PMU + * type. In order to uphold the preexisting behavior KVM selects the + * PMU instance for the core during vcpu init. A dependent use + * case would be a user with disdain of all things big.LITTLE that + * affines the VMM to a particular cluster of cores. + * + * In any case, userspace should just do the sane thing and use the UAPI + * to select a PMU type directly. But, be wary of the baggage being + * carried here. + */ + cpu =3D raw_smp_processor_id(); + list_for_each_entry(entry, &arm_pmus, entry) { + pmu =3D entry->arm_pmu; + + if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) + return pmu; + } + + return NULL; +} + +static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) +{ + u32 hi[2], lo[2]; + + bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); + bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); + + return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; +} + +static u64 compute_pmceid0(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 0); + + /* always support SW_INCR */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); + /* always support CHAIN */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); + return val; +} + +static u64 compute_pmceid1(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 1); + + /* + * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled + * as RAZ + */ + val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); + return val; +} + +u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) +{ + struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; + unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; + u64 val, mask =3D 0; + int base, i, nr_events; + + if (!pmceid1) { + val =3D compute_pmceid0(cpu_pmu); + base =3D 0; + } else { + val =3D compute_pmceid1(cpu_pmu); + base =3D 32; + } + + if (!bmap) + return val; + + nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; + + for (i =3D 0; i < 32; i +=3D 8) { + u64 byte; + + byte =3D bitmap_get_value8(bmap, base + i); + mask |=3D byte << i; + if (nr_events >=3D (0x4000 + base + 32)) { + byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); + mask |=3D byte << (32 + i); + } + } + + return val & mask; +} + +/* + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresp= onding + * to the event. 
+ * This is why we need a callback to do it once outside of the NMI context. + */ +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) +{ + struct kvm_vcpu *vcpu; + + vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); + kvm_vcpu_kick(vcpu); +} + +static u32 __kvm_pmu_event_mask(unsigned int pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return GENMASK(9, 0); + case ID_AA64DFR0_EL1_PMUVer_V3P1: + case ID_AA64DFR0_EL1_PMUVer_V3P4: + case ID_AA64DFR0_EL1_PMUVer_V3P5: + case ID_AA64DFR0_EL1_PMUVer_V3P7: + return GENMASK(15, 0); + default: /* Shouldn't be here, just for sanity */ + WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); + return 0; + } +} + +u32 kvm_pmu_event_mask(struct kvm *kvm) +{ + u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); + u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); + + return __kvm_pmu_event_mask(pmuver); +} + +u64 kvm_pmu_evtyper_mask(struct kvm *kvm) +{ + u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | + kvm_pmu_event_mask(kvm); + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) + mask |=3D ARMV8_PMU_INCLUDE_EL2; + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) + mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | + ARMV8_PMU_EXCLUDE_NS_EL1 | + ARMV8_PMU_EXCLUDE_EL3; + + return mask; +} + +static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D &vcpu->arch.pmu; + bool overflow; + + overflow =3D kvm_pmu_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) + return; + + pmu->irq_level =3D overflow; + + if (likely(irqchip_in_kernel(vcpu->kvm))) { + int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, + pmu->irq_num, overflow, pmu); + WARN_ON(ret); + } +} + +/** + * kvm_pmu_flush_hwstate - flush pmu state to cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the host, and = inject + * an interrupt if that was the case. + */ +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +/** + * kvm_pmu_sync_hwstate - sync pmu state from cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the guest, and + * inject an interrupt if that was the case. + */ +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.pmu.created) + return -EINVAL; + + /* + * A valid interrupt configuration for the PMU is either to have a + * properly configured interrupt number and using an in-kernel + * irqchip, or to not have an in-kernel GIC and not set an IRQ. + */ + if (irqchip_in_kernel(vcpu->kvm)) { + int irq =3D vcpu->arch.pmu.irq_num; + /* + * If we are using an in-kernel vgic, at this point we know + * the vgic will be initialized, so we can check the PMU irq + * number against the dimensions of the vgic and make sure + * it's valid. + */ + if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) + return -EINVAL; + } else if (kvm_arm_pmu_irq_initialized(vcpu)) { + return -EINVAL; + } + + return 0; +} + +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) +{ + if (irqchip_in_kernel(vcpu->kvm)) { + int ret; + + /* + * If using the PMU with an in-kernel virtual GIC + * implementation, we require the GIC to be already + * initialized when initializing the PMU. 
+ */ + if (!vgic_initialized(vcpu->kvm)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, + &vcpu->arch.pmu); + if (ret) + return ret; + } + + init_irq_work(&vcpu->arch.pmu.overflow_work, + kvm_pmu_perf_overflow_notify_vcpu); + + vcpu->arch.pmu.created =3D true; + return 0; +} + +/* + * For one VM the interrupt type must be same for each vcpu. + * As a PPI, the interrupt number is the same for all vcpus, + * while as an SPI it must be a separate number per vcpu. + */ +static bool pmu_irq_is_valid(struct kvm *kvm, int irq) +{ + unsigned long i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (!kvm_arm_pmu_irq_initialized(vcpu)) + continue; + + if (irq_is_ppi(irq)) { + if (vcpu->arch.pmu.irq_num !=3D irq) + return false; + } else { + if (vcpu->arch.pmu.irq_num =3D=3D irq) + return false; + } + } + + return true; +} + +/** + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. + * @kvm: The kvm pointer + */ +u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; + + /* + * PMUv3 requires that all event counters are capable of counting any + * event, though the same may not be true of non-PMUv3 hardware. + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return 1; + + /* + * The arm_pmu->cntr_mask considers the fixed counter(s) as well. + * Ignore those and return only the general-purpose counters. + */ + return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); +} + +static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) +{ + kvm->arch.nr_pmu_counters =3D nr; + + /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ + if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { + struct kvm_vcpu *vcpu; + unsigned long i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); + + val &=3D ~MDCR_EL2_HPMN; + val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); + } + } +} + +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) +{ + lockdep_assert_held(&kvm->arch.config_lock); + + kvm->arch.arm_pmu =3D arm_pmu; + kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); +} + +/** + * kvm_arm_set_default_pmu - No PMU set, get the default one. + * @kvm: The kvm pointer + * + * The observant among you will notice that the supported_cpus + * mask does not get updated for the default PMU even though it + * is quite possible the selected instance supports only a + * subset of cores in the system. This is intentional, and + * upholds the preexisting behavior on heterogeneous systems + * where vCPUs can be scheduled on any core but the guest + * counters could stop working. 
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} + +u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) +{ + u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); + + if (val =3D=3D 0) + return BIT(ARMV8_PMU_CYCLE_IDX); + else + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); +} + +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) +{ + unsigned int hpmn, n; + + if (!vcpu_has_nv(vcpu)) + return 0; + + hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); + n =3D vcpu->kvm->arch.nr_pmu_counters; + + /* + * Programming HPMN to a value greater than PMCR_EL0.N is + * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an + * UNKNOWN number of counters (in our case, zero) are reserved for EL2. + */ + if (hpmn >=3D n) + return 0; + + /* + * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't + * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly + * depend on hardware (all PMU registers are trapped), make the + * implementation choice that all counters are included in the second + * range reserved for EL2/EL3. 
+	 */
+	return GENMASK(n - 1, hpmn);
+}
+
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
+}
+
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
+
+	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+		return mask;
+
+	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
+}
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+
+	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
+		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
+
+	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
+}
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:58:58 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-5-coltonlewis@google.com>
Subject: [PATCH v4 04/23] perf: arm_pmuv3: Introduce method to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU counters
into two ranges where counters 0..HPMN-1 are accessible by EL1 and, if
allowed, EL0 while counters HPMN..N are only accessible by EL2.

Create module parameter reserved_host_counters to reserve a number of
counters for the host. This number is set at boot because the perf
subsystem assumes the number of counters will not change after the PMU
is probed.

Introduce the function armv8pmu_partition() to modify the PMU driver's
cntr_mask of available counters to exclude the counters being reserved
for the guest and record the number of guest-reserved counters as the
maximum allowable value for HPMN.

Due to the difficulty this feature would create for the driver running
in nVHE mode, partitioning is only allowed in VHE mode. To support a
partitioned PMU on nVHE we would need to explicitly disable guest
counters on every exit and reset HPMN to place all counters in the
first range.
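As a rough illustration of the reservation arithmetic (a stand-alone
user-space sketch with assumed example numbers, not part of the patch;
if the driver is built in, the parameter would presumably be given on
the kernel command line as arm_pmuv3.reserved_host_counters=N, a name
inferred from the module_param declaration below):

	/*
	 * Minimal sketch of the HPMN math described above. PMCR_N and
	 * RESERVED are assumed example values, not probed hardware state.
	 */
	#include <stdio.h>

	#define PMCR_N   8	/* assumed PMCR_EL0.N general-purpose counters */
	#define RESERVED 2	/* assumed reserved_host_counters value */

	int main(void)
	{
		int hpmn = PMCR_N - RESERVED;

		if (RESERVED < 0 || RESERVED > PMCR_N)
			return 1;	/* invalid reservation, like -EINVAL below */

		/* Guest gets counters 0..HPMN-1, host keeps HPMN..N-1. */
		printf("HPMN = %d: guest counters 0-%d, host counters %d-%d\n",
		       hpmn, hpmn - 1, hpmn, PMCR_N - 1);
		return 0;
	}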
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   | 14 ++++++
 arch/arm64/include/asm/arm_pmuv3.h |  5 ++
 arch/arm64/include/asm/kvm_pmu.h   |  6 +++
 arch/arm64/kvm/Makefile            |  2 +-
 arch/arm64/kvm/pmu-direct.c        | 22 +++++++++
 drivers/perf/arm_pmuv3.c           | 74 +++++++++++++++++++++++++++++-
 include/linux/perf/arm_pmu.h       |  1 +
 7 files changed, 121 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kvm/pmu-direct.c

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc9..49b1f2d7842d 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -221,6 +221,10 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 	return false;
 }
 
+static inline bool kvm_pmu_partition_supported(void)
+{
+	return false;
+}
 static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
@@ -228,6 +232,11 @@ static inline bool kvm_set_pmuserenr(u64 val)
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
 
+static inline bool has_vhe(void)
+{
+	return false;
+}
+
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI	0
 #define ARMV8_PMU_DFR_VER_V3P1	0x4
@@ -242,6 +251,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		pmuver == ARMV8_PMU_DFR_VER_NI);
 }
 
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >= ARMV8_PMU_DFR_VER_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >= ARMV8_PMU_DFR_VER_V3P4;
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index cf2b2212e00a..27c4d6d47da3 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -171,6 +171,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		pmuver == ID_AA64DFR0_EL1_PMUVer_NI);
 }
 
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4;
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 6c961e877804..8a2ed02e157d 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -46,6 +46,7 @@ struct arm_pmu_entry {
 };
 
 bool kvm_supports_guest_pmuv3(void);
+bool kvm_pmu_partition_supported(void);
 #define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
@@ -115,6 +116,11 @@ static inline bool kvm_supports_guest_pmuv3(void)
 	return false;
 }
 
+static inline bool kvm_pmu_partition_supported(void)
+{
+	return false;
+}
+
 #define kvm_arm_pmu_irq_initialized(v)	(false)
 static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 					    u64 select_idx)
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 86035b311269..7ce842217575 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,7 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
 	 vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o
 
-kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
+kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu-direct.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) += ptdump.o
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
new file mode 100644
index 000000000000..9423d6f65059
--- /dev/null
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Colton Lewis
+ */
+
+#include
+
+#include
+
+/**
+ * kvm_pmu_partition_supported() - Determine if partitioning is possible
+ *
+ * Partitioning is only supported in VHE mode (with PMUv3, assumed
+ * since we are in the PMUv3 driver)
+ *
+ * Return: True if partitioning is possible, false otherwise
+ */
+bool kvm_pmu_partition_supported(void)
+{
+	return has_vhe();
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index c2e3672e1228..294ccbdc3816 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -40,6 +40,12 @@
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS	0xEC
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS	0xED
 
+static int reserved_host_counters __read_mostly = -1;
+
+module_param(reserved_host_counters, int, 0);
+MODULE_PARM_DESC(reserved_host_counters,
+		 "PMU Partition: -1 = No partition; +N = Reserve N counters for the host");
+
 /*
  * ARMv8 Architectural defined events, not all of these may
  * be supported on any given implementation. Unsupported events will
@@ -505,6 +511,11 @@ static void armv8pmu_pmcr_write(u64 val)
 	write_pmcr(val);
 }
 
+static u64 armv8pmu_pmcr_n_read(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
+}
+
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
 	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
@@ -1200,6 +1211,58 @@ struct armv8pmu_probe_info {
 	bool present;
 };
 
+/**
+ * armv8pmu_reservation_is_valid() - Determine if reservation is allowed
+ * @host_counters: Number of host counters to reserve
+ *
+ * Determine if the number of host counters in the argument is an
+ * allowed reservation, 0 to NR_COUNTERS inclusive.
+ *
+ * Return: True if reservation allowed, false otherwise
+ */
+static bool armv8pmu_reservation_is_valid(int host_counters)
+{
+	return host_counters >= 0 &&
+		host_counters <= armv8pmu_pmcr_n_read();
+}
+
+/**
+ * armv8pmu_partition() - Partition the PMU
+ * @pmu: Pointer to pmu being partitioned
+ * @host_counters: Number of host counters to reserve
+ *
+ * Partition the given PMU by taking a number of host counters to
+ * reserve and, if it is a valid reservation, recording the
+ * corresponding HPMN value in the hpmn_max field of the PMU and
+ * clearing the guest-reserved counters from the counter mask.
+ *
+ * Return: 0 on success, -ERROR otherwise
+ */
+static int armv8pmu_partition(struct arm_pmu *pmu, int host_counters)
+{
+	u8 nr_counters;
+	u8 hpmn;
+
+	if (!armv8pmu_reservation_is_valid(host_counters))
+		return -EINVAL;
+
+	nr_counters = armv8pmu_pmcr_n_read();
+	hpmn = nr_counters - host_counters;
+
+	pmu->hpmn_max = hpmn;
+
+	bitmap_clear(pmu->cntr_mask, 0, hpmn);
+	bitmap_set(pmu->cntr_mask, hpmn, host_counters);
+	clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+
+	if (pmuv3_has_icntr())
+		clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+	pr_info("Partitioned PMU with HPMN %u", hpmn);
+
+	return 0;
+}
+
 static void __armv8pmu_probe_pmu(void *info)
 {
 	struct armv8pmu_probe_info *probe = info;
@@ -1214,10 +1277,10 @@ static void __armv8pmu_probe_pmu(void *info)
 
 	cpu_pmu->pmuver = pmuver;
 	probe->present = true;
+	cpu_pmu->hpmn_max = -1;
 
 	/* Read the nb of CNTx counters supported from PMNC */
-	bitmap_set(cpu_pmu->cntr_mask,
-		   0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+	bitmap_set(cpu_pmu->cntr_mask, 0, armv8pmu_pmcr_n_read());
 
 	/* Add the CPU cycles counter */
 	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
@@ -1226,6 +1289,13 @@ static void __armv8pmu_probe_pmu(void *info)
 	if (pmuv3_has_icntr())
 		set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask);
 
+	if (reserved_host_counters >= 0) {
+		if (kvm_pmu_partition_supported())
+			WARN_ON(armv8pmu_partition(cpu_pmu, reserved_host_counters));
+		else
+			pr_err("PMU partition is not supported");
+	}
+
 	pmceid[0] = pmceid_raw[0] = read_pmceid0();
 	pmceid[1] = pmceid_raw[1] = read_pmceid1();
 
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 6dc5e0cd76ca..2c79dc0f09af 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -122,6 +122,7 @@ struct arm_pmu {
 
 	/* Only to be used by ACPI probing code */
 	unsigned long acpi_cpuid;
+	int hpmn_max;	/* MDCR_EL2.HPMN: counter partition pivot */
 };
 
 #define to_arm_pmu(p)	(container_of(p, struct arm_pmu, pmu))
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:58:59 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-6-coltonlewis@google.com>
Subject: [PATCH v4 05/23] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis

The OVSR bitmasks are valid for enable and interrupt registers as
well as overflow registers. Generalize the names.
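For reference, a stand-alone sketch of the shared bit layout these
masks describe (a user-space illustration with a local GENMASK macro,
not part of the patch):

	/*
	 * Bits 0-30 select the general-purpose counters (P), bit 31 the
	 * cycle counter (C), bit 32 the arm64-only instruction counter (F).
	 */
	#include <stdio.h>

	#define GENMASK_ULL(h, l) \
		(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

	#define CNT_MASK_P	GENMASK_ULL(30, 0)
	#define CNT_MASK_C	(1ULL << 31)
	#define CNT_MASK_F	(1ULL << 32)
	#define CNT_MASK_ALL	(CNT_MASK_P | CNT_MASK_C | CNT_MASK_F)

	int main(void)
	{
		printf("ALL = %#llx\n", CNT_MASK_ALL);	/* prints 0x1ffffffff */
		return 0;
	}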
Acked-by: Mark Rutland
Signed-off-by: Colton Lewis
---
 drivers/perf/arm_pmuv3.c       |  4 ++--
 include/linux/perf/arm_pmuv3.h | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 294ccbdc3816..339d3c2d91a0 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -518,7 +518,7 @@ static u64 armv8pmu_pmcr_n_read(void)
 
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
-	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
+	return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL);
 }
 
 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx)
@@ -754,7 +754,7 @@ static u64 armv8pmu_getreset_flags(void)
 	value = read_pmovsclr();
 
 	/* Write to clear flags */
-	value &= ARMV8_PMU_OVERFLOWED_MASK;
+	value &= ARMV8_PMU_CNT_MASK_ALL;
 	write_pmovsclr(value);
 
 	return value;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a2..fd2a34b4a64d 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -224,14 +224,14 @@ ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
 
 /*
- * PMOVSR: counters overflow flag status reg
+ * Counter bitmask layouts for overflow, enable, and interrupts
  */
-#define ARMV8_PMU_OVSR_P		GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C		BIT(31)
-#define ARMV8_PMU_OVSR_F		BIT_ULL(32) /* arm64 only */
-/* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK	(ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
-					 ARMV8_PMU_OVSR_F)
+#define ARMV8_PMU_CNT_MASK_P	GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C	BIT(31)
+#define ARMV8_PMU_CNT_MASK_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL	(ARMV8_PMU_CNT_MASK_P | \
+				 ARMV8_PMU_CNT_MASK_C | \
+				 ARMV8_PMU_CNT_MASK_F)
 
 /*
  * PMXEVTYPER: Event selection reg
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:00 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-7-coltonlewis@google.com>
Subject: [PATCH v4 06/23] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field and the maximum value KVM
can use is saved in cpu_pmu->hpmn_max.
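For illustration, a stand-alone sketch of the resulting split with
assumed example numbers (user-space code, not part of the patch; it
mirrors the GENMASK-based helpers this patch adds):

	/*
	 * With 8 general-purpose counters (assumed) and HPMN = 6
	 * (assumed), the host owns counters 6-7 while the guest owns
	 * 0-5 plus the fixed cycle (bit 31) and instruction (bit 32)
	 * counters.
	 */
	#include <stdio.h>

	#define GENMASK_ULL(h, l) \
		(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))
	#define CNT_MASK_ALL \
		(GENMASK_ULL(30, 0) | (1ULL << 31) | (1ULL << 32))

	int main(void)
	{
		unsigned int n = 8, hpmn = 6;			/* assumed values */
		unsigned long long host  = GENMASK_ULL(n - 1, hpmn);	/* 0xc0 */
		unsigned long long guest = CNT_MASK_ALL & ~host;	/* 0x1ffffff3f */

		printf("host  = %#llx\nguest = %#llx\n", host, guest);
		return 0;
	}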
The range 0..HPMN-1 is accessible by EL1 and EL0 while HPMN..PMCR.N is
reserved for EL2.

Define some functions that take HPMN as an argument and construct
mutually exclusive bitmaps for testing which partition a particular
counter is in. Note that despite their different position in the
bitmap, the cycle and instruction counters are always in the guest
partition.

Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h | 18 +++++++
 arch/arm64/include/asm/kvm_pmu.h | 24 +++++++++
 arch/arm64/kvm/pmu-direct.c      | 84 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 36 ++++++++++++--
 4 files changed, 158 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 49b1f2d7842d..5f6269039f44 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
 
 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 8a2ed02e157d..6328e90952ba 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -88,6 +88,12 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -220,6 +226,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 9423d6f65059..22e9b2f9e7b6 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -5,7 +5,10 @@
  */
 
 #include
+#include
+#include
 
+#include
 #include
 
 /**
@@ -20,3 +23,84 @@ bool kvm_pmu_partition_supported(void)
 {
 	return has_vhe();
 }
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if given PMU is partitioned by looking at hpmn field. The
+ * PMU is partitioned if this field is no greater than the number of
+ * counters in the system.
+ *
+ * Return: True if the PMU is partitioned, false otherwise
+ */
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return pmu->hpmn_max >= 0 &&
+	       pmu->hpmn_max <= *host_data_ptr(nr_event_counters);
+}
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in HPMN..N-1.
+ *
+ * Assumes pmu is partitioned and hpmn_max is a valid value.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	return GENMASK(nr_counters - 1, pmu->hpmn_max);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN-1 and the cycle and instruction counters.
+ *
+ * Assumes pmu is partitioned and hpmn_max is a valid value.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned the enable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr |= MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned the disable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr &= ~MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 339d3c2d91a0..bc8a99cf4f88 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -839,12 +839,18 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 	kvm_vcpu_pmu_resync_el0();
 
 	/* Enable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_enable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	/* Disable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_disable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
@@ -954,6 +960,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
 		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
 			return ARMV8_PMU_CYCLE_IDX;
@@ -969,6 +976,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * may not know how to handle it.
 	 */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -980,7 +988,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return  armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1072,6 +1080,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1079,6 +1095,9 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &= kvm_pmu_host_counter_mask(cpu_pmu);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1086,11 +1105,20 @@ static void armv8pmu_reset(void *info)
 	/* Clear the counters we flip at guest entry/exit */
 	kvm_clr_pmu_events(mask);
 
+
+	pmcr = ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
 	 */
-	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:01 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-8-coltonlewis@google.com>
Subject: [PATCH v4 07/23] KVM: arm64: Account for partitioning in kvm_pmu_get_max_counters()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis

Since partitioning the PMU removes some bits from cntr_mask, and
kvm_pmu_get_max_counters() uses that bitmask to determine how many
counters are on the system, make sure that function also counts the
counters that are no longer in the bitmask.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 79b7ea037153..8a21ddc42f67 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -533,6 +533,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 {
 	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+	u8 counters;
 
 	/*
 	 * PMUv3 requires that all event counters are capable of counting any
@@ -545,7 +546,12 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
 	 * Ignore those and return only the general-purpose counters.
 	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+	counters = bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	if (kvm_pmu_is_partitioned(arm_pmu))
+		counters += arm_pmu->hpmn_max;
+
+	return counters;
 }
 
 static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:02 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-9-coltonlewis@google.com>
Subject: [PATCH v4 08/23] KVM: arm64: Introduce non-UNDEF FGT control
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Mark Brown, Colton Lewis

From: Mark Brown

We have support for determining a set of fine-grained traps to enable
for the guest which is tied to the support for injecting UNDEFs for
undefined features. This means that we can't use the mechanism for
system registers which should be present but need emulation, such as
SMPRI_EL1, which should be accessible when SME is present but whose
SMPRI_EL1.Priority field should be RAZ if SME priority support is
absent.

Add an additional set of fine-grained traps, fgt, mirroring the
existing fgu array. We use the same format where we always set the bit
for the trap in the array as for FGU. This makes it clear what is
being explicitly managed and keeps the code consistent.

We do not convert the handling of ARM64_WORKAROUND_AMPERE_AC03_CPU_38
to this mechanism since that workaround only enables a write trap,
while the existing UNDEF handling shares the read and write trap
enablement (this being the overwhelmingly common case).

Signed-off-by: Mark Brown
[Removed unused vcpu argument from macro]
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h       | 6 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 7 ++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 92d672429233..f705eb4538c3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -301,6 +301,12 @@ struct kvm_arch {
 	 */
 	u64 fgu[__NR_FGT_GROUP_IDS__];
 
+	/*
+	 * Additional FGTs to enable for the guests, e.g. for emulated
+	 * registers.
+	 */
+	u64 fgt[__NR_FGT_GROUP_IDS__];
+
 	/*
 	 * Stage 2 paging state for VMs with nested S2 using a virtual
 	 * VMID.
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7599844908c0..7fe5b087c95a 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -153,9 +153,9 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 		id;						\
 	})
 
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set)		\
+#define compute_trap_clr_set(kvm, trap, reg, clr, set)		\
 	do {							\
-		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)];	\
+		u64 hfg = kvm->arch.trap[reg_to_fgt_group_id(reg)];	\
 		struct fgt_masks *m = reg_to_fgt_masks(reg);	\
 		set |= hfg & m->mask;				\
 		clr |= hfg & m->nmask;				\
@@ -171,7 +171,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))		\
 		compute_clr_set(vcpu, reg, c, s);		\
 								\
-	compute_undef_clr_set(vcpu, kvm, reg, c, s);		\
+	compute_trap_clr_set(kvm, fgu, reg, c, s);		\
+	compute_trap_clr_set(kvm, fgt, reg, c, s);		\
 								\
 	val = m->nmask;						\
 	val |= s;						\
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:03 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-10-coltonlewis@google.com>
Subject: [PATCH v4 09/23] KVM: arm64: Set up FGT for Partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis

In order to gain the best performance benefit from partitioning the
PMU, utilize fine-grained traps (FEAT_FGT and FEAT_FGT2) to avoid
trapping common PMU register accesses by the guest and remove that
overhead.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1
* PMEVCNTRn_EL0

These are safe to untrap because writing MDCR_EL2.HPMN, as this series
will do, limits the effect of writes to any of these registers to the
partition of counters 0..HPMN-1. Reads from these registers will not
leak information from between guests as all these registers are
context swapped by a later patch in this series. Reads from these
registers also do not leak any information about the host's hardware
beyond what is promised by PMUv3.

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMCCFILTR_EL0
* PMICNTR_EL0
* PMICFILTR_EL0
* PMCEIDn_EL0
* PMMIR_EL1

PMOVS remains trapped so KVM can track overflow IRQs that will need
to be injected into the guest. PMICNTR and PMICFILTR remain trapped
because KVM is not handling them yet.
PMEVTYPERn remains trapped so KVM can limit which events guests can
count, such as disallowing counting at EL2. PMCCFILTR and PMICFILTR
are special cases of the same.

PMCEIDn and PMMIR remain trapped because they can leak information
specific to the host hardware implementation.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_pmu.h  | 23 ++++++++++++++++++
 arch/arm64/kvm/pmu-direct.c       | 32 +++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         | 39 +++++++++++++++++++++++++++++++
 4 files changed, 95 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f705eb4538c3..463dbf7f0821 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1347,6 +1347,7 @@ int __init populate_sysreg_config(const struct sys_reg_desc *sr,
 				  unsigned int idx);
 int __init populate_nv_trap_config(void);
 
+void kvm_calculate_pmu_traps(struct kvm_vcpu *vcpu);
 void kvm_calculate_traps(struct kvm_vcpu *vcpu);
 
 /* MMIO helpers */
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 6328e90952ba..73b7161e3f4e 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -94,6 +94,21 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+#if !defined(__KVM_NVHE_HYPERVISOR__)
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
+#else
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+#endif
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -133,6 +148,14 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 {
 	return 0;
 }
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 22e9b2f9e7b6..2eef77e8340d 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -40,6 +40,38 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 		pmu->hpmn_max <= *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU
+ * @vcpu: Pointer to kvm_vcpu struct
+ *
+ * Determine if given VCPU has a partitioned PMU by extracting that
+ * field and passing it to :c:func:`kvm_pmu_is_partitioned`
+ *
+ * Return: True if the VCPU PMU is partitioned, false otherwise
+ */
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+}
+
+/**
+ * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if we can use FGT for direct access to registers. We can
+ * if capabilities permit the number of guest counters requested.
+ *
+ * Return: True if we can use FGT, false otherwise
+ */
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
+
+	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+		cpus_have_final_cap(ARM64_HAS_FGT) &&
+		(hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 76c2f0da821f..b3f97980b11f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5212,6 +5212,43 @@ static void vcpu_set_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
 }
 
+
+/**
+ * kvm_calculate_pmu_traps() - Calculate fine grain traps for partitioned PMU
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate which registers still need to be trapped when the
+ * partitioned PMU is available, leaving others untrapped.
+ *
+ * Because this is only recalculated when the VCPU runs on a new
+ * thread, the trap bits should be set iff the partitioned PMU is
+ * supported whether or not it is currently enabled. If it is not
+ * enabled, this doesn't matter because every PMU access is trapped by
+ * MDCR_EL2.TPM anyway.
+ */
+void kvm_calculate_pmu_traps(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+
+	if (!kvm_pmu_partition_supported() ||
+	    !cpus_have_final_cap(ARM64_HAS_FGT))
+		return;
+
+	kvm->arch.fgt[HDFGRTR_GROUP] |=
+		HDFGRTR_EL2_PMOVS
+		| HDFGRTR_EL2_PMCCFILTR_EL0
+		| HDFGRTR_EL2_PMEVTYPERn_EL0
+		| HDFGRTR_EL2_PMCEIDn_EL0
+		| HDFGRTR_EL2_PMMIR_EL1;
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+		return;
+
+	kvm->arch.fgt[HDFGRTR2_GROUP] |=
+		HDFGRTR2_EL2_nPMICFILTR_EL0
+		| HDFGRTR2_EL2_nPMICNTR_EL0;
+}
+
 void kvm_calculate_traps(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -5232,6 +5269,8 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
 	compute_fgu(kvm, HFGITR2_GROUP);
 	compute_fgu(kvm, HDFGRTR2_GROUP);
 
+	kvm_calculate_pmu_traps(vcpu);
+
 	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
 out:
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.50.0.727.gbf7dc18ff4-goog
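
The safety argument in the patch above hinges on the counter split that
MDCR_EL2.HPMN creates. A minimal sketch of that split, as hypothetical
helpers rather than code from this series (GENMASK_ULL is the kernel's
bitmask macro; nr_counters stands in for the hardware counter count):

static u64 sketch_guest_counter_bits(u8 hpmn)
{
	/* Counters 0..HPMN-1 form the guest partition. */
	return hpmn ? GENMASK_ULL(hpmn - 1, 0) : 0;
}

static u64 sketch_host_counter_bits(u8 hpmn, u8 nr_counters)
{
	/*
	 * The host keeps counters HPMN..N-1; untrapped guest writes
	 * architecturally cannot reach these.
	 */
	return GENMASK_ULL(nr_counters - 1, 0) &
	       ~sketch_guest_counter_bits(hpmn);
}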
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:04 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-11-coltonlewis@google.com>
Subject: [PATCH v4 10/23] KVM: arm64: Writethrough trapped PMEVTYPER register
With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do so would delay when guest writes take effect.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/sys_regs.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b3f97980b11f..704e5d45ce52 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1036,6 +1036,30 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+				   u64 reg, u64 idx)
+{
+	u64 eventsel;
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+	else
+		eventsel = p->regval & kvm_pmu_evtyper_mask(vcpu->kvm);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+		return false;
+
+	__vcpu_assign_sys_reg(vcpu, reg, eventsel);
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		write_pmccfiltr(eventsel);
+	else
+		write_pmevtypern(idx, eventsel);
+
+	return true;
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
@@ -1062,7 +1086,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!pmu_counter_idx_valid(vcpu, idx))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmevtyper(vcpu, p, reg, idx);
+	} else if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-- 
2.50.0.727.gbf7dc18ff4-goog
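
The shape of this change is the general write-through pattern the
series applies to each remaining trapped register: update the vCPU's
shadow copy first, then the physical register, so the guest's write is
visible immediately. A schematic sketch with illustrative names only
(not a helper from the series):

static void sketch_writethrough(struct kvm_vcpu *vcpu, enum vcpu_sysreg reg,
				u64 val, void (*write_hw)(u64))
{
	__vcpu_assign_sys_reg(vcpu, reg, val);	/* keep the virtual copy current */
	write_hw(val);				/* make the write take effect now */
}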
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:05 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-12-coltonlewis@google.com>
Subject: [PATCH v4 11/23] KVM: arm64: Use physical PMSELR for PMXEVTYPER if partitioned

Because PMXEVTYPER is trapped and PMSELR is not, it is not appropriate
to use the virtual PMSELR register when it could be outdated and lead
to an invalid write. Use the physical register.
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 7 ++++++-
 arch/arm64/kvm/sys_regs.c          | 9 +++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 27c4d6d47da3..60600f04b590 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -70,11 +70,16 @@ static inline u64 read_pmcr(void)
 	return read_sysreg(pmcr_el0);
 }
 
-static inline void write_pmselr(u32 val)
+static inline void write_pmselr(u64 val)
 {
 	write_sysreg(val, pmselr_el0);
 }
 
+static inline u64 read_pmselr(void)
+{
+	return read_sysreg(pmselr_el0);
+}
+
 static inline void write_pmccntr(u64 val)
 {
 	write_sysreg(val, pmccntr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 704e5d45ce52..e761538e1e17 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1063,14 +1063,19 @@ static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
-	u64 idx, reg;
+	u64 idx, reg, pmselr;
 
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			pmselr = read_pmselr();
+		else
+			pmselr = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
-- 
2.50.0.727.gbf7dc18ff4-goog
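
For context, PMXEVTYPER_EL0 is an indirect accessor: it targets the
PMEVTYPER<n>_EL0 (or PMCCFILTR_EL0) selected by PMSELR_EL0.SEL. With
PMSELR untrapped in the partitioned case, the guest may change SEL
without exiting, which is why the hardware copy must be consulted. A
sketch of the selection logic, reusing the read_pmselr() helper this
patch adds (sketch_pmxevtyper_sel is a hypothetical name):

static u64 sketch_pmxevtyper_sel(struct kvm_vcpu *vcpu)
{
	u64 pmselr;

	if (kvm_vcpu_pmu_is_partitioned(vcpu))
		pmselr = read_pmselr();	/* guest may have changed SEL without trapping */
	else
		pmselr = __vcpu_sys_reg(vcpu, PMSELR_EL0); /* writes trap, copy is current */

	return SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
}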
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:06 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-13-coltonlewis@google.com>
Subject: [PATCH v4 12/23] KVM: arm64: Writethrough trapped PMOVS register

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do so would delay when guest writes take effect.
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 10 ++++++++++
 arch/arm64/kvm/sys_regs.c          | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 60600f04b590..3e25c0313263 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -140,6 +140,16 @@ static inline u64 read_pmicfiltr(void)
 	return read_sysreg_s(SYS_PMICFILTR_EL0);
 }
 
+static inline void write_pmovsset(u64 val)
+{
+	write_sysreg(val, pmovsset_el0);
+}
+
+static inline u64 read_pmovsset(void)
+{
+	return read_sysreg(pmovsset_el0);
+}
+
 static inline void write_pmovsclr(u64 val)
 {
 	write_sysreg(val, pmovsclr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e761538e1e17..68457655a10b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1171,6 +1171,19 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p, bool set)
+{
+	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+	if (set) {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
+		write_pmovsset(p->regval & mask);
+	} else {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask));
+		write_pmovsclr(p->regval & mask);
+	}
+}
+
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
@@ -1179,7 +1192,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmovs(vcpu, p, r->CRm & 0x2);
+	} else if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
 			__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
-- 
2.50.0.727.gbf7dc18ff4-goog
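
PMOVSSET_EL0 and PMOVSCLR_EL0 are the write-1-to-set and
write-1-to-clear views of the same underlying overflow flag bits, which
is why the handler dispatches on r->CRm & 0x2 and why a read through
either encoding returns the same value. A plain-C model of the
semantics (hypothetical names, for illustration only):

static u64 model_ovs;				/* the shared overflow flag state */

static void model_pmovsset_write(u64 val)	/* W1S: 1 bits set flags */
{
	model_ovs |= val;
}

static void model_pmovsclr_write(u64 val)	/* W1C: 1 bits clear flags */
{
	model_ovs &= ~val;
}

static u64 model_pmovs_read(void)		/* both encodings read the same flags */
{
	return model_ovs;
}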
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:07 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-14-coltonlewis@google.com>
Subject: [PATCH v4 13/23] KVM: arm64: Write fast path PMU register handlers

We may want a partitioned PMU but not have FEAT_FGT to untrap the
specific registers that would normally be untrapped.
Add a handler for those registers in the fast path so we can still get
a performance boost from partitioning.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h      |  37 ++++-
 arch/arm64/include/asm/kvm_pmu.h        |  10 ++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 174 ++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c                    |  16 +++
 arch/arm64/kvm/sys_regs.c               |  16 ---
 5 files changed, 236 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 3e25c0313263..41ec6730ebc6 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -39,6 +39,16 @@ static inline unsigned long read_pmevtypern(int n)
 	return 0;
 }
 
+static inline void write_pmxevcntr(u64 val)
+{
+	write_sysreg(val, pmxevcntr_el0);
+}
+
+static inline u64 read_pmxevcntr(void)
+{
+	return read_sysreg(pmxevcntr_el0);
+}
+
 static inline unsigned long read_pmmir(void)
 {
 	return read_cpuid(PMMIR_EL1);
@@ -105,21 +115,41 @@ static inline void write_pmcntenset(u64 val)
 	write_sysreg(val, pmcntenset_el0);
 }
 
+static inline u64 read_pmcntenset(void)
+{
+	return read_sysreg(pmcntenset_el0);
+}
+
 static inline void write_pmcntenclr(u64 val)
 {
 	write_sysreg(val, pmcntenclr_el0);
 }
 
+static inline u64 read_pmcntenclr(void)
+{
+	return read_sysreg(pmcntenclr_el0);
+}
+
 static inline void write_pmintenset(u64 val)
 {
 	write_sysreg(val, pmintenset_el1);
}
 
+static inline u64 read_pmintenset(void)
+{
+	return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
 	write_sysreg(val, pmintenclr_el1);
 }
 
+static inline u64 read_pmintenclr(void)
+{
+	return read_sysreg(pmintenclr_el1);
+}
+
 static inline void write_pmccfiltr(u64 val)
 {
 	write_sysreg(val, pmccfiltr_el0);
@@ -160,11 +190,16 @@ static inline u64 read_pmovsclr(void)
 	return read_sysreg(pmovsclr_el0);
 }
 
-static inline void write_pmuserenr(u32 val)
+static inline void write_pmuserenr(u64 val)
 {
 	write_sysreg(val, pmuserenr_el0);
 }
 
+static inline u64 read_pmuserenr(void)
+{
+	return read_sysreg(pmuserenr_el0);
+}
+
 static inline void write_pmuacr(u64 val)
 {
 	write_sysreg_s(val, SYS_PMUACR_EL1);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 73b7161e3f4e..62c8032a548f 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -81,6 +81,8 @@ struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
+bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags);
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -214,6 +216,14 @@ static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
 }
+static inline bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
+{
+	return false;
+}
+static inline bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7fe5b087c95a..92b76764b555 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -24,12 +24,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 
+#include <../../sys_regs.h>
 #include "arm_psci.h"
 
 struct kvm_exception_table_entry {
@@ -724,6 +726,175 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+/**
+ * handle_pmu_reg() - Handle fast access to most PMU regs
+ * @vcpu: Pointer to kvm_vcpu struct
+ * @p: System register parameters (read/write, Op0, Op1, CRm, CRn, Op2)
+ * @reg: VCPU register identifier
+ * @rt: Target general register
+ * @val: Value to write
+ * @readfn: Sysreg read function
+ * @writefn: Sysreg write function
+ *
+ * Handle fast access to most PMU regs. Writethrough to the physical
+ * register. This function is a wrapper for the simplest case, but
+ * sadly there aren't many of those.
+ *
+ * Always return true. The boolean makes usage more consistent with
+ * similar functions.
+ *
+ * Return: True
+ */
+static bool handle_pmu_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   enum vcpu_sysreg reg, u8 rt, u64 val,
+			   u64 (*readfn)(void), void (*writefn)(u64))
+{
+	if (p->is_write) {
+		__vcpu_assign_sys_reg(vcpu, reg, val);
+		writefn(val);
+	} else {
+		vcpu_set_reg(vcpu, rt, readfn());
+	}
+
+	return true;
+}
+
+/**
+ * kvm_hyp_handle_pmu_regs() - Fast handler for PMU registers
+ * @vcpu: Pointer to vcpu struct
+ *
+ * This handler immediately writes through certain PMU registers when
+ * we have a partitioned PMU (that is, MDCR_EL2.HPMN is set to reserve
+ * a range of counters for the guest) but the machine does not have
+ * FEAT_FGT to selectively untrap the registers we want.
+ *
+ * Return: True if the exception was successfully handled, false otherwise
+ */
+static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
+{
+	struct sys_reg_params p;
+	u64 esr;
+	u32 sysreg;
+	u8 rt;
+	u64 val;
+	u8 idx;
+	bool ret;
+
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu)
+	    || pmu_access_el0_disabled(vcpu))
+		return false;
+
+	esr = kvm_vcpu_get_esr(vcpu);
+	p = esr_sys64_to_params(esr);
+	sysreg = esr_sys64_to_sysreg(esr);
+	rt = kvm_vcpu_sys_get_rt(vcpu);
+	val = vcpu_get_reg(vcpu, rt);
+
+	switch (sysreg) {
+	case SYS_PMCR_EL0:
+		val &= ARMV8_PMU_PMCR_MASK;
+
+		if (p.is_write) {
+			write_pmcr(val);
+			__vcpu_assign_sys_reg(vcpu, PMCR_EL0, read_pmcr());
+		} else {
+			val = u64_replace_bits(
+				read_pmcr(),
+				vcpu->kvm->arch.nr_pmu_counters,
+				ARMV8_PMU_PMCR_N);
+			vcpu_set_reg(vcpu, rt, val);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMUSERENR_EL0:
+		val &= ARMV8_PMU_USERENR_MASK;
+		ret = handle_pmu_reg(vcpu, &p, PMUSERENR_EL0, rt, val,
+				     &read_pmuserenr, &write_pmuserenr);
+		break;
+	case SYS_PMSELR_EL0:
+		val &= PMSELR_EL0_SEL_MASK;
+		ret = handle_pmu_reg(vcpu, &p, PMSELR_EL0, rt, val,
+				     &read_pmselr, &write_pmselr);
+		break;
+	case SYS_PMINTENCLR_EL1:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
+			write_pmintenclr(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmintenclr());
+		}
+		ret = true;
+		break;
+	case SYS_PMINTENSET_EL1:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
+			write_pmintenset(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmintenset());
+		}
+		ret = true;
+		break;
+	case SYS_PMCNTENCLR_EL0:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
+			write_pmcntenclr(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmcntenclr());
+		}
+		ret = true;
+		break;
+	case SYS_PMCNTENSET_EL0:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
+			write_pmcntenset(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmcntenset());
+		}
+		ret = true;
+		break;
+	case SYS_PMCCNTR_EL0:
+		ret = handle_pmu_reg(vcpu, &p, PMCCNTR_EL0, rt, val,
+				     &read_pmccntr, &write_pmccntr);
+		break;
+	case SYS_PMXEVCNTR_EL0:
+		idx = FIELD_GET(PMSELR_EL0_SEL, read_pmselr());
+
+		if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+			return false;
+
+		ret = handle_pmu_reg(vcpu, &p, PMEVCNTR0_EL0 + idx, rt, val,
+				     &read_pmxevcntr, &write_pmxevcntr);
+		break;
+	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
+		idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);
+
+		if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+			return false;
+
+		if (p.is_write) {
+			write_pmevcntrn(idx, val);
+			__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + idx, val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmevcntrn(idx));
+		}
+
+		ret = true;
+		break;
+	default:
+		ret = false;
+	}
+
+	if (ret)
+		__kvm_skip_instr(vcpu);
+
+	return ret;
+}
+
 static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
@@ -741,6 +912,9 @@ static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (kvm_handle_cntxct(vcpu))
 		return true;
 
+	if (kvm_hyp_handle_pmu_regs(vcpu))
+		return true;
+
 	return false;
 }
 
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 8a21ddc42f67..30244eb7bc9b 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -890,3 +890,19 @@ u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 
 	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
 }
+
+bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
+{
+	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
+
+	if (!enabled)
+		kvm_inject_undefined(vcpu);
+
+	return !enabled;
+}
+
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 68457655a10b..ad9c406734a5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -840,22 +840,6 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
-{
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
-	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
-
-	if (!enabled)
-		kvm_inject_undefined(vcpu);
-
-	return !enabled;
-}
-
-static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
-{
-	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
-}
-
 static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
 {
 	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN);
-- 
2.50.0.727.gbf7dc18ff4-goog
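
One decode in the handler above is worth spelling out: the
PMEVCNTR<n>_EL0 encodings spread the counter index across the
instruction encoding, with n[4:3] in CRm[1:0] and n[2:0] in Op2, which
is what ((p.CRm & 3) << 3) | (p.Op2 & 7) recovers. A worked example:

	/*
	 * PMEVCNTR10_EL0: n = 10 = 0b01010, so CRm = 0b1001 and Op2 = 0b010.
	 *   idx = ((CRm & 3) << 3) | (Op2 & 7)
	 *       = ((0b1001 & 3) << 3) | (0b010 & 7)
	 *       = (1 << 3) | 2
	 *       = 10
	 */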
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:08 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-15-coltonlewis@google.com>
Subject: [PATCH v4 14/23] KVM: arm64: Setup MDCR_EL2 to handle a partitioned PMU

Set up MDCR_EL2 to handle a partitioned PMU. That means calculating an
appropriate value for HPMN instead of the maximum setting the host
allows (which would imply no partition), so hardware enforces that a
guest only sees the counters in the guest partition.

With HPMN set, we can now leave the TPM and TPMCR bits unset unless
FGT is unavailable, in which case we need to fall back to trapping.

Also, if available, set the filtering bits HPMD and HCCD to be extra
sure nothing counts at EL2.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h | 11 ++++++
 arch/arm64/kvm/debug.c           | 23 ++++++++++---
 arch/arm64/kvm/pmu-direct.c      | 57 ++++++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 62c8032a548f..35674879aae0 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -96,6 +96,9 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -158,6 +161,14 @@ static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
+static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 1a7dab333f55..8ae9d141cad4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -36,15 +36,28 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
-				MDCR_EL2_TPMS |
-				MDCR_EL2_TTRF |
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, kvm_pmu_hpmn(vcpu));
+	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
 				MDCR_EL2_TDOSA);
 
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)
+	    && is_pmuv3p1(read_pmuver())) {
+		/*
+		 * Filtering these should be redundant because we trap
+		 * all the TYPER and FILTR registers anyway and ensure
+		 * they filter EL2, but set the bits if they are here.
+		 */
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_HPMD;
+
+		if (is_pmuv3p5(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HCCD;
+	}
+
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
 		/* Route all software debug exceptions to EL2 */
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 2eef77e8340d..0fac82b152ca 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -136,3 +136,60 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_guest_num_counters() - Number of counters to show to guest
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the number of counters to show to the guest via
+ * PMCR_EL0.N, making sure to respect the maximum the host allows,
+ * which is hpmn_max if partitioned and host_max otherwise.
+ *
+ * Return: Valid value for PMCR_EL0.N
+ */
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	u8 nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
+	int hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (nr_cnt <= hpmn_max && nr_cnt <= host_max)
+			return nr_cnt;
+		if (hpmn_max <= host_max)
+			return hpmn_max;
+	}
+
+	if (nr_cnt <= host_max)
+		return nr_cnt;
+
+	return host_max;
+}
+
+/**
+ * kvm_pmu_hpmn() - Calculate HPMN field value
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the appropriate value to set for MDCR_EL2.HPMN, ensuring
+ * it always stays below the number of counters on the current CPU and
+ * above 0 unless the CPU has FEAT_HPMN0.
+ *
+ * This function works whether or not the PMU is partitioned.
+ *
+ * Return: A valid HPMN value
+ */
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn = kvm_pmu_guest_num_counters(vcpu);
+	int hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (hpmn == 0 && !cpus_have_final_cap(ARM64_HAS_HPMN0)) {
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			return hpmn_max;
+		else
+			return host_max;
+	}
+
+	return hpmn;
+}
-- 
2.50.0.727.gbf7dc18ff4-goog
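
A worked example of the HPMN calculation above, with assumed values
that are not from the patch (a PMU with 10 event counters and
hpmn_max = 6, for a VM that requested 4 counters):

	/*
	 * kvm_pmu_guest_num_counters() returns 4, so the setup code
	 * effectively programs:
	 */
	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, 4);
	/*
	 * Guest partition: counters 0-3. Host keeps counters 4-9.
	 * Had the VM requested 0 counters on a CPU without FEAT_HPMN0,
	 * kvm_pmu_hpmn() would fall back to hpmn_max = 6 rather than 0.
	 */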
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:09 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-16-coltonlewis@google.com>
Subject: [PATCH v4 15/23] KVM: arm64: Account for partitioning in PMCR_EL0 access

For some reason unknown to me, KVM allows writes to PMCR_EL0.N even
though the architecture specifies that field as RO. Make sure these
accesses conform to additional constraints imposed when the PMU is
partitioned.
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c      | 2 +-
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 30244eb7bc9b..1e5f46c1346c 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -883,7 +883,7 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+	u64 n = kvm_pmu_guest_num_counters(vcpu);
 
 	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
 		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ad9c406734a5..e3d4ca167881 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1247,7 +1247,9 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	 */
 	if (!kvm_vm_has_ran_once(kvm) &&
 	    !vcpu_has_nv(vcpu) &&
-	    new_n <= kvm_arm_pmu_get_max_counters(kvm))
+	    new_n <= kvm_arm_pmu_get_max_counters(kvm) &&
+	    (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+	     new_n <= kvm_pmu_hpmn(vcpu)))
 		kvm->arch.nr_pmu_counters = new_n;
 
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.50.0.727.gbf7dc18ff4-goog
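
Condensed as a predicate, the acceptance test set_pmcr() now applies
to a userspace-requested counter count looks roughly like this (a
sketch with a hypothetical name, omitting the has-ran-once and
nested-virt checks shown in the hunk above):

static bool sketch_pmcr_n_write_ok(struct kvm_vcpu *vcpu, u8 new_n)
{
	/* Never more counters than the PMU exposes... */
	return new_n <= kvm_arm_pmu_get_max_counters(vcpu->kvm) &&
	       /* ...and never past the partition boundary when partitioned. */
	       (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
		new_n <= kvm_pmu_hpmn(vcpu));
}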
From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:10 +0000
From: Colton Lewis
To: kvm@vger.kernel.org
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-17-coltonlewis@google.com>
Subject: [PATCH v4 16/23] KVM: arm64: Context swap Partitioned PMU guest registers

Save and restore newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned.

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If we know we are not using FGT (that is, trapping everything), then
return immediately. Either the PMU is not partitioned, or it is but
all register writes are being written through the VCPU fields to
hardware, so all values are fresh.
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |   4 ++
 arch/arm64/kvm/arm.c             |   2 +
 arch/arm64/kvm/pmu-direct.c      | 101 +++++++++++++++++++++++++++++++
 3 files changed, 107 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 35674879aae0..4f0741bf6779 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -98,6 +98,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -169,6 +171,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e452aba1a3b2..7c007ee44ecb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -616,6 +616,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -658,6 +659,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 0fac82b152ca..16b01320ca77 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -9,6 +9,7 @@
 #include
 
 #include
+#include
 #include
 
 /**
@@ -193,3 +194,103 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return hpmn;
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't using FGT then we are trapping everything
+	 * anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	write_pmcr(val);
+
+	/*
+	 * Loading these registers is tricky because of
+	 * 1. Applying only the bits for guest counters (indicated by mask)
+	 * 2. Setting and clearing are different registers
+	 */
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Put all untrapped PMU registers from the PCPU into the VCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If we aren't using FGT then we are trapping everything
+	 * anyway, so no need to bother with the swap.
+	 */
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_pmccntr();
+	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
+
+	val = read_pmuserenr();
+	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+
+	val = read_pmselr();
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_pmcr();
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	val = read_pmcntenset();
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_pmintenset();
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+}
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:11 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-18-coltonlewis@google.com>
Subject: [PATCH v4 17/23] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis
To: kvm@vger.kernel.org

The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must be
rechecked on every load. If the event is filtered, exclude counting at
all exception levels before writing the hardware.
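
Concretely, "exclude counting at all exception levels" means forcing
every exclude bit on and the EL2 include bit off in the counter's type
register. A sketch of the per-counter decision (the helper name is
illustrative only; the patch below precomputes the same masks as
evtyper_set/evtyper_clr):

	/*
	 * If the configured event is absent from the VM's filter bitmap,
	 * rewrite PMEVTYPER so the event counts nowhere: set every
	 * writable bit outside the event field (the EXCLUDE_* bits) and
	 * drop INCLUDE_EL2.
	 */
	static u64 sketch_filter_evtyper(struct kvm *kvm, u64 evtyper)
	{
		u64 excl = kvm_pmu_evtyper_mask(kvm)
			 & ~kvm_pmu_event_mask(kvm)
			 & ~ARMV8_PMU_INCLUDE_EL2;
		u16 event = evtyper & kvm_pmu_event_mask(kvm);

		if (kvm->arch.pmu_filter && !test_bit(event, kvm->arch.pmu_filter)) {
			evtyper |= excl;
			evtyper &= ~ARMV8_PMU_INCLUDE_EL2;
		}

		return evtyper;
	}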
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-direct.c | 43 +++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 16b01320ca77..e21fdd274c2e 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -195,6 +195,47 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return hpmn;
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Apply the event filter to guest counters
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = kvm_pmu_evtyper_mask(vcpu->kvm)
+		& ~kvm_pmu_event_mask(vcpu->kvm)
+		& ~ARMV8_PMU_INCLUDE_EL2;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(val, vcpu->kvm->arch.pmu_filter)) {
+			val |= evtyper_set;
+			val &= ~evtyper_clr;
+		}
+
+		write_pmevtypern(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(ARMV8_PMUV3_PERFCTR_CPU_CYCLES, vcpu->kvm->arch.pmu_filter)) {
+		val |= evtyper_set;
+		val &= ~evtyper_clr;
+	}
+
+	write_pmccfiltr(val);
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -217,6 +258,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	if (!kvm_vcpu_pmu_use_fgt(vcpu))
 		return;
 
+	kvm_pmu_apply_event_filter(vcpu);
+
 	for (i = 0; i < pmu->hpmn_max; i++) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
 		write_pmevcntrn(i, val);
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:12 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-19-coltonlewis@google.com>
Subject: [PATCH v4 18/23] KVM: arm64: Extract enum debug_owner to enum vcpu_register_owner
From: Colton Lewis
To: kvm@vger.kernel.org

The concept of a register or set of registers being owned by the
host, guest, or neither, and choosing how to handle traps based on
that state, applies equally well to PMU registers as to other debug
registers.
Extract the enum debug_owner previously defined inside struct
kvm_vcpu_arch to its own type and add the field to struct kvm_pmu as
well.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h         | 12 ++++--------
 arch/arm64/include/asm/kvm_pmu.h          |  1 +
 arch/arm64/include/asm/kvm_types.h        |  7 ++++++-
 arch/arm64/kvm/debug.c                    |  8 ++++----
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h |  6 +++---
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 463dbf7f0821..21e32d7fa19b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -846,11 +846,7 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch external_debug_state;
 	u64 external_mdscr_el1;
 
-	enum {
-		VCPU_DEBUG_FREE,
-		VCPU_DEBUG_HOST_OWNED,
-		VCPU_DEBUG_GUEST_OWNED,
-	} debug_owner;
+	enum vcpu_register_owner debug_owner;
 
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
@@ -1467,11 +1463,11 @@ void kvm_debug_handle_oslar(struct kvm_vcpu *vcpu, u64 val);
 	(!!(__vcpu_sys_reg(vcpu, OSLSR_EL1) & OSLSR_EL1_OSLK))
 
 #define kvm_debug_regs_in_use(vcpu)		\
-	((vcpu)->arch.debug_owner != VCPU_DEBUG_FREE)
+	((vcpu)->arch.debug_owner != VCPU_REGISTER_FREE)
 #define kvm_host_owns_debug_regs(vcpu)		\
-	((vcpu)->arch.debug_owner == VCPU_DEBUG_HOST_OWNED)
+	((vcpu)->arch.debug_owner == VCPU_REGISTER_HOST_OWNED)
 #define kvm_guest_owns_debug_regs(vcpu)		\
-	((vcpu)->arch.debug_owner == VCPU_DEBUG_GUEST_OWNED)
+	((vcpu)->arch.debug_owner == VCPU_REGISTER_GUEST_OWNED)
 
 int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
 			       struct kvm_device_attr *attr);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 4f0741bf6779..58c1219adf54 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu {
 	int irq_num;
 	bool created;
 	bool irq_level;
+	enum vcpu_register_owner owner;
 };
 
 struct arm_pmu_entry {
diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
index 9a126b9e2d7c..1d951fb1ad78 100644
--- a/arch/arm64/include/asm/kvm_types.h
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -4,5 +4,10 @@
 
 #define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
 
-#endif /* _ASM_ARM64_KVM_TYPES_H */
+enum vcpu_register_owner {
+	VCPU_REGISTER_FREE,
+	VCPU_REGISTER_HOST_OWNED,
+	VCPU_REGISTER_GUEST_OWNED,
+};
 
+#endif /* _ASM_ARM64_KVM_TYPES_H */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 8ae9d141cad4..fa8b4f846b68 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -161,7 +161,7 @@ void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu)
 	 * context needs to be loaded on the CPU.
 	 */
 	if (vcpu->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) {
-		vcpu->arch.debug_owner = VCPU_DEBUG_HOST_OWNED;
+		vcpu->arch.debug_owner = VCPU_REGISTER_HOST_OWNED;
 		setup_external_mdscr(vcpu);
 
 		/*
@@ -183,9 +183,9 @@ void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu)
 		mdscr = vcpu_read_sys_reg(vcpu, MDSCR_EL1);
 
 		if (mdscr & (MDSCR_EL1_KDE | MDSCR_EL1_MDE))
-			vcpu->arch.debug_owner = VCPU_DEBUG_GUEST_OWNED;
+			vcpu->arch.debug_owner = VCPU_REGISTER_GUEST_OWNED;
 		else
-			vcpu->arch.debug_owner = VCPU_DEBUG_FREE;
+			vcpu->arch.debug_owner = VCPU_REGISTER_FREE;
 	}
 
 	kvm_arm_setup_mdcr_el2(vcpu);
@@ -222,7 +222,7 @@ void kvm_debug_set_guest_ownership(struct kvm_vcpu *vcpu)
 	if (kvm_host_owns_debug_regs(vcpu))
 		return;
 
-	vcpu->arch.debug_owner = VCPU_DEBUG_GUEST_OWNED;
+	vcpu->arch.debug_owner = VCPU_REGISTER_GUEST_OWNED;
 	kvm_arm_setup_mdcr_el2(vcpu);
 }
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 502a5b73ee70..048234439a41 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -91,12 +91,12 @@
 static struct kvm_guest_debug_arch *__vcpu_debug_regs(struct kvm_vcpu *vcpu)
 {
 	switch (vcpu->arch.debug_owner) {
-	case VCPU_DEBUG_FREE:
+	case VCPU_REGISTER_FREE:
 		WARN_ON_ONCE(1);
 		fallthrough;
-	case VCPU_DEBUG_GUEST_OWNED:
+	case VCPU_REGISTER_GUEST_OWNED:
 		return &vcpu->arch.vcpu_debug_state;
-	case VCPU_DEBUG_HOST_OWNED:
+	case VCPU_REGISTER_HOST_OWNED:
 		return &vcpu->arch.external_debug_state;
 	}
 
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:13 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-20-coltonlewis@google.com>
Subject: [PATCH v4 19/23] KVM: arm64: Implement lazy PMU context swaps
From: Colton Lewis
To: kvm@vger.kernel.org

Since many guests will never touch the PMU, they need not pay the
cost of context swapping those registers.

Use the ownership enum from the previous commit to implement a simple
state machine for PMU ownership. The PMU is always in one of three
states: host owned, guest owned, or free.

A host owned state means all PMU registers are trapped coarsely by
MDCR_EL2.TPM. In host owned state PMU partitioning is disabled and
the PMU may not transition to a different state without intervention
from the host.

A guest owned state means some PMU registers are untrapped under FGT
controls.
This is the only state in which context swaps take place.

A free state is the default partitioned state. It means no context
swaps take place and KVM keeps the registers trapped.

If a guest accesses the PMU registers in a free state, the PMU
transitions to a guest owned state and KVM recalculates MDCR_EL2 to
unset TPM.
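
The resulting state machine is small enough to sketch in a few lines
(a standalone illustration, not the patch's code; the guest-access
transition is implemented below as kvm_pmu_regs_set_guest_owned(), and
the userspace-driven transitions arrive in a later patch of this
series):

	/*
	 * HOST_OWNED --(userspace enables partitioning)--> FREE
	 * FREE -------(guest touches a PMU register)-----> GUEST_OWNED
	 * any state --(userspace disables partitioning)--> HOST_OWNED
	 *
	 * Only GUEST_OWNED pays for the register swap at vcpu_load/put.
	 */
	static void sketch_on_guest_pmu_access(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.pmu.owner == VCPU_REGISTER_FREE) {
			vcpu->arch.pmu.owner = VCPU_REGISTER_GUEST_OWNED;
			kvm_arm_setup_mdcr_el2(vcpu); /* drop the coarse TPM trap */
		}
	}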
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_pmu.h  | 18 ++++++++++++++++++
 arch/arm64/kvm/debug.c            |  2 +-
 arch/arm64/kvm/pmu-direct.c       |  4 +++-
 arch/arm64/kvm/pmu.c              | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         | 24 ++++++++++++++++++++++--
 6 files changed, 69 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21e32d7fa19b..f6803b57b648 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1453,6 +1453,7 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
 	return cpus_have_final_cap(ARM64_SPECTRE_V3A);
 }
 
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu);
 void kvm_init_host_debug_data(void);
 void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 58c1219adf54..47cfff7ebc26 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -97,6 +97,11 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu);
+bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu);
+bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu);
+void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu);
+
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
 void kvm_pmu_load(struct kvm_vcpu *vcpu);
@@ -168,6 +173,19 @@ static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+static inline bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu) {}
 static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index fa8b4f846b68..128fa17b7a35 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -28,7 +28,7 @@
  * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
  * - Self-hosted Trace (MDCR_EL2_TTRF/MDCR_EL2_E2TB)
  */
-static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
 	preempt_disable();
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index e21fdd274c2e..28d8540c5ed2 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -52,7 +52,8 @@
  */
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 {
-	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) &&
+		!kvm_pmu_regs_host_owned(vcpu);
 }
 
 /**
@@ -69,6 +70,7 @@ bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
 
 	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+	       kvm_pmu_regs_guest_owned(vcpu) &&
 	       cpus_have_final_cap(ARM64_HAS_FGT) &&
 	       (hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
 }
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 1e5f46c1346c..db11a3e9c4b7 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -496,6 +496,7 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 	init_irq_work(&vcpu->arch.pmu.overflow_work,
 		      kvm_pmu_perf_overflow_notify_vcpu);
 
+	vcpu->arch.pmu.owner = VCPU_REGISTER_HOST_OWNED;
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
@@ -906,3 +907,26 @@ bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
 {
 	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
 }
+
+bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_FREE;
+}
+
+bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_HOST_OWNED;
+}
+
+bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_GUEST_OWNED;
+}
+
+void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu)
+{
+	if (kvm_pmu_regs_free(vcpu)) {
+		vcpu->arch.pmu.owner = VCPU_REGISTER_GUEST_OWNED;
+		kvm_arm_setup_mdcr_el2(vcpu);
+	}
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e3d4ca167881..7d4b194bfa0a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -860,6 +860,8 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 val;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -887,6 +889,8 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_event_counter_el0_disabled(vcpu))
 		return false;
 
@@ -905,6 +909,8 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 pmceid, mask, shift;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	BUG_ON(p->is_write);
 
 	if (pmu_access_el0_disabled(vcpu))
@@ -973,6 +979,8 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 {
 	u64 idx = ~0UL;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (r->CRn == 9 && r->CRm == 13) {
 		if (r->Op2 == 2) {
 			/* PMXEVCNTR_EL0 */
@@ -1049,6 +1057,8 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 idx, reg, pmselr;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -1110,6 +1120,8 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 val, mask;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -1134,7 +1146,10 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			   const struct sys_reg_desc *r)
 {
-	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+	u64 mask;
+
+	kvm_pmu_regs_set_guest_owned(vcpu);
+	mask = kvm_pmu_accessible_counter_mask(vcpu);
 
 	if (check_pmu_access_disabled(vcpu, 0))
 		return false;
@@ -1171,7 +1186,10 @@ static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
-	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+	u64 mask;
+
+	kvm_pmu_regs_set_guest_owned(vcpu);
+	mask = kvm_pmu_accessible_counter_mask(vcpu);
 
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
@@ -1211,6 +1229,8 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			     const struct sys_reg_desc *r)
 {
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (p->is_write) {
 		if (!vcpu_mode_priv(vcpu))
 			return undef_access(vcpu, p, r);
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:14 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-21-coltonlewis@google.com>
Subject: [PATCH v4 20/23] perf: arm_pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis
To: kvm@vger.kernel.org

Guest counters will still trigger interrupts that need to be handled
by the host PMU interrupt handler. Clear the overflow flags in
hardware to handle the interrupt as normal, but record which guest
overflow flags were set in the virtual overflow register for later
injecting the interrupt into the guest.
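
Put differently, the handler splits the overflow bits in two: host
bits feed perf as usual, and guest bits are only latched into the
vCPU's virtual PMOVSSET. A compressed sketch of that flow, where
process_host_overflows() is a hypothetical stand-in for the existing
perf_event_overflow() loop in armv8pmu_handle_irq() below:

	static void sketch_split_overflow(struct arm_pmu *pmu, u64 pmovsr)
	{
		u64 guest = pmovsr & kvm_pmu_guest_counter_mask(pmu);

		/* Host-owned overflows: handled by perf right here. */
		process_host_overflows(pmovsr & ~guest);

		/* Guest-owned overflows: stashed for later injection. */
		if (kvm_pmu_is_partitioned(pmu) && guest)
			kvm_pmu_handle_guest_irq(guest);
	}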
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h |  6 ++++++
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-direct.c      | 17 +++++++++++++++++
 drivers/perf/arm_pmuv3.c         |  9 +++++++++
 4 files changed, 34 insertions(+)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 5f6269039f44..36638efe4258 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }
 
+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -249,6 +254,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }
 
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 47cfff7ebc26..6149eb051ff9 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -96,6 +96,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(u64 govf);
 
 bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu);
 bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu);
@@ -310,6 +311,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 
 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 #endif
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 28d8540c5ed2..3f9e0d4a74e1 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -339,3 +339,20 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	val = read_pmintenset();
 	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @govf: Bitmask of guest overflowed counters
+ *
+ * Record IRQs from overflows in guest-reserved counters in the VCPU
+ * register for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(u64 govf)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (!vcpu)
+		return;
+
+	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index bc8a99cf4f88..6307cd851eb6 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -755,6 +755,8 @@ static u64 armv8pmu_getreset_flags(void)
 
 	/* Write to clear flags */
 	value &= ARMV8_PMU_CNT_MASK_ALL;
+	/* Only reset interrupt enabled counters. */
+	value &= read_pmintenset();
 	write_pmovsclr(value);
 
 	return value;
@@ -857,6 +859,7 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u64 pmovsr;
+	u64 govf;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
@@ -911,6 +914,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		 */
 		perf_event_overflow(event, &data, regs);
 	}
+
+	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
+
+	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
+		kvm_pmu_handle_guest_irq(govf);
+
 	armv8pmu_start(cpu_pmu);
 
 	return IRQ_HANDLED;
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:15 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-22-coltonlewis@google.com>
Subject: [PATCH v4 21/23] KVM: arm64: Inject recorded guest interrupts
From: Colton Lewis
To: kvm@vger.kernel.org

When we re-enter the VM after handling a PMU interrupt, calculate
whether it was any of the guest counters that overflowed and inject
an interrupt into the guest if so.
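
The decision reduces to one expression over live and saved state: the
global enable bit, intersected with the guest-reserved counters that
have both a latched overflow and an enabled interrupt. In isolation
(a sketch mirroring the kvm_pmu_part_overflow_status() added below):

	static bool sketch_guest_irq_pending(struct kvm_vcpu *vcpu)
	{
		u64 mask = kvm_pmu_guest_counter_mask(vcpu->kvm->arch.arm_pmu);
		u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); /* latched by the host IRQ handler */
		u64 pmint = read_pmintenset();                  /* live hardware state */

		return (read_pmcr() & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
	}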
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-direct.c      | 22 +++++++++++++++++++++-
 arch/arm64/kvm/pmu-emul.c        |  4 ++--
 arch/arm64/kvm/pmu.c             |  6 +++++-
 4 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 6149eb051ff9..908e43416b50 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -87,6 +87,8 @@ bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu);
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu);
 
 #define kvm_vcpu_has_pmu(vcpu)					\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 3f9e0d4a74e1..80a3eb89fca1 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -280,7 +280,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	write_pmcr(val);
 
 	/*
-	 * Loading these registers is tricky because of
+	 * Loading these registers is more intricate because of
 	 * 1. Applying only the bits for guest counters (indicated by mask)
 	 * 2. Setting and clearing are different registers
 	 */
@@ -356,3 +356,23 @@ void kvm_pmu_handle_guest_irq(u64 govf)
 
 	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
 }
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	u64 pmint = read_pmintenset();
+	u64 pmcr = read_pmcr();
+
+	return (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+}
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index bcaa9f7a8ca2..6f41fc3e3f74 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 * counter where the values of the global enable control, PMOVSSET_EL0[n], and
 * PMINTENSET_EL1[n] are all 1.
 */
-bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 		kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
 					  ARMV8_PMUV3_PERFCTR_CHAIN);
 
-	if (kvm_pmu_overflow_status(vcpu)) {
+	if (kvm_pmu_emul_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 
 		if (!in_nmi())
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index db11a3e9c4b7..d837cb8fef68 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -407,7 +407,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	bool overflow;
 
-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
 	if (pmu->irq_level == overflow)
 		return;
 
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:16 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-23-coltonlewis@google.com>
Subject: [PATCH v4 22/23] KVM: arm64: Add ioctl to partition the PMU when supported
From: Colton Lewis
To: kvm@vger.kernel.org

Add KVM_ARM_PARTITION_PMU to enable the partitioned PMU for a given
vCPU. Add a corresponding KVM_CAP_ARM_PARTITION_PMU to check for this
ability. This capability is allowed on an initialized vCPU where
PMUv3 and VHE are supported.

However, because the underlying ability relies on the driver being
passed some command line arguments to configure the hardware
partition at boot, enabling the partitioned PMU will not be allowed
without the underlying driver configuration even though the
capability exists.
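
From userspace the sequence is: probe the capability on the VM, then
flip the switch per vCPU. A hypothetical VMM snippet (error handling
trimmed; it assumes vcpu_fd has already been through
KVM_ARM_VCPU_INIT):

	bool enable = true;

	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PARTITION_PMU) > 0) {
		/*
		 * Can still fail with -EPERM if the host driver was not
		 * booted with arm-pmuv3.partition_pmu=y.
		 */
		if (ioctl(vcpu_fd, KVM_ARM_PARTITION_PMU, &enable) < 0)
			perror("KVM_ARM_PARTITION_PMU");
	}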
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 908e43416b50..c9d5fe325864 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -110,6 +110,8 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
 void kvm_pmu_load(struct kvm_vcpu *vcpu);
 void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
+void kvm_vcpu_pmu_partition_enable(struct kvm_vcpu *vcpu, bool enable);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -296,17 +298,17 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
-static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+static inline bool kvm_pmu_is_partitioned(void *)
 {
 	return false;
 }
 
-static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+static inline u64 kvm_pmu_host_counter_mask(void *)
 {
 	return ~0;
 }
 
-static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+static inline u64 kvm_pmu_guest_counter_mask(void *)
 {
 	return ~0;
 }
@@ -315,6 +317,8 @@ static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
 static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
+static inline void kvm_vcpu_pmu_partition_enable(struct kvm_vcpu *vcpu, bool enable) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7c007ee44ecb..94274bee4e65 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define CREATE_TRACE_POINTS
@@ -38,6 +39,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -383,6 +385,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PMU_V3:
 		r = kvm_supports_guest_pmuv3();
 		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		r = kvm_pmu_partition_supported();
+		break;
 	case KVM_CAP_ARM_INJECT_SERROR_ESR:
 		r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
 		break;
@@ -1810,6 +1815,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
 		return kvm_arm_vcpu_finalize(vcpu, what);
 	}
+	case KVM_ARM_PARTITION_PMU: {
+		bool enable;
+
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			return -ENOEXEC;
+
+		if (!kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu))
+			return -EPERM;
+
+		if (copy_from_user(&enable, argp, sizeof(enable)))
+			return -EFAULT;
+
+		kvm_vcpu_pmu_partition_enable(vcpu, enable);
+		return 0;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 80a3eb89fca1..04e7b6a1d749 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -56,6 +56,23 @@ bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 		!kvm_pmu_regs_host_owned(vcpu);
 }
 
+/**
+ * kvm_vcpu_pmu_partition_enable() - Enable/disable partition flag
+ * @vcpu: Pointer to vcpu
+ * @enable: Whether to enable or disable
+ *
+ * If we want to enable the partition, the guest is free to grab
+ * hardware by accessing PMU registers. Otherwise, the host maintains
+ * control.
+ */
+void kvm_vcpu_pmu_partition_enable(struct kvm_vcpu *vcpu, bool enable)
+{
+	if (enable)
+		vcpu->arch.pmu.owner = VCPU_REGISTER_FREE;
+	else
+		vcpu->arch.pmu.owner = VCPU_REGISTER_HOST_OWNED;
+}
+
 /**
  * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT
  * @vcpu: Pointer to struct kvm_vcpu
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c74cf8f73337..2f8a8d4cfe3c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -935,6 +935,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_EL2_E2H0 241
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_GMEM_SHARED_MEM 243
+#define KVM_CAP_ARM_PARTITION_PMU 244
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1413,6 +1414,9 @@ struct kvm_enc_region {
 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2)
 
+/* Available with KVM_CAP_ARM_PARTITION_PMU */
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, bool)
+
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)

-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Tue Oct 7 03:46:17 2025
Date: Mon, 14 Jul 2025 22:59:17 +0000
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250714225917.1396543-1-coltonlewis@google.com>
Message-ID: <20250714225917.1396543-24-coltonlewis@google.com>
Subject: [PATCH v4 23/23] KVM: arm64: selftests: Add test case for partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="utf-8"

Run a separate test case for the partitioned PMU in
vpmu_counter_access. Add an enum specifying whether we are testing the
emulated or the partitioned PMU, and pass the implementation to every
test function so each can adjust its setup appropriately.

Because the test should still succeed on a machine where we have the
capability but the ioctl fails because the driver was never configured
properly, use __vcpu_ioctl to avoid checking the return code.
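As a rough sketch of that pattern (an illustration, not part of the
patch), assuming a struct kvm_vcpu *vcpu already set up by
create_vpmu_vm() and the selftest framework's __vcpu_ioctl() and
pr_debug() helpers as used in the diff below:

	bool partition = true;
	int ret;

	/*
	 * vcpu_ioctl() would assert success and kill the test on hosts
	 * where the capability exists but the driver was never
	 * configured; __vcpu_ioctl() just hands back the return value.
	 */
	ret = __vcpu_ioctl(vcpu, KVM_ARM_PARTITION_PMU, &partition);
	if (ret)
		pr_debug("Partitioned PMU unavailable (ret = %d)\n", ret);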
Signed-off-by: Colton Lewis
---
 tools/include/uapi/linux/kvm.h                |  2 +
 .../selftests/kvm/arm64/vpmu_counter_access.c | 62 +++++++++++++------
 2 files changed, 46 insertions(+), 18 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index b6ae8ad8934b..21a2c37528c8 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_ARM_PARTITION_PMU 244
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1356,6 +1357,7 @@ struct kvm_vfio_spapr_tce {
 #define KVM_S390_SET_CMMA_BITS _IOW(KVMIO, 0xb9, struct kvm_s390_cmma_log)
 /* Memory Encryption Commands */
 #define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, unsigned long)
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, bool)
 
 struct kvm_enc_region {
 	__u64 addr;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..92e665516bc8 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -25,6 +25,16 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31
 
+enum pmu_impl {
+	EMULATED,
+	PARTITIONED
+};
+
+const char *pmu_impl_str[] = {
+	"Emulated",
+	"Partitioned"
+};
+
 struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -405,7 +415,7 @@ static void guest_code(uint64_t expected_pmcr_n)
 }
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
+static void create_vpmu_vm(void *guest_code, enum pmu_impl impl)
 {
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
@@ -419,6 +429,7 @@ static void create_vpmu_vm(void *guest_code)
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
+	bool partition = (impl == PARTITIONED);
 
 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
@@ -449,6 +460,9 @@ static void create_vpmu_vm(void *guest_code)
 	/* Initialize vPMU */
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		__vcpu_ioctl(vpmu_vm.vcpu, KVM_ARM_PARTITION_PMU, &partition);
 }
 
 static void destroy_vpmu_vm(void)
@@ -475,12 +489,12 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }
 
-static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
+static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, enum pmu_impl impl, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	uint64_t pmcr, pmcr_orig;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	vcpu = vpmu_vm.vcpu;
 
 	pmcr_orig = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
@@ -508,7 +522,7 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
 * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
 * and run the test.
 */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	uint64_t sp;
 	struct kvm_vcpu *vcpu;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
 
-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -550,14 +564,14 @@ static struct pmreg_sets validity_check_reg_sets[] = {
 * Create a VM, and check if KVM handles the userspace accesses of
 * the PMU register sets in @validity_check_reg_sets[] correctly.
 */
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 	uint64_t set_reg_id, clr_reg_id, reg_val;
 	uint64_t valid_counters_mask, max_counters_mask;
 
-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;
 
 	valid_counters_mask = get_counters_mask(pmcr_n);
@@ -607,11 +621,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
 * the vCPU to @pmcr_n, which is larger than the host value.
 * The attempt should fail as @pmcr_n is too big to set for the vCPU.
 */
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, true);
 	destroy_vpmu_vm();
 }
 
@@ -619,30 +633,42 @@ static void run_error_test(uint64_t pmcr_n)
 * Return the default number of implemented PMU event counters excluding
 * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
 */
-static uint64_t get_pmcr_n_limit(void)
+static uint64_t get_pmcr_n_limit(enum pmu_impl impl)
 {
 	uint64_t pmcr;
 
-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
 	destroy_vpmu_vm();
 	return get_pmcr_n(pmcr);
 }
 
-int main(void)
+void test_pmu(enum pmu_impl impl)
 {
 	uint64_t i, pmcr_n;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	pr_info("Testing PMU: Implementation = %s\n", pmu_impl_str[impl]);
+
+	pmcr_n = get_pmcr_n_limit(impl);
+	pr_debug("PMCR_EL0.N: Limit = %lu\n", pmcr_n);
 
-	pmcr_n = get_pmcr_n_limit();
 	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
-		run_pmregs_validity_test(i);
+		run_access_test(i, impl);
+		run_pmregs_validity_test(i, impl);
 	}
 
 	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+		run_error_test(i, impl);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	test_pmu(EMULATED);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_pmu(PARTITIONED);
 
 	return 0;
 }
-- 
2.50.0.727.gbf7dc18ff4-goog