From: Atish Patra
Date: Wed, 05 Feb 2025 23:23:16 -0800
Subject: [PATCH v4 11/21] RISC-V: perf: Restructure the SBI PMU code
Message-Id: <20250205-counter_delegation-v4-11-835cfa88e3b1@rivosinc.com>
References: <20250205-counter_delegation-v4-0-835cfa88e3b1@rivosinc.com>
In-Reply-To: <20250205-counter_delegation-v4-0-835cfa88e3b1@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Anup Patel, Atish Patra, Will Deacon, Mark Rutland,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
    Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    weilin.wang@intel.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Conor Dooley, devicetree@vger.kernel.org, kvm@vger.kernel.org,
    kvm-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-perf-users@vger.kernel.org, Atish Patra
X-Mailer: b4 0.15-dev-13183

With Ssccfg/Smcdeleg, the SBI PMU extension is no longer needed to
program or access hpmcounters/events; it is still required for firmware
counters, however. Rename the driver and its related code to a generic
name that covers both the SBI and ISA mechanisms for hpmcounter
operations. Take this opportunity to update the Kconfig names to
closely match the new driver name.

No functional change intended.
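For reviewers, the shape this restructuring is heading toward: each
generic rvpmu_* callback below becomes a thin wrapper that will
eventually pick either the SBI backend or the counter delegation (CSR)
backend. A minimal sketch of that eventual dispatch follows;
cdeleg_available() and rvpmu_deleg_ctr_start() are hypothetical names
used purely for illustration, since in this patch every wrapper still
calls the SBI path unconditionally, with TODO comments marking the
delegation hook points:

  /*
   * Illustrative sketch only, not part of this patch. The delegation
   * predicate and backend are hypothetical names; today the wrapper
   * calls the SBI backend unconditionally.
   */
  static void rvpmu_ctr_start(struct perf_event *event, u64 ival)
  {
  	if (cdeleg_available())		/* hypothetical: Ssccfg/Smcdeleg probed at boot */
  		rvpmu_deleg_ctr_start(event, ival);	/* hypothetical CSR-based path */
  	else
  		rvpmu_sbi_ctr_start(event, ival);	/* existing SBI PMU path */
  }

The same wrapper pattern covers ctr_stop, find_num_ctrs, get_ctrinfo,
event_map, ctr_get_idx, and ctr_read.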
Signed-off-by: Atish Patra
Reviewed-by: Clément Léger
---
 MAINTAINERS                                        |   4 +-
 arch/riscv/include/asm/kvm_vcpu_pmu.h              |   4 +-
 arch/riscv/include/asm/kvm_vcpu_sbi.h              |   2 +-
 arch/riscv/kvm/Makefile                            |   4 +-
 arch/riscv/kvm/vcpu_sbi.c                          |   2 +-
 drivers/perf/Kconfig                               |  16 +-
 drivers/perf/Makefile                              |   4 +-
 drivers/perf/{riscv_pmu.c => riscv_pmu_common.c}   |   0
 drivers/perf/{riscv_pmu_sbi.c => riscv_pmu_dev.c}  | 214 +++++++++++++---------
 include/linux/perf/riscv_pmu.h                     |   8 +-
 10 files changed, 151 insertions(+), 107 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 30cbc3d44cd5..2ef7ff933266 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -20177,9 +20177,9 @@ M:	Atish Patra
 R:	Anup Patel
 L:	linux-riscv@lists.infradead.org
 S:	Supported
-F:	drivers/perf/riscv_pmu.c
+F:	drivers/perf/riscv_pmu_common.c
+F:	drivers/perf/riscv_pmu_dev.c
 F:	drivers/perf/riscv_pmu_legacy.c
-F:	drivers/perf/riscv_pmu_sbi.c

 RISC-V THEAD SoC SUPPORT
 M:	Drew Fustini
diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 1d85b6617508..aa75f52e9092 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -13,7 +13,7 @@
 #include
 #include

-#ifdef CONFIG_RISCV_PMU_SBI
+#ifdef CONFIG_RISCV_PMU
 #define RISCV_KVM_MAX_FW_CTRS 32
 #define RISCV_KVM_MAX_HW_CTRS 32
 #define RISCV_KVM_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)
@@ -128,5 +128,5 @@ static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned lon

 static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {}
-#endif /* CONFIG_RISCV_PMU_SBI */
+#endif /* CONFIG_RISCV_PMU */
 #endif /* !__KVM_VCPU_RISCV_PMU_H */
diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index b96705258cf9..764bb158e760 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -89,7 +89,7 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor;

-#ifdef CONFIG_RISCV_PMU_SBI
+#ifdef CONFIG_RISCV_PMU
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu;
 #endif
 #endif /* __RISCV_KVM_VCPU_SBI_H__ */
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 0fb1840c3e0a..f4ad7af0bdab 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -23,11 +23,11 @@ kvm-y += vcpu_exit.o
 kvm-y += vcpu_fp.o
 kvm-y += vcpu_insn.o
 kvm-y += vcpu_onereg.o
-kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
+kvm-$(CONFIG_RISCV_PMU) += vcpu_pmu.o
 kvm-y += vcpu_sbi.o
 kvm-y += vcpu_sbi_base.o
 kvm-y += vcpu_sbi_hsm.o
-kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_sbi_pmu.o
+kvm-$(CONFIG_RISCV_PMU) += vcpu_sbi_pmu.o
 kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_sta.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index 6e704ed86a83..4eaf9b0f736b 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -20,7 +20,7 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
 };
 #endif

-#ifndef CONFIG_RISCV_PMU_SBI
+#ifndef CONFIG_RISCV_PMU
 static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
 	.extid_start = -1UL,
 	.extid_end = -1UL,
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 4e268de351c4..b3bdff2a99a4 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -75,7 +75,7 @@ config ARM_XSCALE_PMU
 	depends on ARM_PMU && CPU_XSCALE
 	def_bool y

-config RISCV_PMU
+config RISCV_PMU_COMMON
 	depends on RISCV
 	bool "RISC-V PMU framework"
 	default y
@@ -86,7 +86,7 @@
 	  can reuse it.

 config RISCV_PMU_LEGACY
-	depends on RISCV_PMU
+	depends on RISCV_PMU_COMMON
 	bool "RISC-V legacy PMU implementation"
 	default y
 	help
@@ -95,15 +95,15 @@ config RISCV_PMU_LEGACY
 	  of cycle/instruction counter and doesn't support counter overflow,
 	  or programmable counters. It will be removed in future.

-config RISCV_PMU_SBI
-	depends on RISCV_PMU && RISCV_SBI
-	bool "RISC-V PMU based on SBI PMU extension"
+config RISCV_PMU
+	depends on RISCV_PMU_COMMON && RISCV_SBI
+	bool "RISC-V PMU based on SBI PMU extension and/or Counter delegation extension"
 	default y
 	help
 	  Say y if you want to use the CPU performance monitor
-	  using SBI PMU extension on RISC-V based systems. This option provides
-	  full perf feature support i.e. counter overflow, privilege mode
-	  filtering, counter configuration.
+	  using SBI PMU extension or counter delegation ISA extension on RISC-V
+	  based systems. This option provides full perf feature support i.e.
+	  counter overflow, privilege mode filtering, counter configuration.

 config STARFIVE_STARLINK_PMU
 	depends on ARCH_STARFIVE || COMPILE_TEST
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index de71d2574857..0805d740c773 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -16,9 +16,9 @@ obj-$(CONFIG_FSL_IMX9_DDR_PMU) += fsl_imx9_ddr_perf.o
 obj-$(CONFIG_HISI_PMU) += hisilicon/
 obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o
 obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
-obj-$(CONFIG_RISCV_PMU) += riscv_pmu.o
+obj-$(CONFIG_RISCV_PMU_COMMON) += riscv_pmu_common.o
 obj-$(CONFIG_RISCV_PMU_LEGACY) += riscv_pmu_legacy.o
-obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o
+obj-$(CONFIG_RISCV_PMU) += riscv_pmu_dev.o
 obj-$(CONFIG_STARFIVE_STARLINK_PMU) += starfive_starlink_pmu.o
 obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu_common.c
similarity index 100%
rename from drivers/perf/riscv_pmu.c
rename to drivers/perf/riscv_pmu_common.c
diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_dev.c
similarity index 87%
rename from drivers/perf/riscv_pmu_sbi.c
rename to drivers/perf/riscv_pmu_dev.c
index 1aa303f76cc7..6b43d844eaea 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -8,7 +8,7 @@
  * sparc64 and x86 code.
  */

-#define pr_fmt(fmt) "riscv-pmu-sbi: " fmt
+#define pr_fmt(fmt) "riscv-pmu-dev: " fmt

 #include
 #include
@@ -87,6 +87,8 @@ static const struct attribute_group *riscv_pmu_attr_groups[] = {
 static int sysctl_perf_user_access __read_mostly = SYSCTL_USER_ACCESS;

 /*
+ * This structure is SBI-specific, but counter delegation also requires the
+ * counter width and CSR mapping. Reuse it for now.
  * RISC-V doesn't have heterogeneous harts yet. This need to be part of
 * per_cpu in case of harts with different pmu counters
 */
@@ -119,7 +121,7 @@ struct sbi_pmu_event_data {
 	};
 };

-static struct sbi_pmu_event_data pmu_hw_event_map[] = {
+static struct sbi_pmu_event_data pmu_hw_event_sbi_map[] = {
 	[PERF_COUNT_HW_CPU_CYCLES]	= {.hw_gen_event = {
 							SBI_PMU_HW_CPU_CYCLES,
 							SBI_PMU_EVENT_TYPE_HW, 0}},
@@ -153,7 +155,7 @@ static struct sbi_pmu_event_data pmu_hw_event_map[] = {
 };

 #define C(x) PERF_COUNT_HW_CACHE_##x
-static struct sbi_pmu_event_data pmu_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
+static struct sbi_pmu_event_data pmu_cache_event_sbi_map[PERF_COUNT_HW_CACHE_MAX]
 	[PERF_COUNT_HW_CACHE_OP_MAX]
 	[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
 	[C(L1D)] = {
@@ -298,7 +300,7 @@ static struct sbi_pmu_event_data pmu_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
 	},
 };

-static void pmu_sbi_check_event(struct sbi_pmu_event_data *edata)
+static void rvpmu_sbi_check_event(struct sbi_pmu_event_data *edata)
 {
 	struct sbiret ret;

@@ -313,25 +315,25 @@ static void pmu_sbi_check_event(struct sbi_pmu_event_data *edata)
 	}
 }

-static void pmu_sbi_check_std_events(struct work_struct *work)
+static void rvpmu_sbi_check_std_events(struct work_struct *work)
 {
-	for (int i = 0; i < ARRAY_SIZE(pmu_hw_event_map); i++)
-		pmu_sbi_check_event(&pmu_hw_event_map[i]);
+	for (int i = 0; i < ARRAY_SIZE(pmu_hw_event_sbi_map); i++)
+		rvpmu_sbi_check_event(&pmu_hw_event_sbi_map[i]);

-	for (int i = 0; i < ARRAY_SIZE(pmu_cache_event_map); i++)
-		for (int j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++)
-			for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++)
-				pmu_sbi_check_event(&pmu_cache_event_map[i][j][k]);
+	for (int i = 0; i < ARRAY_SIZE(pmu_cache_event_sbi_map); i++)
+		for (int j = 0; j < ARRAY_SIZE(pmu_cache_event_sbi_map[i]); j++)
+			for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_sbi_map[i][j]); k++)
+				rvpmu_sbi_check_event(&pmu_cache_event_sbi_map[i][j][k]);
 }

-static DECLARE_WORK(check_std_events_work, pmu_sbi_check_std_events);
+static DECLARE_WORK(check_std_events_work, rvpmu_sbi_check_std_events);

-static int pmu_sbi_ctr_get_width(int idx)
+static int rvpmu_ctr_get_width(int idx)
 {
 	return pmu_ctr_list[idx].width;
 }

-static bool pmu_sbi_ctr_is_fw(int cidx)
+static bool rvpmu_ctr_is_fw(int cidx)
 {
 	union sbi_pmu_ctr_info *info;

@@ -373,12 +375,12 @@ int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr)
 }
 EXPORT_SYMBOL_GPL(riscv_pmu_get_hpm_info);

-static uint8_t pmu_sbi_csr_index(struct perf_event *event)
+static uint8_t rvpmu_csr_index(struct perf_event *event)
 {
 	return pmu_ctr_list[event->hw.idx].csr - CSR_CYCLE;
 }

-static unsigned long pmu_sbi_get_filter_flags(struct perf_event *event)
+static unsigned long rvpmu_sbi_get_filter_flags(struct perf_event *event)
 {
 	unsigned long cflags = 0;
 	bool guest_events = false;
@@ -399,7 +401,7 @@ static unsigned long pmu_sbi_get_filter_flags(struct perf_event *event)
 	return cflags;
 }

-static int pmu_sbi_ctr_get_idx(struct perf_event *event)
+static int rvpmu_sbi_ctr_get_idx(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
 	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
@@ -409,7 +411,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 	uint64_t cbase = 0, cmask = rvpmu->cmask;
 	unsigned long cflags = 0;

-	cflags = pmu_sbi_get_filter_flags(event);
+	cflags = rvpmu_sbi_get_filter_flags(event);

 	/*
 	 * In legacy mode, we have to force the fixed counters for those events
@@ -446,7 +448,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 		return -ENOENT;

 	/* Additional sanity check for the counter id */
-	if (pmu_sbi_ctr_is_fw(idx)) {
+	if (rvpmu_ctr_is_fw(idx)) {
 		if (!test_and_set_bit(idx, cpuc->used_fw_ctrs))
 			return idx;
 	} else {
@@ -457,7 +459,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 	return -ENOENT;
 }

-static void pmu_sbi_ctr_clear_idx(struct perf_event *event)
+static void rvpmu_ctr_clear_idx(struct perf_event *event)
 {

 	struct hw_perf_event *hwc = &event->hw;
@@ -465,13 +467,13 @@ static void pmu_sbi_ctr_clear_idx(struct perf_event *event)
 	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
 	int idx = hwc->idx;

-	if (pmu_sbi_ctr_is_fw(idx))
+	if (rvpmu_ctr_is_fw(idx))
 		clear_bit(idx, cpuc->used_fw_ctrs);
 	else
 		clear_bit(idx, cpuc->used_hw_ctrs);
 }

-static int pmu_event_find_cache(u64 config)
+static int sbi_pmu_event_find_cache(u64 config)
 {
 	unsigned int cache_type, cache_op, cache_result, ret;

@@ -487,7 +489,7 @@ static int pmu_event_find_cache(u64 config)
 	if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
 		return -EINVAL;

-	ret = pmu_cache_event_map[cache_type][cache_op][cache_result].event_idx;
+	ret = pmu_cache_event_sbi_map[cache_type][cache_op][cache_result].event_idx;

 	return ret;
 }
@@ -503,7 +505,7 @@ static bool pmu_sbi_is_fw_event(struct perf_event *event)
 	return false;
 }

-static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+static int rvpmu_sbi_event_map(struct perf_event *event, u64 *econfig)
 {
 	u32 type = event->attr.type;
 	u64 config = event->attr.config;
@@ -520,10 +522,10 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
 	case PERF_TYPE_HARDWARE:
 		if (config >= PERF_COUNT_HW_MAX)
 			return -EINVAL;
-		ret = pmu_hw_event_map[event->attr.config].event_idx;
+		ret = pmu_hw_event_sbi_map[event->attr.config].event_idx;
 		break;
 	case PERF_TYPE_HW_CACHE:
-		ret = pmu_event_find_cache(config);
+		ret = sbi_pmu_event_find_cache(config);
 		break;
 	case PERF_TYPE_RAW:
 		/*
@@ -646,7 +648,7 @@ static int pmu_sbi_snapshot_setup(struct riscv_pmu *pmu, int cpu)
 	return 0;
 }

-static u64 pmu_sbi_ctr_read(struct perf_event *event)
+static u64 rvpmu_sbi_ctr_read(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
@@ -688,25 +690,25 @@ static u64 pmu_sbi_ctr_read(struct perf_event *event)
 	return val;
 }

-static void pmu_sbi_set_scounteren(void *arg)
+static void rvpmu_set_scounteren(void *arg)
 {
 	struct perf_event *event = (struct perf_event *)arg;

 	if (event->hw.idx != -1)
 		csr_write(CSR_SCOUNTEREN,
-			  csr_read(CSR_SCOUNTEREN) | BIT(pmu_sbi_csr_index(event)));
+			  csr_read(CSR_SCOUNTEREN) | BIT(rvpmu_csr_index(event)));
 }

-static void pmu_sbi_reset_scounteren(void *arg)
+static void rvpmu_reset_scounteren(void *arg)
 {
 	struct perf_event *event = (struct perf_event *)arg;

 	if (event->hw.idx != -1)
 		csr_write(CSR_SCOUNTEREN,
-			  csr_read(CSR_SCOUNTEREN) & ~BIT(pmu_sbi_csr_index(event)));
+			  csr_read(CSR_SCOUNTEREN) & ~BIT(rvpmu_csr_index(event)));
 }

-static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)
+static void rvpmu_sbi_ctr_start(struct perf_event *event, u64 ival)
 {
 	struct sbiret ret;
 	struct hw_perf_event *hwc = &event->hw;
@@ -726,10 +728,10 @@ static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)

 	if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) &&
 	    (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
-		pmu_sbi_set_scounteren((void *)event);
+		rvpmu_set_scounteren((void *)event);
 }

-static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
+static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 {
 	struct sbiret ret;
 	struct hw_perf_event *hwc = &event->hw;
@@ -739,7 +741,7 @@ static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)

 	if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) &&
 	    (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
-		pmu_sbi_reset_scounteren((void *)event);
+		rvpmu_reset_scounteren((void *)event);

 	if (sbi_pmu_snapshot_available())
 		flag |= SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT;
@@ -765,7 +767,7 @@ static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 	}
 }

-static int pmu_sbi_find_num_ctrs(void)
+static int rvpmu_sbi_find_num_ctrs(void)
 {
 	struct sbiret ret;

@@ -776,7 +778,7 @@ static int pmu_sbi_find_num_ctrs(void)
 	return sbi_err_map_linux_errno(ret.error);
 }

-static int pmu_sbi_get_ctrinfo(int nctr, unsigned long *mask)
+static int rvpmu_sbi_get_ctrinfo(int nctr, unsigned long *mask)
 {
 	struct sbiret ret;
 	int i, num_hw_ctr = 0, num_fw_ctr = 0;
@@ -807,7 +809,7 @@ static int pmu_sbi_get_ctrinfo(int nctr, unsigned long *mask)
 	return 0;
 }

-static inline void pmu_sbi_stop_all(struct riscv_pmu *pmu)
+static inline void rvpmu_sbi_stop_all(struct riscv_pmu *pmu)
 {
 	/*
 	 * No need to check the error because we are disabling all the counters
@@ -817,7 +819,7 @@ static inline void pmu_sbi_stop_all(struct riscv_pmu *pmu)
 		  0, pmu->cmask, SBI_PMU_STOP_FLAG_RESET, 0, 0, 0);
 }

-static inline void pmu_sbi_stop_hw_ctrs(struct riscv_pmu *pmu)
+static inline void rvpmu_sbi_stop_hw_ctrs(struct riscv_pmu *pmu)
 {
 	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
 	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;
@@ -861,8 +863,8 @@ static inline void pmu_sbi_stop_hw_ctrs(struct riscv_pmu *pmu)
  * while the overflowed counters need to be started with updated initialization
  * value.
 */
-static inline void pmu_sbi_start_ovf_ctrs_sbi(struct cpu_hw_events *cpu_hw_evt,
-					      u64 ctr_ovf_mask)
+static inline void rvpmu_sbi_start_ovf_ctrs_sbi(struct cpu_hw_events *cpu_hw_evt,
+						u64 ctr_ovf_mask)
 {
 	int idx = 0, i;
 	struct perf_event *event;
@@ -900,8 +902,8 @@ static inline void pmu_sbi_start_ovf_ctrs_sbi(struct cpu_hw_events *cpu_hw_evt,
 	}
 }

-static inline void pmu_sbi_start_ovf_ctrs_snapshot(struct cpu_hw_events *cpu_hw_evt,
-						   u64 ctr_ovf_mask)
+static inline void rvpmu_sbi_start_ovf_ctrs_snapshot(struct cpu_hw_events *cpu_hw_evt,
+						     u64 ctr_ovf_mask)
 {
 	int i, idx = 0;
 	struct perf_event *event;
@@ -935,18 +937,18 @@ static inline void pmu_sbi_start_ovf_ctrs_snapshot(struct cpu_hw_events *cpu_hw_
 	}
 }

-static void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
-					u64 ctr_ovf_mask)
+static void rvpmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
+					  u64 ctr_ovf_mask)
 {
 	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);

 	if (sbi_pmu_snapshot_available())
-		pmu_sbi_start_ovf_ctrs_snapshot(cpu_hw_evt, ctr_ovf_mask);
+		rvpmu_sbi_start_ovf_ctrs_snapshot(cpu_hw_evt, ctr_ovf_mask);
 	else
-		pmu_sbi_start_ovf_ctrs_sbi(cpu_hw_evt, ctr_ovf_mask);
+		rvpmu_sbi_start_ovf_ctrs_sbi(cpu_hw_evt, ctr_ovf_mask);
 }

-static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
+static irqreturn_t rvpmu_ovf_handler(int irq, void *dev)
 {
 	struct perf_sample_data data;
 	struct pt_regs *regs;
@@ -978,7 +980,7 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
 	}

 	pmu = to_riscv_pmu(event->pmu);
-	pmu_sbi_stop_hw_ctrs(pmu);
+	rvpmu_sbi_stop_hw_ctrs(pmu);

 	/* Overflow status register should only be read after counter are stopped */
 	if (sbi_pmu_snapshot_available())
@@ -1047,13 +1049,55 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
 		hw_evt->state = 0;
 	}

-	pmu_sbi_start_overflow_mask(pmu, overflowed_ctrs);
+	rvpmu_sbi_start_overflow_mask(pmu, overflowed_ctrs);
 	perf_sample_event_took(sched_clock() - start_clock);

 	return IRQ_HANDLED;
 }

-static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
+static void rvpmu_ctr_start(struct perf_event *event, u64 ival)
+{
+	rvpmu_sbi_ctr_start(event, ival);
+	/* TODO: Counter delegation implementation */
+}
+
+static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag)
+{
+	rvpmu_sbi_ctr_stop(event, flag);
+	/* TODO: Counter delegation implementation */
+}
+
+static int rvpmu_find_num_ctrs(void)
+{
+	return rvpmu_sbi_find_num_ctrs();
+	/* TODO: Counter delegation implementation */
+}
+
+static int rvpmu_get_ctrinfo(int nctr, unsigned long *mask)
+{
+	return rvpmu_sbi_get_ctrinfo(nctr, mask);
+	/* TODO: Counter delegation implementation */
+}
+
+static int rvpmu_event_map(struct perf_event *event, u64 *econfig)
+{
+	return rvpmu_sbi_event_map(event, econfig);
+	/* TODO: Counter delegation implementation */
+}
+
+static int rvpmu_ctr_get_idx(struct perf_event *event)
+{
+	return rvpmu_sbi_ctr_get_idx(event);
+	/* TODO: Counter delegation implementation */
+}
+
+static u64 rvpmu_ctr_read(struct perf_event *event)
+{
+	return rvpmu_sbi_ctr_read(event);
+	/* TODO: Counter delegation implementation */
+}
+
+static int rvpmu_starting_cpu(unsigned int cpu, struct hlist_node *node)
 {
 	struct riscv_pmu *pmu = hlist_entry_safe(node, struct riscv_pmu, node);
 	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
@@ -1068,7 +1112,7 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	csr_write(CSR_SCOUNTEREN, 0x2);

 	/* Stop all the counters so that they can be enabled from perf */
-	pmu_sbi_stop_all(pmu);
+	rvpmu_sbi_stop_all(pmu);

 	if (riscv_pmu_use_irq) {
 		cpu_hw_evt->irq = riscv_pmu_irq;
@@ -1082,7 +1126,7 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	return 0;
 }

-static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
+static int rvpmu_dying_cpu(unsigned int cpu, struct hlist_node *node)
 {
 	if (riscv_pmu_use_irq) {
 		disable_percpu_irq(riscv_pmu_irq);
@@ -1097,7 +1141,7 @@ static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
 	return 0;
 }

-static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pdev)
+static int rvpmu_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pdev)
 {
 	int ret;
 	struct cpu_hw_events __percpu *hw_events = pmu->hw_events;
@@ -1137,7 +1181,7 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
 		return -ENODEV;
 	}

-	ret = request_percpu_irq(riscv_pmu_irq, pmu_sbi_ovf_handler, "riscv-pmu", hw_events);
+	ret = request_percpu_irq(riscv_pmu_irq, rvpmu_ovf_handler, "riscv-pmu", hw_events);
 	if (ret) {
 		pr_err("registering percpu irq failed [%d]\n", ret);
 		return ret;
@@ -1213,7 +1257,7 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu)
 	cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
 }

-static void pmu_sbi_event_init(struct perf_event *event)
+static void rvpmu_event_init(struct perf_event *event)
 {
 	/*
 	 * The permissions are set at event_init so that we do not depend
@@ -1227,7 +1271,7 @@ static void pmu_sbi_event_init(struct perf_event *event)
 		event->hw.flags |= PERF_EVENT_FLAG_LEGACY;
 }

-static void pmu_sbi_event_mapped(struct perf_event *event, struct mm_struct *mm)
+static void rvpmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (event->hw.flags & PERF_EVENT_FLAG_NO_USER_ACCESS)
 		return;
@@ -1255,14 +1299,14 @@ static void pmu_sbi_event_mapped(struct perf_event *event, struct mm_struct *mm)
 	 * that it is possible to do so to avoid any race.
 	 * And we must notify all cpus here because threads that currently run
 	 * on other cpus will try to directly access the counter too without
-	 * calling pmu_sbi_ctr_start.
+	 * calling rvpmu_sbi_ctr_start.
 	 */
 	if (event->hw.flags & PERF_EVENT_FLAG_USER_ACCESS)
 		on_each_cpu_mask(mm_cpumask(mm),
-				 pmu_sbi_set_scounteren, (void *)event, 1);
+				 rvpmu_set_scounteren, (void *)event, 1);
 }

-static void pmu_sbi_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+static void rvpmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (event->hw.flags & PERF_EVENT_FLAG_NO_USER_ACCESS)
 		return;
@@ -1284,7 +1328,7 @@ static void pmu_sbi_event_unmapped(struct perf_event *event, struct mm_struct *m

 	if (event->hw.flags & PERF_EVENT_FLAG_USER_ACCESS)
 		on_each_cpu_mask(mm_cpumask(mm),
-				 pmu_sbi_reset_scounteren, (void *)event, 1);
+				 rvpmu_reset_scounteren, (void *)event, 1);
 }

 static void riscv_pmu_update_counter_access(void *info)
@@ -1327,7 +1371,7 @@ static struct ctl_table sbi_pmu_sysctl_table[] = {
 	},
 };

-static int pmu_sbi_device_probe(struct platform_device *pdev)
+static int rvpmu_device_probe(struct platform_device *pdev)
 {
 	struct riscv_pmu *pmu = NULL;
 	int ret = -ENODEV;
@@ -1338,7 +1382,7 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	if (!pmu)
 		return -ENOMEM;

-	num_counters = pmu_sbi_find_num_ctrs();
+	num_counters = rvpmu_find_num_ctrs();
 	if (num_counters < 0) {
 		pr_err("SBI PMU extension doesn't provide any counters\n");
 		goto out_free;
@@ -1351,10 +1395,10 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	}

 	/* cache all the information about counters now */
-	if (pmu_sbi_get_ctrinfo(num_counters, &cmask))
+	if (rvpmu_get_ctrinfo(num_counters, &cmask))
 		goto out_free;

-	ret = pmu_sbi_setup_irqs(pmu, pdev);
+	ret = rvpmu_setup_irqs(pmu, pdev);
 	if (ret < 0) {
 		pr_info("Perf sampling/filtering is not supported as sscof extension is not available\n");
 		pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
@@ -1364,17 +1408,17 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	pmu->pmu.attr_groups = riscv_pmu_attr_groups;
 	pmu->pmu.parent = &pdev->dev;
 	pmu->cmask = cmask;
-	pmu->ctr_start = pmu_sbi_ctr_start;
-	pmu->ctr_stop = pmu_sbi_ctr_stop;
-	pmu->event_map = pmu_sbi_event_map;
-	pmu->ctr_get_idx = pmu_sbi_ctr_get_idx;
-	pmu->ctr_get_width = pmu_sbi_ctr_get_width;
-	pmu->ctr_clear_idx = pmu_sbi_ctr_clear_idx;
-	pmu->ctr_read = pmu_sbi_ctr_read;
-	pmu->event_init = pmu_sbi_event_init;
-	pmu->event_mapped = pmu_sbi_event_mapped;
-	pmu->event_unmapped = pmu_sbi_event_unmapped;
-	pmu->csr_index = pmu_sbi_csr_index;
+	pmu->ctr_start = rvpmu_ctr_start;
+	pmu->ctr_stop = rvpmu_ctr_stop;
+	pmu->event_map = rvpmu_event_map;
+	pmu->ctr_get_idx = rvpmu_ctr_get_idx;
+	pmu->ctr_get_width = rvpmu_ctr_get_width;
+	pmu->ctr_clear_idx = rvpmu_ctr_clear_idx;
+	pmu->ctr_read = rvpmu_ctr_read;
+	pmu->event_init = rvpmu_event_init;
+	pmu->event_mapped = rvpmu_event_mapped;
+	pmu->event_unmapped = rvpmu_event_unmapped;
+	pmu->csr_index = rvpmu_csr_index;

 	ret = riscv_pm_pmu_register(pmu);
 	if (ret)
@@ -1430,14 +1474,14 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	return ret;
 }

-static struct platform_driver pmu_sbi_driver = {
-	.probe		= pmu_sbi_device_probe,
+static struct platform_driver rvpmu_driver = {
+	.probe		= rvpmu_device_probe,
 	.driver		= {
-		.name	= RISCV_PMU_SBI_PDEV_NAME,
+		.name	= RISCV_PMU_PDEV_NAME,
 	},
 };

-static int __init pmu_sbi_devinit(void)
+static int __init rvpmu_devinit(void)
 {
 	int ret;
 	struct platform_device *pdev;
@@ -1452,20 +1496,20 @@ static int __init pmu_sbi_devinit(void)

 	ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_RISCV_STARTING,
 				      "perf/riscv/pmu:starting",
-				      pmu_sbi_starting_cpu, pmu_sbi_dying_cpu);
+				      rvpmu_starting_cpu, rvpmu_dying_cpu);
 	if (ret) {
 		pr_err("CPU hotplug notifier could not be registered: %d\n", ret);
 		return ret;
 	}

-	ret = platform_driver_register(&pmu_sbi_driver);
+	ret = platform_driver_register(&rvpmu_driver);
 	if (ret)
 		return ret;

-	pdev = platform_device_register_simple(RISCV_PMU_SBI_PDEV_NAME, -1, NULL, 0);
+	pdev = platform_device_register_simple(RISCV_PMU_PDEV_NAME, -1, NULL, 0);
 	if (IS_ERR(pdev)) {
-		platform_driver_unregister(&pmu_sbi_driver);
+		platform_driver_unregister(&rvpmu_driver);
 		return PTR_ERR(pdev);
 	}

@@ -1474,4 +1518,4 @@ static int __init pmu_sbi_devinit(void)

 	return ret;
 }
-device_initcall(pmu_sbi_devinit)
+device_initcall(rvpmu_devinit)
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 701974639ff2..525acd6d96d0 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -13,7 +13,7 @@
 #include
 #include

-#ifdef CONFIG_RISCV_PMU
+#ifdef CONFIG_RISCV_PMU_COMMON

 /*
  * The RISCV_MAX_COUNTERS parameter should be specified.
@@ -21,7 +21,7 @@

 #define RISCV_MAX_COUNTERS	64
 #define RISCV_OP_UNSUPP		(-EOPNOTSUPP)
-#define RISCV_PMU_SBI_PDEV_NAME	"riscv-pmu-sbi"
+#define RISCV_PMU_PDEV_NAME	"riscv-pmu"
 #define RISCV_PMU_LEGACY_PDEV_NAME	"riscv-pmu-legacy"

 #define RISCV_PMU_STOP_FLAG_RESET	1
@@ -87,10 +87,10 @@ void riscv_pmu_legacy_skip_init(void);
 static inline void riscv_pmu_legacy_skip_init(void) {};
 #endif
 struct riscv_pmu *riscv_pmu_alloc(void);
-#ifdef CONFIG_RISCV_PMU_SBI
+#ifdef CONFIG_RISCV_PMU
 int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);
 #endif

-#endif /* CONFIG_RISCV_PMU */
+#endif /* CONFIG_RISCV_PMU_COMMON */

 #endif /* _RISCV_PMU_H */
-- 
2.43.0