From nobody Sun Feb 8 20:17:53 2026
From: Zong Li <zong.li@sifive.com>
To: tjeznach@rivosinc.com, joro@8bytes.org, will@kernel.org,
	robin.murphy@arm.com, robh@kernel.org, pjw@kernel.org,
	palmer@dabbelt.com, aou@eecs.berkeley.edu, alex@ghiti.fr,
	mark.rutland@arm.com, conor+dt@kernel.org, krzk@kernel.org,
	guoyaxing@bosc.ac.cn, luxu.kernel@bytedance.com,
	lv.zheng@linux.spacemit.com, andrew.jones@oss.qualcomm.com,
	linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	linux-riscv@lists.infradead.org, linux-perf-users@vger.kernel.org
Cc: Zong Li <zong.li@sifive.com>
Subject: [PATCH v2 1/2] drivers/perf: riscv-iommu: add risc-v iommu pmu driver
Date: Sat, 7 Feb 2026 22:38:35 -0800
Message-ID: <20260208063848.3547817-2-zong.li@sifive.com>
In-Reply-To: <20260208063848.3547817-1-zong.li@sifive.com>
References: <20260208063848.3547817-1-zong.li@sifive.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add a new driver to support the RISC-V IOMMU PMU. This is an auxiliary
device driver created by the parent RISC-V IOMMU driver.

The RISC-V IOMMU PMU separates the cycle counter from the event
counters: the cycle counter is not associated with iohpmevt0, so a
software-defined cycle event is required for the perf subsystem. The
number and width of the counters are implementation-defined and must be
detected at runtime. The performance monitor provides counters with
filtering support, so events can be collected for a specific device
ID/process ID or GSCID/PSCID.
PMU-related definitions are moved into the perf driver, where they are
used exclusively.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 drivers/iommu/riscv/iommu-bits.h |  61 ---
 drivers/perf/Kconfig             |  12 +
 drivers/perf/Makefile            |   1 +
 drivers/perf/riscv_iommu_pmu.c   | 661 +++++++++++++++++++++++++++++++
 4 files changed, 674 insertions(+), 61 deletions(-)
 create mode 100644 drivers/perf/riscv_iommu_pmu.c

diff --git a/drivers/iommu/riscv/iommu-bits.h b/drivers/iommu/riscv/iommu-bits.h
index 98daf0e1a306..746cd11f4938 100644
--- a/drivers/iommu/riscv/iommu-bits.h
+++ b/drivers/iommu/riscv/iommu-bits.h
@@ -189,67 +189,6 @@ enum riscv_iommu_ddtp_modes {
 #define RISCV_IOMMU_IPSR_PMIP		BIT(RISCV_IOMMU_INTR_PM)
 #define RISCV_IOMMU_IPSR_PIP		BIT(RISCV_IOMMU_INTR_PQ)
 
-/* 5.19 Performance monitoring counter overflow status (32bits) */
-#define RISCV_IOMMU_REG_IOCOUNTOVF	0x0058
-#define RISCV_IOMMU_IOCOUNTOVF_CY	BIT(0)
-#define RISCV_IOMMU_IOCOUNTOVF_HPM	GENMASK_ULL(31, 1)
-
-/* 5.20 Performance monitoring counter inhibits (32bits) */
-#define RISCV_IOMMU_REG_IOCOUNTINH	0x005C
-#define RISCV_IOMMU_IOCOUNTINH_CY	BIT(0)
-#define RISCV_IOMMU_IOCOUNTINH_HPM	GENMASK(31, 1)
-
-/* 5.21 Performance monitoring cycles counter (64bits) */
-#define RISCV_IOMMU_REG_IOHPMCYCLES	0x0060
-#define RISCV_IOMMU_IOHPMCYCLES_COUNTER	GENMASK_ULL(62, 0)
-#define RISCV_IOMMU_IOHPMCYCLES_OF	BIT_ULL(63)
-
-/* 5.22 Performance monitoring event counters (31 * 64bits) */
-#define RISCV_IOMMU_REG_IOHPMCTR_BASE	0x0068
-#define RISCV_IOMMU_REG_IOHPMCTR(_n)	(RISCV_IOMMU_REG_IOHPMCTR_BASE + ((_n) * 0x8))
-
-/* 5.23 Performance monitoring event selectors (31 * 64bits) */
-#define RISCV_IOMMU_REG_IOHPMEVT_BASE	0x0160
-#define RISCV_IOMMU_REG_IOHPMEVT(_n)	(RISCV_IOMMU_REG_IOHPMEVT_BASE + ((_n) * 0x8))
-#define RISCV_IOMMU_IOHPMEVT_EVENTID	GENMASK_ULL(14, 0)
-#define RISCV_IOMMU_IOHPMEVT_DMASK	BIT_ULL(15)
-#define RISCV_IOMMU_IOHPMEVT_PID_PSCID	GENMASK_ULL(35, 16)
-#define RISCV_IOMMU_IOHPMEVT_DID_GSCID	GENMASK_ULL(59, 36)
-#define RISCV_IOMMU_IOHPMEVT_PV_PSCV	BIT_ULL(60)
-#define RISCV_IOMMU_IOHPMEVT_DV_GSCV	BIT_ULL(61)
-#define RISCV_IOMMU_IOHPMEVT_IDT	BIT_ULL(62)
-#define RISCV_IOMMU_IOHPMEVT_OF		BIT_ULL(63)
-
-/* Number of defined performance-monitoring event selectors */
-#define RISCV_IOMMU_IOHPMEVT_CNT	31
-
-/**
- * enum riscv_iommu_hpmevent_id - Performance-monitoring event identifier
- *
- * @RISCV_IOMMU_HPMEVENT_INVALID: Invalid event, do not count
- * @RISCV_IOMMU_HPMEVENT_URQ: Untranslated requests
- * @RISCV_IOMMU_HPMEVENT_TRQ: Translated requests
- * @RISCV_IOMMU_HPMEVENT_ATS_RQ: ATS translation requests
- * @RISCV_IOMMU_HPMEVENT_TLB_MISS: TLB misses
- * @RISCV_IOMMU_HPMEVENT_DD_WALK: Device directory walks
- * @RISCV_IOMMU_HPMEVENT_PD_WALK: Process directory walks
- * @RISCV_IOMMU_HPMEVENT_S_VS_WALKS: First-stage page table walks
- * @RISCV_IOMMU_HPMEVENT_G_WALKS: Second-stage page table walks
- * @RISCV_IOMMU_HPMEVENT_MAX: Value to denote maximum Event IDs
- */
-enum riscv_iommu_hpmevent_id {
-	RISCV_IOMMU_HPMEVENT_INVALID	= 0,
-	RISCV_IOMMU_HPMEVENT_URQ	= 1,
-	RISCV_IOMMU_HPMEVENT_TRQ	= 2,
-	RISCV_IOMMU_HPMEVENT_ATS_RQ	= 3,
-	RISCV_IOMMU_HPMEVENT_TLB_MISS	= 4,
-	RISCV_IOMMU_HPMEVENT_DD_WALK	= 5,
-	RISCV_IOMMU_HPMEVENT_PD_WALK	= 6,
-	RISCV_IOMMU_HPMEVENT_S_VS_WALKS	= 7,
-	RISCV_IOMMU_HPMEVENT_G_WALKS	= 8,
-	RISCV_IOMMU_HPMEVENT_MAX	= 9
-};
-
 /* 5.24 Translation request IOVA (64bits) */
 #define RISCV_IOMMU_REG_TR_REQ_IOVA	0x0258
 #define RISCV_IOMMU_TR_REQ_IOVA_VPN	GENMASK_ULL(63, 12)
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 638321fc9800..6d0ece827501 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -105,6 +105,18 @@ config RISCV_PMU_SBI
 	  full perf feature support i.e. counter overflow, privilege
 	  mode filtering, counter configuration.
 
+config RISCV_IOMMU_PMU
+	depends on RISCV || COMPILE_TEST
+	depends on RISCV_IOMMU
+	bool "RISC-V IOMMU Hardware Performance Monitor"
+	default y
+	help
+	  Say Y if you want to use the RISC-V IOMMU performance monitor
+	  implementation. The performance monitor is an optional hardware
+	  feature, and whether it is actually enabled depends on IOMMU
+	  hardware support. If the underlying hardware does not implement
+	  the PMU, this option will have no effect.
+
 config STARFIVE_STARLINK_PMU
 	depends on ARCH_STARFIVE || COMPILE_TEST
 	depends on 64BIT
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index ea52711a87e3..f64f7dc046f1 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -20,6 +20,7 @@ obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
 obj-$(CONFIG_RISCV_PMU) += riscv_pmu.o
 obj-$(CONFIG_RISCV_PMU_LEGACY) += riscv_pmu_legacy.o
 obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o
+obj-$(CONFIG_RISCV_IOMMU_PMU) += riscv_iommu_pmu.o
 obj-$(CONFIG_STARFIVE_STARLINK_PMU) += starfive_starlink_pmu.o
 obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
diff --git a/drivers/perf/riscv_iommu_pmu.c b/drivers/perf/riscv_iommu_pmu.c
new file mode 100644
index 000000000000..72fc4341b165
--- /dev/null
+++ b/drivers/perf/riscv_iommu_pmu.c
@@ -0,0 +1,661 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2026 SiFive
+ *
+ * Authors
+ *	Zong Li <zong.li@sifive.com>
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+
+#include "../iommu/riscv/iommu.h"
+
+/* 5.19 Performance monitoring counter overflow status (32bits) */
+#define RISCV_IOMMU_REG_IOCOUNTOVF	0x0058
+#define RISCV_IOMMU_IOCOUNTOVF_CY	BIT(0)
+#define RISCV_IOMMU_IOCOUNTOVF_HPM	GENMASK_ULL(31, 1)
+
+/* 5.20 Performance monitoring counter inhibits (32bits) */
+#define RISCV_IOMMU_REG_IOCOUNTINH	0x005C
+#define RISCV_IOMMU_IOCOUNTINH_CY	BIT(0)
+#define RISCV_IOMMU_IOCOUNTINH_HPM	GENMASK(31, 0)
+
+/* 5.21 Performance monitoring cycles counter (64bits) */
+#define RISCV_IOMMU_REG_IOHPMCYCLES	0x0060
+#define RISCV_IOMMU_IOHPMCYCLES_COUNTER	GENMASK_ULL(62, 0)
+#define RISCV_IOMMU_IOHPMCYCLES_OF	BIT_ULL(63)
+#define RISCV_IOMMU_REG_IOHPMCTR(_n)	(RISCV_IOMMU_REG_IOHPMCYCLES + ((_n) * 0x8))
+
+/* 5.22 Performance monitoring event counters (31 * 64bits) */
+#define RISCV_IOMMU_REG_IOHPMCTR_BASE	0x0068
+#define RISCV_IOMMU_IOHPMCTR_COUNTER	GENMASK_ULL(63, 0)
+
+/* 5.23 Performance monitoring event selectors (31 * 64bits) */
+#define RISCV_IOMMU_REG_IOHPMEVT_BASE	0x0160
+#define RISCV_IOMMU_REG_IOHPMEVT(_n)	(RISCV_IOMMU_REG_IOHPMEVT_BASE + ((_n) * 0x8))
+#define RISCV_IOMMU_IOHPMEVT_EVENTID	GENMASK_ULL(14, 0)
+#define RISCV_IOMMU_IOHPMEVT_DMASK	BIT_ULL(15)
+#define RISCV_IOMMU_IOHPMEVT_PID_PSCID	GENMASK_ULL(35, 16)
+#define RISCV_IOMMU_IOHPMEVT_DID_GSCID	GENMASK_ULL(59, 36)
+#define RISCV_IOMMU_IOHPMEVT_PV_PSCV	BIT_ULL(60)
+#define RISCV_IOMMU_IOHPMEVT_DV_GSCV	BIT_ULL(61)
+#define RISCV_IOMMU_IOHPMEVT_IDT	BIT_ULL(62)
+#define RISCV_IOMMU_IOHPMEVT_OF		BIT_ULL(63)
+#define RISCV_IOMMU_IOHPMEVT_EVENT	GENMASK_ULL(62, 0)
+
+/* The total number of counters is 31 event counters plus 1 cycle counter */
+#define RISCV_IOMMU_HPM_COUNTER_NUM	32
+
+static int cpuhp_state;
+
+/**
+ * enum riscv_iommu_hpmevent_id - Performance-monitoring event identifier
+ *
+ * @RISCV_IOMMU_HPMEVENT_CYCLE: Clock cycle counter
+ * @RISCV_IOMMU_HPMEVENT_URQ: Untranslated requests
+ * @RISCV_IOMMU_HPMEVENT_TRQ: Translated requests
+ * @RISCV_IOMMU_HPMEVENT_ATS_RQ: ATS translation requests
+ * @RISCV_IOMMU_HPMEVENT_TLB_MISS: TLB misses
+ * @RISCV_IOMMU_HPMEVENT_DD_WALK: Device directory walks
+ * @RISCV_IOMMU_HPMEVENT_PD_WALK: Process directory walks
+ * @RISCV_IOMMU_HPMEVENT_S_VS_WALKS: First-stage page table walks
+ * @RISCV_IOMMU_HPMEVENT_G_WALKS: Second-stage page table walks
+ * @RISCV_IOMMU_HPMEVENT_MAX: Value to denote maximum Event IDs
+ *
+ * The specification does not define an event ID for counting the
+ * number of clock cycles, meaning there is no associated 'iohpmevt0'.
+ * Event ID 0 is an invalid event and does not overlap with any valid
+ * event ID, so repurpose ID 0 as the cycle event for perf. The cycle
+ * event is never written into any register; it serves solely as an
+ * identifier.
+ */
+enum riscv_iommu_hpmevent_id {
+	RISCV_IOMMU_HPMEVENT_CYCLE	= 0,
+	RISCV_IOMMU_HPMEVENT_URQ	= 1,
+	RISCV_IOMMU_HPMEVENT_TRQ	= 2,
+	RISCV_IOMMU_HPMEVENT_ATS_RQ	= 3,
+	RISCV_IOMMU_HPMEVENT_TLB_MISS	= 4,
+	RISCV_IOMMU_HPMEVENT_DD_WALK	= 5,
+	RISCV_IOMMU_HPMEVENT_PD_WALK	= 6,
+	RISCV_IOMMU_HPMEVENT_S_VS_WALKS	= 7,
+	RISCV_IOMMU_HPMEVENT_G_WALKS	= 8,
+	RISCV_IOMMU_HPMEVENT_MAX	= 9
+};
+
+struct riscv_iommu_pmu {
+	struct pmu pmu;
+	struct hlist_node node;
+	void __iomem *reg;
+	unsigned int on_cpu;
+	int num_counters;
+	u64 cycle_cntr_mask;
+	u64 event_cntr_mask;
+	struct perf_event *events[RISCV_IOMMU_HPM_COUNTER_NUM];
+	DECLARE_BITMAP(used_counters, RISCV_IOMMU_HPM_COUNTER_NUM);
+};
+
+#define to_riscv_iommu_pmu(p) (container_of(p, struct riscv_iommu_pmu, pmu))
+
+#define RISCV_IOMMU_PMU_ATTR_EXTRACTOR(_name, _mask)			\
+	static inline u32 get_##_name(struct perf_event *event)		\
+	{								\
+		return FIELD_GET(_mask, event->attr.config);		\
+	}								\
+
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(event, RISCV_IOMMU_IOHPMEVT_EVENTID);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(partial_matching, RISCV_IOMMU_IOHPMEVT_DMASK);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(pid_pscid, RISCV_IOMMU_IOHPMEVT_PID_PSCID);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(did_gscid, RISCV_IOMMU_IOHPMEVT_DID_GSCID);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_pid_pscid, RISCV_IOMMU_IOHPMEVT_PV_PSCV);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_did_gscid, RISCV_IOMMU_IOHPMEVT_DV_GSCV);
+RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_id_type, RISCV_IOMMU_IOHPMEVT_IDT);
+
+/* Formats */
+PMU_FORMAT_ATTR(event, "config:0-14");
+PMU_FORMAT_ATTR(partial_matching, "config:15");
+PMU_FORMAT_ATTR(pid_pscid, "config:16-35");
+PMU_FORMAT_ATTR(did_gscid, "config:36-59");
+PMU_FORMAT_ATTR(filter_pid_pscid, "config:60");
+PMU_FORMAT_ATTR(filter_did_gscid, "config:61");
+PMU_FORMAT_ATTR(filter_id_type, "config:62");
+
+static struct attribute *riscv_iommu_pmu_formats[] = {
+	&format_attr_event.attr,
+	&format_attr_partial_matching.attr,
+	&format_attr_pid_pscid.attr,
+	&format_attr_did_gscid.attr,
+	&format_attr_filter_pid_pscid.attr,
+	&format_attr_filter_did_gscid.attr,
+	&format_attr_filter_id_type.attr,
+	NULL,
+};
+
+static const struct attribute_group riscv_iommu_pmu_format_group = {
+	.name = "format",
+	.attrs = riscv_iommu_pmu_formats,
+};
+
+/* Events */
+static ssize_t riscv_iommu_pmu_event_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *page)
+{
+	struct perf_pmu_events_attr *pmu_attr;
+
+	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+	return sysfs_emit(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+#define RISCV_IOMMU_PMU_EVENT_ATTR(name, id)				\
+	PMU_EVENT_ATTR_ID(name, riscv_iommu_pmu_event_show, id)
+
+static struct attribute *riscv_iommu_pmu_events[] = {
+	RISCV_IOMMU_PMU_EVENT_ATTR(cycle, RISCV_IOMMU_HPMEVENT_CYCLE),
+	RISCV_IOMMU_PMU_EVENT_ATTR(untranslated_req, RISCV_IOMMU_HPMEVENT_URQ),
+	RISCV_IOMMU_PMU_EVENT_ATTR(translated_req, RISCV_IOMMU_HPMEVENT_TRQ),
+	RISCV_IOMMU_PMU_EVENT_ATTR(ats_trans_req, RISCV_IOMMU_HPMEVENT_ATS_RQ),
+	RISCV_IOMMU_PMU_EVENT_ATTR(tlb_miss, RISCV_IOMMU_HPMEVENT_TLB_MISS),
+	RISCV_IOMMU_PMU_EVENT_ATTR(ddt_walks, RISCV_IOMMU_HPMEVENT_DD_WALK),
+	RISCV_IOMMU_PMU_EVENT_ATTR(pdt_walks, RISCV_IOMMU_HPMEVENT_PD_WALK),
+	RISCV_IOMMU_PMU_EVENT_ATTR(s_vs_pt_walks, RISCV_IOMMU_HPMEVENT_S_VS_WALKS),
+	RISCV_IOMMU_PMU_EVENT_ATTR(g_pt_walks, RISCV_IOMMU_HPMEVENT_G_WALKS),
+	NULL,
+};
+
+static const struct attribute_group riscv_iommu_pmu_events_group = {
+	.name = "events",
+	.attrs = riscv_iommu_pmu_events,
+};
+
+/* cpumask */
+static ssize_t riscv_iommu_cpumask_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(dev_get_drvdata(dev));
+
+	return cpumap_print_to_pagebuf(true, buf, cpumask_of(pmu->on_cpu));
+}
+
+static struct device_attribute riscv_iommu_cpumask_attr =
+	__ATTR(cpumask, 0444, riscv_iommu_cpumask_show, NULL);
+
+static struct attribute *riscv_iommu_cpumask_attrs[] = {
+	&riscv_iommu_cpumask_attr.attr,
+	NULL
+};
+
+static const struct attribute_group riscv_iommu_pmu_cpumask_group = {
+	.attrs = riscv_iommu_cpumask_attrs,
+};
+
+static const struct attribute_group *riscv_iommu_pmu_attr_grps[] = {
+	&riscv_iommu_pmu_cpumask_group,
+	&riscv_iommu_pmu_format_group,
+	&riscv_iommu_pmu_events_group,
+	NULL,
+};
+
+/* PMU Operations */
+static void riscv_iommu_pmu_set_counter(struct riscv_iommu_pmu *pmu, u32 idx,
+					u64 value)
+{
+	u64 counter_mask = idx ? pmu->event_cntr_mask : pmu->cycle_cntr_mask;
+
+	writeq(value & counter_mask, pmu->reg + RISCV_IOMMU_REG_IOHPMCTR(idx));
+}
+
+static u64 riscv_iommu_pmu_get_counter(struct riscv_iommu_pmu *pmu, u32 idx)
+{
+	u64 value, counter_mask = idx ? pmu->event_cntr_mask : pmu->cycle_cntr_mask;
+
+	/* Using readq to read the counter may be imprecise on 32-bit systems */
+	value = readq(pmu->reg + RISCV_IOMMU_REG_IOHPMCTR(idx)) & counter_mask;
+
+	/* Bit 63 of the cycle counter (i.e., idx == 0) is the OF bit */
+	return idx ? value : (value & ~RISCV_IOMMU_IOHPMCYCLES_OF);
+}
+
+static bool is_cycle_event(u64 event)
+{
+	return event == RISCV_IOMMU_HPMEVENT_CYCLE;
+}
+
+static void riscv_iommu_pmu_set_event(struct riscv_iommu_pmu *pmu, u32 idx,
+				      u64 value)
+{
+	/* There is no associated IOHPMEVT0 for IOHPMCYCLES */
+	if (is_cycle_event(value))
+		return;
+
+	/* Event counters start from idx 1 */
+	writeq(FIELD_GET(RISCV_IOMMU_IOHPMEVT_EVENT, value),
+	       pmu->reg + RISCV_IOMMU_REG_IOHPMEVT(idx - 1));
+}
+
+static void riscv_iommu_pmu_enable_counter(struct riscv_iommu_pmu *pmu, u32 idx)
+{
+	void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH;
+	u32 value = readl(addr);
+
+	writel(value & ~BIT(idx), addr);
+}
+
+static void riscv_iommu_pmu_disable_counter(struct riscv_iommu_pmu *pmu, u32 idx)
+{
+	void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH;
+	u32 value = readl(addr);
+
+	writel(value | BIT(idx), addr);
+}
+
+static void riscv_iommu_pmu_start_all(struct riscv_iommu_pmu *pmu)
+{
+	void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH;
+	u32 used_cntr = 0;
+
+	/* The performance-monitoring counter-inhibits register is a 32-bit WARL register */
+	bitmap_to_arr32(&used_cntr, pmu->used_counters, pmu->num_counters);
+
+	writel(~used_cntr, addr);
+}
+
+static void riscv_iommu_pmu_stop_all(struct riscv_iommu_pmu *pmu)
+{
+	writel(GENMASK_ULL(pmu->num_counters - 1, 0),
+	       pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH);
+}
+
+/* PMU APIs */
+static void riscv_iommu_pmu_set_period(struct perf_event *event)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	u64 counter_mask = hwc->idx ? pmu->event_cntr_mask : pmu->cycle_cntr_mask;
+	u64 period;
+
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program.
+	 * In effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	period = counter_mask >> 1;
+	riscv_iommu_pmu_set_counter(pmu, hwc->idx, period);
+	local64_set(&hwc->prev_count, period);
+}
+
+static int riscv_iommu_pmu_event_init(struct perf_event *event)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	struct perf_event *sibling;
+	int total_event_counters = pmu->num_counters - 1;
+	int counters = 0;
+
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	if (hwc->sample_period)
+		return -EOPNOTSUPP;
+
+	if (event->cpu < 0)
+		return -EOPNOTSUPP;
+
+	event->cpu = pmu->on_cpu;
+
+	hwc->idx = -1;
+	hwc->config = event->attr.config;
+
+	if (event->group_leader == event)
+		return 0;
+
+	/* The group leader occupies an event counter unless it is the cycle event */
+	if (!is_cycle_event(get_event(event->group_leader)))
+		if (++counters > total_event_counters)
+			return -EINVAL;
+
+	for_each_sibling_event(sibling, event->group_leader) {
+		if (is_cycle_event(get_event(sibling)))
+			continue;
+
+		if (sibling->pmu != event->pmu && !is_software_event(sibling))
+			return -EINVAL;
+
+		if (++counters > total_event_counters)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void riscv_iommu_pmu_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	u64 delta, prev, now;
+	u32 idx = hwc->idx;
+	u64 counter_mask = idx ? pmu->event_cntr_mask : pmu->cycle_cntr_mask;
+
+	do {
+		prev = local64_read(&hwc->prev_count);
+		now = riscv_iommu_pmu_get_counter(pmu, idx);
+	} while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);
+
+	delta = (now - prev) & counter_mask;
+	local64_add(delta, &event->count);
+}
+
+static void riscv_iommu_pmu_start(struct perf_event *event, int flags)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	if (flags & PERF_EF_RELOAD)
+		WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
+
+	hwc->state = 0;
+	riscv_iommu_pmu_set_period(event);
+	riscv_iommu_pmu_set_event(pmu, hwc->idx, hwc->config);
+	riscv_iommu_pmu_enable_counter(pmu, hwc->idx);
+
+	perf_event_update_userpage(event);
+}
+
+static void riscv_iommu_pmu_stop(struct perf_event *event, int flags)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	if (hwc->state & PERF_HES_STOPPED)
+		return;
+
+	riscv_iommu_pmu_disable_counter(pmu, idx);
+
+	if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE))
+		riscv_iommu_pmu_update(event);
+
+	hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+}
+
+static int riscv_iommu_pmu_add(struct perf_event *event, int flags)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	unsigned int num_counters = pmu->num_counters;
+	int idx;
+
+	/* Reserve index zero for iohpmcycles */
+	if (is_cycle_event(get_event(event)))
+		idx = 0;
+	else
+		idx = find_next_zero_bit(pmu->used_counters, num_counters, 1);
+
+	/* The cycle counter, or all event counters, are in use */
+	if (idx == num_counters || pmu->events[idx])
+		return -EAGAIN;
+
+	set_bit(idx, pmu->used_counters);
+
+	pmu->events[idx] = event;
+	hwc->idx = idx;
+	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+	local64_set(&hwc->prev_count, 0);
+
+	if (flags & PERF_EF_START)
+		riscv_iommu_pmu_start(event, flags);
+
+	/* Propagate changes to the userspace mapping. */
+	perf_event_update_userpage(event);
+
+	return 0;
+}
+
+static void riscv_iommu_pmu_read(struct perf_event *event)
+{
+	riscv_iommu_pmu_update(event);
+}
+
+static void riscv_iommu_pmu_del(struct perf_event *event, int flags)
+{
+	struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	riscv_iommu_pmu_stop(event, PERF_EF_UPDATE);
+	pmu->events[idx] = NULL;
+	clear_bit(idx, pmu->used_counters);
+
+	perf_event_update_userpage(event);
+}
+
+static int riscv_iommu_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
+{
+	struct riscv_iommu_pmu *iommu_pmu;
+
+	iommu_pmu = hlist_entry_safe(node, struct riscv_iommu_pmu, node);
+
+	if (iommu_pmu->on_cpu == -1)
+		iommu_pmu->on_cpu = cpu;
+
+	return 0;
+}
+
+static int riscv_iommu_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+{
+	struct riscv_iommu_pmu *iommu_pmu;
+	unsigned int target_cpu;
+
+	iommu_pmu = hlist_entry_safe(node, struct riscv_iommu_pmu, node);
+
+	if (cpu != iommu_pmu->on_cpu)
+		return 0;
+
+	iommu_pmu->on_cpu = -1;
+
+	target_cpu = cpumask_any_but(cpu_online_mask, cpu);
+	if (target_cpu >= nr_cpu_ids)
+		return 0;
+
+	perf_pmu_migrate_context(&iommu_pmu->pmu, cpu, target_cpu);
+	iommu_pmu->on_cpu = target_cpu;
+
+	return 0;
+}
+
+static irqreturn_t riscv_iommu_pmu_handle_irq(struct riscv_iommu_pmu *pmu)
+{
+	u32 ovf = readl(pmu->reg + RISCV_IOMMU_REG_IOCOUNTOVF);
+	int idx;
+
+	if (!ovf)
+		return IRQ_NONE;
+
+	riscv_iommu_pmu_stop_all(pmu);
+
+	for_each_set_bit(idx, (unsigned long *)&ovf, pmu->num_counters) {
+		struct perf_event *event = pmu->events[idx];
+
+		if (WARN_ON_ONCE(!event))
+			continue;
+
+		riscv_iommu_pmu_update(event);
+		riscv_iommu_pmu_set_period(event);
+	}
+
+	riscv_iommu_pmu_start_all(pmu);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t riscv_iommu_pmu_irq_handler(int irq, void *dev_id)
+{
+	struct riscv_iommu_pmu *pmu = (struct riscv_iommu_pmu *)dev_id;
+	irqreturn_t ret;
+
+	/* Check whether this interrupt is for the PMU */
+	if (!(readl_relaxed(pmu->reg + RISCV_IOMMU_REG_IPSR) & RISCV_IOMMU_IPSR_PMIP))
+		return IRQ_NONE;
+
+	/* Process the PMU IRQ */
+	ret = riscv_iommu_pmu_handle_irq(pmu);
+
+	/* Clear the performance monitoring interrupt pending bit */
+	writel_relaxed(RISCV_IOMMU_IPSR_PMIP, pmu->reg + RISCV_IOMMU_REG_IPSR);
+
+	return ret;
+}
+
+static unsigned int riscv_iommu_pmu_get_irq_num(struct riscv_iommu_device *iommu)
+{
+	/* Reuse the ICVEC.CIV mask for all interrupt vector mappings */
+	int vec = (iommu->icvec >> (RISCV_IOMMU_IPSR_PMIP * 4)) & RISCV_IOMMU_ICVEC_CIV;
+
+	return iommu->irqs[vec];
+}
+
+static int riscv_iommu_pmu_request_irq(struct riscv_iommu_device *iommu,
+				       struct riscv_iommu_pmu *pmu)
+{
+	unsigned int irq = riscv_iommu_pmu_get_irq_num(iommu);
+
+	/*
+	 * Set the IRQF_ONESHOT flag because this IRQ can be shared with
+	 * other threaded IRQs by other queues.
+	 */
+	return devm_request_irq(iommu->dev, irq, riscv_iommu_pmu_irq_handler,
+				IRQF_ONESHOT | IRQF_SHARED, dev_name(iommu->dev), pmu);
+}
+
+static void riscv_iommu_pmu_free_irq(struct riscv_iommu_device *iommu,
+				     struct riscv_iommu_pmu *pmu)
+{
+	unsigned int irq = riscv_iommu_pmu_get_irq_num(iommu);
+
+	free_irq(irq, pmu);
+}
+
+static int riscv_iommu_pmu_probe(struct auxiliary_device *auxdev,
+				 const struct auxiliary_device_id *id)
+{
+	struct riscv_iommu_device *iommu_dev = dev_get_platdata(&auxdev->dev);
+	struct riscv_iommu_pmu *iommu_pmu;
+	void __iomem *addr;
+	char *name;
+	int ret;
+
+	iommu_pmu = devm_kzalloc(&auxdev->dev, sizeof(*iommu_pmu), GFP_KERNEL);
+	if (!iommu_pmu)
+		return -ENOMEM;
+
+	iommu_pmu->reg = iommu_dev->reg;
+
+	/* Counter number and width are implementation-defined; detect them by writing 1s */
+	addr = iommu_pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH;
+	writel(RISCV_IOMMU_IOCOUNTINH_HPM, addr);
+	iommu_pmu->num_counters = hweight32(readl(addr));
+
+	addr = iommu_pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES;
+	writeq(RISCV_IOMMU_IOHPMCYCLES_COUNTER, addr);
+	iommu_pmu->cycle_cntr_mask = readq(addr);
+
+	/* Assume the width of all event counters is the same */
+	addr = iommu_pmu->reg + RISCV_IOMMU_REG_IOHPMCTR_BASE;
+	writeq(RISCV_IOMMU_IOHPMCTR_COUNTER, addr);
+	iommu_pmu->event_cntr_mask = readq(addr);
+
+	iommu_pmu->pmu = (struct pmu) {
+		.module		= THIS_MODULE,
+		.parent		= &auxdev->dev,
+		.task_ctx_nr	= perf_invalid_context,
+		.event_init	= riscv_iommu_pmu_event_init,
+		.add		= riscv_iommu_pmu_add,
+		.del		= riscv_iommu_pmu_del,
+		.start		= riscv_iommu_pmu_start,
+		.stop		= riscv_iommu_pmu_stop,
+		.read		= riscv_iommu_pmu_read,
+		.attr_groups	= riscv_iommu_pmu_attr_grps,
+		.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
+	};
+
+	auxiliary_set_drvdata(auxdev, iommu_pmu);
+
+	name = devm_kasprintf(&auxdev->dev, GFP_KERNEL,
+			      "riscv_iommu_pmu_%s", dev_name(iommu_dev->dev));
+	if (!name) {
+		dev_err(&auxdev->dev, "Failed to create name riscv_iommu_pmu_%s\n",
+			dev_name(iommu_dev->dev));
+		return -ENOMEM;
+	}
+
+	/* Bind all events to the same CPU context to avoid races when enabling */
+	iommu_pmu->on_cpu = raw_smp_processor_id();
+
+	ret = cpuhp_state_add_instance_nocalls(cpuhp_state, &iommu_pmu->node);
+	if (ret) {
+		dev_err(&auxdev->dev, "Failed to register hotplug %s: %d\n", name, ret);
+		return ret;
+	}
+
+	ret = riscv_iommu_pmu_request_irq(iommu_dev, iommu_pmu);
+	if (ret) {
+		dev_err(&auxdev->dev, "Failed to request irq %s: %d\n", name, ret);
+		goto err_cpuhp_remove;
+	}
+
+	ret = perf_pmu_register(&iommu_pmu->pmu, name, -1);
+	if (ret) {
+		dev_err(&auxdev->dev, "Failed to register %s: %d\n", name, ret);
+		goto err_free_irq;
+	}
+
+	dev_info(&auxdev->dev, "%s: Registered with %d counters\n",
+		 name, iommu_pmu->num_counters);
+
+	return 0;
+
+err_free_irq:
+	riscv_iommu_pmu_free_irq(iommu_dev, iommu_pmu);
+err_cpuhp_remove:
+	cpuhp_state_remove_instance_nocalls(cpuhp_state, &iommu_pmu->node);
+	return ret;
+}
+
+static const struct auxiliary_device_id riscv_iommu_pmu_id_table[] = {
+	{ .name = "iommu.pmu" },
+	{}
+};
+MODULE_DEVICE_TABLE(auxiliary, riscv_iommu_pmu_id_table);
+
+static struct auxiliary_driver iommu_pmu_driver = {
+	.probe = riscv_iommu_pmu_probe,
+	.id_table = riscv_iommu_pmu_id_table,
+};
+
+static int __init riscv_iommu_pmu_init(void)
+{
+	int ret;
+
+	cpuhp_state = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+					      "perf/riscv/iommu:online",
+					      riscv_iommu_pmu_online_cpu,
+					      riscv_iommu_pmu_offline_cpu);
+	if (cpuhp_state < 0)
+		return cpuhp_state;
+
+	ret = auxiliary_driver_register(&iommu_pmu_driver);
+	if (ret)
+		cpuhp_remove_multi_state(cpuhp_state);
+
+	return ret;
+}
+module_init(riscv_iommu_pmu_init);
+
+MODULE_DESCRIPTION("RISC-V IOMMU PMU");
+MODULE_LICENSE("GPL");
-- 
2.43.7

From nobody Sun Feb 8 20:17:53 2026
From: Zong Li
To: tjeznach@rivosinc.com, joro@8bytes.org, will@kernel.org,
	robin.murphy@arm.com, robh@kernel.org, pjw@kernel.org,
	palmer@dabbelt.com, aou@eecs.berkeley.edu, alex@ghiti.fr,
	mark.rutland@arm.com, conor+dt@kernel.org, krzk@kernel.org,
	guoyaxing@bosc.ac.cn, luxu.kernel@bytedance.com,
	lv.zheng@linux.spacemit.com, andrew.jones@oss.qualcomm.com,
	linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	linux-riscv@lists.infradead.org, linux-perf-users@vger.kernel.org
Cc: Zong Li, Samuel Holland
Subject: [PATCH v2 2/2] iommu/riscv:
create an auxiliary device for HPM
Date: Sat, 7 Feb 2026 22:38:36 -0800
Message-ID: <20260208063848.3547817-3-zong.li@sifive.com>
In-Reply-To: <20260208063848.3547817-1-zong.li@sifive.com>
References: <20260208063848.3547817-1-zong.li@sifive.com>

Create an auxiliary device for HPM when the IOMMU supports a hardware
performance monitor.

Suggested-by: Samuel Holland
Signed-off-by: Zong Li
---
 drivers/iommu/riscv/iommu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
index d9429097a2b5..7cdb80d4343d 100644
--- a/drivers/iommu/riscv/iommu.c
+++ b/drivers/iommu/riscv/iommu.c
@@ -14,6 +14,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -559,6 +560,22 @@ static irqreturn_t riscv_iommu_fltq_process(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+/*
+ * IOMMU Hardware performance monitor
+ */
+static int riscv_iommu_hpm_enable(struct riscv_iommu_device *iommu)
+{
+	struct auxiliary_device *auxdev;
+
+	/* TODO: for custom event support, the modname should come from compatible */
+	auxdev = __devm_auxiliary_device_create(iommu->dev, KBUILD_MODNAME,
+						"pmu", iommu, 0);
+	if (!auxdev)
+		return -ENODEV;
+
+	return 0;
+}
+
 /* Lookup and initialize device context info structure. */
 static struct riscv_iommu_dc *riscv_iommu_get_dc(struct riscv_iommu_device *iommu,
						 unsigned int devid)
@@ -1669,6 +1686,9 @@ int riscv_iommu_init(struct riscv_iommu_device *iommu)
 		goto err_remove_sysfs;
 	}
 
+	if (iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM)
+		riscv_iommu_hpm_enable(iommu);
+
 	return 0;
 
 err_remove_sysfs:
-- 
2.43.7