From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 1/7] perf: Increase the maximum number of samples to 256.
Date: Thu, 16 Jan 2025 23:09:49 +0000
Message-Id: <20250116230955.867152-2-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

The RISC-V CTR extension supports a maximum depth of 256 last branch
records. The current 127-entry limit corrupts CTR entries on RISC-V when
the depth is configured as 256. Other architectures are unaffected, since
this only raises the maximum number of possible entries.

Signed-off-by: Rajnesh Kanwal
---
 tools/perf/util/machine.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
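For context (an illustration, not part of the posted patch): hash_64() keeps
its result strictly below 2^bits, which is why the "% CHASHSZ" in the hunk
below can be dropped once CHASHBITS is raised to 8 and the table holds 256
entries. A minimal user-space sketch, assuming hash_64()'s usual
multiplicative form and the kernel's GOLDEN_RATIO_64 constant:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed to match include/linux/hash.h; illustration only. */
#define GOLDEN_RATIO_64 0x61C8864680B583EBull

/* Sketch of hash_64(): multiply by the golden ratio, keep the top 'bits' bits. */
static unsigned int hash_64_sketch(uint64_t val, unsigned int bits)
{
	return (unsigned int)((val * GOLDEN_RATIO_64) >> (64 - bits));
}

int main(void)
{
	for (uint64_t from = 0; from < 1000000; from += 4093) {
		unsigned int h = hash_64_sketch(from, 8);	/* CHASHBITS == 8 */
		assert(h < 256);	/* already in range, no modulo needed */
	}
	printf("all hashed indices fall below 256\n");
	return 0;
}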
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 27d5345d2b30..f2eb3c20274e 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2174,25 +2174,32 @@ static void save_iterations(struct iterations *iter,
 		iter->cycles += be[i].flags.cycles;
 }
 
-#define CHASHSZ 127
-#define CHASHBITS 7
-#define NO_ENTRY 0xff
+#define CHASHBITS 8
+#define NO_ENTRY 0xffU
 
-#define PERF_MAX_BRANCH_DEPTH 127
+#define PERF_MAX_BRANCH_DEPTH 256
 
 /* Remove loops. */
+/* Note: Last entry (i==ff) will never be checked against NO_ENTRY
+ * so it's safe to have an unsigned char array to process 256 entries
+ * without causing clash between last entry and NO_ENTRY value.
+ */
 static int remove_loops(struct branch_entry *l, int nr,
 			struct iterations *iter)
 {
 	int i, j, off;
-	unsigned char chash[CHASHSZ];
+	unsigned char chash[PERF_MAX_BRANCH_DEPTH];
 
 	memset(chash, NO_ENTRY, sizeof(chash));
 
-	BUG_ON(PERF_MAX_BRANCH_DEPTH > 255);
+	BUG_ON(PERF_MAX_BRANCH_DEPTH > 256);
 
 	for (i = 0; i < nr; i++) {
-		int h = hash_64(l[i].from, CHASHBITS) % CHASHSZ;
+		/* Remainder division by PERF_MAX_BRANCH_DEPTH is not
+		 * needed as hash_64 will anyway limit the hash
+		 * to CHASHBITS
+		 */
+		int h = hash_64(l[i].from, CHASHBITS);
 
 		/* no collision handling for now */
 		if (chash[h] == NO_ENTRY) {
-- 
2.34.1
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 2/7] riscv: pmu: Add Control Transfer Records CSR definitions.
Date: Thu, 16 Jan 2025 23:09:50 +0000
Message-Id: <20250116230955.867152-3-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add CSR defines for the RISC-V Control Transfer Records extension [0],
along with bit-field macros for each CSR.

[0]: https://github.com/riscv/riscv-control-transfer-records

Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/include/asm/csr.h | 83 ++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index a06d5fec6e6d..465a5e338ccb 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -325,6 +325,85 @@
 
 #define CSR_SCOUNTOVF 0xda0
 
+/* M-mode Control Transfer Records CSRs */
+#define CSR_MCTRCTL 0x34e
+
+/* S-mode Control Transfer Records CSRs */
+#define CSR_SCTRCTL 0x14e
+#define CSR_SCTRSTATUS 0x14f
+#define CSR_SCTRDEPTH 0x15f
+
+/* VS-mode Control Transfer Records CSRs */
+#define CSR_VSCTRCTL 0x24e
+
+/* xctrctl CSR bits. */
+#define CTRCTL_U_ENABLE _AC(0x1, UL)
+#define CTRCTL_S_ENABLE _AC(0x2, UL)
+#define CTRCTL_M_ENABLE _AC(0x4, UL)
+#define CTRCTL_RASEMU _AC(0x80, UL)
+#define CTRCTL_STE _AC(0x100, UL)
+#define CTRCTL_MTE _AC(0x200, UL)
+#define CTRCTL_BPFRZ _AC(0x800, UL)
+#define CTRCTL_LCOFIFRZ _AC(0x1000, UL)
+#define CTRCTL_EXCINH _AC(0x200000000, UL)
+#define CTRCTL_INTRINH _AC(0x400000000, UL)
+#define CTRCTL_TRETINH _AC(0x800000000, UL)
+#define CTRCTL_NTBREN _AC(0x1000000000, UL)
+#define CTRCTL_TKBRINH _AC(0x2000000000, UL)
+#define CTRCTL_INDCALL_INH _AC(0x10000000000, UL)
+#define CTRCTL_DIRCALL_INH _AC(0x20000000000, UL)
+#define CTRCTL_INDJUMP_INH _AC(0x40000000000, UL)
+#define CTRCTL_DIRJUMP_INH _AC(0x80000000000, UL)
+#define CTRCTL_CORSWAP_INH _AC(0x100000000000, UL)
+#define CTRCTL_RET_INH _AC(0x200000000000, UL)
+#define CTRCTL_INDOJUMP_INH _AC(0x400000000000, UL)
+#define CTRCTL_DIROJUMP_INH _AC(0x800000000000, UL)
+
+/* sctrstatus CSR bits. */
+#define SCTRSTATUS_WRPTR_MASK 0xFF
+#define SCTRSTATUS_FROZEN _AC(0x80000000, UL)
+
+#ifdef CONFIG_RISCV_M_MODE
+#define CTRCTL_KERNEL_ENABLE CTRCTL_M_ENABLE
+#else
+#define CTRCTL_KERNEL_ENABLE CTRCTL_S_ENABLE
+#endif
+
+/* sctrdepth CSR bits. */
+#define SCTRDEPTH_MASK 0x7
+
+#define SCTRDEPTH_MIN 0x0 /* 16 Entries. */
+#define SCTRDEPTH_MAX 0x4 /* 256 Entries. */
+
+/* ctrsource, ctrtarget and ctrdata CSR bits. */
+#define CTRSOURCE_VALID 0x1ULL
+#define CTRTARGET_MISP 0x1ULL
+
+#define CTRDATA_TYPE_MASK 0xF
+#define CTRDATA_CCV 0x8000
+#define CTRDATA_CCM_MASK 0xFFF0000
+#define CTRDATA_CCE_MASK 0xF0000000
+
+#define CTRDATA_TYPE_NONE 0
+#define CTRDATA_TYPE_EXCEPTION 1
+#define CTRDATA_TYPE_INTERRUPT 2
+#define CTRDATA_TYPE_TRAP_RET 3
+#define CTRDATA_TYPE_NONTAKEN_BRANCH 4
+#define CTRDATA_TYPE_TAKEN_BRANCH 5
+#define CTRDATA_TYPE_RESERVED_6 6
+#define CTRDATA_TYPE_RESERVED_7 7
+#define CTRDATA_TYPE_INDIRECT_CALL 8
+#define CTRDATA_TYPE_DIRECT_CALL 9
+#define CTRDATA_TYPE_INDIRECT_JUMP 10
+#define CTRDATA_TYPE_DIRECT_JUMP 11
+#define CTRDATA_TYPE_CO_ROUTINE_SWAP 12
+#define CTRDATA_TYPE_RETURN 13
+#define CTRDATA_TYPE_OTHER_INDIRECT_JUMP 14
+#define CTRDATA_TYPE_OTHER_DIRECT_JUMP 15
+
+#define CTR_ENTRIES_FIRST 0x200
+#define CTR_ENTRIES_LAST 0x2ff
+
 #define CSR_SSTATUS 0x100
 #define CSR_SIE 0x104
 #define CSR_STVEC 0x105
@@ -508,6 +587,8 @@
 # define CSR_TOPEI CSR_MTOPEI
 # define CSR_TOPI CSR_MTOPI
 
+# define CSR_CTRCTL CSR_MCTRCTL
+
 # define SR_IE SR_MIE
 # define SR_PIE SR_MPIE
 # define SR_PP SR_MPP
@@ -538,6 +619,8 @@
 # define CSR_TOPEI CSR_STOPEI
 # define CSR_TOPI CSR_STOPI
 
+# define CSR_CTRCTL CSR_SCTRCTL
+
 # define SR_IE SR_SIE
 # define SR_PIE SR_SPIE
 # define SR_PP SR_SPP
-- 
2.34.1
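For illustration (a sketch, not taken from the posted series; the helper
names are made up): the control bits added above compose into a single
xctrctl value. Assuming a kernel context where csr_write() and the defines
from this patch are visible, enabling recording for user mode plus the
kernel's privilege mode and freezing on a counter-overflow interrupt could
look like:

static inline void ctr_sketch_start(void)
{
	/* Record U-mode and kernel-mode branches; freeze recording when a
	 * local counter-overflow interrupt (LCOFI) becomes pending. */
	csr_write(CSR_CTRCTL,
		  CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE | CTRCTL_LCOFIFRZ);
}

static inline void ctr_sketch_stop(void)
{
	csr_write(CSR_CTRCTL, 0);	/* clearing the enable bits stops recording */
}

This mirrors how the driver later in the series (patch 6/7) programs
CSR_CTRCTL in riscv_pmu_ctr_enable()/riscv_pmu_ctr_disable().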
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 3/7] riscv: Add Control Transfer Records extension parsing
Date: Thu, 16 Jan 2025 23:09:51 +0000
Message-Id: <20250116230955.867152-4-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add the CTR extensions to the ISA extension map so that their
availability can be looked up.

Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/include/asm/hwcap.h | 4 ++++
 arch/riscv/kernel/cpufeature.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 42b34e2f80e8..552c7ebae7be 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -105,6 +105,8 @@
 #define RISCV_ISA_EXT_SSCCFG 96
 #define RISCV_ISA_EXT_SMCDELEG 97
 #define RISCV_ISA_EXT_SMCNTRPMF 98
+#define RISCV_ISA_EXT_SMCTR 99
+#define RISCV_ISA_EXT_SSCTR 100
 
 #define RISCV_ISA_EXT_XLINUXENVCFG 127
 
@@ -115,11 +117,13 @@
 #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SMAIA
 #define RISCV_ISA_EXT_SUPM RISCV_ISA_EXT_SMNPM
 #define RISCV_ISA_EXT_SxCSRIND RISCV_ISA_EXT_SMCSRIND
+#define RISCV_ISA_EXT_SxCTR RISCV_ISA_EXT_SMCTR
 #else
 #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA
 #define RISCV_ISA_EXT_SUPM RISCV_ISA_EXT_SSNPM
 #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA
 #define RISCV_ISA_EXT_SxCSRIND RISCV_ISA_EXT_SSCSRIND
+#define RISCV_ISA_EXT_SxCTR RISCV_ISA_EXT_SSCTR
 #endif
 
 #endif /* _ASM_RISCV_HWCAP_H */
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index ec068c9130e5..ef3b70f7d5d2 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -391,6 +391,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(zvkt, RISCV_ISA_EXT_ZVKT),
 	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
 	__RISCV_ISA_EXT_DATA(smcdeleg, RISCV_ISA_EXT_SMCDELEG),
+	__RISCV_ISA_EXT_DATA(smctr, RISCV_ISA_EXT_SMCTR),
 	__RISCV_ISA_EXT_DATA(smmpm, RISCV_ISA_EXT_SMMPM),
 	__RISCV_ISA_EXT_SUPERSET(smnpm, RISCV_ISA_EXT_SMNPM, riscv_xlinuxenvcfg_exts),
 	__RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN),
@@ -400,6 +401,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(sscsrind, RISCV_ISA_EXT_SSCSRIND),
 	__RISCV_ISA_EXT_DATA(ssccfg, RISCV_ISA_EXT_SSCCFG),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
+	__RISCV_ISA_EXT_DATA(ssctr, RISCV_ISA_EXT_SSCTR),
 	__RISCV_ISA_EXT_SUPERSET(ssnpm, RISCV_ISA_EXT_SSNPM, riscv_xlinuxenvcfg_exts),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svade, RISCV_ISA_EXT_SVADE),
-- 
2.34.1
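As an illustration of how these map entries get used (a sketch with a
hypothetical helper name, condensed from the probe check in patch 6/7 of
this series):

static bool ctr_sketch_supported(void)
{
	/* CTR needs the CTR extension itself plus indirect CSR access and
	 * the counter-overflow interrupt extension (Sscofpmf). */
	return riscv_isa_extension_available(NULL, SxCTR) &&
	       riscv_isa_extension_available(NULL, SxCSRIND) &&
	       riscv_isa_extension_available(NULL, SSCOFPMF);
}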
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 4/7] dt-bindings: riscv: add Sxctr ISA extension description
Date: Thu, 16 Jan 2025 23:09:52 +0000
Message-Id: <20250116230955.867152-5-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add the S[m|s]ctr ISA extension description.

Signed-off-by: Rajnesh Kanwal
---
 .../devicetree/bindings/riscv/extensions.yaml | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
index 848354e3048f..8322503f0773 100644
--- a/Documentation/devicetree/bindings/riscv/extensions.yaml
+++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
@@ -167,6 +167,13 @@ properties:
             extension allows other ISA extension to use indirect CSR access
             mechanism in M-mode.
 
+        - const: smctr
+          description: |
+            The standard Smctr supervisor-level extension for the machine mode
+            to enable recording limited branch history in a register-accessible
+            internal core storage. Smctr depends on both the implementation of
+            S-mode and the Sscsrind extension.
+
         - const: sscsrind
           description: |
             The standard Sscsrind supervisor-level extension extends the
@@ -193,6 +200,13 @@ properties:
             and mode-based filtering as ratified at commit 01d1df0 ("Add ability
             to manually trigger workflow. (#2)") of riscv-count-overflow.
 
+        - const: ssctr
+          description: |
+            The standard Ssctr supervisor-level extension enables recording of
+            limited branch history in a register-accessible internal core
+            storage. Ssctr depends on both the implementation of S-mode and the
+            Sscsrind extension.
+
         - const: ssnpm
           description: |
             The standard Ssnpm extension for next-mode pointer masking as
-- 
2.34.1
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 5/7] riscv: pmu: Add infrastructure for Control Transfer Record
Date: Thu, 16 Jan 2025 23:09:53 +0000
Message-Id: <20250116230955.867152-6-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

To support the Control Transfer Records (CTR) extension, we need to extend
the riscv_pmu framework with some basic infrastructure for branch stack
sampling. Subsequent patches will use this to add support for CTR in the
riscv_pmu_dev driver.

With CTR, branches are stored into a hardware FIFO, which is sampled by
software when perf events overflow. A task may be context-switched between
overflows, and to avoid leaking samples we need to clear the last task's
records when a task is context-switched in. To do this we will be using
the pmu::sched_task() callback added in this patch.
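The per-task handling this hook enables can be condensed as follows (a
sketch only: riscv_pmu_ctr_save/restore/reset are the CTR helpers added in
patch 6/7, and the real callback additionally bails out early when no event
on the CPU requested branch records):

static void ctr_sched_task_sketch(struct perf_event_pmu_context *pmu_ctx,
				  bool sched_in, unsigned int depth)
{
	void *task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL;

	if (task_ctx) {
		/* Per-task state exists: restore it when the task is switched
		 * in, stash the live records when it is switched out. */
		if (sched_in)
			riscv_pmu_ctr_restore(task_ctx);
		else
			riscv_pmu_ctr_save(task_ctx, depth);
		return;
	}

	/* No per-task state: just clear stale records on switch in. */
	if (sched_in)
		riscv_pmu_ctr_reset();
}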
Signed-off-by: Rajnesh Kanwal
---
 drivers/perf/riscv_pmu_common.c | 20 ++++++++++++++++++++
 drivers/perf/riscv_pmu_dev.c | 17 +++++++++++++++++
 drivers/perf/riscv_pmu_legacy.c | 2 ++
 include/linux/perf/riscv_pmu.h | 18 ++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c
index 7644147d50b4..c4c4b5d6bed0 100644
--- a/drivers/perf/riscv_pmu_common.c
+++ b/drivers/perf/riscv_pmu_common.c
@@ -157,6 +157,19 @@ u64 riscv_pmu_ctr_get_width_mask(struct perf_event *event)
 	return GENMASK_ULL(cwidth, 0);
 }
 
+static void riscv_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+				 bool sched_in)
+{
+	struct riscv_pmu *pmu;
+
+	if (!pmu_ctx)
+		return;
+
+	pmu = to_riscv_pmu(pmu_ctx->pmu);
+	if (pmu->sched_task)
+		pmu->sched_task(pmu_ctx, sched_in);
+}
+
 u64 riscv_pmu_event_update(struct perf_event *event)
 {
 	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
@@ -269,6 +282,8 @@ static int riscv_pmu_add(struct perf_event *event, int flags)
 	cpuc->events[idx] = event;
 	cpuc->n_events++;
 	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+	if (rvpmu->ctr_add)
+		rvpmu->ctr_add(event, flags);
 	if (flags & PERF_EF_START)
 		riscv_pmu_start(event, PERF_EF_RELOAD);
 
@@ -286,6 +301,9 @@ static void riscv_pmu_del(struct perf_event *event, int flags)
 
 	riscv_pmu_stop(event, PERF_EF_UPDATE);
 	cpuc->events[hwc->idx] = NULL;
+	if (rvpmu->ctr_del)
+		rvpmu->ctr_del(event, flags);
+
 	/* The firmware need to reset the counter mapping */
 	if (rvpmu->ctr_stop)
 		rvpmu->ctr_stop(event, RISCV_PMU_STOP_FLAG_RESET);
@@ -402,6 +420,7 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 	for_each_possible_cpu(cpuid) {
 		cpuc = per_cpu_ptr(pmu->hw_events, cpuid);
 		cpuc->n_events = 0;
+		cpuc->ctr_users = 0;
 		for (i = 0; i < RISCV_MAX_COUNTERS; i++)
 			cpuc->events[i] = NULL;
 		cpuc->snapshot_addr = NULL;
@@ -416,6 +435,7 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 		.start = riscv_pmu_start,
 		.stop = riscv_pmu_stop,
 		.read = riscv_pmu_read,
+		.sched_task = riscv_pmu_sched_task,
 	};
 
 	return pmu;
diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index d28d60abaaf2..b9b257607b76 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -1027,6 +1027,12 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 	}
 }
 
+static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+			   bool sched_in)
+{
+	/* Call CTR specific Sched hook. */
+}
+
 static int rvpmu_sbi_find_num_ctrs(void)
 {
 	struct sbiret ret;
@@ -1569,6 +1575,14 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_event *event)
 	return -ENOENT;
 }
 
+static void rvpmu_ctr_add(struct perf_event *event, int flags)
+{
+}
+
+static void rvpmu_ctr_del(struct perf_event *event, int flags)
+{
+}
+
 static void rvpmu_ctr_start(struct perf_event *event, u64 ival)
 {
 	struct hw_perf_event *hwc = &event->hw;
@@ -1984,6 +1998,8 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	else
 		pmu->pmu.attr_groups = riscv_sbi_pmu_attr_groups;
 	pmu->cmask = cmask;
+	pmu->ctr_add = rvpmu_ctr_add;
+	pmu->ctr_del = rvpmu_ctr_del;
 	pmu->ctr_start = rvpmu_ctr_start;
 	pmu->ctr_stop = rvpmu_ctr_stop;
 	pmu->event_map = rvpmu_event_map;
@@ -1995,6 +2011,7 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	pmu->event_mapped = rvpmu_event_mapped;
 	pmu->event_unmapped = rvpmu_event_unmapped;
 	pmu->csr_index = rvpmu_csr_index;
+	pmu->sched_task = pmu_sched_task;
 
 	ret = riscv_pm_pmu_register(pmu);
 	if (ret)
diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
index 93c8e0fdb589..bee6742d35fa 100644
--- a/drivers/perf/riscv_pmu_legacy.c
+++ b/drivers/perf/riscv_pmu_legacy.c
@@ -115,6 +115,8 @@ static void pmu_legacy_init(struct riscv_pmu *pmu)
 		BIT(RISCV_PMU_LEGACY_INSTRET);
 	pmu->ctr_start = pmu_legacy_ctr_start;
 	pmu->ctr_stop = NULL;
+	pmu->ctr_add = NULL;
+	pmu->ctr_del = NULL;
 	pmu->event_map = pmu_legacy_event_map;
 	pmu->ctr_get_idx = pmu_legacy_ctr_get_idx;
 	pmu->ctr_get_width = pmu_legacy_ctr_get_width;
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index e58f83811988..883781f12ae0 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -46,6 +46,13 @@
 	}, \
 }
 
+#define MAX_BRANCH_RECORDS 256
+
+struct branch_records {
+	struct perf_branch_stack branch_stack;
+	struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS];
+};
+
 struct cpu_hw_events {
 	/* currently enabled events */
 	int n_events;
@@ -65,6 +72,12 @@ struct cpu_hw_events {
 	bool snapshot_set_done;
 	/* A shadow copy of the counter values to avoid clobbering during multiple SBI calls */
 	u64 snapshot_cval_shcopy[RISCV_MAX_COUNTERS];
+
+	/* Saved branch records. */
+	struct branch_records *branches;
+
+	/* Active events requesting branch records */
+	int ctr_users;
 };
 
 struct riscv_pmu {
@@ -78,6 +91,8 @@ struct riscv_pmu {
 	int (*ctr_get_idx)(struct perf_event *event);
 	int (*ctr_get_width)(int idx);
 	void (*ctr_clear_idx)(struct perf_event *event);
+	void (*ctr_add)(struct perf_event *event, int flags);
+	void (*ctr_del)(struct perf_event *event, int flags);
 	void (*ctr_start)(struct perf_event *event, u64 init_val);
 	void (*ctr_stop)(struct perf_event *event, unsigned long flag);
 	int (*event_map)(struct perf_event *event, u64 *config);
@@ -85,10 +100,13 @@ struct riscv_pmu {
 	void (*event_mapped)(struct perf_event *event, struct mm_struct *mm);
 	void (*event_unmapped)(struct perf_event *event, struct mm_struct *mm);
 	uint8_t (*csr_index)(struct perf_event *event);
+	void (*sched_task)(struct perf_event_pmu_context *ctx, bool sched_in);
 
 	struct cpu_hw_events __percpu *hw_events;
 	struct hlist_node node;
 	struct notifier_block riscv_pm_nb;
+
+	unsigned int ctr_depth;
 };
 
 #define to_riscv_pmu(p) (container_of(p, struct riscv_pmu, pmu))
-- 
2.34.1
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 6/7] riscv: pmu: Add driver for Control Transfer Records Ext.
Date: Thu, 16 Jan 2025 23:09:54 +0000
Message-Id: <20250116230955.867152-7-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>

This adds support for the CTR extension defined in [0]. The extension
allows recording a maximum of 256 last branch records. The CTR extension
depends on the S[m|s]csrind and Sscofpmf extensions.

[0]: https://github.com/riscv/riscv-control-transfer-records
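To make the 256-record claim concrete (an illustrative helper, condensed
from the depth table documented in the driver's ctr_get_depth() below):

static unsigned int ctr_sketch_depth(unsigned int sctrdepth_depth)
{
	/* sctrdepth.depth of 0..4 selects 16, 32, 64, 128 or 256 records,
	 * i.e. 16 << depth; SCTRDEPTH_MAX (4) gives the 256-entry maximum. */
	return 16U << sctrdepth_depth;
}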
Signed-off-by: Rajnesh Kanwal --- MAINTAINERS | 1 + drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/riscv_ctr.c | 608 +++++++++++++++++++++++++++++++++ include/linux/perf/riscv_pmu.h | 37 ++ 5 files changed, 658 insertions(+) create mode 100644 drivers/perf/riscv_ctr.c diff --git a/MAINTAINERS b/MAINTAINERS index 2ef7ff933266..7bcd79f33811 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -20177,6 +20177,7 @@ M: Atish Patra R: Anup Patel L: linux-riscv@lists.infradead.org S: Supported +F: drivers/perf/riscv_ctr.c F: drivers/perf/riscv_pmu_common.c F: drivers/perf/riscv_pmu_dev.c F: drivers/perf/riscv_pmu_legacy.c diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index b3bdff2a99a4..9107c5208bf5 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -129,6 +129,17 @@ config ANDES_CUSTOM_PMU =20 If you don't know what to do here, say "Y". =20 +config RISCV_CTR + bool "Enable support for Control Transfer Records (CTR)" + depends on PERF_EVENTS && RISCV_PMU + default y + help + Enable support for Control Transfer Records (CTR) which + allows recording branches, Jumps, Calls, returns etc taken in an + execution path. This also supports privilege based filtering. It + captures additional relevant information such as cycle count, + branch misprediction etc. + config ARM_PMU_ACPI depends on ARM_PMU && ACPI def_bool y diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 0805d740c773..755609f184fe 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -20,6 +20,7 @@ obj-$(CONFIG_RISCV_PMU_COMMON) +=3D riscv_pmu_common.o obj-$(CONFIG_RISCV_PMU_LEGACY) +=3D riscv_pmu_legacy.o obj-$(CONFIG_RISCV_PMU) +=3D riscv_pmu_dev.o obj-$(CONFIG_STARFIVE_STARLINK_PMU) +=3D starfive_starlink_pmu.o +obj-$(CONFIG_RISCV_CTR) +=3D riscv_ctr.o obj-$(CONFIG_THUNDERX2_PMU) +=3D thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) +=3D xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) +=3D arm_spe_pmu.o diff --git a/drivers/perf/riscv_ctr.c b/drivers/perf/riscv_ctr.c new file mode 100644 index 000000000000..53419a656043 --- /dev/null +++ b/drivers/perf/riscv_ctr.c @@ -0,0 +1,608 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Control transfer records extension Helpers. + * + * Copyright (C) 2024 Rivos Inc. + * + * Author: Rajnesh Kanwal + */ + +#define pr_fmt(fmt) "CTR: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define CTR_BRANCH_FILTERS_INH (CTRCTL_EXCINH | \ + CTRCTL_INTRINH | \ + CTRCTL_TRETINH | \ + CTRCTL_TKBRINH | \ + CTRCTL_INDCALL_INH | \ + CTRCTL_DIRCALL_INH | \ + CTRCTL_INDJUMP_INH | \ + CTRCTL_DIRJUMP_INH | \ + CTRCTL_CORSWAP_INH | \ + CTRCTL_RET_INH | \ + CTRCTL_INDOJUMP_INH | \ + CTRCTL_DIROJUMP_INH) + +#define CTR_BRANCH_ENABLE_BITS (CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE) + +/* Branch filters not-supported by CTR extension. */ +#define CTR_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \ + PERF_SAMPLE_BRANCH_IN_TX | \ + PERF_SAMPLE_BRANCH_PRIV_SAVE | \ + PERF_SAMPLE_BRANCH_NO_TX | \ + PERF_SAMPLE_BRANCH_COUNTERS) + +/* Branch filters supported by CTR extension. 
*/ +#define CTR_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \ + PERF_SAMPLE_BRANCH_KERNEL | \ + PERF_SAMPLE_BRANCH_HV | \ + PERF_SAMPLE_BRANCH_ANY | \ + PERF_SAMPLE_BRANCH_ANY_CALL | \ + PERF_SAMPLE_BRANCH_ANY_RETURN | \ + PERF_SAMPLE_BRANCH_IND_CALL | \ + PERF_SAMPLE_BRANCH_COND | \ + PERF_SAMPLE_BRANCH_IND_JUMP | \ + PERF_SAMPLE_BRANCH_HW_INDEX | \ + PERF_SAMPLE_BRANCH_NO_FLAGS | \ + PERF_SAMPLE_BRANCH_NO_CYCLES | \ + PERF_SAMPLE_BRANCH_CALL_STACK | \ + PERF_SAMPLE_BRANCH_CALL | \ + PERF_SAMPLE_BRANCH_TYPE_SAVE) + +#define CTR_PERF_BRANCH_FILTERS (CTR_ALLOWED_BRANCH_FILTERS | \ + CTR_EXCLUDE_BRANCH_FILTERS) + +static u64 allowed_filters __read_mostly; + +struct ctr_regset { + unsigned long src; + unsigned long target; + unsigned long ctr_data; +}; + +enum { + CTR_STATE_NONE, + CTR_STATE_VALID, +}; + +/* Head is the idx of the next available slot. The slot may be already pop= ulated + * by an old entry which will be lost on new writes. + */ +struct riscv_perf_task_context { + int callstack_users; + int stack_state; + unsigned int num_entries; + uint32_t ctr_status; + uint64_t ctr_control; + struct ctr_regset store[MAX_BRANCH_RECORDS]; +}; + +static inline u64 get_ctr_src_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_src_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_tgt_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_tgt_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_data_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_data_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline bool ctr_record_valid(u64 ctr_src) +{ + return !!FIELD_GET(CTRSOURCE_VALID, ctr_src); +} + +static inline int ctr_get_mispredict(u64 ctr_target) +{ + return FIELD_GET(CTRTARGET_MISP, ctr_target); +} + +static inline unsigned int ctr_get_cycles(u64 ctr_data) +{ + const unsigned int cce =3D FIELD_GET(CTRDATA_CCE_MASK, ctr_data); + const unsigned int ccm =3D FIELD_GET(CTRDATA_CCM_MASK, ctr_data); + + if (ctr_data & CTRDATA_CCV) + return 0; + + /* Formula to calculate cycles from spec: (2^12 + CCM) << CCE-1 */ + if (cce > 0) + return (4096 + ccm) << (cce - 1); + + return FIELD_GET(CTRDATA_CCM_MASK, ctr_data); +} + +static inline unsigned int ctr_get_type(u64 ctr_data) +{ + return FIELD_GET(CTRDATA_TYPE_MASK, ctr_data); +} + +static inline unsigned int ctr_get_depth(u64 ctr_depth) +{ + /* Depth table from CTR Spec: 2.4 sctrdepth. + * + * sctrdepth.depth Depth + * 000 - 16 + * 001 - 32 + * 010 - 64 + * 011 - 128 + * 100 - 256 + * + * Depth =3D 16 * 2 ^ (ctrdepth.depth) + * or + * Depth =3D 16 << ctrdepth.depth. + */ + return 16 << FIELD_GET(SCTRDEPTH_MASK, ctr_depth); +} + +static inline struct riscv_perf_task_context *task_context(void *ctx) +{ + return (struct riscv_perf_task_context *)ctx; +} + +/* Reads CTR entry at idx and stores it in entry struct. 
*/ +static bool get_ctr_regset(struct ctr_regset *entry, unsigned int idx) +{ + entry->src =3D get_ctr_src_reg(idx); + + if (!ctr_record_valid(entry->src)) + return false; + + entry->src =3D entry->src; + entry->target =3D get_ctr_tgt_reg(idx); + entry->ctr_data =3D get_ctr_data_reg(idx); + + return true; +} + +static void set_ctr_regset(struct ctr_regset *entry, unsigned int idx) +{ + set_ctr_src_reg(idx, entry->src); + set_ctr_tgt_reg(idx, entry->target); + set_ctr_data_reg(idx, entry->ctr_data); +} + +static u64 branch_type_to_ctr(int branch_type) +{ + u64 config =3D CTR_BRANCH_FILTERS_INH | CTRCTL_LCOFIFRZ; + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + config |=3D CTRCTL_U_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + config |=3D CTRCTL_KERNEL_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_HV) { + if (riscv_isa_extension_available(NULL, h)) + config |=3D CTRCTL_KERNEL_ENABLE; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + config &=3D ~CTR_BRANCH_FILTERS_INH; + return config; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) { + config &=3D ~CTRCTL_INDCALL_INH; + config &=3D ~CTRCTL_DIRCALL_INH; + config &=3D ~CTRCTL_EXCINH; + config &=3D ~CTRCTL_INTRINH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + config &=3D ~(CTRCTL_RET_INH | CTRCTL_TRETINH); + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + config &=3D ~CTRCTL_INDCALL_INH; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + config &=3D ~CTRCTL_TKBRINH; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL_STACK) + config |=3D CTRCTL_RASEMU; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) { + config &=3D ~CTRCTL_INDJUMP_INH; + config &=3D ~CTRCTL_INDOJUMP_INH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + config &=3D ~CTRCTL_DIRCALL_INH; + + return config; +} + +static const int ctr_perf_map[] =3D { + [CTRDATA_TYPE_NONE] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_EXCEPTION] =3D PERF_BR_SYSCALL, + [CTRDATA_TYPE_INTERRUPT] =3D PERF_BR_IRQ, + [CTRDATA_TYPE_TRAP_RET] =3D PERF_BR_ERET, + [CTRDATA_TYPE_NONTAKEN_BRANCH] =3D PERF_BR_COND, + [CTRDATA_TYPE_TAKEN_BRANCH] =3D PERF_BR_COND, + [CTRDATA_TYPE_RESERVED_6] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RESERVED_7] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_INDIRECT_CALL] =3D PERF_BR_IND_CALL, + [CTRDATA_TYPE_DIRECT_CALL] =3D PERF_BR_CALL, + [CTRDATA_TYPE_INDIRECT_JUMP] =3D PERF_BR_IND, + [CTRDATA_TYPE_DIRECT_JUMP] =3D PERF_BR_UNCOND, + [CTRDATA_TYPE_CO_ROUTINE_SWAP] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RETURN] =3D PERF_BR_RET, + [CTRDATA_TYPE_OTHER_INDIRECT_JUMP] =3D PERF_BR_IND, + [CTRDATA_TYPE_OTHER_DIRECT_JUMP] =3D PERF_BR_UNCOND, +}; + +static void ctr_set_perf_entry_type(struct perf_branch_entry *entry, + u64 ctr_data) +{ + int ctr_type =3D ctr_get_type(ctr_data); + + entry->type =3D ctr_perf_map[ctr_type]; + if (entry->type =3D=3D PERF_BR_UNKNOWN) + pr_warn("%d - unknown branch type captured\n", ctr_type); +} + +static void capture_ctr_flags(struct perf_branch_entry *entry, + struct perf_event *event, u64 ctr_data, + u64 ctr_target) +{ + if (branch_sample_type(event)) + ctr_set_perf_entry_type(entry, ctr_data); + + if (!branch_sample_no_cycles(event)) + entry->cycles =3D ctr_get_cycles(ctr_data); + + if (!branch_sample_no_flags(event)) { + entry->abort =3D 0; + entry->mispred =3D ctr_get_mispredict(ctr_target); + entry->predicted =3D !entry->mispred; + } + + if (branch_sample_priv(event)) + entry->priv =3D PERF_BR_PRIV_UNKNOWN; +} + +static void ctr_regset_to_branch_entry(struct cpu_hw_events *cpuc, + struct perf_event *event, + struct ctr_regset 
*regset, + unsigned int idx) +{ + struct perf_branch_entry *entry =3D &cpuc->branches->branch_entries[idx]; + + perf_clear_branch_entry_bitfields(entry); + entry->from =3D regset->src & (~CTRSOURCE_VALID); + entry->to =3D regset->target & (~CTRTARGET_MISP); + capture_ctr_flags(entry, event, regset->ctr_data, regset->target); +} + +static void ctr_read_entries(struct cpu_hw_events *cpuc, + struct perf_event *event, + unsigned int depth) +{ + struct ctr_regset entry =3D {}; + u64 ctr_ctl; + int i; + + ctr_ctl =3D csr_read_clear(CSR_CTRCTL, CTR_BRANCH_ENABLE_BITS); + + for (i =3D 0; i < depth; i++) { + if (!get_ctr_regset(&entry, i)) + break; + + ctr_regset_to_branch_entry(cpuc, event, &entry, i); + } + + csr_set(CSR_CTRCTL, ctr_ctl & CTR_BRANCH_ENABLE_BITS); + + cpuc->branches->branch_stack.nr =3D i; + cpuc->branches->branch_stack.hw_idx =3D 0; +} + +bool riscv_pmu_ctr_valid(struct perf_event *event) +{ + u64 branch_type =3D event->attr.branch_sample_type; + + if (branch_type & ~allowed_filters) { + pr_debug_once("Requested branch filters not supported 0x%llx\n", + branch_type & ~allowed_filters); + return false; + } + + return true; +} + +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *= event) +{ + unsigned int depth =3D to_riscv_pmu(event->pmu)->ctr_depth; + + ctr_read_entries(cpuc, event, depth); + + /* Clear frozen bit. */ + csr_clear(CSR_SCTRSTATUS, SCTRSTATUS_FROZEN); +} + +static void riscv_pmu_ctr_reset(void) +{ + /* FIXME: Replace with sctrclr instruction once support is merged + * into toolchain. + */ + asm volatile(".4byte 0x10400073\n" ::: "memory"); + csr_write(CSR_SCTRSTATUS, 0); + csr_write(CSR_CTRCTL, 0); +} + +static void __riscv_pmu_ctr_restore(void *ctx) +{ + struct riscv_perf_task_context *task_ctx =3D ctx; + unsigned int i; + + csr_write(CSR_SCTRSTATUS, task_ctx->ctr_status); + + for (i =3D 0; i < task_ctx->num_entries; i++) + set_ctr_regset(&task_ctx->store[i], i); +} + +static void riscv_pmu_ctr_restore(void *ctx) +{ + if (task_context(ctx)->callstack_users =3D=3D 0 || + task_context(ctx)->stack_state =3D=3D CTR_STATE_NONE) { + riscv_pmu_ctr_reset(); + return; + } + + __riscv_pmu_ctr_restore(ctx); + + task_context(ctx)->stack_state =3D CTR_STATE_NONE; +} + +static void __riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + struct riscv_perf_task_context *task_ctx =3D ctx; + struct ctr_regset *dst; + unsigned int i; + + for (i =3D 0; i < depth; i++) { + dst =3D &task_ctx->store[i]; + if (!get_ctr_regset(dst, i)) + break; + } + + task_ctx->num_entries =3D i; + + task_ctx->ctr_status =3D csr_read(CSR_SCTRSTATUS); +} + +static void riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + if (task_context(ctx)->callstack_users =3D=3D 0) { + task_context(ctx)->stack_state =3D CTR_STATE_NONE; + return; + } + + __riscv_pmu_ctr_save(ctx, depth); + + task_context(ctx)->stack_state =3D CTR_STATE_VALID; +} + +/* + * On context switch in, we need to make sure no samples from previous tas= ks + * are left in the CTR. + * + * On ctxswin, sched_in =3D true, called after the PMU has started + * On ctxswout, sched_in =3D false, called before the PMU is stopped + */ +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *rvpmu =3D to_riscv_pmu(pmu_ctx->pmu); + struct cpu_hw_events *cpuc =3D this_cpu_ptr(rvpmu->hw_events); + void *task_ctx; + + if (!cpuc->ctr_users) + return; + + /* Save branch records in task_ctx on sched out */ + task_ctx =3D pmu_ctx ? 
pmu_ctx->task_ctx_data : NULL; + if (task_ctx) { + if (sched_in) + riscv_pmu_ctr_restore(task_ctx); + else + riscv_pmu_ctr_save(task_ctx, rvpmu->ctr_depth); + return; + } + + /* Reset branch records on sched in */ + if (sched_in) + riscv_pmu_ctr_reset(); +} + +static inline bool branch_user_callstack(unsigned int br_type) +{ + return (br_type & PERF_SAMPLE_BRANCH_USER) && + (br_type & PERF_SAMPLE_BRANCH_CALL_STACK); +} + +void riscv_pmu_ctr_add(struct perf_event *event) +{ + struct riscv_pmu *rvpmu =3D to_riscv_pmu(event->pmu); + struct cpu_hw_events *cpuc =3D this_cpu_ptr(rvpmu->hw_events); + + if (branch_user_callstack(event->attr.branch_sample_type) && + event->pmu_ctx->task_ctx_data) + task_context(event->pmu_ctx->task_ctx_data)->callstack_users++; + + perf_sched_cb_inc(event->pmu); + + if (!cpuc->ctr_users++) + riscv_pmu_ctr_reset(); +} + +void riscv_pmu_ctr_del(struct perf_event *event) +{ + struct riscv_pmu *rvpmu =3D to_riscv_pmu(event->pmu); + struct cpu_hw_events *cpuc =3D this_cpu_ptr(rvpmu->hw_events); + + if (branch_user_callstack(event->attr.branch_sample_type) && + event->pmu_ctx->task_ctx_data) + task_context(event->pmu_ctx->task_ctx_data)->callstack_users--; + + cpuc->ctr_users--; + WARN_ON_ONCE(cpuc->ctr_users < 0); + + perf_sched_cb_dec(event->pmu); +} + +void riscv_pmu_ctr_enable(struct perf_event *event) +{ + u64 branch_type =3D event->attr.branch_sample_type; + u64 ctr; + + ctr =3D branch_type_to_ctr(branch_type); + csr_write(CSR_CTRCTL, ctr); +} + +void riscv_pmu_ctr_disable(struct perf_event *event) +{ + /* Clear CTRCTL to disable the recording. */ + csr_write(CSR_CTRCTL, 0); +} + +/* + * Check for hardware supported perf filters here. To avoid missing + * any new added filter in perf, we do a BUILD_BUG_ON check, so make sure + * to update CTR_ALLOWED_BRANCH_FILTERS or CTR_EXCLUDE_BRANCH_FILTERS + * defines when adding support for it in below function. + */ +static void __init check_available_filters(void) +{ + u64 ctr_ctl; + + /* + * Ensure both perf branch filter allowed and exclude + * masks are always in sync with the generic perf ABI. + */ + BUILD_BUG_ON(CTR_PERF_BRANCH_FILTERS !=3D (PERF_SAMPLE_BRANCH_MAX - 1)); + + allowed_filters =3D PERF_SAMPLE_BRANCH_USER | + PERF_SAMPLE_BRANCH_KERNEL | + PERF_SAMPLE_BRANCH_ANY | + PERF_SAMPLE_BRANCH_HW_INDEX | + PERF_SAMPLE_BRANCH_NO_FLAGS | + PERF_SAMPLE_BRANCH_NO_CYCLES | + PERF_SAMPLE_BRANCH_TYPE_SAVE; + + csr_write(CSR_CTRCTL, ~0); + ctr_ctl =3D csr_read(CSR_CTRCTL); + + if (riscv_isa_extension_available(NULL, h)) + allowed_filters |=3D PERF_SAMPLE_BRANCH_HV; + + if (ctr_ctl & (CTRCTL_INDCALL_INH | CTRCTL_DIRCALL_INH)) + allowed_filters |=3D PERF_SAMPLE_BRANCH_ANY_CALL; + + if (ctr_ctl & (CTRCTL_RET_INH | CTRCTL_TRETINH)) + allowed_filters |=3D PERF_SAMPLE_BRANCH_ANY_RETURN; + + if (ctr_ctl & CTRCTL_INDCALL_INH) + allowed_filters |=3D PERF_SAMPLE_BRANCH_IND_CALL; + + if (ctr_ctl & CTRCTL_TKBRINH) + allowed_filters |=3D PERF_SAMPLE_BRANCH_COND; + + if (ctr_ctl & CTRCTL_RASEMU) + allowed_filters |=3D PERF_SAMPLE_BRANCH_CALL_STACK; + + if (ctr_ctl & (CTRCTL_INDOJUMP_INH | CTRCTL_INDJUMP_INH)) + allowed_filters |=3D PERF_SAMPLE_BRANCH_IND_JUMP; + + if (ctr_ctl & CTRCTL_DIRCALL_INH) + allowed_filters |=3D PERF_SAMPLE_BRANCH_CALL; +} + +void riscv_pmu_ctr_starting_cpu(void) +{ + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return; + + /* Set depth to maximum. 
*/ + csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK); +} + +void riscv_pmu_ctr_dying_cpu(void) +{ + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return; + + /* Clear and reset CTR CSRs. */ + csr_write(CSR_SCTRDEPTH, 0); + riscv_pmu_ctr_reset(); +} + +int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) +{ + size_t size =3D sizeof(struct riscv_perf_task_context); + + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return 0; + + riscv_pmu->pmu.task_ctx_cache =3D + kmem_cache_create("ctr_task_ctx", size, sizeof(u64), 0, NULL); + if (!riscv_pmu->pmu.task_ctx_cache) + return -ENOMEM; + + check_available_filters(); + + /* Set depth to maximum. */ + csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK); + riscv_pmu->ctr_depth =3D ctr_get_depth(csr_read(CSR_SCTRDEPTH)); + + pr_info("Perf CTR available, with %d depth\n", riscv_pmu->ctr_depth); + + return 0; +} + +void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) +{ + if (!riscv_pmu_ctr_supported(riscv_pmu)) + return; + + csr_write(CSR_SCTRDEPTH, 0); + riscv_pmu->ctr_depth =3D 0; + riscv_pmu_ctr_reset(); + + kmem_cache_destroy(riscv_pmu->pmu.task_ctx_cache); +} diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index 883781f12ae0..f32b6dcc3491 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -127,6 +127,43 @@ struct riscv_pmu *riscv_pmu_alloc(void); int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr); #endif =20 +static inline bool riscv_pmu_ctr_supported(struct riscv_pmu *pmu) +{ + return !!pmu->ctr_depth; +} + #endif /* CONFIG_RISCV_PMU_COMMON */ =20 +#ifdef CONFIG_RISCV_CTR + +bool riscv_pmu_ctr_valid(struct perf_event *event); +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *= event); +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool= sched_in); +void riscv_pmu_ctr_add(struct perf_event *event); +void riscv_pmu_ctr_del(struct perf_event *event); +void riscv_pmu_ctr_enable(struct perf_event *event); +void riscv_pmu_ctr_disable(struct perf_event *event); +void riscv_pmu_ctr_dying_cpu(void); +void riscv_pmu_ctr_starting_cpu(void); +int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu); +void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu); + +#else + +static inline bool riscv_pmu_ctr_valid(struct perf_event *event) { return = false; } +static inline void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, + struct perf_event *event) { } +static inline void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context = *, + bool sched_in) { } +static void riscv_pmu_ctr_add(struct perf_event *event) { } +static void riscv_pmu_ctr_del(struct perf_event *event) { } +static inline void riscv_pmu_ctr_enable(struct perf_event *event) { } +static inline void riscv_pmu_ctr_disable(struct perf_event *event) { } +static inline void riscv_pmu_ctr_dying_cpu(void) { } +static inline void riscv_pmu_ctr_starting_cpu(void) { } +static inline int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) { return= 0; } +static inline void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) { } + +#endif /* CONFIG_RISCV_CTR */ + #endif /* _RISCV_PMU_H */ --=20 2.34.1 From nobody Sun Feb 8 18:56:34 2026 Received: from mail-wr1-f51.google.com (mail-wr1-f51.google.com [209.85.221.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client 
certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 31945243870 for ; Thu, 16 Jan 2025 23:10:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737069039; cv=none; b=Y/JSYNIVaQMYA8Ay3d/BggjgTLMPfWup2ZJOYx257HODfsskbCqKYq94FeHNM3e4Bpwg9hBpJIRp1U997ktN3PR5u09mYu4/cfCV89wTjGgaJdjZKnC09YfLLGuZlccar14XyzbKCr7MtViRHf/Npves5d0sUskXMqiu5AS+G+A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737069039; c=relaxed/simple; bh=en3osisonvml/lgF+LtGxrhewyYz8KPgAh9/6KZlsO8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=gth4Z/QWqy+nJKgPpesUNRVo7sVaLTGj8xM6dq3jubsXbLEpfethNrQGqZ+buuST1XQFHAymPNAegEFPD1KicjcmF9gUiZiwQjxZuIDXbrnzIPPFqopKriOwmNybAdDwOxnesUdJSGn9kCd+MQTThEbrxWmXtu3sy5RuSg6xh54= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=VUoAzm2Z; arc=none smtp.client-ip=209.85.221.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="VUoAzm2Z" Received: by mail-wr1-f51.google.com with SMTP id ffacd0b85a97d-38a8b35e168so1049906f8f.1 for ; Thu, 16 Jan 2025 15:10:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1737069035; x=1737673835; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JEkq9sMQTY0nBcBSdfXe/oLw7i/Fvb8myFij4FR9Lmw=; b=VUoAzm2ZPy38FIWPeaUDpKC+1ZHxetZDnaegrk+X2uBBcrlykEurb9fZrtRAaH1/uW k93TnBFLucuoEBw/syfclvCGGWozMokAiJJ8DMWjZ43N34W+IG92whowbTpZaJwh7NYo Zy07trwIABOj0n5GnbjxyhqRzdnxt6timXFPX2Fs0tfplJCZ+qadOyva0/s9qwfSGT3R kOuz94LfqasSP4prLl57kmU9IyOYv0Hbs9X6HqihP8Xgt1oPi4FrI3ZJ3wNnqbLsLJYZ m5bXhcLDXpVuB8E4IEP8lptYZ4iTcThBHDnppUGxlRNw7a1EhQEDRDaTJINPgcABHALM tFUQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1737069035; x=1737673835; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JEkq9sMQTY0nBcBSdfXe/oLw7i/Fvb8myFij4FR9Lmw=; b=DS4OkoAlyho6Fp7lj3voL9E90/lrEHsSNhfP0whTtkBzAsDpRsA6jlYhOAmO2q87my jDdiHrf9CnpU5n9UwlPzFFogQpITW468fH+iGpIPLFrHkdslpPr6/ywMxbr47I6Z6Ps5 DKBTmnEJCLZkYEp+G+fpCeS2BtZLigEJ/DODxZXoe6Ok6g58rdOzwrwwbv5dwgUZTjae 70mzRgpgZ/Cy/8vvyQAdBxLWY7xoarwp4wrrnXawG5Dzlc8bP5dvqyPmR4yURKDUDhuc jMYJotCJ2X6teOAt4rK6PrKlyqLTdCnP0vL3vWl42W2h8LQ9B13YGnUjawkSkOtj6Pg4 xhNA== X-Gm-Message-State: AOJu0YwP2cB6p9YWM1KTnnbIrl9MIASbCBh+N36lAB6GJcL7tSuV8VFH 63RloqoxgbKldTW9X933s/NVWNk2H2JdOwNBNoX1GOpL+pZIcDf/R1ALWcKZAKMzhaMGGzoVUQ6 E X-Gm-Gg: ASbGncuYUQaPLpPihOYC15hfnvW677GhHd1kdLnRfYVeeIe838Y9Rsi5jPVtkiXIEBZ ozLcmDJCoB4OwRqY1np7R70CyprUUP88U4K3cyRsJ/TOjYQf8GOCgE4rY7OYtgqZWULNZJsIA2d 22039RMXl6oHXM56AcAizMqNdUVH4KBiWbZCrQC5N1pEe4Ra+45aK298lTd1odzQB8ESgWcFOJk 
GXYyStBWvyGGLHchMcdKqeqJuY7yXh+w9zlxaiwwYeg9a7dM9jmSmr8MmgjjHoMKni2R6m96OxA NhmrVR05u347sQPy X-Google-Smtp-Source: AGHT+IGuu6zWay2mqSvZNm4qasvF1tOh4RKXx4XbpxtExBqMV1QOAm0F3fvLQN9z0J/WkxXd0ISjQg== X-Received: by 2002:a05:6000:1548:b0:385:e328:8908 with SMTP id ffacd0b85a97d-38bf5b0b67bmr204983f8f.29.1737069035166; Thu, 16 Jan 2025 15:10:35 -0800 (PST) Received: from rkanwal-XPS-15-9520.uk.rivosinc.com ([2a02:c7c:75ac:6300:b3f2:3a24:1767:7db0]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-38bf322b337sm974991f8f.59.2025.01.16.15.10.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 Jan 2025 15:10:34 -0800 (PST) From: Rajnesh Kanwal To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal Subject: [PATCH v2 7/7] riscv: pmu: Integrate CTR Ext support in riscv_pmu_dev driver Date: Thu, 16 Jan 2025 23:09:55 +0000 Message-Id: <20250116230955.867152-8-rkanwal@rivosinc.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com> References: <20250116230955.867152-1-rkanwal@rivosinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This integrates recently added CTR ext support in riscv_pmu_dev driver to enable branch stack sampling using PMU events. This mainly adds CTR enable/disable callbacks in rvpmu_ctr_stop() and rvpmu_ctr_start() function to start/stop branch recording along with the event. PMU overflow handler rvpmu_ovf_handler() is also updated to sample CTR entries in case of the overflow for the particular event programmed to records branches. The recorded entries are fed to core perf for further processing. Signed-off-by: Rajnesh Kanwal --- drivers/perf/riscv_pmu_common.c | 3 +- drivers/perf/riscv_pmu_dev.c | 67 ++++++++++++++++++++++++++++++++- 2 files changed, 67 insertions(+), 3 deletions(-) diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_commo= n.c index c4c4b5d6bed0..23077a6c4931 100644 --- a/drivers/perf/riscv_pmu_common.c +++ b/drivers/perf/riscv_pmu_common.c @@ -327,8 +327,7 @@ static int riscv_pmu_event_init(struct perf_event *even= t) u64 event_config =3D 0; uint64_t cmask; =20 - /* driver does not support branch stack sampling */ - if (has_branch_stack(event)) + if (needs_branch_stack(event) && !riscv_pmu_ctr_supported(rvpmu)) return -EOPNOTSUPP; =20 hwc->flags =3D 0; diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c index b9b257607b76..10697deb1d26 100644 --- a/drivers/perf/riscv_pmu_dev.c +++ b/drivers/perf/riscv_pmu_dev.c @@ -1030,7 +1030,7 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *eve= nt, unsigned long flag) static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) { - /* Call CTR specific Sched hook. 
*/ + riscv_pmu_ctr_sched_task(pmu_ctx, sched_in); } =20 static int rvpmu_sbi_find_num_ctrs(void) @@ -1379,6 +1379,13 @@ static irqreturn_t rvpmu_ovf_handler(int irq, void *= dev) hw_evt->state |=3D PERF_HES_UPTODATE; perf_sample_data_init(&data, 0, hw_evt->last_period); if (riscv_pmu_event_set_period(event)) { + if (needs_branch_stack(event)) { + riscv_pmu_ctr_consume(cpu_hw_evt, event); + perf_sample_save_brstack( + &data, event, + &cpu_hw_evt->branches->branch_stack, NULL); + } + /* * Unlike other ISAs, RISC-V don't have to disable interrupts * to avoid throttling here. As per the specification, the @@ -1577,10 +1584,14 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_even= t *event) =20 static void rvpmu_ctr_add(struct perf_event *event, int flags) { + if (needs_branch_stack(event)) + riscv_pmu_ctr_add(event); } =20 static void rvpmu_ctr_del(struct perf_event *event, int flags) { + if (needs_branch_stack(event)) + riscv_pmu_ctr_del(event); } =20 static void rvpmu_ctr_start(struct perf_event *event, u64 ival) @@ -1595,6 +1606,9 @@ static void rvpmu_ctr_start(struct perf_event *event,= u64 ival) if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) && (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT)) rvpmu_set_scounteren((void *)event); + + if (needs_branch_stack(event)) + riscv_pmu_ctr_enable(event); } =20 static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag) @@ -1617,6 +1631,9 @@ static void rvpmu_ctr_stop(struct perf_event *event, = unsigned long flag) } else { rvpmu_sbi_ctr_stop(event, flag); } + + if (needs_branch_stack(event) && flag !=3D RISCV_PMU_STOP_FLAG_RESET) + riscv_pmu_ctr_disable(event); } =20 static int rvpmu_find_ctrs(void) @@ -1652,6 +1669,9 @@ static int rvpmu_event_map(struct perf_event *event, = u64 *econfig) { u64 config1; =20 + if (needs_branch_stack(event) && !riscv_pmu_ctr_valid(event)) + return -EOPNOTSUPP; + config1 =3D event->attr.config1; if (riscv_pmu_cdeleg_available() && !pmu_sbi_is_fw_event(event) && !(config1 & RISCV_PMU_CONFIG1_GUEST_EVENTS)) { /* GUEST events rely o= n SBI encoding */ @@ -1701,6 +1721,8 @@ static int rvpmu_starting_cpu(unsigned int cpu, struc= t hlist_node *node) enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE); } =20 + riscv_pmu_ctr_starting_cpu(); + if (sbi_pmu_snapshot_available()) return pmu_sbi_snapshot_setup(pmu, cpu); =20 @@ -1715,6 +1737,7 @@ static int rvpmu_dying_cpu(unsigned int cpu, struct h= list_node *node) =20 /* Disable all counters access for user mode now */ csr_write(CSR_SCOUNTEREN, 0x0); + riscv_pmu_ctr_dying_cpu(); =20 if (sbi_pmu_snapshot_available()) return pmu_sbi_snapshot_disable(); @@ -1838,6 +1861,29 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu) cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); } =20 +static int branch_records_alloc(struct riscv_pmu *pmu) +{ + struct branch_records __percpu *tmp_alloc_ptr; + struct branch_records *records; + struct cpu_hw_events *events; + int cpu; + + if (!riscv_pmu_ctr_supported(pmu)) + return 0; + + tmp_alloc_ptr =3D alloc_percpu_gfp(struct branch_records, GFP_KERNEL); + if (!tmp_alloc_ptr) + return -ENOMEM; + + for_each_possible_cpu(cpu) { + events =3D per_cpu_ptr(pmu->hw_events, cpu); + records =3D per_cpu_ptr(tmp_alloc_ptr, cpu); + events->branches =3D records; + } + + return 0; +} + static void rvpmu_event_init(struct perf_event *event) { /* @@ -1850,6 +1896,9 @@ static void rvpmu_event_init(struct perf_event *event) event->hw.flags |=3D PERF_EVENT_FLAG_USER_ACCESS; else event->hw.flags |=3D PERF_EVENT_FLAG_LEGACY; + + if 
(branch_sample_call_stack(event)) + event->attach_state |=3D PERF_ATTACH_TASK_DATA; } =20 static void rvpmu_event_mapped(struct perf_event *event, struct mm_struct = *mm) @@ -1997,6 +2046,15 @@ static int rvpmu_device_probe(struct platform_device= *pdev) pmu->pmu.attr_groups =3D riscv_cdeleg_pmu_attr_groups; else pmu->pmu.attr_groups =3D riscv_sbi_pmu_attr_groups; + + ret =3D riscv_pmu_ctr_init(pmu); + if (ret) + goto out_free; + + ret =3D branch_records_alloc(pmu); + if (ret) + goto out_ctr_finish; + pmu->cmask =3D cmask; pmu->ctr_add =3D rvpmu_ctr_add; pmu->ctr_del =3D rvpmu_ctr_del; @@ -2013,6 +2071,10 @@ static int rvpmu_device_probe(struct platform_device= *pdev) pmu->csr_index =3D rvpmu_csr_index; pmu->sched_task =3D pmu_sched_task; =20 + ret =3D cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node= ); + if (ret) + goto out_ctr_finish; + ret =3D riscv_pm_pmu_register(pmu); if (ret) goto out_unregister; @@ -2062,6 +2124,9 @@ static int rvpmu_device_probe(struct platform_device = *pdev) out_unregister: riscv_pmu_destroy(pmu); =20 +out_ctr_finish: + riscv_pmu_ctr_finish(pmu); + out_free: kfree(pmu); return ret; --=20 2.34.1
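
The two small formulas quoted in the CTR driver comments above, the cycle-count reconstruction in ctr_get_cycles() and the depth decode in ctr_get_depth(), are easy to sanity-check outside the kernel. The standalone C sketch below mirrors only that arithmetic; the DEMO_* bit positions are made-up placeholders for illustration, not the ctrdata layout defined by the CTR specification.

#include <stdint.h>
#include <stdio.h>

#define DEMO_CCV       (1ULL << 63)  /* placeholder position for the CCV flag   */
#define DEMO_CCE_SHIFT 12            /* placeholder offset of the exponent field */
#define DEMO_CCE_MASK  0x7ULL
#define DEMO_CCM_MASK  0xfffULL      /* placeholder 12-bit mantissa field        */

/* cycles = CCM if CCE == 0, else (2^12 + CCM) << (CCE - 1); 0 when CCV is set */
static unsigned int demo_ctr_cycles(uint64_t ctr_data)
{
	unsigned int cce = (ctr_data >> DEMO_CCE_SHIFT) & DEMO_CCE_MASK;
	unsigned int ccm = ctr_data & DEMO_CCM_MASK;

	if (ctr_data & DEMO_CCV)
		return 0;

	return cce ? (4096 + ccm) << (cce - 1) : ccm;
}

int main(void)
{
	/* CCE = 3, CCM = 100  ->  (4096 + 100) << 2 = 16784 cycles */
	uint64_t sample = (3ULL << DEMO_CCE_SHIFT) | 100;

	printf("cycles = %u\n", demo_ctr_cycles(sample));

	/* sctrdepth.depth table from the driver comment: depth = 16 << value */
	for (unsigned int d = 0; d <= 4; d++)
		printf("sctrdepth.depth=%u -> %u entries\n", d, 16u << d);

	return 0;
}

The loop at the end also shows that the largest encodable sctrdepth value (4) yields 16 << 4 = 256 records, which is the maximum the init path requests by writing SCTRDEPTH_MASK (hardware may clamp the written value to the depth it actually implements, which is why the driver reads CSR_SCTRDEPTH back).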
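
For completeness, here is a minimal user-space sketch of the kind of request these patches service. It is not part of the series; the event choice and sample period are arbitrary, and only the generic perf_event_open() ABI is assumed. Setting PERF_SAMPLE_BRANCH_STACK is what makes needs_branch_stack() true in rvpmu_event_map(), and the branch_sample_type bits are the ones riscv_pmu_ctr_valid() checks against allowed_filters before branch_type_to_ctr() programs CTRCTL.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	struct perf_event_attr attr;
	long fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;            /* arbitrary period */
	attr.disabled = 1;
	/* Request a branch stack with every sample... */
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	/* ...using filters the CTR driver advertises as supported. */
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
				  PERF_SAMPLE_BRANCH_USER;

	fd = syscall(__NR_perf_event_open, &attr, 0 /* this task */,
		     -1 /* any cpu */, -1 /* no group */, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	printf("branch-stack sampling event opened, fd = %ld\n", fd);
	close(fd);
	return 0;
}

The roughly equivalent perf tool invocation is "perf record -b" (or "perf record -j any,u"), after which "perf report" or "perf script -F +brstacksym" consume the records that riscv_pmu_ctr_consume() hands to perf_sample_save_brstack() in the overflow handler.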