From nobody Mon Feb 9 11:59:13 2026 Received: from mail-wm1-f51.google.com (mail-wm1-f51.google.com [209.85.128.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 151632D321C for ; Thu, 22 May 2025 23:26:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956380; cv=none; b=p7I+EaELt1jWzEpkleCckm4xw6r1OWsksg53P2ov5QYBwPl0IgfXelF/o8greEF5NKGcKbxK3HLCmZAQMXDVVi02nfAme0CfX/wtA9wSn3VgYGjnb9VEdgruQxcDglW/2xFOhaI6VEe5mavmAFwjsJSUDzRlIuK1U/4KZiOye6w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956380; c=relaxed/simple; bh=yRzc0icmCI3Wfp8FtEoLTbRp1C8aPjfzbWi+UvR2NUk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=mYJCRLixxORfbB3+0GdsN8H9+QRsSs9/Twx1VOvh0vlkzS5akkgGLKlsAkmdmI2VCNnv2/7WUfsddvhSZeGo2FJptK7QJM/KLjwxn2OPvfg48z8zHaWFudZoM/YmkIWtW/GyAqQEO4/GkyHnmdk1zviKCBoHoWl6536dXZP0Em8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=kjYUwQQ2; arc=none smtp.client-ip=209.85.128.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="kjYUwQQ2" Received: by mail-wm1-f51.google.com with SMTP id 5b1f17b1804b1-43edecbfb94so93759255e9.1 for ; Thu, 22 May 2025 16:26:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1747956377; x=1748561177; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=dKVNMuDSduacbIBYKcZNnzCtGR0eRj8WvfLgYTzDz7M=; b=kjYUwQQ20ICED7JofCgsYJKVwkIAK+g1K/cv21Gqlg8dz3A43DDvoNoADjh16Vqx/s +g1mcOSyxKjsJOb11WY3VYZw5jAy798a3yKVDj/88103NVc4RSQVrMZY+k9Rd1ajOdCL XKNsN6xUHgCc3WnXguz9UCsDpXmqC6rHOwpB4n92aJwlBgaVtubRYT2AYwxl47/3JFEa FjhOBZfpBQz+mmRoDQiAMpFfOj+7RT4Wf0pR2NLpFGXmASyXVV12MeYxbLVzoROcoHik W9e8N+mhsgnWk5Y3wX0bzHBLRJmHraLL9pxvdmMs2RugNAcSQuLhTO0z1spyXjh6b8j2 oaYQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747956377; x=1748561177; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dKVNMuDSduacbIBYKcZNnzCtGR0eRj8WvfLgYTzDz7M=; b=QJLa+o/KYsHeBEgP3UCz+rcKhQ/TNVDB+H7N66LTJ1PHwAh8FWM7WB7yHYomv74BlK MGn9EWTgBFLQLuwz3eE/3gHzw71u8RN/qSCFoJDzFC/nQGQ2kgbYJi1PkgmA65vzbPgS ozddOPGhXWZ23opA4RcFIRTkb6jxMjzbKar4OnJzjg+3Mh+vH2BqKjXhbASHRSkXuttx 1qXDqcWNeH/PQpZFlCGlfCPuBnI3rtrJpr0VkPr+BsjDb8rR7JNw/qpl/40ToO594nfY lBFHXdkcvoAKGFVK+Uvb61YynfgggHIAmeqQgbDxCDU5sWgOJwb6RXbzfnhrrmIs/dkH 95yQ== X-Forwarded-Encrypted: i=1; AJvYcCWG8xaXS4hgN5Xa/Ac79rJOuNNn5lYuRr35Vwh7gaLL8aZgPHzVqevKOFPofKU8lmyp2DmyiVyImr8Xn+E=@vger.kernel.org X-Gm-Message-State: AOJu0YwtWBNKvdEK7JfwRhHuTHqzn2ZsFq7L0Eo1Hj+GkMFz1CbXfVN/ 
2jJylOs6RO5V9KWmuiLkOKq+I24BzYZDaCeU7bXQooH5kKXRroHTgNm/K0GbtO7B9BA= X-Gm-Gg: ASbGncvxdL57jXGnPAC8+la74GSC0p8DZcj06HWvvBKZ/+B3tKqp6tDraJhxXjSlNe+ luMvpRZUCJJtd2I23ov0+66Nh3pCr4JGGJnwAdqzwpxthID7KArdRufPovGG46LpQA8I0n5J6WR NJKA8DqyONlyKZEprt7ic9JPhu3Kli1DcFRTEOWC6g+tCiZMAKBZC+yp8yJth/iVjtfa4GLBap0 R6O2xeHjKmLLTshwTPqHUG8biAIuPBc2EFTUy6ELg0F+V8Ugyp2RjNuLwF4dpJNw6/RjtG6K+26 GyzZP2+tGlp4hAh3WY1CedbrDVdokIaqcbF0xXdKEJcZt3glueN5zg== X-Google-Smtp-Source: AGHT+IG6EHMwWLdGi5KKzk4nvM0cwE/VEL6TR7m61EKgB/eRoEgtVitn+rOSsP5hXJxsGHaL7N+kKQ== X-Received: by 2002:a05:600c:8411:b0:442:e9ec:4654 with SMTP id 5b1f17b1804b1-44b6d1d39b6mr6921755e9.8.1747956377309; Thu, 22 May 2025 16:26:17 -0700 (PDT) Received: from [127.0.1.1] ([2a02:c7c:75ac:6300:c05a:35d:17ae:e731]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-447f6f04334sm117825395e9.10.2025.05.22.16.26.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 May 2025 16:26:17 -0700 (PDT) From: Rajnesh Kanwal Date: Fri, 23 May 2025 00:25:07 +0100 Subject: [PATCH v3 1/7] perf: Increase the maximum number of branches remove_loops() can process. Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250523-b4-ctr_upstream_v3-v3-1-ad355304ba1c@rivosinc.com> References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Atish Kumar Patra , Anup Patel , Will Deacon , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Beeman Strong Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, Palmer Dabbelt , Conor Dooley , devicetree@vger.kernel.org, Rajnesh Kanwal X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747956375; l=1919; i=rkanwal@rivosinc.com; s=20250522; h=from:subject:message-id; bh=yRzc0icmCI3Wfp8FtEoLTbRp1C8aPjfzbWi+UvR2NUk=; b=eXHAPMGQUlSkoKw/ZujHfDdVrgUDThW/41UrEOojqu5cpgpKRvW3csxhNcSG5dQjunGp++UGp BxjZ3PfI7eeCjvpIQxYxdbhtwmDvg9UZA3QLufVRs12DfstYvvJmpYm X-Developer-Key: i=rkanwal@rivosinc.com; a=ed25519; pk=aw8nvncslGKHEmTBTJqvkP/4tj6pijL8fwRRym/GuS8= RISCV CTR extension supports a maximum depth of 256 last branch records. Currently remove_loops() can only process 127 entries at max. This leads to samples with more than 127 entries being skipped. This change simply updates the remove_loops() logic to be able to process 256 entries. Signed-off-by: Rajnesh Kanwal --- tools/perf/util/machine.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 2d51badfbf2e2d1588fa4fdd42ef6c8fea35bf0e..5414528b9d336790decfb42a4f6= a4da6c6b68b07 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -2176,25 +2176,32 @@ static void save_iterations(struct iterations *iter, iter->cycles +=3D be[i].flags.cycles; } =20 -#define CHASHSZ 127 -#define CHASHBITS 7 -#define NO_ENTRY 0xff +#define CHASHBITS 8 +#define NO_ENTRY 0xffU =20 -#define PERF_MAX_BRANCH_DEPTH 127 +#define PERF_MAX_BRANCH_DEPTH 256 =20 /* Remove loops. 
*/ +/* Note: Last entry (i=3D=3Dff) will never be checked against NO_ENTRY + * so it's safe to have an unsigned char array to process 256 entries + * without causing clash between last entry and NO_ENTRY value. + */ static int remove_loops(struct branch_entry *l, int nr, struct iterations *iter) { int i, j, off; - unsigned char chash[CHASHSZ]; + unsigned char chash[PERF_MAX_BRANCH_DEPTH]; =20 memset(chash, NO_ENTRY, sizeof(chash)); =20 - BUG_ON(PERF_MAX_BRANCH_DEPTH > 255); + BUG_ON(PERF_MAX_BRANCH_DEPTH > 256); =20 for (i =3D 0; i < nr; i++) { - int h =3D hash_64(l[i].from, CHASHBITS) % CHASHSZ; + /* Remainder division by PERF_MAX_BRANCH_DEPTH is not + * needed as hash_64 will anyway limit the hash + * to CHASHBITS + */ + int h =3D hash_64(l[i].from, CHASHBITS); =20 /* no collision handling for now */ if (chash[h] =3D=3D NO_ENTRY) { --=20 2.43.0 From nobody Mon Feb 9 11:59:13 2026 Received: from mail-wr1-f46.google.com (mail-wr1-f46.google.com [209.85.221.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 076272D3228 for ; Thu, 22 May 2025 23:26:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956383; cv=none; b=WV3LBVJXLhKqEkUOWOUSQF4wgi4Mqp/fUQrCyoRQfV1fuygMVw1ndn/eCnrzJvdVcXbA2wdIhf8CfdjwdHZqjS39cMBYlBwls6Y8ubhHPnE31QQbIqLZxrZ0FYQdXMr1kcSd+iNHO5mZ9gagQiNvpoX11VTDa0UACFOXgieAfHg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956383; c=relaxed/simple; bh=FOp1y09/3SytdmvNsRNI8bjrwNGWBMKFRpE30aP/AEg=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=WVoP4b8AH1CfP6pJXYSicAcTHQ9cnnkmxOrfCxxk/aNa2rgJ0ARQdI1hnNUskbldROzzXz/yf2BNkERxmnHqu5KjK7v/YQhPoxATXXZSbtAwjBHuKPyNn1C3ElkERJ3RWUFr2bVIDwzhYUI1H6xBmrTBh0xpL2gZKa8qdu66gl0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=bA0WeckT; arc=none smtp.client-ip=209.85.221.46 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="bA0WeckT" Received: by mail-wr1-f46.google.com with SMTP id ffacd0b85a97d-3a36748920cso5714977f8f.2 for ; Thu, 22 May 2025 16:26:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1747956378; x=1748561178; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=l53aiezq63VwGVulH0Yi9XQBVwtNa/T40ZWDbw9c1U8=; b=bA0WeckTYVEWROV07KpSEjHgHCe9Eb64CmQKTSE7au1aVTCdlYWjCoYVWYZtDMQaDi OawMgvVRGhOgbvVS40NDYKjp4q0JjcmHXnPMWacBV0SKr75pxXPtKafmaWPgc2M6BQcu gz6FdUmbYQKUWIrq6aFYxeinnSD5WU0AQaEQT5Rjl6LwfhGwNDDlVV9BQfqSeD+BRda9 RNpC0EE400ByWQ115BLULyZbXEzpUHWExkf2JKeUJOqgXBOYR9rH4HK+UwcPcAwtl+SG C0oYGxvat2jXdHS10CDpvBhDTEcLf+YiW2ikU2KfwNxYDycEFk2ACkD2v6xHbjtKcrMf ZtiQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747956378; x=1748561178; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=l53aiezq63VwGVulH0Yi9XQBVwtNa/T40ZWDbw9c1U8=; b=l2ZSEUezrqvfLredNCUG9DhcpQ/gl88/uuSuYwLFZTB1bio179l/foqhrdRqHEgKp0 XzBofc4nHICizgm0ODSiGiBnoW2cN8ZlsB1jbQwo+Swaz32hq0PBqA3cWZ/44J6Ssa3n r1nzoznKWl208wDp2/J934hVpyedrZX8H5Jqcr2aWGpM8s9YQbx+Q9T9j/6mqgcVKwuD awea4ZmGPZb4qHjePUq3AIxE/y7I5gFsBTIqPaLxZrWS3saMq1WQ85mJHkfsUlQtmf8B IflEb5AWzoKHhVOBfp0rilfX+oSoUKpgWCchkNISVDiQtud7mUDM8388fiKV9PW2f/8I qgJg== X-Forwarded-Encrypted: i=1; AJvYcCW65Xx9ft4QisD4VwZ9Do5IWzEtwFvpcj+NRhFJ/obTonmEW07Oh1Brm8L2nTFyCG/o8slz6KkSbl7p/iY=@vger.kernel.org X-Gm-Message-State: AOJu0Ywahs+qyIZCja4MYJ08pmCz9zpA5mg9nCQuDHwZhbTZa0lOf+Tp RMuiBhGDsQgiv7h81EdZfPvBjx6SuiM/eNNwHpdpy/n7Ne+ebkCytR747/AVnDvTVaQ= X-Gm-Gg: ASbGncvCK7Cw8go1B0jhoyfES53uSLQsaRzholdl3UK+SDRWTqaSt5i5ca9RKaCnS9z cS7uFPO2jQxQvfdrJ0cH8WoUYWnFA7bbCd8EeFBgBYkfBxZ/9NUsnGnCYNuvHmrPti36E6/u7jw 3+W0EuqMPwOw5FFTYpR7vUvxQRwHKMcW50uBxy5EWRXnk0kcZfx8kHnbcXbb1kLlTQvYqXz+iF0 0UI2nd111mAxRBYmX5kIdi8WyH75PvOLPmyZzWYH2H+cOJjIbCMi4GDNbtHJ/+SSvpE/MKxCRHb Gs+d5DCMVG2ghfmfYP9Xd9Z0PHkF8M0Lb6p6n9IPVYfvNGyXIlekrQ== X-Google-Smtp-Source: AGHT+IHAmL4SGVYuRfQdx4Te1SZB2AHBQgY26GyZwg325ZZdeZGi5gjiCaXcu6MypqA+Dk0jD2u8ww== X-Received: by 2002:a05:6000:2903:b0:3a3:6f26:5816 with SMTP id ffacd0b85a97d-3a36f265984mr15292194f8f.36.1747956378244; Thu, 22 May 2025 16:26:18 -0700 (PDT) Received: from [127.0.1.1] ([2a02:c7c:75ac:6300:c05a:35d:17ae:e731]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-447f6f04334sm117825395e9.10.2025.05.22.16.26.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 May 2025 16:26:18 -0700 (PDT) From: Rajnesh Kanwal Date: Fri, 23 May 2025 00:25:08 +0100 Subject: [PATCH v3 2/7] riscv: pmu: Add Control transfer records CSR definations. Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250523-b4-ctr_upstream_v3-v3-2-ad355304ba1c@rivosinc.com> References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Atish Kumar Patra , Anup Patel , Will Deacon , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Beeman Strong Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, Palmer Dabbelt , Conor Dooley , devicetree@vger.kernel.org, Rajnesh Kanwal X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747956375; l=3834; i=rkanwal@rivosinc.com; s=20250522; h=from:subject:message-id; bh=FOp1y09/3SytdmvNsRNI8bjrwNGWBMKFRpE30aP/AEg=; b=VAKiiJbUQk/CKnHQk537iR4eh6srfGqgxMprrnILqLhhp+3+U+RkJeqfMk302Xmwyi5fnxrYd Ogwp9C565GICK4Ub/rWcyt+4/fWPFEieJZE2YOTuQ8gYHwDi31VfK57 X-Developer-Key: i=rkanwal@rivosinc.com; a=ed25519; pk=aw8nvncslGKHEmTBTJqvkP/4tj6pijL8fwRRym/GuS8= Adding CSR defines for RISCV Control Transfer Records extension [0] along with bit-field macros for each CSR. 
[0]: https://github.com/riscv/riscv-control-transfer-records Signed-off-by: Rajnesh Kanwal --- arch/riscv/include/asm/csr.h | 83 ++++++++++++++++++++++++++++++++++++++++= ++++ 1 file changed, 83 insertions(+) diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index 8b2f5ae1d60efadbec90eab4b1a3637488a9431f..3aef621657603483e1cafd036f1= 26692a731a333 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -331,6 +331,85 @@ =20 #define CSR_SCOUNTOVF 0xda0 =20 +/* M-mode Control Transfer Records CSRs */ +#define CSR_MCTRCTL 0x34e + +/* S-mode Control Transfer Records CSRs */ +#define CSR_SCTRCTL 0x14e +#define CSR_SCTRSTATUS 0x14f +#define CSR_SCTRDEPTH 0x15f + +/* VS-mode Control Transfer Records CSRs */ +#define CSR_VSCTRCTL 0x24e + +/* xctrtl CSR bits. */ +#define CTRCTL_U_ENABLE _AC(0x1, UL) +#define CTRCTL_S_ENABLE _AC(0x2, UL) +#define CTRCTL_M_ENABLE _AC(0x4, UL) +#define CTRCTL_RASEMU _AC(0x80, UL) +#define CTRCTL_STE _AC(0x100, UL) +#define CTRCTL_MTE _AC(0x200, UL) +#define CTRCTL_BPFRZ _AC(0x800, UL) +#define CTRCTL_LCOFIFRZ _AC(0x1000, UL) +#define CTRCTL_EXCINH _AC(0x200000000, UL) +#define CTRCTL_INTRINH _AC(0x400000000, UL) +#define CTRCTL_TRETINH _AC(0x800000000, UL) +#define CTRCTL_NTBREN _AC(0x1000000000, UL) +#define CTRCTL_TKBRINH _AC(0x2000000000, UL) +#define CTRCTL_INDCALL_INH _AC(0x10000000000, UL) +#define CTRCTL_DIRCALL_INH _AC(0x20000000000, UL) +#define CTRCTL_INDJUMP_INH _AC(0x40000000000, UL) +#define CTRCTL_DIRJUMP_INH _AC(0x80000000000, UL) +#define CTRCTL_CORSWAP_INH _AC(0x100000000000, UL) +#define CTRCTL_RET_INH _AC(0x200000000000, UL) +#define CTRCTL_INDOJUMP_INH _AC(0x400000000000, UL) +#define CTRCTL_DIROJUMP_INH _AC(0x800000000000, UL) + +/* sctrstatus CSR bits. */ +#define SCTRSTATUS_WRPTR_MASK 0xFF +#define SCTRSTATUS_FROZEN _AC(0x80000000, UL) + +#ifdef CONFIG_RISCV_M_MODE +#define CTRCTL_KERNEL_ENABLE CTRCTL_M_ENABLE +#else +#define CTRCTL_KERNEL_ENABLE CTRCTL_S_ENABLE +#endif + +/* sctrdepth CSR bits. */ +#define SCTRDEPTH_MASK 0x7 + +#define SCTRDEPTH_MIN 0x0 /* 16 Entries. */ +#define SCTRDEPTH_MAX 0x4 /* 256 Entries. */ + +/* ctrsource, ctrtarget and ctrdata CSR bits. 
*/ +#define CTRSOURCE_VALID 0x1ULL +#define CTRTARGET_MISP 0x1ULL + +#define CTRDATA_TYPE_MASK 0xF +#define CTRDATA_CCV 0x8000 +#define CTRDATA_CCM_MASK 0xFFF0000 +#define CTRDATA_CCE_MASK 0xF0000000 + +#define CTRDATA_TYPE_NONE 0 +#define CTRDATA_TYPE_EXCEPTION 1 +#define CTRDATA_TYPE_INTERRUPT 2 +#define CTRDATA_TYPE_TRAP_RET 3 +#define CTRDATA_TYPE_NONTAKEN_BRANCH 4 +#define CTRDATA_TYPE_TAKEN_BRANCH 5 +#define CTRDATA_TYPE_RESERVED_6 6 +#define CTRDATA_TYPE_RESERVED_7 7 +#define CTRDATA_TYPE_INDIRECT_CALL 8 +#define CTRDATA_TYPE_DIRECT_CALL 9 +#define CTRDATA_TYPE_INDIRECT_JUMP 10 +#define CTRDATA_TYPE_DIRECT_JUMP 11 +#define CTRDATA_TYPE_CO_ROUTINE_SWAP 12 +#define CTRDATA_TYPE_RETURN 13 +#define CTRDATA_TYPE_OTHER_INDIRECT_JUMP 14 +#define CTRDATA_TYPE_OTHER_DIRECT_JUMP 15 + +#define CTR_ENTRIES_FIRST 0x200 +#define CTR_ENTRIES_LAST 0x2ff + #define CSR_SSTATUS 0x100 #define CSR_SIE 0x104 #define CSR_STVEC 0x105 @@ -523,6 +602,8 @@ # define CSR_TOPEI CSR_MTOPEI # define CSR_TOPI CSR_MTOPI =20 +# define CSR_CTRCTL CSR_MCTRCTL + # define SR_IE SR_MIE # define SR_PIE SR_MPIE # define SR_PP SR_MPP @@ -553,6 +634,8 @@ # define CSR_TOPEI CSR_STOPEI # define CSR_TOPI CSR_STOPI =20 +# define CSR_CTRCTL CSR_SCTRCTL + # define SR_IE SR_SIE # define SR_PIE SR_SPIE # define SR_PP SR_SPP --=20 2.43.0 From nobody Mon Feb 9 11:59:13 2026 Received: from mail-wm1-f45.google.com (mail-wm1-f45.google.com [209.85.128.45]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E1B142D3238 for ; Thu, 22 May 2025 23:26:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.45 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956384; cv=none; b=eLsRkr12vk7A8HoaJ1iJG9jHuP7DpCL7QbjaORnklRF6fEHBsEYzxrW4SKO2xTyJCYf3HreZbGUo7pLwp1QpX3mDdWMdaCTRMvCoVg1ijJs6cwenMmACijFBntePX09BXsgCxufVjL+Rmkz5M9ooegWK0pKqx9bb7eO9zaiKjag= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956384; c=relaxed/simple; bh=gvU0uMFD6Kovn+U10/vBvR06A23qjRw3z0F47+AnYRQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=JzsmlWxt42ETUv0owf8Gsqf72POnZJxHkNoZw/jnqr+HGSsNpf9hgCz8+XwMWKnHUhJDWYyovpsGQJ6aD+ReZtvTiKCF7XYtrnIx/CjI0N/aRvATXPhtMRHyCnoI94YIxdcXna6DBPPe4s5WQUCuxbrCS4lviGDHsEr75YMaVOY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=b63uANoc; arc=none smtp.client-ip=209.85.128.45 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="b63uANoc" Received: by mail-wm1-f45.google.com with SMTP id 5b1f17b1804b1-44b1ff82597so5900495e9.3 for ; Thu, 22 May 2025 16:26:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1747956379; x=1748561179; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id 
:reply-to; bh=H7dqPj/4OqgIxw+lFGbLNi9FTj66PFQpLla1/LzexBE=; b=b63uANocjbw2DlX6gKABD7uEtcoltWcGw0+N9eNwN4iO7rZVS2HD4tbB3AGa+T46vp c3Ky51leopP15UA8k2KML7fx0wc8eSA2SyFcJHIVZz33rwRDt9NUUVAGoaMaR18fIFgp We3e7Oo005yhV2lphcjgylsn/zhe+iq9omK+HU+58XcaS6IYJP/iR3UZgF5WZF5VFhVY bluhkSh4BgzpXtSZ7At/Q2vo7n9Krvz+fn5QTc24bJ9nwMWwzaEVkxboGDM5jm7PQho5 6Ub9ECCLqfNJCKow5YQw7xAx5+X4RGyxONQaAzG8xoeT0WtZ0CpCRjDypw3Efwo+mG9u aYWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747956379; x=1748561179; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=H7dqPj/4OqgIxw+lFGbLNi9FTj66PFQpLla1/LzexBE=; b=smK3ikqW1Ran0DWn1pdoe7HEFKJRCYaq4DJMaBSkIUFiYTRk+aZzw7XDI4u6IILb69 VESotuO7O+yEkEsfjgT9weoqs4yolFjwv4nMunKxJgaEgtw5Ufh4sY9yIYt49xzUdie9 OLN/S9t+hwCX9gx84izTSk4B+sy5lUss9Ehh2AK88bYcApI9yG8910MuKJyRAMsayj6t yuS2DnlWHChbaa8Be9yBeHD7n19AZU2dqsrg6uf4gXSYVgFg9QLM92W8GL2SIsIgi2tC Jd4gT0z2PEXa8kQlFzTWyHFEmzCW0ESLn1sjdjpNzhuyFvadqBC0IJf6zfeqGm46caW/ Crfg== X-Forwarded-Encrypted: i=1; AJvYcCVVJ9pBkoF7eyfyIz/kEr67t1F2xzVhrx8extTcY4DRlty0hAlc34ZX1yTiQTRglwJmiRhut9/2CbBJw1A=@vger.kernel.org X-Gm-Message-State: AOJu0YymSOC+OIiMn5JlpQZBqCItS+3oKFfNnTdLoU1XQ0XtO5SnyBFr 2SwPTHl2MMDHBdPuN+8br3jcnmca1O8h7ESyeEoyvBnA3IRA0kVUyjdYutWMQ8c6W4g= X-Gm-Gg: ASbGncsFEuLv6Yty6xRuBG/yRyPJ3XDxdpxed+EIYNuT6hZ33qwVEC/1Sv0nxQhRuae vRMVeNJkJrJErwjlsFKYDQkQWSNQ/esGutF9qFuSn9jN1GwR2vTSyaSNcDvvf42syt+1YpR1WYn FQzKAkLIYFjesqI+XpYSUVJ1b/fYEqHbh9s2jxNk8By2VoYHcMDoRrwchuCqFz8uQCcr7bPmvTC PIC5ZYUGrIvucS/6V2OwLPb/Pu5OQM7T0goJvfZ6EtAmqVOOuHy7kEG9O7xlktrr/q/0rtH/9Oz c3mt3UPfAg8TJ4i0m6EEX3tO8uTGidg4Bzl8qeDk3T1hl9L/VQawVA== X-Google-Smtp-Source: AGHT+IFXXmQbzS5vqDGu4y4TwQnk0EiK4CC5C6meSlb8RPETRBYPhp8qkAqi3TzGPklrLZOBPgJ0mA== X-Received: by 2002:a05:600c:64c6:b0:43d:878c:7c40 with SMTP id 5b1f17b1804b1-44b6d1d44aemr7930635e9.10.1747956379233; Thu, 22 May 2025 16:26:19 -0700 (PDT) Received: from [127.0.1.1] ([2a02:c7c:75ac:6300:c05a:35d:17ae:e731]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-447f6f04334sm117825395e9.10.2025.05.22.16.26.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 May 2025 16:26:19 -0700 (PDT) From: Rajnesh Kanwal Date: Fri, 23 May 2025 00:25:09 +0100 Subject: [PATCH v3 3/7] riscv: Add Control Transfer Records extension parsing Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250523-b4-ctr_upstream_v3-v3-3-ad355304ba1c@rivosinc.com> References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Atish Kumar Patra , Anup Patel , Will Deacon , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Beeman Strong Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, Palmer Dabbelt , Conor Dooley , devicetree@vger.kernel.org, Rajnesh Kanwal X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747956375; l=2624; i=rkanwal@rivosinc.com; s=20250522; h=from:subject:message-id; 
bh=gvU0uMFD6Kovn+U10/vBvR06A23qjRw3z0F47+AnYRQ=; b=pyfnCvhX86bz9z1y3qjxlNR2g5lGOH2Tq7LR0ghQj5QSvYJrDGwpQ/jbb4U4sW1BvShvZvEdX QqJA4YOOC0EDUOV9duD2meq3fYPM1ise96t5o6riam+ED3WjJWGTHfg X-Developer-Key: i=rkanwal@rivosinc.com; a=ed25519; pk=aw8nvncslGKHEmTBTJqvkP/4tj6pijL8fwRRym/GuS8= Adding CTR extension in ISA extension map to lookup for extension availability. Signed-off-by: Rajnesh Kanwal --- arch/riscv/include/asm/hwcap.h | 4 ++++ arch/riscv/kernel/cpufeature.c | 2 ++ 2 files changed, 6 insertions(+) diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h index fa5e01bcb990ec26a2681916be6f9b27262a0add..9b88dfd0e53c7070793ec71d363= f8cd46ea43b92 100644 --- a/arch/riscv/include/asm/hwcap.h +++ b/arch/riscv/include/asm/hwcap.h @@ -105,6 +105,8 @@ #define RISCV_ISA_EXT_SMCNTRPMF 96 #define RISCV_ISA_EXT_SSCCFG 97 #define RISCV_ISA_EXT_SMCDELEG 98 +#define RISCV_ISA_EXT_SMCTR 99 +#define RISCV_ISA_EXT_SSCTR 100 =20 #define RISCV_ISA_EXT_XLINUXENVCFG 127 =20 @@ -115,11 +117,13 @@ #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SMAIA #define RISCV_ISA_EXT_SUPM RISCV_ISA_EXT_SMNPM #define RISCV_ISA_EXT_SxCSRIND RISCV_ISA_EXT_SMCSRIND +#define RISCV_ISA_EXT_SxCTR RISCV_ISA_EXT_SMCTR #else #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA #define RISCV_ISA_EXT_SUPM RISCV_ISA_EXT_SSNPM #define RISCV_ISA_EXT_SxAIA RISCV_ISA_EXT_SSAIA #define RISCV_ISA_EXT_SxCSRIND RISCV_ISA_EXT_SSCSRIND +#define RISCV_ISA_EXT_SxCTR RISCV_ISA_EXT_SSCTR #endif =20 #endif /* _ASM_RISCV_HWCAP_H */ diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c index f72552adb257681c35a9f94ad5bbf7165fb93945..7fcbde89e4b9ee55b30b27f5b93= e33dbe8f9ce58 100644 --- a/arch/riscv/kernel/cpufeature.c +++ b/arch/riscv/kernel/cpufeature.c @@ -419,6 +419,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] =3D { riscv_ext_smcdeleg_validate), __RISCV_ISA_EXT_DATA(smcntrpmf, RISCV_ISA_EXT_SMCNTRPMF), __RISCV_ISA_EXT_DATA(smcsrind, RISCV_ISA_EXT_SMCSRIND), + __RISCV_ISA_EXT_DATA(smctr, RISCV_ISA_EXT_SMCTR), __RISCV_ISA_EXT_DATA(smmpm, RISCV_ISA_EXT_SMMPM), __RISCV_ISA_EXT_SUPERSET(smnpm, RISCV_ISA_EXT_SMNPM, riscv_xlinuxenvcfg_e= xts), __RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN), @@ -426,6 +427,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] =3D { __RISCV_ISA_EXT_DATA_VALIDATE(ssccfg, RISCV_ISA_EXT_SSCCFG, riscv_ext_ssc= cfg_validate), __RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF), __RISCV_ISA_EXT_DATA(sscsrind, RISCV_ISA_EXT_SSCSRIND), + __RISCV_ISA_EXT_DATA(ssctr, RISCV_ISA_EXT_SSCTR), __RISCV_ISA_EXT_SUPERSET(ssnpm, RISCV_ISA_EXT_SSNPM, riscv_xlinuxenvcfg_e= xts), __RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC), __RISCV_ISA_EXT_DATA(svade, RISCV_ISA_EXT_SVADE), --=20 2.43.0 From nobody Mon Feb 9 11:59:13 2026 Received: from mail-wr1-f52.google.com (mail-wr1-f52.google.com [209.85.221.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DD9592D3236 for ; Thu, 22 May 2025 23:26:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.52 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956385; cv=none; b=MVtRc/r19NHSD+oNBb5TDd1A2SV/VMZ0CsUWyr9lS35CjuPsooBtXM9TtFTwSRWI7pNeV4wH1egxY5SVLmLogbbFbYIDx5baCRn799XiOHPUOBZM4gQ1v3qe/USNXC5unFwyU4MUKMh9Tq7cI5CfeYYeN+EsqZxXpP3mG5aVMfc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956385; c=relaxed/simple; 
bh=HQlCiM1VZsjqb/niITpfgq1gwhQMuUn578fiz8Bl7N0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=NZi96OThhKRgmWX00ZVfprTwj78RDsPmKYZn8eX4OnXB2Vg1e5LYR1jAeAbfosggGpo2kuZxXsTiDVxd+fB1gmx1+CbWBC7p7G26uju2Yl0PW2D+JiqtSqkOvap0m2F3RSPx59bFpPXiS7D5gPM9LTewYIbIFWN4aYak5bA1hfM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=D31tcYJ4; arc=none smtp.client-ip=209.85.221.52 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="D31tcYJ4" Received: by mail-wr1-f52.google.com with SMTP id ffacd0b85a97d-3a366843fa6so3904978f8f.1 for ; Thu, 22 May 2025 16:26:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1747956380; x=1748561180; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=WnCGsMvvb/sl+WABzO40apajVfbKrxsfIXszsES8O4k=; b=D31tcYJ4vz80+6Xp1MDFau3bAquLRfg9Opk0b1kItM7H0P3MpZ7SxQnwvmhhoLQRmn hzBAZn8ExBzIkVe7aT3dFJcG0DLyK7/yzPD3rn/pvvuRaNr7K6RpgWQO+ergwYvSvpbT btL+Inf1cLEBN9WcXnWqRS4Aeb/2b37QSQsK9h8K4cz7SUkRmRqn4K3qA3lEX9FwP9Z5 G+25/TwCo9a5rickimq61teUtgu8TNiJ5iB6ukUPRrYQ3jVmvY3WXBWomw/5yq/+o9W3 w4PeS2CQdLOLRDce1V1HrF40xibDBQaU2UeUXLc4DmyAMFz9Zw2lBptTQvfbEqPidl7l 1xLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747956380; x=1748561180; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WnCGsMvvb/sl+WABzO40apajVfbKrxsfIXszsES8O4k=; b=Sfd9lzpjdqbBu2EbUwaEixm+wqNRPppPiP0N+Wl9mii/Prgtzwo7GyA7uO6vUuPa2t Q4SPdvKqW5Pm83Pyl8sUcFh7Mw2I/dP6HPVx5YbUQtmrGGZSLL+spndxszK0s+hMgtbH hRlkKBeuhwQBUrGFzPbsXmtBNfKlYSIX4WBTI712e54HiOZU+ynJXQtTrwWqD4YyhB2J qJQUUq8RytmQJWOPjXlbGMQDnE1xHqJLpRGJtbaXX9Js35hrTFQYqbT2TOis4W93cpbp 4pS5feLgng5/12bShiBr2ZNvolBfHYL4NHoSY2HvKLDJ2rSQSf0XfmjSmhmhSecITJMI W5yA== X-Forwarded-Encrypted: i=1; AJvYcCU9AXE+A6oMduDDGALWhuQtgBiUo9StCnzCQxTwj3XiiuNuP0RSPAE8QZekhRAOVq7sfnCpulmy+dLVqfw=@vger.kernel.org X-Gm-Message-State: AOJu0Yx8TVMuUcHEeEZ3UITBpd547kjutj8B/cKJTq0kGxuFjJCbBB7R TKYL5RykvmswZzzCJ6HIjCL6oEGpN4smIPTuh4CTezc5WA7BkBDBqN2kM9bRV+kY2gE= X-Gm-Gg: ASbGncvmh3xzVdUyZwmLgmuzn/zDWdHNKERM2k9txq33aG9GOkWwcqNbhUoX5ha6QNu Dqsd877jqYi1mY97cR5T8v9jDl7xbrfsRI7QS56tlJKU37iJLfiInv+aAOydz5Ba2lbvZh/ZvPK iSa6lhT7wcO3Naqd0GP8JglidMDkcJOkP3AGGqZMeVltLbIPnROSGWsp+hHQygAR/J+dKLmqfm+ Ei5FmC39GinDRlXcunX8fyowhNT40926lME2d4IvxkLrZfqVc1ienasjKiErXPflJnBgd8gCDPZ XrWANW/ZTt1Z2Bou8dHPG1fSHtfDX8bTDPz09Xm7y9PbIc+MNr7raw== X-Google-Smtp-Source: AGHT+IEAPIcFtYM4Tgsv9MlTn3dUZjmgON/3+kE5HDdsMMPWIynQV0odP8gmbR56f/uO+RLDFOXOwg== X-Received: by 2002:a05:6000:178e:b0:3a0:b53a:bc06 with SMTP id ffacd0b85a97d-3a35fe5ba93mr22446826f8f.1.1747956380256; Thu, 22 May 2025 16:26:20 -0700 (PDT) Received: from [127.0.1.1] ([2a02:c7c:75ac:6300:c05a:35d:17ae:e731]) by smtp.gmail.com with ESMTPSA id 
5b1f17b1804b1-447f6f04334sm117825395e9.10.2025.05.22.16.26.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 May 2025 16:26:20 -0700 (PDT) From: Rajnesh Kanwal Date: Fri, 23 May 2025 00:25:10 +0100 Subject: [PATCH v3 4/7] riscv: pmu: Add infrastructure for Control Transfer Record Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250523-b4-ctr_upstream_v3-v3-4-ad355304ba1c@rivosinc.com> References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Atish Kumar Patra , Anup Patel , Will Deacon , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Beeman Strong Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, Palmer Dabbelt , Conor Dooley , devicetree@vger.kernel.org, Rajnesh Kanwal X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747956375; l=7051; i=rkanwal@rivosinc.com; s=20250522; h=from:subject:message-id; bh=HQlCiM1VZsjqb/niITpfgq1gwhQMuUn578fiz8Bl7N0=; b=44dyGW9sGFwk6vSzLsN7JNhG+rcehhGCISKRHtl7qiGT06vranNGoHavClp+64Oe2FooG8+aB hbd7PyOu11+Czw/bzOTiH4SgSmzUPxsOl2iTVI1jEeuNb2pSNEE6Q78 X-Developer-Key: i=rkanwal@rivosinc.com; a=ed25519; pk=aw8nvncslGKHEmTBTJqvkP/4tj6pijL8fwRRym/GuS8= To support Control Transfer Records (CTR) extension, we need to extend the riscv_pmu framework with some basic infrastructure for branch stack sampling. Subsequent patches will use this to add support for CTR in the riscv_pmu_dev driver. With CTR, the branches are stored into a hardware FIFO, which will be sampled by software when perf events overflow. A task may be context-switched between overflows, and to avoid leaking samples we need to clear the last task's records when a task is context-switched in. To do this we will be using the pmu::sched_task() callback added in this patch. 
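As a rough sketch of how a driver is expected to use this hook (illustrative
only: the concrete CTR implementation comes later in this series, and the
helper name ctr_discard_records() below is made up for the example):

    /* Per-driver hook, reached through the new riscv_pmu::sched_task member. */
    static void ctr_sched_task(struct perf_event_pmu_context *pmu_ctx,
                               bool sched_in)
    {
            /* Drop any records left behind by the previous task. */
            if (sched_in)
                    ctr_discard_records();
    }

    pmu->sched_task = ctr_sched_task;

A driver pairs this with perf_sched_cb_inc()/perf_sched_cb_dec() as events
that need branch records are added and removed, so the perf core only
invokes the callback while such events are active.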
Signed-off-by: Rajnesh Kanwal --- drivers/perf/riscv_pmu_common.c | 22 ++++++++++++++++++++++ drivers/perf/riscv_pmu_dev.c | 17 +++++++++++++++++ drivers/perf/riscv_pmu_legacy.c | 2 ++ include/linux/perf/riscv_pmu.h | 18 ++++++++++++++++++ 4 files changed, 59 insertions(+) diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_commo= n.c index 7644147d50b46a79f349d6cb7e32554cc9a39a74..b2dc78cbbb93926964f81f30be9= ef4a1c02501df 100644 --- a/drivers/perf/riscv_pmu_common.c +++ b/drivers/perf/riscv_pmu_common.c @@ -157,6 +157,19 @@ u64 riscv_pmu_ctr_get_width_mask(struct perf_event *ev= ent) return GENMASK_ULL(cwidth, 0); } =20 +static void riscv_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *pmu; + + if (!pmu_ctx) + return; + + pmu =3D to_riscv_pmu(pmu_ctx->pmu); + if (pmu->sched_task) + pmu->sched_task(pmu_ctx, sched_in); +} + u64 riscv_pmu_event_update(struct perf_event *event) { struct riscv_pmu *rvpmu =3D to_riscv_pmu(event->pmu); @@ -269,6 +282,8 @@ static int riscv_pmu_add(struct perf_event *event, int = flags) cpuc->events[idx] =3D event; cpuc->n_events++; hwc->state =3D PERF_HES_UPTODATE | PERF_HES_STOPPED; + if (rvpmu->ctr_add) + rvpmu->ctr_add(event, flags); if (flags & PERF_EF_START) riscv_pmu_start(event, PERF_EF_RELOAD); =20 @@ -290,8 +305,13 @@ static void riscv_pmu_del(struct perf_event *event, in= t flags) if (rvpmu->ctr_stop) rvpmu->ctr_stop(event, RISCV_PMU_STOP_FLAG_RESET); cpuc->n_events--; + + if (rvpmu->ctr_del) + rvpmu->ctr_del(event, flags); + if (rvpmu->ctr_clear_idx) rvpmu->ctr_clear_idx(event); + perf_event_update_userpage(event); hwc->idx =3D -1; } @@ -402,6 +422,7 @@ struct riscv_pmu *riscv_pmu_alloc(void) for_each_possible_cpu(cpuid) { cpuc =3D per_cpu_ptr(pmu->hw_events, cpuid); cpuc->n_events =3D 0; + cpuc->ctr_users =3D 0; for (i =3D 0; i < RISCV_MAX_COUNTERS; i++) cpuc->events[i] =3D NULL; cpuc->snapshot_addr =3D NULL; @@ -416,6 +437,7 @@ struct riscv_pmu *riscv_pmu_alloc(void) .start =3D riscv_pmu_start, .stop =3D riscv_pmu_stop, .read =3D riscv_pmu_read, + .sched_task =3D riscv_pmu_sched_task, }; =20 return pmu; diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c index cd2ac4cf34f12618a2df1895f1fab8522016d325..95e6dd272db69f53b679e5fc345= 0785e45d5e8b9 100644 --- a/drivers/perf/riscv_pmu_dev.c +++ b/drivers/perf/riscv_pmu_dev.c @@ -1035,6 +1035,12 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *ev= ent, unsigned long flag) } } =20 +static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + /* Call CTR specific Sched hook. 
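+	 * Left empty here on purpose; a later patch in this series is
+	 * expected to call into the CTR branch-record helpers from this hook.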
*/ +} + static int rvpmu_sbi_find_num_ctrs(void) { struct sbiret ret; @@ -1561,6 +1567,14 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_event= *event) return -ENOENT; } =20 +static void rvpmu_ctr_add(struct perf_event *event, int flags) +{ +} + +static void rvpmu_ctr_del(struct perf_event *event, int flags) +{ +} + static void rvpmu_ctr_start(struct perf_event *event, u64 ival) { struct hw_perf_event *hwc =3D &event->hw; @@ -1979,6 +1993,8 @@ static int rvpmu_device_probe(struct platform_device = *pdev) else pmu->pmu.attr_groups =3D riscv_sbi_pmu_attr_groups; pmu->cmask =3D cmask; + pmu->ctr_add =3D rvpmu_ctr_add; + pmu->ctr_del =3D rvpmu_ctr_del; pmu->ctr_start =3D rvpmu_ctr_start; pmu->ctr_stop =3D rvpmu_ctr_stop; pmu->event_map =3D rvpmu_event_map; @@ -1990,6 +2006,7 @@ static int rvpmu_device_probe(struct platform_device = *pdev) pmu->event_mapped =3D rvpmu_event_mapped; pmu->event_unmapped =3D rvpmu_event_unmapped; pmu->csr_index =3D rvpmu_csr_index; + pmu->sched_task =3D pmu_sched_task; =20 ret =3D riscv_pm_pmu_register(pmu); if (ret) diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legac= y.c index 93c8e0fdb5898587e89115c10587d69380da19ec..bee6742d35fa54a9b82d4a4842b= 481efaa226765 100644 --- a/drivers/perf/riscv_pmu_legacy.c +++ b/drivers/perf/riscv_pmu_legacy.c @@ -115,6 +115,8 @@ static void pmu_legacy_init(struct riscv_pmu *pmu) BIT(RISCV_PMU_LEGACY_INSTRET); pmu->ctr_start =3D pmu_legacy_ctr_start; pmu->ctr_stop =3D NULL; + pmu->ctr_add =3D NULL; + pmu->ctr_del =3D NULL; pmu->event_map =3D pmu_legacy_event_map; pmu->ctr_get_idx =3D pmu_legacy_ctr_get_idx; pmu->ctr_get_width =3D pmu_legacy_ctr_get_width; diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index e58f8381198849ea6134a46e894d91064a1a6154..883781f12ae0be93d8292ae1a7e= 7b03fea3ea955 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -46,6 +46,13 @@ }, \ } =20 +#define MAX_BRANCH_RECORDS 256 + +struct branch_records { + struct perf_branch_stack branch_stack; + struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS]; +}; + struct cpu_hw_events { /* currently enabled events */ int n_events; @@ -65,6 +72,12 @@ struct cpu_hw_events { bool snapshot_set_done; /* A shadow copy of the counter values to avoid clobbering during multipl= e SBI calls */ u64 snapshot_cval_shcopy[RISCV_MAX_COUNTERS]; + + /* Saved branch records. 
*/ + struct branch_records *branches; + + /* Active events requesting branch records */ + int ctr_users; }; =20 struct riscv_pmu { @@ -78,6 +91,8 @@ struct riscv_pmu { int (*ctr_get_idx)(struct perf_event *event); int (*ctr_get_width)(int idx); void (*ctr_clear_idx)(struct perf_event *event); + void (*ctr_add)(struct perf_event *event, int flags); + void (*ctr_del)(struct perf_event *event, int flags); void (*ctr_start)(struct perf_event *event, u64 init_val); void (*ctr_stop)(struct perf_event *event, unsigned long flag); int (*event_map)(struct perf_event *event, u64 *config); @@ -85,10 +100,13 @@ struct riscv_pmu { void (*event_mapped)(struct perf_event *event, struct mm_struct *mm); void (*event_unmapped)(struct perf_event *event, struct mm_struct *mm); uint8_t (*csr_index)(struct perf_event *event); + void (*sched_task)(struct perf_event_pmu_context *ctx, bool sched_in); =20 struct cpu_hw_events __percpu *hw_events; struct hlist_node node; struct notifier_block riscv_pm_nb; + + unsigned int ctr_depth; }; =20 #define to_riscv_pmu(p) (container_of(p, struct riscv_pmu, pmu)) --=20 2.43.0 From nobody Mon Feb 9 11:59:13 2026 Received: from mail-wm1-f44.google.com (mail-wm1-f44.google.com [209.85.128.44]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C5D462D3A69 for ; Thu, 22 May 2025 23:26:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.44 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956385; cv=none; b=oa6xWIz3+BvgyB70Oae6vQ2zQnoaEtS7UMaiXwdTMZsUcKd9RYS6kZ8XSbewgRsDM0VJ2V6TFBnoC/7N2LRlJi9ztgcqbtyymY3KESKbGEuu5/u/6vVFe+hmr2Sx+fC0S/M3EM4QeJZMow1JR7+++c7IGqO5KY/dQi5NWxu2eH0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747956385; c=relaxed/simple; bh=NnsJw3tsE5qEf7zlNhdIYIjmmA5oVtcXKuz+F5ZXS3c=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Pw1uw8ZFVKmEnKhuAYwZ3+kyG6dkaB9UjmGGdq57AQ6tczQFHzJHW6vRThd9ZmRouseuorVTOygBKaK9a1YP88HLyhoX6USFUppOkSX+8nY+fCB5/ufU2Bszkqd9gSydkdKz5ge1lyGkdVaLvRfa4E2damhhl6dgnL3RgNGaiOQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com; spf=pass smtp.mailfrom=rivosinc.com; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b=wW5tIt87; arc=none smtp.client-ip=209.85.128.44 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rivosinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rivosinc-com.20230601.gappssmtp.com header.i=@rivosinc-com.20230601.gappssmtp.com header.b="wW5tIt87" Received: by mail-wm1-f44.google.com with SMTP id 5b1f17b1804b1-43ede096d73so63582675e9.2 for ; Thu, 22 May 2025 16:26:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1747956381; x=1748561181; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=GaHH5mdttUKdT56VwAZFcB+tyeGK0bZqC8g5F8EkAt4=; b=wW5tIt871uipboy61je+eWM3u36WqyR/4NTK6ziHhRT5VTOYDpaorXASv5nEVvEEqJ M3pnTMe5gfL1OziB1PJS+VmIXpzFKtTfCsz45O/YFMZoY1Y3f5kOmYdikaNgk5KErscr 
00bI+LxIYOIM/sDFoDaHHX/AGso/ouXrDW4ucItvkQ+zSRYZxTOgW+FsodF+BxhYO1zr CBUQAsvEzwDrouIGQNIyUxPsRgysF7RXLgK9kTfHzW2MSPcCbpVBdrFDO2fIa1vAqeNL XFoElWP1lWJocJrRvM0OABr69MShm/TxRbMwi/PxC/dfBG+ZgiC48iAc2Lzo0AEOxxtz +75g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747956381; x=1748561181; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GaHH5mdttUKdT56VwAZFcB+tyeGK0bZqC8g5F8EkAt4=; b=OZZ4ILU+UacIzSU35OFsn9G9QYD40lFDz89ODh42TB/lQo+fGOqfayhZO6n7nOFyjf PTGdePvlNkldDa4klhXrkrMdD4/Dg+frHJ88+fwdjS1HbOh9Jbgli/ZYJuBWcBGNn7zv Iaj2Pbci2k+Pjt48PyCRWbmJKLEBWwB+/POZksOBHCPAoTBSCgQF5PXGRhX/rb03gTg4 7s8jjedGDVzkC9YlFykqkhKhM1+MAB5UBAeu/dPojHK0QcwiaPFV7gF8yG7madhhPCuH 4aTQDF7X6F2XV4PxPu+TMDbcSZiPBqVgR0A/EnfWsyzjVAQCT5JcGKXgUPfdweYL/bdJ psPA== X-Forwarded-Encrypted: i=1; AJvYcCW3HSp+dveZJbqIWJUUd5k16aPmz0FMgGQ/39UUxzh9qNfA38xZ0a+g3UMvOcQ5Onl0GfiBrvoBqEeVb/E=@vger.kernel.org X-Gm-Message-State: AOJu0Yx+g6uWQEGDdYwF/J+bCC3qQTWzx/RO+uTsfPKBQhlUxhxGDhv8 1WjgN/oigAJFYOTAik63NP4tUKAIxFt8y5cMMrkjI3t3ZKiD2kv9/6rLFrVos9U/veo= X-Gm-Gg: ASbGncvdy4aJgqzwLcWQYpw7vPd346sK1TOvPlY3jDD2fA03xsRfvC9o6YwiJLZe8gE nrPcwKlk8KDwg5wVsdANlgkRIqcO3dcVYF51nrg5e8RoReN3WDMtlqtXdxry8dq2zWMXBRmVT9A KmDhGKtACoPqUtXP6FbDsRiDTD8UJUr9SULg0yxcfLWgAn8ET6bVBmRbbaO4XuD8AzJE1guqtzY WOKZpjvxKzClYdL5TkN7RasKky5KK02gbYoP0GcdbiKmAujdyzGkdLLCoMq7c9ubz9W/XJhpRm7 i+4TqoXM0YZ0afjkxQlDzmuY2KriDOeQP8hVcupJxKWOJa4WlBLp/g== X-Google-Smtp-Source: AGHT+IFZPYtZlMvLNPpZSsfDsdXLSnpyPxXG8MdaKWwvWmT8+dPpiTqI3MYbIJ9zpijJPQyyR5h5zg== X-Received: by 2002:a05:600c:3592:b0:448:e8c0:c778 with SMTP id 5b1f17b1804b1-44b6e85ee5dmr8150675e9.22.1747956381260; Thu, 22 May 2025 16:26:21 -0700 (PDT) Received: from [127.0.1.1] ([2a02:c7c:75ac:6300:c05a:35d:17ae:e731]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-447f6f04334sm117825395e9.10.2025.05.22.16.26.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 May 2025 16:26:21 -0700 (PDT) From: Rajnesh Kanwal Date: Fri, 23 May 2025 00:25:11 +0100 Subject: [PATCH v3 5/7] riscv: pmu: Add driver for Control Transfer Records Ext. 
Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250523-b4-ctr_upstream_v3-v3-5-ad355304ba1c@rivosinc.com> References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com> To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Atish Kumar Patra , Anup Patel , Will Deacon , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Beeman Strong Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, Palmer Dabbelt , Conor Dooley , devicetree@vger.kernel.org, Rajnesh Kanwal X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747956375; l=22286; i=rkanwal@rivosinc.com; s=20250522; h=from:subject:message-id; bh=NnsJw3tsE5qEf7zlNhdIYIjmmA5oVtcXKuz+F5ZXS3c=; b=cMYOBsyXODB7J7Ss9gbknZ07PSyPklzm7tzpmLwUX5oluH/clO7tuujGi1Of4ypJbGEQVQETF UB7eG70mlAeBnHpANo6kc/iKKozP9fbnvddyXXQd3BdVCU/5t+wU0QV X-Developer-Key: i=rkanwal@rivosinc.com; a=ed25519; pk=aw8nvncslGKHEmTBTJqvkP/4tj6pijL8fwRRym/GuS8= This adds support for CTR Ext defined in [0]. The extension allows to records a maximum for 256 last branch records. CTR extension depends on s[m|s]csrind and Sscofpmf extensions. Signed-off-by: Rajnesh Kanwal --- MAINTAINERS | 1 + drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/riscv_ctr.c | 612 +++++++++++++++++++++++++++++++++++++= ++++ include/linux/perf/riscv_pmu.h | 37 +++ 5 files changed, 662 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index b6d174f7735e6d8e4c3c2eac91450e38f8b48519..068994eff9fdfda82f61f607e76= ecacb54809792 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -20406,6 +20406,7 @@ M: Atish Patra R: Anup Patel L: linux-riscv@lists.infradead.org S: Supported +F: drivers/perf/riscv_ctr.c F: drivers/perf/riscv_pmu_common.c F: drivers/perf/riscv_pmu_dev.c F: drivers/perf/riscv_pmu_legacy.c diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index b3bdff2a99a4a160718a322ed3b0a6af2b01a750..9107c5208bf5eba6c9db378ae8e= d596f2b27498c 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -129,6 +129,17 @@ config ANDES_CUSTOM_PMU =20 If you don't know what to do here, say "Y". =20 +config RISCV_CTR + bool "Enable support for Control Transfer Records (CTR)" + depends on PERF_EVENTS && RISCV_PMU + default y + help + Enable support for Control Transfer Records (CTR) which + allows recording branches, Jumps, Calls, returns etc taken in an + execution path. This also supports privilege based filtering. It + captures additional relevant information such as cycle count, + branch misprediction etc. 
+ config ARM_PMU_ACPI depends on ARM_PMU && ACPI def_bool y diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 0805d740c773f51263c94cf97c9fb4339bcd6767..755609f184fe4b4ad7cd77de10c= c56319489f495 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -20,6 +20,7 @@ obj-$(CONFIG_RISCV_PMU_COMMON) +=3D riscv_pmu_common.o obj-$(CONFIG_RISCV_PMU_LEGACY) +=3D riscv_pmu_legacy.o obj-$(CONFIG_RISCV_PMU) +=3D riscv_pmu_dev.o obj-$(CONFIG_STARFIVE_STARLINK_PMU) +=3D starfive_starlink_pmu.o +obj-$(CONFIG_RISCV_CTR) +=3D riscv_ctr.o obj-$(CONFIG_THUNDERX2_PMU) +=3D thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) +=3D xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) +=3D arm_spe_pmu.o diff --git a/drivers/perf/riscv_ctr.c b/drivers/perf/riscv_ctr.c new file mode 100644 index 0000000000000000000000000000000000000000..4bbac1ce29c5dd558a3ebd89d6e= fef9db3a405b8 --- /dev/null +++ b/drivers/perf/riscv_ctr.c @@ -0,0 +1,612 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Control transfer records extension Helpers. + * + * Copyright (C) 2024 Rivos Inc. + * + * Author: Rajnesh Kanwal + */ + +#define pr_fmt(fmt) "CTR: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define CTR_BRANCH_FILTERS_INH (CTRCTL_EXCINH | \ + CTRCTL_INTRINH | \ + CTRCTL_TRETINH | \ + CTRCTL_TKBRINH | \ + CTRCTL_INDCALL_INH | \ + CTRCTL_DIRCALL_INH | \ + CTRCTL_INDJUMP_INH | \ + CTRCTL_DIRJUMP_INH | \ + CTRCTL_CORSWAP_INH | \ + CTRCTL_RET_INH | \ + CTRCTL_INDOJUMP_INH | \ + CTRCTL_DIROJUMP_INH) + +#define CTR_BRANCH_ENABLE_BITS (CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE) + +/* Branch filters not-supported by CTR extension. */ +#define CTR_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \ + PERF_SAMPLE_BRANCH_IN_TX | \ + PERF_SAMPLE_BRANCH_PRIV_SAVE | \ + PERF_SAMPLE_BRANCH_NO_TX | \ + PERF_SAMPLE_BRANCH_COUNTERS) + +/* Branch filters supported by CTR extension. */ +#define CTR_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \ + PERF_SAMPLE_BRANCH_KERNEL | \ + PERF_SAMPLE_BRANCH_HV | \ + PERF_SAMPLE_BRANCH_ANY | \ + PERF_SAMPLE_BRANCH_ANY_CALL | \ + PERF_SAMPLE_BRANCH_ANY_RETURN | \ + PERF_SAMPLE_BRANCH_IND_CALL | \ + PERF_SAMPLE_BRANCH_COND | \ + PERF_SAMPLE_BRANCH_IND_JUMP | \ + PERF_SAMPLE_BRANCH_HW_INDEX | \ + PERF_SAMPLE_BRANCH_NO_FLAGS | \ + PERF_SAMPLE_BRANCH_NO_CYCLES | \ + PERF_SAMPLE_BRANCH_CALL_STACK | \ + PERF_SAMPLE_BRANCH_CALL | \ + PERF_SAMPLE_BRANCH_TYPE_SAVE) + +#define CTR_PERF_BRANCH_FILTERS (CTR_ALLOWED_BRANCH_FILTERS | \ + CTR_EXCLUDE_BRANCH_FILTERS) + +static u64 allowed_filters __read_mostly; + +struct ctr_regset { + unsigned long src; + unsigned long target; + unsigned long ctr_data; +}; + +enum { + CTR_STATE_NONE, + CTR_STATE_VALID, +}; + +/* Head is the idx of the next available slot. The slot may be already pop= ulated + * by an old entry which will be lost on new writes. 
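+ * callstack_users counts events using the user call-stack mode
+ * (PERF_SAMPLE_BRANCH_USER together with PERF_SAMPLE_BRANCH_CALL_STACK);
+ * the records are saved and restored across context switches only while
+ * it is non-zero.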
+ */ +struct riscv_perf_task_context { + int callstack_users; + int stack_state; + unsigned int num_entries; + uint32_t ctr_status; + uint64_t ctr_control; + struct ctr_regset store[MAX_BRANCH_RECORDS]; +}; + +static inline u64 get_ctr_src_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_src_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_tgt_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_tgt_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_data_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_data_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline bool ctr_record_valid(u64 ctr_src) +{ + return !!FIELD_GET(CTRSOURCE_VALID, ctr_src); +} + +static inline int ctr_get_mispredict(u64 ctr_target) +{ + return FIELD_GET(CTRTARGET_MISP, ctr_target); +} + +static inline unsigned int ctr_get_cycles(u64 ctr_data) +{ + const unsigned int cce =3D FIELD_GET(CTRDATA_CCE_MASK, ctr_data); + const unsigned int ccm =3D FIELD_GET(CTRDATA_CCM_MASK, ctr_data); + + if (ctr_data & CTRDATA_CCV) + return 0; + + /* Formula to calculate cycles from spec: (2^12 + CCM) << CCE-1 */ + if (cce > 0) + return (4096 + ccm) << (cce - 1); + + return FIELD_GET(CTRDATA_CCM_MASK, ctr_data); +} + +static inline unsigned int ctr_get_type(u64 ctr_data) +{ + return FIELD_GET(CTRDATA_TYPE_MASK, ctr_data); +} + +static inline unsigned int ctr_get_depth(u64 ctr_depth) +{ + /* Depth table from CTR Spec: 2.4 sctrdepth. + * + * sctrdepth.depth Depth + * 000 - 16 + * 001 - 32 + * 010 - 64 + * 011 - 128 + * 100 - 256 + * + * Depth =3D 16 * 2 ^ (ctrdepth.depth) + * or + * Depth =3D 16 << ctrdepth.depth. + */ + return 16 << FIELD_GET(SCTRDEPTH_MASK, ctr_depth); +} + +static inline struct riscv_perf_task_context *task_context(void *ctx) +{ + return (struct riscv_perf_task_context *)ctx; +} + +/* Reads CTR entry at idx and stores it in entry struct. 
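+ * Returns true if a valid record was read, or false if the entry's
+ * valid bit is clear.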
*/ +static bool get_ctr_regset(struct ctr_regset *entry, unsigned int idx) +{ + entry->src =3D get_ctr_src_reg(idx); + + if (!ctr_record_valid(entry->src)) + return false; + + entry->src =3D entry->src; + entry->target =3D get_ctr_tgt_reg(idx); + entry->ctr_data =3D get_ctr_data_reg(idx); + + return true; +} + +static void set_ctr_regset(struct ctr_regset *entry, unsigned int idx) +{ + set_ctr_src_reg(idx, entry->src); + set_ctr_tgt_reg(idx, entry->target); + set_ctr_data_reg(idx, entry->ctr_data); +} + +static u64 branch_type_to_ctr(int branch_type) +{ + u64 config =3D CTR_BRANCH_FILTERS_INH | CTRCTL_LCOFIFRZ; + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + config |=3D CTRCTL_U_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + config |=3D CTRCTL_KERNEL_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_HV) { + if (riscv_isa_extension_available(NULL, h)) + config |=3D CTRCTL_KERNEL_ENABLE; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + config &=3D ~CTR_BRANCH_FILTERS_INH; + return config; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) { + config &=3D ~CTRCTL_INDCALL_INH; + config &=3D ~CTRCTL_DIRCALL_INH; + config &=3D ~CTRCTL_EXCINH; + config &=3D ~CTRCTL_INTRINH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + config &=3D ~(CTRCTL_RET_INH | CTRCTL_TRETINH); + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + config &=3D ~CTRCTL_INDCALL_INH; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + config &=3D ~CTRCTL_TKBRINH; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL_STACK) + config |=3D CTRCTL_RASEMU; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) { + config &=3D ~CTRCTL_INDJUMP_INH; + config &=3D ~CTRCTL_INDOJUMP_INH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + config &=3D ~CTRCTL_DIRCALL_INH; + + return config; +} + +static const int ctr_perf_map[] =3D { + [CTRDATA_TYPE_NONE] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_EXCEPTION] =3D PERF_BR_SYSCALL, + [CTRDATA_TYPE_INTERRUPT] =3D PERF_BR_IRQ, + [CTRDATA_TYPE_TRAP_RET] =3D PERF_BR_ERET, + [CTRDATA_TYPE_NONTAKEN_BRANCH] =3D PERF_BR_COND, + [CTRDATA_TYPE_TAKEN_BRANCH] =3D PERF_BR_COND, + [CTRDATA_TYPE_RESERVED_6] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RESERVED_7] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_INDIRECT_CALL] =3D PERF_BR_IND_CALL, + [CTRDATA_TYPE_DIRECT_CALL] =3D PERF_BR_CALL, + [CTRDATA_TYPE_INDIRECT_JUMP] =3D PERF_BR_IND, + [CTRDATA_TYPE_DIRECT_JUMP] =3D PERF_BR_UNCOND, + [CTRDATA_TYPE_CO_ROUTINE_SWAP] =3D PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RETURN] =3D PERF_BR_RET, + [CTRDATA_TYPE_OTHER_INDIRECT_JUMP] =3D PERF_BR_IND, + [CTRDATA_TYPE_OTHER_DIRECT_JUMP] =3D PERF_BR_UNCOND, +}; + +static void ctr_set_perf_entry_type(struct perf_branch_entry *entry, + u64 ctr_data) +{ + int ctr_type =3D ctr_get_type(ctr_data); + + entry->type =3D ctr_perf_map[ctr_type]; + if (entry->type =3D=3D PERF_BR_UNKNOWN) + pr_warn("%d - unknown branch type captured\n", ctr_type); +} + +static void capture_ctr_flags(struct perf_branch_entry *entry, + struct perf_event *event, u64 ctr_data, + u64 ctr_target) +{ + if (branch_sample_type(event)) + ctr_set_perf_entry_type(entry, ctr_data); + + if (!branch_sample_no_cycles(event)) + entry->cycles =3D ctr_get_cycles(ctr_data); + + if (!branch_sample_no_flags(event)) { + entry->abort =3D 0; + entry->mispred =3D ctr_get_mispredict(ctr_target); + entry->predicted =3D !entry->mispred; + } + + if (branch_sample_priv(event)) + entry->priv =3D PERF_BR_PRIV_UNKNOWN; +} + +static void ctr_regset_to_branch_entry(struct cpu_hw_events *cpuc, + struct perf_event *event, + struct ctr_regset 
*regset, + unsigned int idx) +{ + struct perf_branch_entry *entry =3D &cpuc->branches->branch_entries[idx]; + + perf_clear_branch_entry_bitfields(entry); + entry->from =3D regset->src & (~CTRSOURCE_VALID); + entry->to =3D regset->target & (~CTRTARGET_MISP); + capture_ctr_flags(entry, event, regset->ctr_data, regset->target); +} + +static void ctr_read_entries(struct cpu_hw_events *cpuc, + struct perf_event *event, + unsigned int depth) +{ + struct ctr_regset entry =3D {}; + u64 ctr_ctl; + int i; + + ctr_ctl =3D csr_read_clear(CSR_CTRCTL, CTR_BRANCH_ENABLE_BITS); + + for (i =3D 0; i < depth; i++) { + if (!get_ctr_regset(&entry, i)) + break; + + ctr_regset_to_branch_entry(cpuc, event, &entry, i); + } + + csr_set(CSR_CTRCTL, ctr_ctl & CTR_BRANCH_ENABLE_BITS); + + cpuc->branches->branch_stack.nr =3D i; + cpuc->branches->branch_stack.hw_idx =3D 0; +} + +bool riscv_pmu_ctr_valid(struct perf_event *event) +{ + u64 branch_type =3D event->attr.branch_sample_type; + + if (branch_type & ~allowed_filters) { + pr_debug_once("Requested branch filters not supported 0x%llx\n", + branch_type & ~allowed_filters); + return false; + } + + return true; +} + +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *= event) +{ + unsigned int depth =3D to_riscv_pmu(event->pmu)->ctr_depth; + + ctr_read_entries(cpuc, event, depth); + + /* Clear frozen bit. */ + csr_clear(CSR_SCTRSTATUS, SCTRSTATUS_FROZEN); +} + +static void riscv_pmu_ctr_reset(void) +{ + /* FIXME: Replace with sctrclr instruction once support is merged + * into toolchain. + */ + asm volatile(".4byte 0x10400073\n" ::: "memory"); + csr_write(CSR_SCTRSTATUS, 0); +} + +static void __riscv_pmu_ctr_restore(void *ctx) +{ + struct riscv_perf_task_context *task_ctx =3D ctx; + unsigned int i; + + csr_write(CSR_SCTRSTATUS, task_ctx->ctr_status); + + for (i =3D 0; i < task_ctx->num_entries; i++) + set_ctr_regset(&task_ctx->store[i], i); +} + +static void riscv_pmu_ctr_restore(void *ctx) +{ + if (task_context(ctx)->stack_state =3D=3D CTR_STATE_NONE || + task_context(ctx)->callstack_users =3D=3D 0) { + return; + } + + riscv_pmu_ctr_reset(); + __riscv_pmu_ctr_restore(ctx); + + task_context(ctx)->stack_state =3D CTR_STATE_NONE; +} + +static void __riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + struct riscv_perf_task_context *task_ctx =3D ctx; + struct ctr_regset *dst; + unsigned int i; + + for (i =3D 0; i < depth; i++) { + dst =3D &task_ctx->store[i]; + if (!get_ctr_regset(dst, i)) + break; + } + + task_ctx->num_entries =3D i; + + task_ctx->ctr_status =3D csr_read(CSR_SCTRSTATUS); +} + +static void riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + if (task_context(ctx)->stack_state =3D=3D CTR_STATE_VALID) + return; + + if (task_context(ctx)->callstack_users =3D=3D 0) { + task_context(ctx)->stack_state =3D CTR_STATE_NONE; + return; + } + + __riscv_pmu_ctr_save(ctx, depth); + + task_context(ctx)->stack_state =3D CTR_STATE_VALID; +} + +/* + * On context switch in, we need to make sure no samples from previous tas= ks + * are left in the CTR. + * + * On ctxswin, sched_in =3D true, called after the PMU has started + * On ctxswout, sched_in =3D false, called before the PMU is stopped + */ +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *rvpmu =3D to_riscv_pmu(pmu_ctx->pmu); + struct cpu_hw_events *cpuc =3D this_cpu_ptr(rvpmu->hw_events); + void *task_ctx; + + if (!cpuc->ctr_users) + return; + + /* Save branch records in task_ctx on sched out */ + task_ctx =3D pmu_ctx ? 
+	if (task_ctx) {
+		if (sched_in)
+			riscv_pmu_ctr_restore(task_ctx);
+		else
+			riscv_pmu_ctr_save(task_ctx, rvpmu->ctr_depth);
+		return;
+	}
+
+	/* Reset branch records on sched in */
+	if (sched_in)
+		riscv_pmu_ctr_reset();
+}
+
+static inline bool branch_user_callstack(unsigned int br_type)
+{
+	return (br_type & PERF_SAMPLE_BRANCH_USER) &&
+	       (br_type & PERF_SAMPLE_BRANCH_CALL_STACK);
+}
+
+void riscv_pmu_ctr_add(struct perf_event *event)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
+
+	if (branch_user_callstack(event->attr.branch_sample_type) &&
+	    event->pmu_ctx->task_ctx_data)
+		task_context(event->pmu_ctx->task_ctx_data)->callstack_users++;
+
+	perf_sched_cb_inc(event->pmu);
+
+	if (!cpuc->ctr_users++)
+		riscv_pmu_ctr_reset();
+}
+
+void riscv_pmu_ctr_del(struct perf_event *event)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
+
+	if (branch_user_callstack(event->attr.branch_sample_type) &&
+	    event->pmu_ctx->task_ctx_data)
+		task_context(event->pmu_ctx->task_ctx_data)->callstack_users--;
+
+	perf_sched_cb_dec(event->pmu);
+	cpuc->ctr_users--;
+	WARN_ON_ONCE(cpuc->ctr_users < 0);
+}
+
+void riscv_pmu_ctr_enable(struct perf_event *event)
+{
+	u64 branch_type = event->attr.branch_sample_type;
+	u64 ctr;
+
+	ctr = branch_type_to_ctr(branch_type);
+	csr_write(CSR_CTRCTL, ctr);
+}
+
+void riscv_pmu_ctr_disable(struct perf_event *event)
+{
+	/* Clear CTRCTL to disable the recording. */
+	csr_write(CSR_CTRCTL, 0);
+}
+
+/*
+ * Check for hardware-supported perf filters here. To avoid missing
+ * any newly added filter in perf, we do a BUILD_BUG_ON check, so make sure
+ * to update the CTR_ALLOWED_BRANCH_FILTERS or CTR_EXCLUDE_BRANCH_FILTERS
+ * defines when adding support for it in the function below.
+ */
+static void __init check_available_filters(void)
+{
+	u64 ctr_ctl;
+
+	/*
+	 * Ensure both perf branch filter allowed and exclude
+	 * masks are always in sync with the generic perf ABI.
+	 */
+	BUILD_BUG_ON(CTR_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1));
+
+	allowed_filters = PERF_SAMPLE_BRANCH_USER |
+			  PERF_SAMPLE_BRANCH_KERNEL |
+			  PERF_SAMPLE_BRANCH_ANY |
+			  PERF_SAMPLE_BRANCH_HW_INDEX |
+			  PERF_SAMPLE_BRANCH_NO_FLAGS |
+			  PERF_SAMPLE_BRANCH_NO_CYCLES |
+			  PERF_SAMPLE_BRANCH_TYPE_SAVE;
+
+	csr_write(CSR_CTRCTL, ~0);
+	ctr_ctl = csr_read(CSR_CTRCTL);
+	csr_write(CSR_CTRCTL, 0);
+
+	if (riscv_isa_extension_available(NULL, h))
+		allowed_filters |= PERF_SAMPLE_BRANCH_HV;
+
+	if (ctr_ctl & (CTRCTL_INDCALL_INH | CTRCTL_DIRCALL_INH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_ANY_CALL;
+
+	if (ctr_ctl & (CTRCTL_RET_INH | CTRCTL_TRETINH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_ANY_RETURN;
+
+	if (ctr_ctl & CTRCTL_INDCALL_INH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_IND_CALL;
+
+	if (ctr_ctl & CTRCTL_TKBRINH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_COND;
+
+	if (ctr_ctl & CTRCTL_RASEMU)
+		allowed_filters |= PERF_SAMPLE_BRANCH_CALL_STACK;
+
+	if (ctr_ctl & (CTRCTL_INDOJUMP_INH | CTRCTL_INDJUMP_INH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_IND_JUMP;
+
+	if (ctr_ctl & CTRCTL_DIRCALL_INH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_CALL;
+}
+
+void riscv_pmu_ctr_starting_cpu(void)
+{
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return;
+
+	/* Set depth to maximum. */
+	csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK);
+}
+
+void riscv_pmu_ctr_dying_cpu(void)
+{
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return;
+
+	/* Clear and reset CTR CSRs. */
+	csr_write(CSR_SCTRDEPTH, 0);
+	csr_write(CSR_CTRCTL, 0);
+	riscv_pmu_ctr_reset();
+}
+
+int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu)
+{
+	size_t size = sizeof(struct riscv_perf_task_context);
+
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return 0;
+
+	riscv_pmu->pmu.task_ctx_cache =
+		kmem_cache_create("ctr_task_ctx", size, sizeof(u64), 0, NULL);
+	if (!riscv_pmu->pmu.task_ctx_cache)
+		return -ENOMEM;
+
+	check_available_filters();
+
+	/* Set depth to maximum. */
+	csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK);
+	riscv_pmu->ctr_depth = ctr_get_depth(csr_read(CSR_SCTRDEPTH));
+
+	pr_info("Perf CTR available, with %d depth\n", riscv_pmu->ctr_depth);
+
+	return 0;
+}
+
+void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu)
+{
+	if (!riscv_pmu_ctr_supported(riscv_pmu))
+		return;
+
+	riscv_pmu->ctr_depth = 0;
+	csr_write(CSR_SCTRDEPTH, 0);
+	csr_write(CSR_CTRCTL, 0);
+	riscv_pmu_ctr_reset();
+
+	kmem_cache_destroy(riscv_pmu->pmu.task_ctx_cache);
+}

diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 883781f12ae0be93d8292ae1a7e7b03fea3ea955..f32b6dcc349109dc0aa74cbe152381c0b2c662d0 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -127,6 +127,43 @@ struct riscv_pmu *riscv_pmu_alloc(void);
 int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);
 #endif
 
+static inline bool riscv_pmu_ctr_supported(struct riscv_pmu *pmu)
+{
+	return !!pmu->ctr_depth;
+}
+
 #endif /* CONFIG_RISCV_PMU_COMMON */
 
+#ifdef CONFIG_RISCV_CTR
+
+bool riscv_pmu_ctr_valid(struct perf_event *event);
+void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *event);
+void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+void riscv_pmu_ctr_add(struct perf_event *event);
+void riscv_pmu_ctr_del(struct perf_event *event);
+void riscv_pmu_ctr_enable(struct perf_event *event);
+void riscv_pmu_ctr_disable(struct perf_event *event);
+void riscv_pmu_ctr_dying_cpu(void);
+void riscv_pmu_ctr_starting_cpu(void);
+int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu);
+void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu);
+
+#else
+
+static inline bool riscv_pmu_ctr_valid(struct perf_event *event) { return false; }
+static inline void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc,
+					 struct perf_event *event) { }
+static inline void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *,
+					    bool sched_in) { }
+static inline void riscv_pmu_ctr_add(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_del(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_enable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_disable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_dying_cpu(void) { }
+static inline void riscv_pmu_ctr_starting_cpu(void) { }
+static inline int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) { return 0; }
+static inline void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) { }
+
+#endif /* CONFIG_RISCV_CTR */
+
 #endif /* _RISCV_PMU_H */

-- 
2.43.0
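[Editor's illustration, not part of the patch: the header hunk above only
declares the CTR entry points. The sketch below shows one plausible order in
which a PMU driver could call them for an event that requests a branch stack;
the wrapper names driver_event_add(), driver_overflow() and driver_event_del()
are hypothetical and exist only to frame the calls introduced here.]

/* Hypothetical driver hooks, for illustration only. */
static int driver_event_add(struct perf_event *event)
{
	/* Reject branch filters the hardware cannot honour. */
	if (!riscv_pmu_ctr_valid(event))
		return -EOPNOTSUPP;

	riscv_pmu_ctr_add(event);	/* account a CTR user, reset the unit */
	riscv_pmu_ctr_enable(event);	/* program CTRCTL from branch_sample_type */
	return 0;
}

static void driver_overflow(struct cpu_hw_events *cpuc, struct perf_event *event)
{
	/* Drain the recorded branches into cpuc->branches for this sample. */
	riscv_pmu_ctr_consume(cpuc, event);
}

static void driver_event_del(struct perf_event *event)
{
	riscv_pmu_ctr_disable(event);	/* stop recording */
	riscv_pmu_ctr_del(event);	/* drop the CTR user */
}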

From nobody Mon Feb 9 11:59:13 2026
From: Rajnesh Kanwal
Date: Fri, 23 May 2025 00:25:12 +0100
Subject: [PATCH v3 6/7] riscv: pmu: Integrate CTR Ext support in riscv_pmu_dev driver
Message-Id: <20250523-b4-ctr_upstream_v3-v3-6-ad355304ba1c@rivosinc.com>
References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com>
In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com>
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
    Atish Kumar Patra, Anup Patel, Will Deacon, Rob Herring,
    Krzysztof Kozlowski, Conor Dooley, Beeman Strong
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    Palmer Dabbelt, Conor Dooley, devicetree@vger.kernel.org, Rajnesh Kanwal

This integrates the recently added CTR extension support into the
riscv_pmu_dev driver to enable branch stack sampling using PMU events.

It mainly adds CTR enable/disable callbacks in rvpmu_ctr_start() and
rvpmu_ctr_stop() to start/stop branch recording along with the event.

The PMU overflow handler rvpmu_ovf_handler() is also updated to sample
CTR entries when an overflow occurs for an event programmed to record
branches. The recorded entries are fed to core perf for further
processing.
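[Editor's note, for illustration only and not part of this patch: the change
above is exercised by any perf event that requests a branch stack. A minimal
userspace sketch of such an event is shown below; the event type, sample
period and branch filter choices are arbitrary examples, and error handling
is omitted.]

/* Open a cycles event that also asks the kernel for a branch stack. */
#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_branch_sampling_event(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	/* Any user-space branch; the driver maps this via branch_type_to_ctr(). */
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_USER |
				  PERF_SAMPLE_BRANCH_ANY;
	attr.exclude_kernel = 1;

	/* Measure the calling thread on any CPU. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}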
Signed-off-by: Rajnesh Kanwal
---
 drivers/perf/riscv_pmu_common.c |  3 +-
 drivers/perf/riscv_pmu_dev.c    | 67 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 67 insertions(+), 3 deletions(-)

diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c
index b2dc78cbbb93926964f81f30be9ef4a1c02501df..0b032b8d8762e77d2b553643b0f9064e7c789cfe 100644
--- a/drivers/perf/riscv_pmu_common.c
+++ b/drivers/perf/riscv_pmu_common.c
@@ -329,8 +329,7 @@ static int riscv_pmu_event_init(struct perf_event *event)
 	u64 event_config = 0;
 	uint64_t cmask;
 
-	/* driver does not support branch stack sampling */
-	if (has_branch_stack(event))
+	if (needs_branch_stack(event) && !riscv_pmu_ctr_supported(rvpmu))
 		return -EOPNOTSUPP;
 
 	hwc->flags = 0;
diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index 95e6dd272db69f53b679e5fc3450785e45d5e8b9..b0c616fb939fcc61f7493877a8801916069f16f7 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -1038,7 +1038,7 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
 			   bool sched_in)
 {
-	/* Call CTR specific Sched hook. */
+	riscv_pmu_ctr_sched_task(pmu_ctx, sched_in);
 }
 
 static int rvpmu_sbi_find_num_ctrs(void)
@@ -1370,6 +1370,13 @@ static irqreturn_t rvpmu_ovf_handler(int irq, void *dev)
 		hw_evt->state |= PERF_HES_UPTODATE;
 		perf_sample_data_init(&data, 0, hw_evt->last_period);
 		if (riscv_pmu_event_set_period(event)) {
+			if (needs_branch_stack(event)) {
+				riscv_pmu_ctr_consume(cpu_hw_evt, event);
+				perf_sample_save_brstack(
+					&data, event,
+					&cpu_hw_evt->branches->branch_stack, NULL);
+			}
+
 			/*
 			 * Unlike other ISAs, RISC-V don't have to disable interrupts
 			 * to avoid throttling here. As per the specification, the
@@ -1569,16 +1576,23 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_event *event)
 
 static void rvpmu_ctr_add(struct perf_event *event, int flags)
 {
+	if (needs_branch_stack(event))
+		riscv_pmu_ctr_add(event);
 }
 
 static void rvpmu_ctr_del(struct perf_event *event, int flags)
 {
+	if (needs_branch_stack(event))
+		riscv_pmu_ctr_del(event);
 }
 
 static void rvpmu_ctr_start(struct perf_event *event, u64 ival)
 {
 	struct hw_perf_event *hwc = &event->hw;
 
+	if (needs_branch_stack(event))
+		riscv_pmu_ctr_enable(event);
+
 	if (riscv_pmu_cdeleg_available() && !pmu_sbi_is_fw_event(event))
 		rvpmu_deleg_ctr_start(event, ival);
 	else
@@ -1593,6 +1607,9 @@ static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag)
 {
 	struct hw_perf_event *hwc = &event->hw;
 
+	if (needs_branch_stack(event) && flag != RISCV_PMU_STOP_FLAG_RESET)
+		riscv_pmu_ctr_disable(event);
+
 	if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) &&
 	    (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
 		rvpmu_reset_scounteren((void *)event);
@@ -1650,6 +1667,9 @@ static u32 rvpmu_find_ctrs(void)
 
 static int rvpmu_event_map(struct perf_event *event, u64 *econfig)
 {
+	if (needs_branch_stack(event) && !riscv_pmu_ctr_valid(event))
+		return -EOPNOTSUPP;
+
 	if (riscv_pmu_cdeleg_available() && !pmu_sbi_is_fw_event(event))
 		return rvpmu_cdeleg_event_map(event, econfig);
 	else
@@ -1696,6 +1716,8 @@ static int rvpmu_starting_cpu(unsigned int cpu, struct hlist_node *node)
 		enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE);
 	}
 
+	riscv_pmu_ctr_starting_cpu();
+
 	if (sbi_pmu_snapshot_available())
 		return pmu_sbi_snapshot_setup(pmu, cpu);
 
@@ -1710,6 +1732,7 @@ static int rvpmu_dying_cpu(unsigned int cpu, struct hlist_node *node)
 
 	/* Disable all counters access for user mode now */
 	csr_write(CSR_SCOUNTEREN, 0x0);
+	riscv_pmu_ctr_dying_cpu();
 
 	if (sbi_pmu_snapshot_available())
 		return pmu_sbi_snapshot_disable();
@@ -1833,6 +1856,29 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu)
 	cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
 }
 
+static int branch_records_alloc(struct riscv_pmu *pmu)
+{
+	struct branch_records __percpu *tmp_alloc_ptr;
+	struct branch_records *records;
+	struct cpu_hw_events *events;
+	int cpu;
+
+	if (!riscv_pmu_ctr_supported(pmu))
+		return 0;
+
+	tmp_alloc_ptr = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
+	if (!tmp_alloc_ptr)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu) {
+		events = per_cpu_ptr(pmu->hw_events, cpu);
+		records = per_cpu_ptr(tmp_alloc_ptr, cpu);
+		events->branches = records;
+	}
+
+	return 0;
+}
+
 static void rvpmu_event_init(struct perf_event *event)
 {
 	/*
@@ -1845,6 +1891,9 @@ static void rvpmu_event_init(struct perf_event *event)
 		event->hw.flags |= PERF_EVENT_FLAG_USER_ACCESS;
 	else
 		event->hw.flags |= PERF_EVENT_FLAG_LEGACY;
+
+	if (branch_sample_call_stack(event))
+		event->attach_state |= PERF_ATTACH_TASK_DATA;
 }
 
 static void rvpmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
@@ -1992,6 +2041,15 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 		pmu->pmu.attr_groups = riscv_cdeleg_pmu_attr_groups;
 	else
 		pmu->pmu.attr_groups = riscv_sbi_pmu_attr_groups;
+
+	ret = riscv_pmu_ctr_init(pmu);
+	if (ret)
+		goto out_free;
+
+	ret = branch_records_alloc(pmu);
+	if (ret)
+		goto out_ctr_finish;
+
 	pmu->cmask = cmask;
 	pmu->ctr_add = rvpmu_ctr_add;
 	pmu->ctr_del = rvpmu_ctr_del;
@@ -2008,6 +2066,10 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	pmu->csr_index = rvpmu_csr_index;
 	pmu->sched_task = pmu_sched_task;
 
+	ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
+	if (ret)
+		goto out_ctr_finish;
+
 	ret = riscv_pm_pmu_register(pmu);
 	if (ret)
 		goto out_unregister;
@@ -2057,6 +2119,9 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 out_unregister:
 	riscv_pmu_destroy(pmu);
 
+out_ctr_finish:
+	riscv_pmu_ctr_finish(pmu);
+
 out_free:
 	kfree(pmu);
 	return ret;

-- 
2.43.0

From nobody Mon Feb 9 11:59:13 2026
From: Rajnesh Kanwal
Date: Fri, 23 May 2025 00:25:13 +0100
Subject: [PATCH v3 7/7] dt-bindings: riscv: add Sxctr ISA extension description
Message-Id: <20250523-b4-ctr_upstream_v3-v3-7-ad355304ba1c@rivosinc.com>
References: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com>
In-Reply-To: <20250523-b4-ctr_upstream_v3-v3-0-ad355304ba1c@rivosinc.com>
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
    Atish Kumar Patra, Anup Patel, Will Deacon, Rob Herring,
    Krzysztof Kozlowski, Conor Dooley, Beeman Strong
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    Palmer Dabbelt, Conor Dooley, devicetree@vger.kernel.org, Rajnesh Kanwal

Add the S[m|s]ctr ISA extension description.
Signed-off-by: Rajnesh Kanwal
Reviewed-by: Conor Dooley
---
 .../devicetree/bindings/riscv/extensions.yaml | 28 ++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
index f34bc66940c06bf9b3c18fcd7cce7bfd0593cd28..193751400933ca3fe69e0b2bc03e9c635e2db244 100644
--- a/Documentation/devicetree/bindings/riscv/extensions.yaml
+++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
@@ -149,6 +149,13 @@ properties:
             to enable privilege mode filtering for cycle and instret counters
             as ratified in the 20240326 version of the privileged ISA specification.
 
+        - const: smctr
+          description: |
+            The standard Smctr supervisor-level extension for the machine mode
+            to enable recording limited branch history in a register-accessible
+            internal core storage as ratified at commit 9c87013 ("Merge pull
+            request #44 from riscv/issue-42-fix") of riscv-control-transfer-records.
+
         - const: smmpm
           description: |
             The standard Smmpm extension for M-mode pointer masking as
@@ -196,6 +203,13 @@ properties:
             and mode-based filtering as ratified at commit 01d1df0 ("Add ability
             to manually trigger workflow. (#2)") of riscv-count-overflow.
 
+        - const: ssctr
+          description: |
+            The standard Ssctr supervisor-level extension for recording limited
+            branch history in a register-accessible internal core storage as
+            ratified at commit 9c87013 ("Merge pull request #44 from
+            riscv/issue-42-fix") of riscv-control-transfer-records.
+
         - const: ssnpm
           description: |
             The standard Ssnpm extension for next-mode pointer masking as
@@ -740,6 +754,20 @@ properties:
             const: zihpm
         - contains:
             const: zicntr
+      # Smctr depends on Sscsrind
+      - if:
+          contains:
+            const: smctr
+        then:
+          contains:
+            const: sscsrind
+      # Ssctr depends on Sscsrind
+      - if:
+          contains:
+            const: ssctr
+        then:
+          contains:
+            const: sscsrind
 
 allOf:
   # Zcf extension does not exist on rv64
-- 
2.43.0