From nobody Mon Apr 6 11:23:55 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BD69ECAAD5 for ; Thu, 8 Sep 2022 05:11:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229731AbiIHFLK (ORCPT ); Thu, 8 Sep 2022 01:11:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46156 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229498AbiIHFLF (ORCPT ); Thu, 8 Sep 2022 01:11:05 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E251DB655E; Wed, 7 Sep 2022 22:11:02 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 03C8E106F; Wed, 7 Sep 2022 22:11:09 -0700 (PDT) Received: from a077893.blr.arm.com (unknown [10.162.41.8]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 57E1A3F7B4; Wed, 7 Sep 2022 22:10:57 -0700 (PDT) From: Anshuman Khandual To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com Cc: Anshuman Khandual , James Clark , Rob Herring , Marc Zyngier , Ingo Molnar Subject: [PATCH V2 1/7] arm64/perf: Add register definitions for BRBE Date: Thu, 8 Sep 2022 10:40:40 +0530 Message-Id: <20220908051046.465307-2-anshuman.khandual@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com> References: <20220908051046.465307-1-anshuman.khandual@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" This adds 
BRBE related register definitions and various other related field macros
therein. These will be used subsequently by the BRBE driver being added
later in this series.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/include/asm/sysreg.h | 222 ++++++++++++++++++++++++++++++++
 1 file changed, 222 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7c71358d44c4..66b031e6f671 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -161,6 +161,224 @@
 #define SYS_DBGDTRTX_EL0	sys_reg(2, 3, 0, 5, 0)
 #define SYS_DBGVCR32_EL2	sys_reg(2, 4, 0, 7, 0)
 
+/*
+ * BRBINF_EL1 Encoding: [2, 1, 8, CRm, op2]
+ *
+ * derived as = c{N<3:0>} = (N<4>x4 + 0)
+ */
+#define __SYS_BRBINFO(n)	sys_reg(2, 1, 8, ((n) & 0xf), (((n) & 0x10) >> 2))
+
+#define SYS_BRBINF0_EL1		__SYS_BRBINFO(0)
+#define SYS_BRBINF1_EL1		__SYS_BRBINFO(1)
+#define SYS_BRBINF2_EL1		__SYS_BRBINFO(2)
+#define SYS_BRBINF3_EL1		__SYS_BRBINFO(3)
+#define SYS_BRBINF4_EL1		__SYS_BRBINFO(4)
+#define SYS_BRBINF5_EL1		__SYS_BRBINFO(5)
+#define SYS_BRBINF6_EL1		__SYS_BRBINFO(6)
+#define SYS_BRBINF7_EL1		__SYS_BRBINFO(7)
+#define SYS_BRBINF8_EL1		__SYS_BRBINFO(8)
+#define SYS_BRBINF9_EL1		__SYS_BRBINFO(9)
+#define SYS_BRBINF10_EL1	__SYS_BRBINFO(10)
+#define SYS_BRBINF11_EL1	__SYS_BRBINFO(11)
+#define SYS_BRBINF12_EL1	__SYS_BRBINFO(12)
+#define SYS_BRBINF13_EL1	__SYS_BRBINFO(13)
+#define SYS_BRBINF14_EL1	__SYS_BRBINFO(14)
+#define SYS_BRBINF15_EL1	__SYS_BRBINFO(15)
+#define SYS_BRBINF16_EL1	__SYS_BRBINFO(16)
+#define SYS_BRBINF17_EL1	__SYS_BRBINFO(17)
+#define SYS_BRBINF18_EL1	__SYS_BRBINFO(18)
+#define SYS_BRBINF19_EL1	__SYS_BRBINFO(19)
+#define SYS_BRBINF20_EL1	__SYS_BRBINFO(20)
+#define SYS_BRBINF21_EL1	__SYS_BRBINFO(21)
+#define SYS_BRBINF22_EL1	__SYS_BRBINFO(22)
+#define SYS_BRBINF23_EL1	__SYS_BRBINFO(23)
+#define SYS_BRBINF24_EL1	__SYS_BRBINFO(24)
+#define SYS_BRBINF25_EL1	__SYS_BRBINFO(25)
+#define SYS_BRBINF26_EL1	__SYS_BRBINFO(26)
+#define SYS_BRBINF27_EL1	__SYS_BRBINFO(27)
+#define SYS_BRBINF28_EL1	__SYS_BRBINFO(28)
+#define SYS_BRBINF29_EL1	__SYS_BRBINFO(29)
+#define SYS_BRBINF30_EL1	__SYS_BRBINFO(30)
+#define SYS_BRBINF31_EL1	__SYS_BRBINFO(31)
+
+/*
+ * BRBSRC_EL1 Encoding: [2, 1, 8, CRm, op2]
+ *
+ * derived as = c{N<3:0>} = (N<4>x4 + 1)
+ */
+#define __SYS_BRBSRC(n)		sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 1))
+
+#define SYS_BRBSRC0_EL1		__SYS_BRBSRC(0)
+#define SYS_BRBSRC1_EL1		__SYS_BRBSRC(1)
+#define SYS_BRBSRC2_EL1		__SYS_BRBSRC(2)
+#define SYS_BRBSRC3_EL1		__SYS_BRBSRC(3)
+#define SYS_BRBSRC4_EL1		__SYS_BRBSRC(4)
+#define SYS_BRBSRC5_EL1		__SYS_BRBSRC(5)
+#define SYS_BRBSRC6_EL1		__SYS_BRBSRC(6)
+#define SYS_BRBSRC7_EL1		__SYS_BRBSRC(7)
+#define SYS_BRBSRC8_EL1		__SYS_BRBSRC(8)
+#define SYS_BRBSRC9_EL1		__SYS_BRBSRC(9)
+#define SYS_BRBSRC10_EL1	__SYS_BRBSRC(10)
+#define SYS_BRBSRC11_EL1	__SYS_BRBSRC(11)
+#define SYS_BRBSRC12_EL1	__SYS_BRBSRC(12)
+#define SYS_BRBSRC13_EL1	__SYS_BRBSRC(13)
+#define SYS_BRBSRC14_EL1	__SYS_BRBSRC(14)
+#define SYS_BRBSRC15_EL1	__SYS_BRBSRC(15)
+#define SYS_BRBSRC16_EL1	__SYS_BRBSRC(16)
+#define SYS_BRBSRC17_EL1	__SYS_BRBSRC(17)
+#define SYS_BRBSRC18_EL1	__SYS_BRBSRC(18)
+#define SYS_BRBSRC19_EL1	__SYS_BRBSRC(19)
+#define SYS_BRBSRC20_EL1	__SYS_BRBSRC(20)
+#define SYS_BRBSRC21_EL1	__SYS_BRBSRC(21)
+#define SYS_BRBSRC22_EL1	__SYS_BRBSRC(22)
+#define SYS_BRBSRC23_EL1	__SYS_BRBSRC(23)
+#define SYS_BRBSRC24_EL1	__SYS_BRBSRC(24)
+#define SYS_BRBSRC25_EL1	__SYS_BRBSRC(25)
+#define SYS_BRBSRC26_EL1	__SYS_BRBSRC(26)
+#define SYS_BRBSRC27_EL1	__SYS_BRBSRC(27)
+#define SYS_BRBSRC28_EL1	__SYS_BRBSRC(28)
+#define SYS_BRBSRC29_EL1	__SYS_BRBSRC(29)
+#define SYS_BRBSRC30_EL1	__SYS_BRBSRC(30)
+#define SYS_BRBSRC31_EL1	__SYS_BRBSRC(31)
+
+/*
+ * BRBTGT_EL1 Encoding: [2, 1, 8, CRm, op2]
+ *
+ * derived as = c{N<3:0>} = (N<4>x4 + 2)
+ */
+#define __SYS_BRBTGT(n)		sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 2))
+
+#define SYS_BRBTGT0_EL1		__SYS_BRBTGT(0)
+#define SYS_BRBTGT1_EL1		__SYS_BRBTGT(1)
+#define SYS_BRBTGT2_EL1		__SYS_BRBTGT(2)
+#define SYS_BRBTGT3_EL1		__SYS_BRBTGT(3)
+#define SYS_BRBTGT4_EL1		__SYS_BRBTGT(4)
+#define SYS_BRBTGT5_EL1		__SYS_BRBTGT(5)
+#define SYS_BRBTGT6_EL1		__SYS_BRBTGT(6)
+#define SYS_BRBTGT7_EL1		__SYS_BRBTGT(7)
+#define SYS_BRBTGT8_EL1		__SYS_BRBTGT(8)
+#define SYS_BRBTGT9_EL1		__SYS_BRBTGT(9)
+#define SYS_BRBTGT10_EL1	__SYS_BRBTGT(10)
+#define SYS_BRBTGT11_EL1	__SYS_BRBTGT(11)
+#define SYS_BRBTGT12_EL1	__SYS_BRBTGT(12)
+#define SYS_BRBTGT13_EL1	__SYS_BRBTGT(13)
+#define SYS_BRBTGT14_EL1	__SYS_BRBTGT(14)
+#define SYS_BRBTGT15_EL1	__SYS_BRBTGT(15)
+#define SYS_BRBTGT16_EL1	__SYS_BRBTGT(16)
+#define SYS_BRBTGT17_EL1	__SYS_BRBTGT(17)
+#define SYS_BRBTGT18_EL1	__SYS_BRBTGT(18)
+#define SYS_BRBTGT19_EL1	__SYS_BRBTGT(19)
+#define SYS_BRBTGT20_EL1	__SYS_BRBTGT(20)
+#define SYS_BRBTGT21_EL1	__SYS_BRBTGT(21)
+#define SYS_BRBTGT22_EL1	__SYS_BRBTGT(22)
+#define SYS_BRBTGT23_EL1	__SYS_BRBTGT(23)
+#define SYS_BRBTGT24_EL1	__SYS_BRBTGT(24)
+#define SYS_BRBTGT25_EL1	__SYS_BRBTGT(25)
+#define SYS_BRBTGT26_EL1	__SYS_BRBTGT(26)
+#define SYS_BRBTGT27_EL1	__SYS_BRBTGT(27)
+#define SYS_BRBTGT28_EL1	__SYS_BRBTGT(28)
+#define SYS_BRBTGT29_EL1	__SYS_BRBTGT(29)
+#define SYS_BRBTGT30_EL1	__SYS_BRBTGT(30)
+#define SYS_BRBTGT31_EL1	__SYS_BRBTGT(31)
+
+#define SYS_BRBIDR0_EL1		sys_reg(2, 1, 9, 2, 0)
+#define SYS_BRBCR_EL1		sys_reg(2, 1, 9, 0, 0)
+#define SYS_BRBFCR_EL1		sys_reg(2, 1, 9, 0, 1)
+#define SYS_BRBTS_EL1		sys_reg(2, 1, 9, 0, 2)
+#define SYS_BRBINFINJ_EL1	sys_reg(2, 1, 9, 1, 0)
+#define SYS_BRBSRCINJ_EL1	sys_reg(2, 1, 9, 1, 1)
+#define SYS_BRBTGTINJ_EL1	sys_reg(2, 1, 9, 1, 2)
+
+#define BRBIDR0_CC_SHIFT	12
+#define BRBIDR0_CC_MASK		GENMASK(3, 0)
+#define BRBIDR0_FORMAT_SHIFT	8
+#define BRBIDR0_FORMAT_MASK	GENMASK(3, 0)
+#define BRBIDR0_NUMREC_SHIFT	0
+#define BRBIDR0_NUMREC_MASK	GENMASK(7, 0)
+
+#define BRBIDR0_CC_20_BIT	0x5
+#define BRBIDR0_FORMAT_0	0x0
+
+#define BRBIDR0_NUMREC_8	0x08
+#define BRBIDR0_NUMREC_16	0x10
+#define BRBIDR0_NUMREC_32	0x20
+#define BRBIDR0_NUMREC_64	0x40
+
+#define BRBINF_VALID_SHIFT	0
+#define BRBINF_VALID_MASK	GENMASK(1, 0)
+#define BRBINF_MPRED		(1UL << 5)
+#define BRBINF_EL_SHIFT		6
+#define BRBINF_EL_MASK		GENMASK(1, 0)
+#define BRBINF_TYPE_SHIFT	8
+#define BRBINF_TYPE_MASK	GENMASK(5, 0)
+#define BRBINF_TX		(1UL << 16)
+#define BRBINF_LASTFAILED	(1UL << 17)
+#define BRBINF_CC_SHIFT		32
+#define BRBINF_CC_MASK		GENMASK(13, 0)
+#define BRBINF_CCU		(1UL << 46)
+
+#define BRBINF_EL_EL0		0x0
+#define BRBINF_EL_EL1		0x1
+#define BRBINF_EL_EL2		0x2
+
+#define BRBINF_VALID_INVALID	0x0
+#define BRBINF_VALID_TARGET	0x1
+#define BRBINF_VALID_SOURCE	0x2
+#define BRBINF_VALID_ALL	0x3
+
+#define BRBINF_TYPE_UNCOND_DIR	0x0
+#define BRBINF_TYPE_INDIR	0x1
+#define BRBINF_TYPE_DIR_LINK	0x2
+#define BRBINF_TYPE_INDIR_LINK	0x3
+#define BRBINF_TYPE_RET_SUB	0x5
+#define BRBINF_TYPE_RET_EXCPT	0x7
+#define BRBINF_TYPE_COND_DIR	0x8
+#define BRBINF_TYPE_DEBUG_HALT	0x21
+#define BRBINF_TYPE_CALL	0x22
+#define BRBINF_TYPE_TRAP	0x23
+#define BRBINF_TYPE_SERROR	0x24
+#define BRBINF_TYPE_INST_DEBUG	0x26
+#define BRBINF_TYPE_DATA_DEBUG	0x27
+#define BRBINF_TYPE_ALGN_FAULT	0x2A
+#define BRBINF_TYPE_INST_FAULT	0x2B
+#define BRBINF_TYPE_DATA_FAULT	0x2C
+#define BRBINF_TYPE_IRQ		0x2E
+#define BRBINF_TYPE_FIQ		0x2F
+#define BRBINF_TYPE_DEBUG_EXIT	0x39
+
+#define BRBCR_E0BRE		(1UL << 0)
+#define BRBCR_E1BRE		(1UL << 1)
+#define BRBCR_CC		(1UL << 3)
+#define BRBCR_MPRED		(1UL << 4)
+#define BRBCR_FZP		(1UL << 8)
+#define BRBCR_ERTN		(1UL << 22)
+#define BRBCR_EXCEPTION		(1UL << 23)
+#define BRBCR_TS_MASK		GENMASK(1, 0)
+#define BRBCR_TS_SHIFT		5
+
+#define BRBCR_TS_VIRTUAL	0x1
+#define BRBCR_TS_GST_PHYSICAL	0x2
+#define BRBCR_TS_PHYSICAL	0x3
+
+#define BRBFCR_LASTFAILED	(1UL << 6)
+#define BRBFCR_PAUSED		(1UL << 7)
+#define BRBFCR_ENL		(1UL << 16)
+#define BRBFCR_DIRECT		(1UL << 17)
+#define BRBFCR_INDIRECT		(1UL << 18)
+#define BRBFCR_RTN		(1UL << 19)
+#define BRBFCR_INDCALL		(1UL << 20)
+#define BRBFCR_DIRCALL		(1UL << 21)
+#define BRBFCR_CONDDIR		(1UL << 22)
+#define BRBFCR_BANK_MASK	GENMASK(1, 0)
+#define BRBFCR_BANK_SHIFT	28
+
+#define BRBFCR_BANK_FIRST	0x0
+#define BRBFCR_BANK_SECOND	0x1
+
+#define BRBFCR_BRANCH_ALL	(BRBFCR_DIRECT | BRBFCR_INDIRECT | \
+				 BRBFCR_RTN | BRBFCR_INDCALL | \
+				 BRBFCR_DIRCALL | BRBFCR_CONDDIR)
+
 #define SYS_MIDR_EL1		sys_reg(3, 0, 0, 0, 0)
 #define SYS_MPIDR_EL1		sys_reg(3, 0, 0, 0, 5)
 #define SYS_REVIDR_EL1		sys_reg(3, 0, 0, 0, 6)
@@ -826,6 +1044,7 @@
 #define ID_AA64MMFR2_CNP_SHIFT		0
 
 /* id_aa64dfr0 */
+#define ID_AA64DFR0_BRBE_SHIFT		52
 #define ID_AA64DFR0_MTPMU_SHIFT		48
 #define ID_AA64DFR0_TRBE_SHIFT		44
 #define ID_AA64DFR0_TRACE_FILT_SHIFT	40
@@ -848,6 +1067,9 @@
 #define ID_AA64DFR0_PMSVER_8_2		0x1
 #define ID_AA64DFR0_PMSVER_8_3		0x2
 
+#define ID_AA64DFR0_BRBE		0x1
+#define ID_AA64DFR0_BRBE_V1P1		0x2
+
 #define ID_DFR0_PERFMON_SHIFT		24
 
 #define ID_DFR0_PERFMON_8_0		0x3
-- 
2.25.1

From nobody Mon Apr 6 11:23:55 2026
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, James Clark, Rob Herring, Marc Zyngier, Ingo Molnar
Subject: [PATCH V2 2/7] arm64/perf: Update struct arm_pmu for BRBE
Date: Thu, 8 Sep 2022 10:40:41 +0530
Message-Id: <20220908051046.465307-3-anshuman.khandual@arm.com>
In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com>
References: <20220908051046.465307-1-anshuman.khandual@arm.com>

Although BRBE is an armv8 specific HW feature, abstracting out its various
function callbacks at the struct arm_pmu level is preferred, as it is
cleaner and easier to follow and maintain. Besides, some helpers such as
brbe_supported(), brbe_probe() and brbe_reset() might not fit seamlessly
if embedded via the existing arm_pmu helpers in the armv8 implementation.

This updates struct arm_pmu to include all the helpers required to drive
BRBE functionality for a given PMU implementation. These are the following.

- brbe_filter	: Convert perf event filters into BRBE HW filters
- brbe_probe	: Probe BRBE HW and capture its attributes
- brbe_enable	: Enable BRBE HW with a given config
- brbe_disable	: Disable BRBE HW
- brbe_read	: Read BRBE buffer for captured branch records
- brbe_reset	: Reset BRBE buffer
- brbe_supported: Whether BRBE is supported or not

A BRBE driver implementation needs to provide these functionalities.
Cc: Will Deacon
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Catalin Marinas
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c | 36 ++++++++++++++++++++++++++++++++++
 include/linux/perf/arm_pmu.h   | 21 ++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index cb69ff1e6138..e7013699171f 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1025,6 +1025,35 @@ static int armv8pmu_filter_match(struct perf_event *event)
 	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
 }
 
+static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event)
+{
+}
+
+static bool armv8pmu_brbe_supported(struct perf_event *event)
+{
+	return false;
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1257,6 +1286,13 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_user_event_idx;
 
+	cpu_pmu->brbe_filter		= armv8pmu_brbe_filter;
+	cpu_pmu->brbe_enable		= armv8pmu_brbe_enable;
+	cpu_pmu->brbe_disable		= armv8pmu_brbe_disable;
+	cpu_pmu->brbe_read		= armv8pmu_brbe_read;
+	cpu_pmu->brbe_probe		= armv8pmu_brbe_probe;
+	cpu_pmu->brbe_reset		= armv8pmu_brbe_reset;
+	cpu_pmu->brbe_supported		= armv8pmu_brbe_supported;
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS]	= events ?
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0407a38b470a..3d427ac0ca45 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -100,6 +100,27 @@ struct arm_pmu {
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
 	int		(*filter_match)(struct perf_event *event);
+
+	/* Convert perf event filters into BRBE HW filters */
+	void		(*brbe_filter)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Probe BRBE HW and capture its attributes */
+	void		(*brbe_probe)(struct pmu_hw_events *hw_events);
+
+	/* Enable BRBE HW with a given config */
+	void		(*brbe_enable)(struct pmu_hw_events *hw_events);
+
+	/* Disable BRBE HW */
+	void		(*brbe_disable)(struct pmu_hw_events *hw_events);
+
+	/* Process BRBE buffer for captured branch records */
+	void		(*brbe_read)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Reset BRBE buffer */
+	void		(*brbe_reset)(struct pmu_hw_events *hw_events);
+
+	/* Check whether BRBE is supported */
+	bool		(*brbe_supported)(struct perf_event *event);
 	int		num_events;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40
-- 
2.25.1

From nobody Mon Apr 6 11:23:55 2026
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, James Clark, Rob Herring, Marc Zyngier, Ingo Molnar
Subject: [PATCH V2 3/7] arm64/perf: Update struct pmu_hw_events for BRBE
Date: Thu, 8 Sep 2022 10:40:42 +0530
Message-Id: <20220908051046.465307-4-anshuman.khandual@arm.com>
In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com>
References: <20220908051046.465307-1-anshuman.khandual@arm.com>

BRBE related contexts and data for a single perf event instance will be
tracked in struct pmu_hw_events. Hence update the structure to accommodate
the required BRBE details.

Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Reported-by: kernel test robot
---
 include/linux/perf/arm_pmu.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 3d427ac0ca45..18e519e4e658 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -43,6 +43,11 @@
 	}, \
 }
 
+/*
+ * Maximum branch records in BRBE
+ */
+#define BRBE_MAX_ENTRIES	64
+
 /* The events for a given PMU register set.
 */
 struct pmu_hw_events {
 	/*
@@ -69,6 +74,23 @@ struct pmu_hw_events {
 	struct arm_pmu		*percpu_pmu;
 
 	int			irq;
+
+	/* Detected BRBE attributes */
+	bool				v1p1;
+	int				brbe_cc;
+	int				brbe_nr;
+
+	/* Evaluated BRBE configuration */
+	u64				brbfcr;
+	u64				brbcr;
+
+	/* Tracked BRBE context */
+	unsigned int			brbe_users;
+	void				*brbe_context;
+
+	/* Captured BRBE buffer - copied as is into perf_sample_data */
+	struct perf_branch_stack	brbe_stack;
+	struct perf_branch_entry	brbe_entries[BRBE_MAX_ENTRIES];
 };
 
 enum armpmu_attr_groups {
-- 
2.25.1

From nobody Mon Apr 6 11:23:55 2026
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, James Clark, Rob Herring, Marc Zyngier, Ingo Molnar
Subject: [PATCH V2 4/7] driver/perf/arm_pmu_platform: Add support for BRBE attributes detection
Date: Thu, 8 Sep 2022 10:40:43 +0530
Message-Id: <20220908051046.465307-5-anshuman.khandual@arm.com>
In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com>
References: <20220908051046.465307-1-anshuman.khandual@arm.com>

This adds arm pmu infrastructure to probe a BRBE implementation's
attributes via callbacks exported by the driver later. The actual BRBE
feature detection will be added by the driver itself.

CPU specific BRBE entries, cycle count and format support get detected
during PMU init. This information gets saved in the per-cpu struct
pmu_hw_events, which later helps in operating BRBE during a perf event
context.

Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu_platform.c | 34 +++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 513de1f54e2d..800e4a6e8bc3 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -172,6 +172,36 @@ static int armpmu_request_irqs(struct arm_pmu *armpmu)
 	return err;
 }
 
+static void arm_brbe_probe_cpu(void *info)
+{
+	struct pmu_hw_events *hw_events;
+	struct arm_pmu *armpmu = info;
+
+	/*
+	 * Return from here, if BRBE driver has not been
+	 * implemented for this PMU. This helps prevent
+	 * kernel crash later when brbe_probe() will be
+	 * called on the PMU.
+	 */
+	if (!armpmu->brbe_probe)
+		return;
+
+	hw_events = per_cpu_ptr(armpmu->hw_events, smp_processor_id());
+	armpmu->brbe_probe(hw_events);
+}
+
+static int armpmu_request_brbe(struct arm_pmu *armpmu)
+{
+	int cpu, err = 0;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
+		err = smp_call_function_single(cpu, arm_brbe_probe_cpu, armpmu, 1);
+		if (err)
+			return err;
+	}
+	return err;
+}
+
 static void armpmu_free_irqs(struct arm_pmu *armpmu)
 {
 	int cpu;
@@ -229,6 +259,10 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 	if (ret)
 		goto out_free_irqs;
 
+	ret = armpmu_request_brbe(pmu);
+	if (ret)
+		goto out_free_irqs;
+
 	ret = armpmu_register(pmu);
 	if (ret) {
 		dev_err(dev, "failed to register PMU devices!\n");
-- 
2.25.1

From nobody Mon Apr 6 11:23:55 2026
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, James Clark, Rob Herring, Marc Zyngier, Ingo Molnar
Subject: [PATCH V2 5/7] arm64/perf: Drive BRBE from perf event states
Date: Thu, 8 Sep 2022 10:40:44 +0530
Message-Id: <20220908051046.465307-6-anshuman.khandual@arm.com>
In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com>
References: <20220908051046.465307-1-anshuman.khandual@arm.com>

Branch stack sampling rides along the normal perf event and all the branch
records get captured during the PMU interrupt. This just changes perf event
handling on the arm64 platform to accommodate the required BRBE operations
that will enable branch stack sampling support.

It adds a new 'hw_perf_event.flags' element, i.e. ARMPMU_EVT_PRIV, which
enables caching the perf event privilege information required for capturing
some branch record types.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c |  6 ++++
 drivers/perf/arm_pmu.c         | 50 ++++++++++++++++++++++++++++++++++
 include/linux/perf/arm_pmu.h   |  4 +++
 3 files changed, 60 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index e7013699171f..5bfaba8edad1 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -874,6 +874,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		if (has_branch_stack(event)) {
+			cpu_pmu->brbe_read(cpuc, event);
+			data.br_stack = &cpuc->brbe_stack;
+			cpu_pmu->brbe_reset(cpuc);
+		}
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 59d3980b8ca2..1fe5d6238b81 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -271,12 +271,22 @@ armpmu_stop(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to update the counter, so ignore
 	 * PERF_EF_UPDATE, see comments in armpmu_start().
 	 */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
+		if (has_branch_stack(event)) {
+			WARN_ON_ONCE(!hw_events->brbe_users);
+			hw_events->brbe_users--;
+			if (!hw_events->brbe_users) {
+				hw_events->brbe_context = NULL;
+				armpmu->brbe_disable(hw_events);
+			}
+		}
+
 		armpmu->disable(event);
 		armpmu_event_update(event);
 		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
@@ -287,6 +297,7 @@ static void armpmu_start(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to reprogram the period, so ignore
@@ -304,6 +315,14 @@ static void armpmu_start(struct perf_event *event, int flags)
 	 * happened since disabling.
 	 */
 	armpmu_event_set_period(event);
+	if (has_branch_stack(event)) {
+		if (event->ctx->task && hw_events->brbe_context != event->ctx) {
+			armpmu->brbe_reset(hw_events);
+			hw_events->brbe_context = event->ctx;
+		}
+		armpmu->brbe_enable(hw_events);
+		hw_events->brbe_users++;
+	}
 	armpmu->enable(event);
 }
 
@@ -349,6 +368,10 @@ armpmu_add(struct perf_event *event, int flags)
 	hw_events->events[idx] = event;
 
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	if (flags & PERF_EF_START)
 		armpmu_start(event, PERF_EF_RELOAD);
 
@@ -443,6 +466,7 @@ __hw_perf_event_init(struct perf_event *event)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	int mapping;
 
 	hwc->flags = 0;
@@ -492,6 +516,19 @@ __hw_perf_event_init(struct perf_event *event)
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
 
+	if (has_branch_stack(event)) {
+		/*
+		 * Cache whether the perf event is allowed to capture exception
+		 * and exception return branch records. It allows us to perform
+		 * the privilege check via perfmon_capable(), in the context of
+		 * the event owner, just once, during the pmu->event_init().
+		 */
+		if (perfmon_capable())
+			event->hw.flags |= ARMPMU_EVT_PRIV;
+
+		armpmu->brbe_filter(hw_events, event);
+	}
+
 	return validate_group(event);
 }
 
@@ -520,6 +557,18 @@ static int armpmu_event_init(struct perf_event *event)
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_context *ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(ctx->pmu);
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
+
+	if (!hw_events->brbe_users)
+		return;
+
+	if (sched_in)
+		armpmu->brbe_reset(hw_events);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -877,6 +926,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 18e519e4e658..67f44020a736 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -29,6 +29,10 @@
 /* Event uses a 47bit counter */
 #define ARMPMU_EVT_47BIT		2
 
+#define ARMPMU_EVT_PRIV			0x00004	/* Event is privileged */
+
+static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_PRIV) == ARMPMU_EVT_PRIV);
+
 #define HW_OP_UNSUPPORTED		0xFFFF
 #define C(_x)				PERF_COUNT_HW_CACHE_##_x
 #define CACHE_OP_UNSUPPORTED		0xFFFF
-- 
2.25.1

From nobody Mon Apr 6 11:23:55 2026
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, James Clark, Rob Herring, Marc Zyngier, Ingo Molnar
Subject: [PATCH V2 6/7] arm64/perf: Add BRBE driver
Date: Thu, 8 Sep 2022 10:40:45 +0530
Message-Id: <20220908051046.465307-7-anshuman.khandual@arm.com>
In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com>
References: <20220908051046.465307-1-anshuman.khandual@arm.com>

This adds a BRBE driver which implements all the required helper functions
for struct arm_pmu. The following functions, defined by this driver,
configure, enable, capture, reset and disable the BRBE buffer HW as and
when requested via the perf branch stack sampling framework.
- arm64_pmu_brbe_filter() - arm64_pmu_brbe_enable() - arm64_pmu_brbe_disable() - arm64_pmu_brbe_read() - arm64_pmu_brbe_probe() - arm64_pmu_brbe_reset() - arm64_pmu_brbe_supported() Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org Cc: linux-perf-users@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- arch/arm64/kernel/perf_event.c | 8 +- drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/arm_pmu_brbe.c | 448 +++++++++++++++++++++++++++++++++ drivers/perf/arm_pmu_brbe.h | 259 +++++++++++++++++++ include/linux/perf/arm_pmu.h | 20 ++ 6 files changed, 746 insertions(+), 1 deletion(-) create mode 100644 drivers/perf/arm_pmu_brbe.c create mode 100644 drivers/perf/arm_pmu_brbe.h diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 5bfaba8edad1..76d409d9b5f3 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -1033,31 +1033,37 @@ static int armv8pmu_filter_match(struct perf_event = *event) =20 static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct pe= rf_event *event) { + arm64_pmu_brbe_filter(hw_event, event); } =20 static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_enable(hw_event); } =20 static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_disable(hw_event); } =20 static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf= _event *event) { + arm64_pmu_brbe_read(hw_event, event); } =20 static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_probe(hw_event); } =20 static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_reset(hw_event); } =20 static bool armv8pmu_brbe_supported(struct perf_event *event) { - return false; + return arm64_pmu_brbe_supported(event); } =20 static void armv8pmu_reset(void 
*info) diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index 1e2d69453771..9fa34a1d3a23 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -183,6 +183,17 @@ config APPLE_M1_CPU_PMU Provides support for the non-architectural CPU PMUs present on the Apple M1 SoCs and derivatives. =20 +config ARM_BRBE_PMU + tristate "Enable support for Branch Record Buffer Extension (BRBE)" + depends on ARM64 && ARM_PMU + default y + help + Enable perf support for Branch Record Buffer Extension (BRBE) which + records all branches taken in an execution path. This supports some + branch types and privilege-based filtering. It captures additional + relevant information such as cycle count, misprediction, branch + type, branch privilege level, etc. + source "drivers/perf/hisilicon/Kconfig" =20 config MARVELL_CN10K_DDR_PMU diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 57a279c61df5..b81fc134d95f 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -20,3 +20,4 @@ obj-$(CONFIG_ARM_DMC620_PMU) +=3D arm_dmc620_pmu.o obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) +=3D marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) +=3D marvell_cn10k_ddr_pmu.o obj-$(CONFIG_APPLE_M1_CPU_PMU) +=3D apple_m1_cpu_pmu.o +obj-$(CONFIG_ARM_BRBE_PMU) +=3D arm_pmu_brbe.o diff --git a/drivers/perf/arm_pmu_brbe.c b/drivers/perf/arm_pmu_brbe.c new file mode 100644 index 000000000000..d2d546a8eaab --- /dev/null +++ b/drivers/perf/arm_pmu_brbe.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Branch Record Buffer Extension Driver.
+ * + * Copyright (C) 2021 ARM Limited + * + * Author: Anshuman Khandual + */ +#include "arm_pmu_brbe.h" + +#define BRBE_FCR_MASK (BRBFCR_BRANCH_ALL) +#define BRBE_CR_MASK (BRBCR_EXCEPTION | BRBCR_ERTN | BRBCR_CC | \ + BRBCR_MPRED | BRBCR_E1BRE | BRBCR_E0BRE) + +static bool arm64_pmu_brbe_has_priv(struct perf_event *event) +{ + return !!(event->hw.flags & ARMPMU_EVT_PRIV); +} + +static void set_brbe_disabled(struct pmu_hw_events *cpuc) +{ + cpuc->brbe_nr =3D 0; +} + +static bool brbe_disabled(struct pmu_hw_events *cpuc) +{ + return !cpuc->brbe_nr; +} + +bool arm64_pmu_brbe_supported(struct perf_event *event) +{ + struct arm_pmu *armpmu =3D to_arm_pmu(event->pmu); + struct pmu_hw_events *hw_events =3D per_cpu_ptr(armpmu->hw_events, event-= >cpu); + + /* + * If the event does not have at least one of the privilege + * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core + * perf will adjust its value based on perf event's existing + * privilege level via attr.exclude_[user|kernel|hv]. + * + * As event->attr.branch_sample_type might have been changed + * when the event reaches here, it is not possible to figure + * out whether the event originally had HV privilege request + * or got added via the core perf. Just report this situation + * once and continue ignoring if there are other instances. 
+ */ + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_HV) + pr_warn_once("does not support hypervisor privilege branch filter\n"); + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_ABORT_TX) { + pr_warn_once("does not support aborted transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_NO_TX) { + pr_warn_once("does not support non transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_IN_TX) { + pr_warn_once("does not support in transaction branch filter\n"); + return false; + } + return !brbe_disabled(hw_events); +} + +void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) +{ + u64 aa64dfr0, brbidr; + unsigned int brbe, format, cpu =3D smp_processor_id(); + + aa64dfr0 =3D read_sysreg_s(SYS_ID_AA64DFR0_EL1); + brbe =3D cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_BRBE_= SHIFT); + if (!brbe) { + pr_info("no implementation found on cpu %d\n", cpu); + set_brbe_disabled(cpuc); + return; + } else if (brbe =3D=3D ID_AA64DFR0_BRBE) { + pr_info("implementation found on cpu %d\n", cpu); + cpuc->v1p1 =3D false; + } else if (brbe =3D=3D ID_AA64DFR0_BRBE_V1P1) { + pr_info("implementation (v1p1) found on cpu %d\n", cpu); + cpuc->v1p1 =3D true; + } + + brbidr =3D read_sysreg_s(SYS_BRBIDR0_EL1); + format =3D brbe_fetch_format(brbidr); + if (format !=3D BRBIDR0_FORMAT_0) { + pr_warn("format 0 not implemented\n"); + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_cc =3D brbe_fetch_cc_bits(brbidr); + if (cpuc->brbe_cc !=3D BRBIDR0_CC_20_BIT) { + pr_warn("20-bit counter not implemented\n"); + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_nr =3D brbe_fetch_numrec(brbidr); + if (!valid_brbe_nr(cpuc->brbe_nr)) { + pr_warn("invalid number of records\n"); + set_brbe_disabled(cpuc); + return; + } +} + +void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) +{ + u64 brbfcr, brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbfcr =3D 
read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &=3D ~(BRBFCR_BANK_MASK << BRBFCR_BANK_SHIFT); + brbfcr &=3D ~(BRBFCR_ENL | BRBFCR_PAUSED | BRBE_FCR_MASK); + brbfcr |=3D (cpuc->brbfcr & BRBE_FCR_MASK); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + + brbcr =3D read_sysreg_s(SYS_BRBCR_EL1); + brbcr &=3D ~BRBE_CR_MASK; + brbcr |=3D BRBCR_FZP; + brbcr |=3D (BRBCR_TS_PHYSICAL << BRBCR_TS_SHIFT); + brbcr |=3D (cpuc->brbcr & BRBE_CR_MASK); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) +{ + u64 brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbcr =3D read_sysreg_s(SYS_BRBCR_EL1); + brbcr &=3D ~(BRBCR_E0BRE | BRBCR_E1BRE); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +static void perf_branch_to_brbfcr(struct pmu_hw_events *cpuc, int branch_t= ype) +{ + cpuc->brbfcr =3D 0; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbfcr |=3D BRBFCR_BRANCH_ALL; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbfcr |=3D (BRBFCR_INDCALL | BRBFCR_DIRCALL); + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbfcr |=3D BRBFCR_RTN; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + cpuc->brbfcr |=3D BRBFCR_INDCALL; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + cpuc->brbfcr |=3D BRBFCR_CONDDIR; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) + cpuc->brbfcr |=3D BRBFCR_INDIRECT; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + cpuc->brbfcr |=3D BRBFCR_DIRCALL; +} + +static void perf_branch_to_brbcr(struct pmu_hw_events *cpuc, int branch_ty= pe, bool privilege) +{ + cpuc->brbcr =3D (BRBCR_CC | BRBCR_MPRED); + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + cpuc->brbcr |=3D BRBCR_E0BRE; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) { + /* + * This should have been verified earlier. 
+ */ + WARN_ON(!privilege); + cpuc->brbcr |=3D BRBCR_E1BRE; + } + + if (branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES) + cpuc->brbcr &=3D ~BRBCR_CC; + + if (branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS) + cpuc->brbcr &=3D ~BRBCR_MPRED; + + if (!privilege) + return; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbcr |=3D BRBCR_EXCEPTION; + cpuc->brbcr |=3D BRBCR_ERTN; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbcr |=3D BRBCR_EXCEPTION; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbcr |=3D BRBCR_ERTN; +} + + +void arm64_pmu_brbe_filter(struct pmu_hw_events *cpuc, struct perf_event *= event) +{ + u64 branch_type =3D event->attr.branch_sample_type; + bool privilege =3D arm64_pmu_brbe_has_priv(event); + + if (brbe_disabled(cpuc)) + return; + + perf_branch_to_brbfcr(cpuc, branch_type); + perf_branch_to_brbcr(cpuc, branch_type, privilege); +} + +static int brbe_fetch_perf_type(u64 brbinf, bool *new_branch_type) +{ + int brbe_type =3D brbe_fetch_type(brbinf); + *new_branch_type =3D false; + + switch (brbe_type) { + case BRBINF_TYPE_UNCOND_DIR: + return PERF_BR_UNCOND; + case BRBINF_TYPE_INDIR: + return PERF_BR_IND; + case BRBINF_TYPE_DIR_LINK: + return PERF_BR_CALL; + case BRBINF_TYPE_INDIR_LINK: + return PERF_BR_IND_CALL; + case BRBINF_TYPE_RET_SUB: + return PERF_BR_RET; + case BRBINF_TYPE_COND_DIR: + return PERF_BR_COND; + case BRBINF_TYPE_CALL: + return PERF_BR_CALL; + case BRBINF_TYPE_TRAP: + return PERF_BR_SYSCALL; + case BRBINF_TYPE_RET_EXCPT: + return PERF_BR_ERET; + case BRBINF_TYPE_IRQ: + return PERF_BR_IRQ; + case BRBINF_TYPE_DEBUG_HALT: + *new_branch_type =3D true; + return PERF_BR_ARM64_DEBUG_HALT; + case BRBINF_TYPE_SERROR: + return PERF_BR_SERROR; + case BRBINF_TYPE_INST_DEBUG: + *new_branch_type =3D true; + return PERF_BR_ARM64_DEBUG_INST; + case BRBINF_TYPE_DATA_DEBUG: + *new_branch_type =3D true; + return PERF_BR_ARM64_DEBUG_DATA; + case BRBINF_TYPE_ALGN_FAULT: + *new_branch_type =3D true; + return 
PERF_BR_NEW_FAULT_ALGN; + case BRBINF_TYPE_INST_FAULT: + *new_branch_type =3D true; + return PERF_BR_NEW_FAULT_INST; + case BRBINF_TYPE_DATA_FAULT: + *new_branch_type =3D true; + return PERF_BR_NEW_FAULT_DATA; + case BRBINF_TYPE_FIQ: + *new_branch_type =3D true; + return PERF_BR_ARM64_FIQ; + case BRBINF_TYPE_DEBUG_EXIT: + *new_branch_type =3D true; + return PERF_BR_ARM64_DEBUG_EXIT; + default: + pr_warn("unknown branch type captured\n"); + return PERF_BR_UNKNOWN; + } +} + +static int brbe_fetch_perf_priv(u64 brbinf) +{ + int brbe_el =3D brbe_fetch_el(brbinf); + + switch (brbe_el) { + case BRBINF_EL_EL0: + return PERF_BR_PRIV_USER; + case BRBINF_EL_EL1: + return PERF_BR_PRIV_KERNEL; + case BRBINF_EL_EL2: + if (is_kernel_in_hyp_mode()) + return PERF_BR_PRIV_KERNEL; + return PERF_BR_PRIV_HV; + default: + pr_warn("unknown branch privilege captured\n"); + return -1; + } +} + +static void capture_brbe_flags(struct pmu_hw_events *cpuc, struct perf_eve= nt *event, + u64 brbinf, int idx) +{ + int branch_type, type =3D brbe_record_valid(brbinf); + bool new_branch_type; + + if (!branch_sample_no_cycles(event)) + cpuc->brbe_entries[idx].cycles =3D brbe_fetch_cycles(brbinf); + + if (branch_sample_type(event)) { + branch_type =3D brbe_fetch_perf_type(brbinf, &new_branch_type); + if (new_branch_type) { + cpuc->brbe_entries[idx].type =3D PERF_BR_EXTEND_ABI; + cpuc->brbe_entries[idx].new_type =3D branch_type; + } else { + cpuc->brbe_entries[idx].type =3D branch_type; + } + } + + if (!branch_sample_no_flags(event)) { + /* + * BRBINF_LASTFAILED does not indicate that the last transaction + * failed or was aborted during the current branch record itself. + * Rather, it indicates that all the branch records which were + * in transaction until the current branch record have failed. So + * the entire BRBE buffer needs to be processed later on to find + * all branch records which might have failed.
+ */ + cpuc->brbe_entries[idx].abort =3D brbinf & BRBINF_LASTFAILED; + + /* + * This information (i.e. transaction state and mispredicts) + * is not available for target-only branch records. + */ + if (type !=3D BRBINF_VALID_TARGET) { + cpuc->brbe_entries[idx].mispred =3D brbinf & BRBINF_MPRED; + cpuc->brbe_entries[idx].predicted =3D !(brbinf & BRBINF_MPRED); + cpuc->brbe_entries[idx].in_tx =3D brbinf & BRBINF_TX; + } + } + + if (branch_sample_priv(event)) { + /* + * This information (i.e. branch privilege level) is not + * available for source-only branch records. + */ + if (type !=3D BRBINF_VALID_SOURCE) + cpuc->brbe_entries[idx].priv =3D brbe_fetch_perf_priv(brbinf); + } +} + +/* + * A branch record with BRBINF_EL1.LASTFAILED set implies that all + * preceding consecutive branch records that were in a transaction + * (i.e. their BRBINF_EL1.TX set) have been aborted. + * + * Similarly, BRBFCR_EL1.LASTFAILED set indicates that all preceding + * consecutive branch records up to the last record, which were in a + * transaction (i.e. their BRBINF_EL1.TX set), have been aborted. + * + * --------------------------------- ------------------- + * | 00 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX success] + * --------------------------------- ------------------- + * | 01 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX success] + * --------------------------------- ------------------- + * | 02 | BRBSRC | BRBTGT | BRBINF | | TX =3D 0 | LF =3D 0 | + * --------------------------------- ------------------- + * | 03 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX failed] + * --------------------------------- ------------------- + * | 04 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX failed] + * --------------------------------- ------------------- + * | 05 | BRBSRC | BRBTGT | BRBINF | | TX =3D 0 | LF =3D 1 | + * --------------------------------- ------------------- + * | ..
| BRBSRC | BRBTGT | BRBINF | | TX =3D 0 | LF =3D 0 | + * --------------------------------- ------------------- + * | 61 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX failed] + * --------------------------------- ------------------- + * | 62 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX failed] + * --------------------------------- ------------------- + * | 63 | BRBSRC | BRBTGT | BRBINF | | TX =3D 1 | LF =3D 0 | [TX failed] + * --------------------------------- ------------------- + * + * BRBFCR_EL1.LASTFAILED =3D=3D 1 + * + * Here BRBFCR_EL1.LASTFAILED fails all those consecutive, in-transaction + * branches near the end of the BRBE buffer. + */ +static void process_branch_aborts(struct pmu_hw_events *cpuc) +{ + u64 brbfcr =3D read_sysreg_s(SYS_BRBFCR_EL1); + bool lastfailed =3D !!(brbfcr & BRBFCR_LASTFAILED); + int idx =3D cpuc->brbe_nr - 1; + + do { + if (cpuc->brbe_entries[idx].in_tx) { + cpuc->brbe_entries[idx].abort =3D lastfailed; + } else { + lastfailed =3D cpuc->brbe_entries[idx].abort; + cpuc->brbe_entries[idx].abort =3D false; + } + } while (idx--, idx >=3D 0); +} + +void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *ev= ent) +{ + u64 brbinf; + int idx; + + if (brbe_disabled(cpuc)) + return; + + set_brbe_paused(); + for (idx =3D 0; idx < cpuc->brbe_nr; idx++) { + select_brbe_bank_index(idx); + brbinf =3D get_brbinf_reg(idx); + /* + * There are no valid entries anymore in the buffer. + * Abort the branch record processing to save some + * cycles and also reduce the capture/process load + * for the user space as well.
+ */ + if (brbe_invalid(brbinf)) + break; + + if (brbe_valid(brbinf)) { + cpuc->brbe_entries[idx].from =3D get_brbsrc_reg(idx); + cpuc->brbe_entries[idx].to =3D get_brbtgt_reg(idx); + } else if (brbe_source(brbinf)) { + cpuc->brbe_entries[idx].from =3D get_brbsrc_reg(idx); + cpuc->brbe_entries[idx].to =3D 0; + } else if (brbe_target(brbinf)) { + cpuc->brbe_entries[idx].from =3D 0; + cpuc->brbe_entries[idx].to =3D get_brbtgt_reg(idx); + } + capture_brbe_flags(cpuc, event, brbinf, idx); + } + cpuc->brbe_stack.nr =3D idx; + cpuc->brbe_stack.hw_idx =3D -1ULL; + process_branch_aborts(cpuc); +} + +void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc) +{ + if (brbe_disabled(cpuc)) + return; + + asm volatile(BRB_IALL); + isb(); +} diff --git a/drivers/perf/arm_pmu_brbe.h b/drivers/perf/arm_pmu_brbe.h new file mode 100644 index 000000000000..f04975cdc242 --- /dev/null +++ b/drivers/perf/arm_pmu_brbe.h @@ -0,0 +1,259 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Branch Record Buffer Extension Helpers. + * + * Copyright (C) 2021 ARM Limited + * + * Author: Anshuman Khandual + */ +#define pr_fmt(fmt) "brbe: " fmt + +#include + +/* + * BRBE Instructions + * + * BRB_IALL : Invalidate the entire buffer + * BRB_INJ : Inject latest branch record derived from [BRBSRCINJ, BRBTGTI= NJ, BRBINFINJ] + */ +#define BRB_IALL __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 4) | (0x1f)) +#define BRB_INJ __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 5) | (0x1f)) + +/* + * BRBE Buffer Organization + * + * The BRBE buffer is arranged as multiple banks of 32 branch record + * entries each. An individual branch record in a given bank can be + * accessed after selecting the bank in BRBFCR_EL1.BANK and then + * accessing the register set, i.e. [BRBSRC, BRBTGT, BRBINF], with + * indices [0..31].
+ * + * Bank 0 + * + * --------------------------------- ------ + * | 00 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 01 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | .. | + * --------------------------------- ------ + * | 31 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + * + * Bank 1 + * + * --------------------------------- ------ + * | 32 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 33 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | .. | + * --------------------------------- ------ + * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + */ +#define BRBE_BANK0_IDX_MIN 0 +#define BRBE_BANK0_IDX_MAX 31 +#define BRBE_BANK1_IDX_MIN 32 +#define BRBE_BANK1_IDX_MAX 63 + +#define RETURN_READ_BRBSRCN(n) \ + read_sysreg_s(SYS_BRBSRC##n##_EL1) + +#define RETURN_READ_BRBTGTN(n) \ + read_sysreg_s(SYS_BRBTGT##n##_EL1) + +#define RETURN_READ_BRBINFN(n) \ + read_sysreg_s(SYS_BRBINF##n##_EL1) + +#define BRBE_REGN_CASE(n, case_macro) \ + case n: return case_macro(n); break + +#define BRBE_REGN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + BRBE_REGN_CASE(0, case_macro); \ + BRBE_REGN_CASE(1, case_macro); \ + BRBE_REGN_CASE(2, case_macro); \ + BRBE_REGN_CASE(3, case_macro); \ + BRBE_REGN_CASE(4, case_macro); \ + BRBE_REGN_CASE(5, case_macro); \ + BRBE_REGN_CASE(6, case_macro); \ + BRBE_REGN_CASE(7, case_macro); \ + BRBE_REGN_CASE(8, case_macro); \ + BRBE_REGN_CASE(9, case_macro); \ + BRBE_REGN_CASE(10, case_macro); \ + BRBE_REGN_CASE(11, case_macro); \ + BRBE_REGN_CASE(12, case_macro); \ + BRBE_REGN_CASE(13, case_macro); \ + BRBE_REGN_CASE(14, case_macro); \ + BRBE_REGN_CASE(15, case_macro); \ + BRBE_REGN_CASE(16, case_macro); \ + BRBE_REGN_CASE(17, case_macro); \ + 
BRBE_REGN_CASE(18, case_macro); \ + BRBE_REGN_CASE(19, case_macro); \ + BRBE_REGN_CASE(20, case_macro); \ + BRBE_REGN_CASE(21, case_macro); \ + BRBE_REGN_CASE(22, case_macro); \ + BRBE_REGN_CASE(23, case_macro); \ + BRBE_REGN_CASE(24, case_macro); \ + BRBE_REGN_CASE(25, case_macro); \ + BRBE_REGN_CASE(26, case_macro); \ + BRBE_REGN_CASE(27, case_macro); \ + BRBE_REGN_CASE(28, case_macro); \ + BRBE_REGN_CASE(29, case_macro); \ + BRBE_REGN_CASE(30, case_macro); \ + BRBE_REGN_CASE(31, case_macro); \ + default: \ + pr_warn("unknown register index\n"); \ + return -1; \ + } \ + } while (0) + +static inline int buffer_to_brbe_idx(int buffer_idx) +{ + return buffer_idx % 32; +} + +static inline u64 get_brbsrc_reg(int buffer_idx) +{ + int brbe_idx =3D buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN); +} + +static inline u64 get_brbtgt_reg(int buffer_idx) +{ + int brbe_idx =3D buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN); +} + +static inline u64 get_brbinf_reg(int buffer_idx) +{ + int brbe_idx =3D buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN); +} + +static inline u64 brbe_record_valid(u64 brbinf) +{ + return brbinf & (BRBINF_VALID_MASK << BRBINF_VALID_SHIFT); +} + +static inline bool brbe_invalid(u64 brbinf) +{ + return brbe_record_valid(brbinf) =3D=3D BRBINF_VALID_INVALID; +} + +static inline bool brbe_valid(u64 brbinf) +{ + return brbe_record_valid(brbinf) =3D=3D BRBINF_VALID_ALL; +} + +static inline bool brbe_source(u64 brbinf) +{ + return brbe_record_valid(brbinf) =3D=3D BRBINF_VALID_SOURCE; +} + +static inline bool brbe_target(u64 brbinf) +{ + return brbe_record_valid(brbinf) =3D=3D BRBINF_VALID_TARGET; +} + +static inline int brbe_fetch_cycles(u64 brbinf) +{ + /* + * Captured cycle count is unknown and hence + * should not be passed on to the user space.
+ */ + if (brbinf & BRBINF_CCU) + return 0; + + return (brbinf >> BRBINF_CC_SHIFT) & BRBINF_CC_MASK; +} + +static inline int brbe_fetch_type(u64 brbinf) +{ + return (brbinf >> BRBINF_TYPE_SHIFT) & BRBINF_TYPE_MASK; +} + +static inline int brbe_fetch_el(u64 brbinf) +{ + return (brbinf >> BRBINF_EL_SHIFT) & BRBINF_EL_MASK; +} + +static inline int brbe_fetch_numrec(u64 brbidr) +{ + return (brbidr >> BRBIDR0_NUMREC_SHIFT) & BRBIDR0_NUMREC_MASK; +} + +static inline int brbe_fetch_format(u64 brbidr) +{ + return (brbidr >> BRBIDR0_FORMAT_SHIFT) & BRBIDR0_FORMAT_MASK; +} + +static inline int brbe_fetch_cc_bits(u64 brbidr) +{ + return (brbidr >> BRBIDR0_CC_SHIFT) & BRBIDR0_CC_MASK; +} + +static inline void select_brbe_bank(int bank) +{ + static int brbe_current_bank =3D -1; + u64 brbfcr; + + if (brbe_current_bank =3D=3D bank) + return; + + WARN_ON(bank > 1); + brbfcr =3D read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &=3D ~(BRBFCR_BANK_MASK << BRBFCR_BANK_SHIFT); + brbfcr |=3D ((bank & BRBFCR_BANK_MASK) << BRBFCR_BANK_SHIFT); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + brbe_current_bank =3D bank; +} + +static inline void select_brbe_bank_index(int buffer_idx) +{ + switch (buffer_idx) { + case BRBE_BANK0_IDX_MIN ... BRBE_BANK0_IDX_MAX: + select_brbe_bank(0); + break; + case BRBE_BANK1_IDX_MIN ... 
BRBE_BANK1_IDX_MAX: + select_brbe_bank(1); + break; + default: + pr_warn("unsupported BRBE index\n"); + } +} + +static inline bool valid_brbe_nr(int brbe_nr) +{ + switch (brbe_nr) { + case BRBIDR0_NUMREC_8: + case BRBIDR0_NUMREC_16: + case BRBIDR0_NUMREC_32: + case BRBIDR0_NUMREC_64: + return true; + default: + pr_warn("unsupported BRBE entries\n"); + return false; + } +} + +static inline bool brbe_paused(void) +{ + u64 brbfcr =3D read_sysreg_s(SYS_BRBFCR_EL1); + + return brbfcr & BRBFCR_PAUSED; +} + +static inline void set_brbe_paused(void) +{ + u64 brbfcr =3D read_sysreg_s(SYS_BRBFCR_EL1); + + write_sysreg_s(brbfcr | BRBFCR_PAUSED, SYS_BRBFCR_EL1); + isb(); +} diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index 67f44020a736..3e7757d05146 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -166,6 +166,26 @@ struct arm_pmu { unsigned long acpi_cpuid; }; =20 +#ifdef CONFIG_ARM_BRBE_PMU +void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, struct perf_ev= ent *event); +void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *ev= ent); +void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc); +bool arm64_pmu_brbe_supported(struct perf_event *event); +#else +static inline void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, = struct perf_event *event) +{ +} +static inline void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct = perf_event *event) { } +static inline void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc) { } +static inline bool arm64_pmu_brbe_supported(struct perf_event *event) {ret= 
urn false; } +#endif + #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) =20 u64 armpmu_event_update(struct perf_event *event); --=20 2.25.1 From nobody Mon Apr 6 11:23:55 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4B8EC38145 for ; Thu, 8 Sep 2022 05:12:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230012AbiIHFML (ORCPT ); Thu, 8 Sep 2022 01:12:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229991AbiIHFLx (ORCPT ); Thu, 8 Sep 2022 01:11:53 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3CE07B8A53; Wed, 7 Sep 2022 22:11:35 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E745814BF; Wed, 7 Sep 2022 22:11:36 -0700 (PDT) Received: from a077893.blr.arm.com (unknown [10.162.41.8]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 998843F7B4; Wed, 7 Sep 2022 22:11:26 -0700 (PDT) From: Anshuman Khandual To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com Cc: Anshuman Khandual , James Clark , Rob Herring , Marc Zyngier , Ingo Molnar Subject: [PATCH V2 7/7] arm64/perf: Enable branch stack sampling Date: Thu, 8 Sep 2022 10:40:46 +0530 Message-Id: <20220908051046.465307-8-anshuman.khandual@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220908051046.465307-1-anshuman.khandual@arm.com> References: <20220908051046.465307-1-anshuman.khandual@arm.com> MIME-Version: 1.0 
Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Now that all the required pieces are in place, enable perf branch stack sampling support on the arm64 platform by removing the gate which blocks it in armpmu_event_init(). Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Anshuman Khandual --- drivers/perf/arm_pmu.c | 32 +++++++++++++++++++++++++++++--- 1 file changed, 29 insertions(+), 3 deletions(-) diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 1fe5d6238b81..05848c6d955c 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -547,9 +547,35 @@ static int armpmu_event_init(struct perf_event *event) !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus)) return -ENOENT; =20 - /* does not support taken branch sampling */ - if (has_branch_stack(event)) - return -EOPNOTSUPP; + if (has_branch_stack(event)) { + /* + * BRBE support is absent. Select CONFIG_ARM_BRBE_PMU + * in the config before branch stack sampling events + * can be requested. + */ + if (!IS_ENABLED(CONFIG_ARM_BRBE_PMU)) { + pr_warn_once("BRBE is disabled, select CONFIG_ARM_BRBE_PMU\n"); + return -EOPNOTSUPP; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_KERNEL) { + if (!perfmon_capable()) { + pr_warn_once("does not have permission for kernel branch filter\n"); + return -EPERM; + } + } + + /* + * A branch stack sampling event cannot be supported when + * either the required driver itself is absent or the BRBE + * buffer is not supported. Besides, checking for the + * callback prevents a crash in case it is absent. + */ + if (!armpmu->brbe_supported || !armpmu->brbe_supported(event)) { + pr_warn_once("BRBE is not supported\n"); + return -EOPNOTSUPP; + } + } =20 if (armpmu->map_event(event) =3D=3D -ENOENT) return -ENOENT; --=20 2.25.1