From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: asahi@lists.linux.dev, ecurtin@redhat.com, j@jannau.net,
    lina@asahilina.net, linux-kernel@vger.kernel.org, mark.rutland@arm.com,
    peterz@infradead.org, ravi.bangoria@amd.com, will@kernel.org
Subject: [PATCH 1/2] arm_pmu: fix event CPU filtering
Date: Thu, 16 Feb 2023 14:12:38 +0000
Message-Id: <20230216141240.3833272-2-mark.rutland@arm.com>
In-Reply-To: <20230216141240.3833272-1-mark.rutland@arm.com>
References: <20230216141240.3833272-1-mark.rutland@arm.com>

Janne reports that perf has been broken on Apple M1 as of commit:

  bd27568117664b8b ("perf: Rewrite core context handling")

That commit replaced the pmu::filter_match() callback with pmu::filter(),
whose return value has the opposite polarity, with true implying events
should be ignored rather than scheduled. While an attempt was made to
update the logic in armv8pmu_filter() and armpmu_filter() accordingly, the
return value remains inverted in a couple of cases:

* If the arm_pmu does not have an arm_pmu::filter() callback,
  armpmu_filter() will always return whether the CPU is supported rather
  than whether the CPU is not supported.

  As a result, the perf core will not schedule events on supported CPUs,
  resulting in a loss of events. Additionally, the perf core will attempt
  to schedule events on unsupported CPUs, but this will be rejected by
  armpmu_add(), which may result in a loss of events from other PMUs on
  those unsupported CPUs.

* If the arm_pmu does have an arm_pmu::filter() callback, and
  armpmu_filter() is called on a CPU which is not supported by the
  arm_pmu, armpmu_filter() will return false rather than true.

  As a result, the perf core will attempt to schedule events on
  unsupported CPUs, but this will be rejected by armpmu_add(), which may
  result in a loss of events from other PMUs on those unsupported CPUs.
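As an aside, not part of the patch: a minimal kernel-style sketch of the two
callback contracts, to make the polarity inversion concrete. The function
names here are illustrative only; the types come from <linux/perf/arm_pmu.h>.

  #include <linux/cpumask.h>
  #include <linux/perf/arm_pmu.h>
  #include <linux/smp.h>

  /* Old contract (pmu::filter_match), run on the target CPU:
   * nonzero means "this event may be scheduled here". */
  static int example_filter_match(struct perf_event *event)
  {
      struct arm_pmu *armpmu = to_arm_pmu(event->pmu);

      return cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus);
  }

  /* New contract (pmu::filter), passed the CPU explicitly:
   * true means "ignore the event on this CPU". */
  static bool example_filter(struct pmu *pmu, int cpu)
  {
      struct arm_pmu *armpmu = to_arm_pmu(pmu);

      return !cpumask_test_cpu(cpu, &armpmu->supported_cpus);
  }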
This means a loss of events can be seen with any arm_pmu driver, but with
the ARMv8 PMUv3 driver (which is the only arm_pmu driver with an
arm_pmu::filter() callback) the event loss will be more limited and may go
unnoticed, which is how this issue evaded testing so far.

Fix the CPU filtering by performing this consistently in armpmu_filter(),
and remove the redundant arm_pmu::filter() callback and armv8pmu_filter()
implementation.

Commit bd2756811766 also silently removed the CHAIN event filtering from
armv8pmu_filter(), which will be addressed by a separate patch without
using the filter callback.

Fixes: bd27568117664b8b ("perf: Rewrite core context handling")
Reported-by: Janne Grunau
Link: https://lore.kernel.org/asahi/20230215-arm_pmu_m1_regression-v1-1-f5a266577c8d@jannau.net/
Signed-off-by: Mark Rutland
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Ravi Bangoria
Cc: Asahi Lina
Cc: Eric Curtin
Tested-by: Janne Grunau
---
 arch/arm64/kernel/perf_event.c | 7 -------
 drivers/perf/arm_pmu.c         | 8 +-------
 include/linux/perf/arm_pmu.h   | 1 -
 3 files changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a5193f2146a6..3e43538f6b72 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1023,12 +1023,6 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
-static bool armv8pmu_filter(struct pmu *pmu, int cpu)
-{
-	struct arm_pmu *armpmu = to_arm_pmu(pmu);
-	return !cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus);
-}
-
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1258,7 +1252,6 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->stop			= armv8pmu_stop;
 	cpu_pmu->reset			= armv8pmu_reset;
 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
-	cpu_pmu->filter			= armv8pmu_filter;
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_user_event_idx;
 
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 9b593f985805..40f70f83daba 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -550,13 +550,7 @@ static void armpmu_disable(struct pmu *pmu)
 static bool armpmu_filter(struct pmu *pmu, int cpu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
-	bool ret;
-
-	ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus);
-	if (ret && armpmu->filter)
-		return armpmu->filter(pmu, cpu);
-
-	return ret;
+	return !cpumask_test_cpu(cpu, &armpmu->supported_cpus);
 }
 
 static ssize_t cpus_show(struct device *dev,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index ef914a600087..525b5d64e394 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -100,7 +100,6 @@ struct arm_pmu {
 	void		(*stop)(struct arm_pmu *);
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
-	bool		(*filter)(struct pmu *pmu, int cpu);
 	int		num_events;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40
-- 
2.30.2

From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: asahi@lists.linux.dev, ecurtin@redhat.com, j@jannau.net,
    lina@asahilina.net, linux-kernel@vger.kernel.org, mark.rutland@arm.com,
    peterz@infradead.org, ravi.bangoria@amd.com, will@kernel.org
Subject: [PATCH 2/2] arm64: perf: reject CHAIN events at creation time
Date: Thu, 16 Feb 2023 14:12:39 +0000
Message-Id: <20230216141240.3833272-3-mark.rutland@arm.com>
In-Reply-To: <20230216141240.3833272-1-mark.rutland@arm.com>
References: <20230216141240.3833272-1-mark.rutland@arm.com>

Currently it's possible for a user to open CHAIN events arbitrarily, which
we previously tried to rule out in commit:

  ca2b497253ad01c8 ("arm64: perf: Reject stand-alone CHAIN events for PMUv3")

That commit allowed the events to be opened, but prevented them from being
scheduled, by using an arm_pmu::filter_match hook to reject the relevant
events.

The CHAIN event filtering in the arm_pmu::filter_match hook was silently
removed in commit:

  bd27568117664b8b ("perf: Rewrite core context handling")

As a result, it's now possible for users to open CHAIN events, and for
these to be installed arbitrarily.

Fix this by rejecting CHAIN events at creation time. This avoids the
creation of events which will never count, and doesn't require using the
dynamic filtering.

Attempting to open a CHAIN event (0x1e) will now be rejected:

| # ./perf stat -e armv8_pmuv3/config=0x1e/ ls
| perf
|
|  Performance counter stats for 'ls':
|
|    <not supported>      armv8_pmuv3/config=0x1e/
|
|        0.002197470 seconds time elapsed
|
|        0.000000000 seconds user
|        0.002294000 seconds sys

Other events (e.g. CPU_CYCLES / 0x11) will open as usual:

| # ./perf stat -e armv8_pmuv3/config=0x11/ ls
| perf
|
|  Performance counter stats for 'ls':
|
|            2538761      armv8_pmuv3/config=0x11/
|
|        0.002227330 seconds time elapsed
|
|        0.002369000 seconds user
|        0.000000000 seconds sys

Fixes: bd27568117664b8b ("perf: Rewrite core context handling")
Signed-off-by: Mark Rutland
Cc: Peter Zijlstra
Cc: Ravi Bangoria
Cc: Will Deacon
---
 arch/arm64/kernel/perf_event.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3e43538f6b72..dde06c0f97f3 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1063,6 +1063,14 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
 				       &armv8_pmuv3_perf_cache_map,
 				       ARMV8_PMU_EVTYPE_EVENT);
 
+	/*
+	 * CHAIN events only work when paired with an adjacent counter, and it
+	 * never makes sense for a user to open one in isolation, as they'll be
+	 * rotated arbitrarily.
+	 */
+	if (hw_event_id == ARMV8_PMUV3_PERFCTR_CHAIN)
+		return -EINVAL;
+
 	if (armv8pmu_event_is_64bit(event))
 		event->hw.flags |= ARMPMU_EVT_64BIT;
 
-- 
2.30.2
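
For completeness, and not part of the series: the new behaviour can also be
exercised directly via perf_event_open(), without the perf tool. This is a
minimal userspace sketch under two assumptions: the PMU is registered as
armv8_pmuv3 (as in the perf stat examples above), and CHAIN is raw event
0x1e. With patch 2 applied, the open is expected to fail with EINVAL.

  #include <linux/perf_event.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
      struct perf_event_attr attr;
      unsigned int type;
      FILE *f;
      int fd;

      /* The PMU's dynamic type id is exported via sysfs; don't hard-code it. */
      f = fopen("/sys/bus/event_source/devices/armv8_pmuv3/type", "r");
      if (!f || fscanf(f, "%u", &type) != 1) {
          fprintf(stderr, "cannot determine PMU type\n");
          return 1;
      }
      fclose(f);

      memset(&attr, 0, sizeof(attr));
      attr.type = type;
      attr.size = sizeof(attr);
      attr.config = 0x1e;   /* ARMV8_PMUV3_PERFCTR_CHAIN */
      attr.disabled = 1;

      /* Bind to the current task on any CPU, as perf stat does. */
      fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
      if (fd < 0)
          perror("perf_event_open");   /* EINVAL expected with this series */
      else
          close(fd);

      return 0;
  }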