From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, peterz@infradead.org
Cc: Anshuman Khandual, James Clark, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, linux-arm-kernel@lists.infradead.org, x86@kernel.org, Thomas Gleixner, Borislav Petkov
Subject: [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
Date: Mon, 5 Sep 2022 11:12:39 +0530
Message-Id: <20220905054239.324029-5-anshuman.khandual@arm.com>
In-Reply-To: <20220905054239.324029-1-anshuman.khandual@arm.com>
References: <20220905054239.324029-1-anshuman.khandual@arm.com>
Ensure all platform specific event flags are within PERF_EVENT_FLAG_ARCH.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Alexander Shishkin
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Thomas Gleixner
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/x86/events/perf_event.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index ba3d24a6a4ec..12136a33e9b7 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -86,6 +86,26 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
 #define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
 #define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
 
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LDLAT) == PERF_X86_EVENT_PEBS_LDLAT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST) == PERF_X86_EVENT_PEBS_ST);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST_HSW) == PERF_X86_EVENT_PEBS_ST_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LD_HSW) == PERF_X86_EVENT_PEBS_LD_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_NA_HSW) == PERF_X86_EVENT_PEBS_NA_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL) == PERF_X86_EVENT_EXCL);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_DYNAMIC) == PERF_X86_EVENT_DYNAMIC);
+
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL_ACCT) == PERF_X86_EVENT_EXCL_ACCT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AUTO_RELOAD) == PERF_X86_EVENT_AUTO_RELOAD);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LARGE_PEBS) == PERF_X86_EVENT_LARGE_PEBS);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_VIA_PT) == PERF_X86_EVENT_PEBS_VIA_PT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PAIR) == PERF_X86_EVENT_PAIR);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LBR_SELECT) == PERF_X86_EVENT_LBR_SELECT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_TOPDOWN) == PERF_X86_EVENT_TOPDOWN);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_STLAT) == PERF_X86_EVENT_PEBS_STLAT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AMD_BRS) == PERF_X86_EVENT_AMD_BRS);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LAT_HYBRID)
+	      == PERF_X86_EVENT_PEBS_LAT_HYBRID);
+
 static inline bool is_topdown_count(struct perf_event *event)
 {
 	return event->hw.flags & PERF_X86_EVENT_TOPDOWN;
-- 
2.25.1