From: zhengjun.xing@linux.intel.com
To: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@intel.com, jolsa@kernel.org, namhyung@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, irogers@google.com, ak@linux.intel.com, kan.liang@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [PATCH v3 1/5] perf stat: Revert "perf stat: Add default hybrid events"
Date: Fri, 10 Jun 2022 10:54:45 +0800
Message-Id: <20220610025449.2089232-2-zhengjun.xing@linux.intel.com>
In-Reply-To: <20220610025449.2089232-1-zhengjun.xing@linux.intel.com>

From: Kan Liang

This reverts commit ac2dc29edd21 ("perf stat: Add default hybrid
events").

Between this patch and the reverted patch, commit 6c1912898ed2 ("perf
parse-events: Rename parse_events_error functions") and commit
07eafd4e053a ("perf parse-event: Add init and exit to
parse_event_error") cleaned up the parse_events_error_*() code, so the
related changes are reverted as well.

The reverted patch is hard to extend to support new default events,
e.g. Topdown events, or the existing "--detailed" option on a hybrid
platform. A new solution is proposed in a following patch to enable
the perf stat defaults on a hybrid platform.

Signed-off-by: Kan Liang
Signed-off-by: Zhengjun Xing
Acked-by: Namhyung Kim
---
Change log:

v3:
 * no change since v1.
 tools/perf/builtin-stat.c | 30 ------------------------------
 1 file changed, 30 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 4ce87a8eb7d7..6ac79d95f3b5 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1685,12 +1685,6 @@ static int add_default_attributes(void)
 { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
 { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },

-};
-struct perf_event_attr default_sw_attrs[] = {
-	{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
-	{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
-	{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
-	{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
 };

 /*
@@ -1947,30 +1941,6 @@ static int add_default_attributes(void)
 	}

 	if (!evsel_list->core.nr_entries) {
-		if (perf_pmu__has_hybrid()) {
-			struct parse_events_error errinfo;
-			const char *hybrid_str = "cycles,instructions,branches,branch-misses";
-
-			if (target__has_cpu(&target))
-				default_sw_attrs[0].config = PERF_COUNT_SW_CPU_CLOCK;
-
-			if (evlist__add_default_attrs(evsel_list,
-						      default_sw_attrs) < 0) {
-				return -1;
-			}
-
-			parse_events_error__init(&errinfo);
-			err = parse_events(evsel_list, hybrid_str, &errinfo);
-			if (err) {
-				fprintf(stderr,
-					"Cannot set up hybrid events %s: %d\n",
-					hybrid_str, err);
-				parse_events_error__print(&errinfo, hybrid_str);
-			}
-			parse_events_error__exit(&errinfo);
-			return err ? -1 : 0;
-		}
-
 		if (target__has_cpu(&target))
 			default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;

-- 
2.25.1
From: zhengjun.xing@linux.intel.com
To: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@intel.com, jolsa@kernel.org, namhyung@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, irogers@google.com, ak@linux.intel.com, kan.liang@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [PATCH v3 2/5] perf evsel: Add arch_evsel__hw_name()
Date: Fri, 10 Jun 2022 10:54:46 +0800
Message-Id: <20220610025449.2089232-3-zhengjun.xing@linux.intel.com>
In-Reply-To: <20220610025449.2089232-1-zhengjun.xing@linux.intel.com>

From: Kan Liang

Commit 55bcf6ef314a ("perf: Extend PERF_TYPE_HARDWARE and
PERF_TYPE_HW_CACHE") extends the two types to become PMU-aware types
for a hybrid system. However, the current evsel__hw_name() doesn't
take the PMU type into account, so it mistakenly returns
"unknown-hardware" for a hardware event with a specific PMU type.

Add an arch-specific arch_evsel__hw_name() to handle the PMU-aware
hardware events. Currently, the extended PERF_TYPE_HARDWARE and
PERF_TYPE_HW_CACHE are only supported by x86, so only the x86
arch_evsel__hw_name() is implemented in this patch. Nothing changes
for the other arches.

Signed-off-by: Kan Liang
Signed-off-by: Zhengjun Xing
Acked-by: Namhyung Kim
---
Change log:

v3:
 * no change since v1.
 tools/perf/arch/x86/util/evsel.c | 20 ++++++++++++++++++++
 tools/perf/util/evsel.c          |  7 ++++++-
 tools/perf/util/evsel.h          |  1 +
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 3501399cef35..f6feb61d98a0 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -61,3 +61,23 @@ bool arch_evsel__must_be_in_group(const struct evsel *evsel)
 	       (strcasestr(evsel->name, "slots") ||
 		strcasestr(evsel->name, "topdown"));
 }
+
+int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size)
+{
+	u64 event = evsel->core.attr.config & PERF_HW_EVENT_MASK;
+	u64 pmu = evsel->core.attr.config >> PERF_PMU_TYPE_SHIFT;
+	const char *event_name;
+
+	if (event < PERF_COUNT_HW_MAX && evsel__hw_names[event])
+		event_name = evsel__hw_names[event];
+	else
+		event_name = "unknown-hardware";
+
+	/* The PMU type is not required for the non-hybrid platform. */
+	if (!pmu)
+		return scnprintf(bf, size, "%s", event_name);
+
+	return scnprintf(bf, size, "%s/%s/",
+			 evsel->pmu_name ? evsel->pmu_name : "cpu",
+			 event_name);
+}

diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index ce499c5da8d7..782be377208f 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -593,9 +593,14 @@ static int evsel__add_modifiers(struct evsel *evsel, char *bf, size_t size)
 	return r;
 }

+int __weak arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size)
+{
+	return scnprintf(bf, size, "%s", __evsel__hw_name(evsel->core.attr.config));
+}
+
 static int evsel__hw_name(struct evsel *evsel, char *bf, size_t size)
 {
-	int r = scnprintf(bf, size, "%s", __evsel__hw_name(evsel->core.attr.config));
+	int r = arch_evsel__hw_name(evsel, bf, size);

 	return r + evsel__add_modifiers(evsel, bf + r, size - r);
 }

diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 73ea48e94079..8dd3f04a5bdb 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -271,6 +271,7 @@ extern const char *const evsel__hw_names[PERF_COUNT_HW_MAX];
 extern const char *const evsel__sw_names[PERF_COUNT_SW_MAX];
 extern char *evsel__bpf_counter_events;
 bool evsel__match_bpf_counter_events(const char *name);
+int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size);

 int __evsel__hw_cache_type_op_res_name(u8 type, u8 op, u8 result, char *bf, size_t size);
 const char *evsel__name(struct evsel *evsel);

-- 
2.25.1
From: zhengjun.xing@linux.intel.com
To: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@intel.com, jolsa@kernel.org, namhyung@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, irogers@google.com, ak@linux.intel.com, kan.liang@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [PATCH v3 3/5] perf evlist: Always use arch_evlist__add_default_attrs()
Date: Fri, 10 Jun 2022 10:54:47 +0800
Message-Id: <20220610025449.2089232-4-zhengjun.xing@linux.intel.com>
In-Reply-To: <20220610025449.2089232-1-zhengjun.xing@linux.intel.com>

From: Kan Liang

Currently perf stat uses evlist__add_default_attrs() to add the
generic default attrs, and arch_evlist__add_default_attrs() to add the
arch-specific default attrs, e.g. Topdown for x86. That works well for
non-hybrid platforms. However, for a hybrid platform, the hard-coded
generic default attrs don't work.

Use arch_evlist__add_default_attrs() in place of
evlist__add_default_attrs(). arch_evlist__add_default_attrs() is
modified to invoke the same __evlist__add_default_attrs() for the
generic default attrs. No functional change.

Add default_null_attrs[] to indicate the arch-specific attrs. No
functional change for the arch-specific default attrs either.

Signed-off-by: Kan Liang
Signed-off-by: Zhengjun Xing
Acked-by: Namhyung Kim
---
Change log:

v3:
 * no change since v1.
 tools/perf/arch/x86/util/evlist.c | 7 ++++++-
 tools/perf/builtin-stat.c         | 6 +++++-
 tools/perf/util/evlist.c          | 9 +++++++--
 tools/perf/util/evlist.h          | 7 +++++--
 4 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 68f681ad54c1..777bdf182a58 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -8,8 +8,13 @@
 #define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
 #define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"

-int arch_evlist__add_default_attrs(struct evlist *evlist)
+int arch_evlist__add_default_attrs(struct evlist *evlist,
+				   struct perf_event_attr *attrs,
+				   size_t nr_attrs)
 {
+	if (nr_attrs)
+		return __evlist__add_default_attrs(evlist, attrs, nr_attrs);
+
 	if (!pmu_have_event("cpu", "slots"))
 		return 0;

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 6ac79d95f3b5..837c3ca91af1 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1777,6 +1777,9 @@ static int add_default_attributes(void)
 (PERF_COUNT_HW_CACHE_OP_PREFETCH	<<  8) |
 (PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16)				},
 };
+
+	struct perf_event_attr default_null_attrs[] = {};
+
 	/* Set attrs if no event is selected and !null_run: */
 	if (stat_config.null_run)
 		return 0;
@@ -1958,7 +1961,8 @@ static int add_default_attributes(void)
 			return -1;

 		stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
-		if (arch_evlist__add_default_attrs(evsel_list) < 0)
+		/* Platform specific attrs */
+		if (evlist__add_default_attrs(evsel_list, default_null_attrs) < 0)
 			return -1;
 	}

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 48af7d379d82..efa5f006b5c6 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -342,9 +342,14 @@ int __evlist__add_default_attrs(struct evlist *evlist, struct perf_event_attr *a
 	return evlist__add_attrs(evlist, attrs, nr_attrs);
 }

-__weak int arch_evlist__add_default_attrs(struct evlist *evlist __maybe_unused)
+__weak int arch_evlist__add_default_attrs(struct evlist *evlist,
+					  struct perf_event_attr *attrs,
+					  size_t nr_attrs)
 {
-	return 0;
+	if (!nr_attrs)
+		return 0;
+
+	return __evlist__add_default_attrs(evlist, attrs, nr_attrs);
 }

 struct evsel *evlist__find_tracepoint_by_id(struct evlist *evlist, int id)

diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 1bde9ccf4e7d..129095c0fe6d 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -107,10 +107,13 @@ static inline int evlist__add_default(struct evlist *evlist)
 int __evlist__add_default_attrs(struct evlist *evlist,
 				struct perf_event_attr *attrs,
 				size_t nr_attrs);

+int arch_evlist__add_default_attrs(struct evlist *evlist,
+				   struct perf_event_attr *attrs,
+				   size_t nr_attrs);
+
 #define evlist__add_default_attrs(evlist, array) \
-	__evlist__add_default_attrs(evlist, array, ARRAY_SIZE(array))
+	arch_evlist__add_default_attrs(evlist, array, ARRAY_SIZE(array))

-int arch_evlist__add_default_attrs(struct evlist *evlist);
 struct evsel *arch_evlist__leader(struct list_head *list);

 int evlist__add_dummy(struct evlist *evlist);

-- 
2.25.1
From: zhengjun.xing@linux.intel.com
To: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@intel.com, jolsa@kernel.org, namhyung@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, irogers@google.com, ak@linux.intel.com, kan.liang@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [PATCH v3 4/5] perf x86 evlist: Add default hybrid events for perf stat
Date: Fri, 10 Jun 2022 10:54:48 +0800
Message-Id: <20220610025449.2089232-5-zhengjun.xing@linux.intel.com>
In-Reply-To: <20220610025449.2089232-1-zhengjun.xing@linux.intel.com>

From: Kan Liang

Provide a new solution to replace the reverted commit ac2dc29edd21
("perf stat: Add default hybrid events").

For the default software attrs, nothing is changed. For the default
hardware attrs, create a new evsel for each hybrid PMU. With the new
solution, adding a new default attr no longer requires special support
for the hybrid platform. The "--detailed" option is also supported on
the hybrid platform.

With the patch:

 # ./perf stat -a -ddd sleep 1

 Performance counter stats for 'system wide':

          32,231.06 msec cpu-clock                        #   32.056 CPUs utilized
                529      context-switches                 #   16.413 /sec
                 32      cpu-migrations                   #    0.993 /sec
                 69      page-faults                      #    2.141 /sec
        176,754,151      cpu_core/cycles/                 #    5.484 M/sec    (41.65%)
        161,695,280      cpu_atom/cycles/                 #    5.017 M/sec    (49.92%)
         48,595,992      cpu_core/instructions/           #    1.508 M/sec    (49.98%)
         32,363,337      cpu_atom/instructions/           #    1.004 M/sec    (58.26%)
         10,088,639      cpu_core/branches/               #  313.010 K/sec    (58.31%)
          6,390,582      cpu_atom/branches/               #  198.274 K/sec    (58.26%)
            846,201      cpu_core/branch-misses/          #   26.254 K/sec    (66.65%)
            676,477      cpu_atom/branch-misses/          #   20.988 K/sec    (58.27%)
         14,290,070      cpu_core/L1-dcache-loads/        #  443.363 K/sec    (66.66%)
          9,983,532      cpu_atom/L1-dcache-loads/        #  309.749 K/sec    (58.27%)
            740,725      cpu_core/L1-dcache-load-misses/  #   22.982 K/sec    (66.66%)
    <not supported>      cpu_atom/L1-dcache-load-misses/
            480,441      cpu_core/LLC-loads/              #   14.906 K/sec    (66.67%)
            326,570      cpu_atom/LLC-loads/              #   10.132 K/sec    (58.27%)
                329      cpu_core/LLC-load-misses/        #   10.208 /sec     (66.68%)
                  0      cpu_atom/LLC-load-misses/        #    0.000 /sec     (58.32%)
    <not supported>      cpu_core/L1-icache-loads/
         21,982,491      cpu_atom/L1-icache-loads/        #  682.028 K/sec    (58.43%)
          4,493,189      cpu_core/L1-icache-load-misses/  #  139.406 K/sec    (33.34%)
          4,711,404      cpu_atom/L1-icache-load-misses/  #  146.176 K/sec    (50.08%)
         13,713,090      cpu_core/dTLB-loads/             #  425.462 K/sec    (33.34%)
          9,384,727      cpu_atom/dTLB-loads/             #  291.170 K/sec    (50.08%)
            157,387      cpu_core/dTLB-load-misses/       #    4.883 K/sec    (33.33%)
            108,328      cpu_atom/dTLB-load-misses/       #    3.361 K/sec    (50.08%)
    <not supported>      cpu_core/iTLB-loads/
    <not supported>      cpu_atom/iTLB-loads/
             37,655      cpu_core/iTLB-load-misses/       #    1.168 K/sec    (33.32%)
             61,661      cpu_atom/iTLB-load-misses/       #    1.913 K/sec    (50.03%)
    <not supported>      cpu_core/L1-dcache-prefetches/
    <not supported>      cpu_atom/L1-dcache-prefetches/
    <not supported>      cpu_core/L1-dcache-prefetch-misses/
    <not supported>      cpu_atom/L1-dcache-prefetch-misses/

        1.005466919 seconds time elapsed

Signed-off-by: Kan Liang
Signed-off-by: Zhengjun Xing
Acked-by: Namhyung Kim
---
Change log:

v3:
 * Use evsel__new() in place of evsel__new_idx()

v2:
 * The index of all new evsel will be updated when adding to the
   evlist, just set 0 idx for the new evsel.

 tools/perf/arch/x86/util/evlist.c | 52 ++++++++++++++++++++++++++++++-
 tools/perf/util/evlist.c          |  2 +-
 tools/perf/util/evlist.h          |  2 ++
 3 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 777bdf182a58..c83f8c11735f 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -4,16 +4,66 @@
 #include "util/evlist.h"
 #include "util/parse-events.h"
 #include "topdown.h"
+#include "util/event.h"
+#include "util/pmu-hybrid.h"

 #define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
 #define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"

+static int ___evlist__add_default_attrs(struct evlist *evlist,
+					struct perf_event_attr *attrs,
+					size_t nr_attrs)
+{
+	struct perf_cpu_map *cpus;
+	struct evsel *evsel, *n;
+	struct perf_pmu *pmu;
+	LIST_HEAD(head);
+	size_t i = 0;
+
+	for (i = 0; i < nr_attrs; i++)
+		event_attr_init(attrs + i);
+
+	if (!perf_pmu__has_hybrid())
+		return evlist__add_attrs(evlist, attrs, nr_attrs);
+
+	for (i = 0; i < nr_attrs; i++) {
+		if (attrs[i].type == PERF_TYPE_SOFTWARE) {
+			evsel = evsel__new(attrs + i);
+			if (evsel == NULL)
+				goto out_delete_partial_list;
+			list_add_tail(&evsel->core.node, &head);
+			continue;
+		}
+
+		perf_pmu__for_each_hybrid_pmu(pmu) {
+			evsel = evsel__new(attrs + i);
+			if (evsel == NULL)
+				goto out_delete_partial_list;
+			evsel->core.attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
+			cpus = perf_cpu_map__get(pmu->cpus);
+			evsel->core.cpus = cpus;
+			evsel->core.own_cpus = perf_cpu_map__get(cpus);
+			evsel->pmu_name = strdup(pmu->name);
+			list_add_tail(&evsel->core.node, &head);
+		}
+	}
+
+	evlist__splice_list_tail(evlist, &head);
+
+	return 0;
+
+out_delete_partial_list:
+	__evlist__for_each_entry_safe(&head, n, evsel)
+		evsel__delete(evsel);
+	return -1;
+}
+
 int arch_evlist__add_default_attrs(struct evlist *evlist,
 				   struct perf_event_attr *attrs,
 				   size_t nr_attrs)
 {
 	if (nr_attrs)
-		return __evlist__add_default_attrs(evlist, attrs, nr_attrs);
+		return ___evlist__add_default_attrs(evlist, attrs, nr_attrs);

 	if (!pmu_have_event("cpu", "slots"))
 		return 0;

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index efa5f006b5c6..5ff4b9504828 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -309,7 +309,7 @@ struct evsel *evlist__add_aux_dummy(struct evlist *evlist, bool system_wide)
 	return evsel;
 }

-static int evlist__add_attrs(struct evlist *evlist, struct perf_event_attr *attrs, size_t nr_attrs)
+int evlist__add_attrs(struct evlist *evlist, struct perf_event_attr *attrs, size_t nr_attrs)
 {
 	struct evsel *evsel, *n;
 	LIST_HEAD(head);

diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 129095c0fe6d..351ba2887a79 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -104,6 +104,8 @@ static inline int evlist__add_default(struct evlist *evlist)
 	return __evlist__add_default(evlist, true);
 }

+int evlist__add_attrs(struct evlist *evlist, struct perf_event_attr *attrs, size_t nr_attrs);
+
 int __evlist__add_default_attrs(struct evlist *evlist,
 				struct perf_event_attr *attrs,
 				size_t nr_attrs);

-- 
2.25.1
From: zhengjun.xing@linux.intel.com
To: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@intel.com, jolsa@kernel.org, namhyung@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, irogers@google.com, ak@linux.intel.com, kan.liang@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [PATCH v3 5/5] perf stat: Add topdown metrics in the default perf stat on the hybrid machine
Date: Fri, 10 Jun 2022 10:54:49 +0800
Message-Id: <20220610025449.2089232-6-zhengjun.xing@linux.intel.com>
In-Reply-To: <20220610025449.2089232-1-zhengjun.xing@linux.intel.com>

From: Zhengjun Xing

Topdown metrics are missing from the default perf stat on a hybrid
machine; add them for hybrid systems. Currently, the perf metrics
Topdown is supported for the p-core PMU in the perf stat default;
Topdown support for the e-core PMU will be implemented later
separately.

The refactoring adds two x86-specific functions. Widen the event-name
column by 7 characters so that all metrics after the "#" are aligned
again.

The perf metrics topdown feature is supported on the cpu_core of ADL.
The dedicated perf metrics counter and the fixed counter 3 are used
for the topdown events. Adding the topdown metrics doesn't trigger
multiplexing.
Before:

 # ./perf stat -a true

 Performance counter stats for 'system wide':

              53.70 msec cpu-clock                #   25.736 CPUs utilized
                 80      context-switches         #    1.490 K/sec
                 24      cpu-migrations           #  446.951 /sec
                 52      page-faults              #  968.394 /sec
          2,788,555      cpu_core/cycles/         #   51.931 M/sec
            851,129      cpu_atom/cycles/         #   15.851 M/sec
          2,974,030      cpu_core/instructions/   #   55.385 M/sec
            416,919      cpu_atom/instructions/   #    7.764 M/sec
            586,136      cpu_core/branches/       #   10.916 M/sec
             79,872      cpu_atom/branches/       #    1.487 M/sec
             14,220      cpu_core/branch-misses/  #  264.819 K/sec
              7,691      cpu_atom/branch-misses/  #  143.229 K/sec

        0.002086438 seconds time elapsed

After:

 # ./perf stat -a true

 Performance counter stats for 'system wide':

              61.39 msec cpu-clock                        #   24.874 CPUs utilized
                 76      context-switches                 #    1.238 K/sec
                 24      cpu-migrations                   #  390.968 /sec
                 52      page-faults                      #  847.097 /sec
          2,753,695      cpu_core/cycles/                 #   44.859 M/sec
            903,899      cpu_atom/cycles/                 #   14.725 M/sec
          2,927,529      cpu_core/instructions/           #   47.690 M/sec
            428,498      cpu_atom/instructions/           #    6.980 M/sec
            581,299      cpu_core/branches/               #    9.470 M/sec
             83,409      cpu_atom/branches/               #    1.359 M/sec
             13,641      cpu_core/branch-misses/          #  222.216 K/sec
              8,008      cpu_atom/branch-misses/          #  130.453 K/sec
         14,761,308      cpu_core/slots/                  #  240.466 M/sec
          3,288,625      cpu_core/topdown-retiring/       #     22.3% retiring
          1,323,323      cpu_core/topdown-bad-spec/       #      9.0% bad speculation
          5,477,470      cpu_core/topdown-fe-bound/       #     37.1% frontend bound
          4,679,199      cpu_core/topdown-be-bound/       #     31.7% backend bound
            646,194      cpu_core/topdown-heavy-ops/      #      4.4% heavy operations      #     17.9% light operations
          1,244,999      cpu_core/topdown-br-mispredict/  #      8.4% branch mispredict     #      0.5% machine clears
          3,891,800      cpu_core/topdown-fetch-lat/      #     26.4% fetch latency         #     10.7% fetch bandwidth
          1,879,034      cpu_core/topdown-mem-bound/      #     12.7% memory bound          #     19.0% Core bound

        0.002467839 seconds time elapsed

Signed-off-by: Zhengjun Xing
Reviewed-by: Kan Liang
Acked-by: Namhyung Kim
---
Change log:

v3:
 * Make the pr_warning in one line.
 v2:
  * Refactor arch_get_topdown_pmu_name() as Namhyung's suggestion.

 tools/perf/arch/x86/util/evlist.c  | 13 ++------
 tools/perf/arch/x86/util/topdown.c | 51 ++++++++++++++++++++++++++++++
 tools/perf/arch/x86/util/topdown.h |  1 +
 tools/perf/builtin-stat.c          | 14 ++------
 tools/perf/util/stat-display.c     |  2 +-
 tools/perf/util/topdown.c          |  7 ++++
 tools/perf/util/topdown.h          |  3 +-
 7 files changed, 66 insertions(+), 25 deletions(-)

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index c83f8c11735f..cb59ce9b9638 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -3,12 +3,9 @@
 #include "util/pmu.h"
 #include "util/evlist.h"
 #include "util/parse-events.h"
-#include "topdown.h"
 #include "util/event.h"
 #include "util/pmu-hybrid.h"
-
-#define TOPDOWN_L1_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
-#define TOPDOWN_L2_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
+#include "topdown.h"

 static int ___evlist__add_default_attrs(struct evlist *evlist,
					struct perf_event_attr *attrs,
@@ -65,13 +62,7 @@ int arch_evlist__add_default_attrs(struct evlist *evlist,
	if (nr_attrs)
		return ___evlist__add_default_attrs(evlist, attrs, nr_attrs);

-	if (!pmu_have_event("cpu", "slots"))
-		return 0;
-
-	if (pmu_have_event("cpu", "topdown-heavy-ops"))
-		return parse_events(evlist, TOPDOWN_L2_EVENTS, NULL);
-	else
-		return parse_events(evlist, TOPDOWN_L1_EVENTS, NULL);
+	return topdown_parse_events(evlist);
 }

 struct evsel *arch_evlist__leader(struct list_head *list)
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index f81a7cfe4d63..67c524324125 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -3,9 +3,17 @@
 #include "api/fs/fs.h"
 #include "util/pmu.h"
 #include "util/topdown.h"
+#include "util/evlist.h"
+#include "util/debug.h"
+#include "util/pmu-hybrid.h"
 #include "topdown.h"
 #include "evsel.h"

+#define TOPDOWN_L1_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
+#define TOPDOWN_L1_EVENTS_CORE	"{slots,cpu_core/topdown-retiring/,cpu_core/topdown-bad-spec/,cpu_core/topdown-fe-bound/,cpu_core/topdown-be-bound/}"
+#define TOPDOWN_L2_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
+#define TOPDOWN_L2_EVENTS_CORE	"{slots,cpu_core/topdown-retiring/,cpu_core/topdown-bad-spec/,cpu_core/topdown-fe-bound/,cpu_core/topdown-be-bound/,cpu_core/topdown-heavy-ops/,cpu_core/topdown-br-mispredict/,cpu_core/topdown-fetch-lat/,cpu_core/topdown-mem-bound/}"
+
 /* Check whether there is a PMU which supports the perf metrics. */
 bool topdown_sys_has_perf_metrics(void)
 {
@@ -73,3 +81,46 @@ bool arch_topdown_sample_read(struct evsel *leader)

 	return false;
 }
+
+const char *arch_get_topdown_pmu_name(struct evlist *evlist, bool warn)
+{
+	const char *pmu_name;
+
+	if (!perf_pmu__has_hybrid())
+		return "cpu";
+
+	if (!evlist->hybrid_pmu_name) {
+		if (warn)
+			pr_warning("WARNING: default to use cpu_core topdown events\n");
+		evlist->hybrid_pmu_name = perf_pmu__hybrid_type_to_pmu("core");
+	}
+
+	pmu_name = evlist->hybrid_pmu_name;
+
+	return pmu_name;
+}
+
+int topdown_parse_events(struct evlist *evlist)
+{
+	const char *topdown_events;
+	const char *pmu_name;
+
+	if (!topdown_sys_has_perf_metrics())
+		return 0;
+
+	pmu_name = arch_get_topdown_pmu_name(evlist, false);
+
+	if (pmu_have_event(pmu_name, "topdown-heavy-ops")) {
+		if (!strcmp(pmu_name, "cpu_core"))
+			topdown_events = TOPDOWN_L2_EVENTS_CORE;
+		else
+			topdown_events = TOPDOWN_L2_EVENTS;
+	} else {
+		if (!strcmp(pmu_name, "cpu_core"))
+			topdown_events = TOPDOWN_L1_EVENTS_CORE;
+		else
+			topdown_events = TOPDOWN_L1_EVENTS;
+	}
+
+	return parse_events(evlist, topdown_events, NULL);
+}
diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
index 46bf9273e572..7eb81f042838 100644
--- a/tools/perf/arch/x86/util/topdown.h
+++ b/tools/perf/arch/x86/util/topdown.h
@@ -3,5 +3,6 @@
 #define _TOPDOWN_H 1

 bool topdown_sys_has_perf_metrics(void);
+int topdown_parse_events(struct evlist *evlist);

 #endif
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 837c3ca91af1..c6b68be78f8c 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -71,6 +71,7 @@
 #include "util/bpf_counter.h"
 #include "util/iostat.h"
 #include "util/pmu-hybrid.h"
+#include "util/topdown.h"
 #include "asm/bug.h"

 #include
@@ -1858,22 +1859,11 @@ static int add_default_attributes(void)
		unsigned int max_level = 1;
		char *str = NULL;
		bool warn = false;
-		const char *pmu_name = "cpu";
+		const char *pmu_name = arch_get_topdown_pmu_name(evsel_list, true);

		if (!force_metric_only)
			stat_config.metric_only = true;

-		if (perf_pmu__has_hybrid()) {
-			if (!evsel_list->hybrid_pmu_name) {
-				pr_warning("WARNING: default to use cpu_core topdown events\n");
-				evsel_list->hybrid_pmu_name = perf_pmu__hybrid_type_to_pmu("core");
-			}
-
-			pmu_name = evsel_list->hybrid_pmu_name;
-			if (!pmu_name)
-				return -1;
-		}
-
		if (pmu_have_event(pmu_name, topdown_metric_L2_attrs[5])) {
			metric_attrs = topdown_metric_L2_attrs;
			max_level = 2;
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 606f09b09226..44045565c8f8 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -374,7 +374,7 @@ static void abs_printout(struct perf_stat_config *config,
			config->csv_output ? 0 : config->unit_width,
			evsel->unit, config->csv_sep);

-	fprintf(output, "%-*s", config->csv_output ? 0 : 25, evsel__name(evsel));
+	fprintf(output, "%-*s", config->csv_output ? 0 : 32, evsel__name(evsel));

	print_cgroup(config, evsel);
 }
diff --git a/tools/perf/util/topdown.c b/tools/perf/util/topdown.c
index a369f84ceb6a..1090841550f7 100644
--- a/tools/perf/util/topdown.c
+++ b/tools/perf/util/topdown.c
@@ -65,3 +65,10 @@ __weak bool arch_topdown_sample_read(struct evsel *leader __maybe_unused)
 {
	return false;
 }
+
+__weak const char *arch_get_topdown_pmu_name(struct evlist *evlist
+					     __maybe_unused,
+					     bool warn __maybe_unused)
+{
+	return "cpu";
+}
diff --git a/tools/perf/util/topdown.h b/tools/perf/util/topdown.h
index 118e75281f93..f9531528c559 100644
--- a/tools/perf/util/topdown.h
+++ b/tools/perf/util/topdown.h
@@ -2,11 +2,12 @@
 #ifndef TOPDOWN_H
 #define TOPDOWN_H 1
 #include "evsel.h"
+#include "evlist.h"

 bool arch_topdown_check_group(bool *warn);
 void arch_topdown_group_warn(void);
 bool arch_topdown_sample_read(struct evsel *leader);
-
+const char *arch_get_topdown_pmu_name(struct evlist *evlist, bool warn);
 int topdown_filter_events(const char **attr, char **str, bool use_group,
			  const char *pmu_name);

-- 
2.25.1