From: weilin.wang@intel.com
To: weilin.wang@intel.com, Ian Rogers, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Adrian Hunter, Kan Liang
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Perry Taylor, Samantha Alt, Caleb Biggers, Mark Rutland, Yang Jihong
Subject: [RFC PATCH v2 08/17] perf stat: Add functions to get counter info
Date: Fri, 13 Oct 2023 18:51:53 -0700
Message-Id: <20231014015202.1175377-9-weilin.wang@intel.com>
In-Reply-To: <20231014015202.1175377-1-weilin.wang@intel.com>
References: <20231014015202.1175377-1-weilin.wang@intel.com>

From: Weilin Wang <weilin.wang@intel.com>

Add a data structure, metricgroup__pmu_counters, to represent the hardware
counters available in the system. Add functions to parse the pmu-events
tables and build pmu_info_list, the list that holds the counter information
of the system. Add functions to free pmu_info_list and event_info_list
before exiting grouping in the hardware-grouping method.

This method falls back to normal grouping when the event JSON files do not
support hardware-aware grouping.
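For context, the sketch below shows one way the grouping code might consult
the per-PMU counter info collected here. It is only an illustrative sketch,
not part of this patch: the helper find_pmu_counters() is hypothetical, and
it assumes the tools/perf build environment (<linux/list.h>, strcasecmp(),
and the struct metricgroup__pmu_counters added to metricgroup.h below). A
second sketch after the patch shows how this per-PMU info might combine with
the per-event counter bitmaps.

/*
 * Illustrative sketch only (not part of this patch): look up the counter
 * layout of a PMU in the pmu_info_list built by get_pmu_counter_layouts().
 * find_pmu_counters() is a hypothetical helper.
 */
#include <strings.h>
#include <linux/list.h>

static struct metricgroup__pmu_counters *
find_pmu_counters(struct list_head *pmu_info_list, const char *pmu_name)
{
	struct metricgroup__pmu_counters *p;

	/* pmu_info_list holds one node per PMU parsed from the layouts table */
	list_for_each_entry(p, pmu_info_list, nd) {
		if (!strcasecmp(p->name, pmu_name))
			return p; /* p->size GP counters, p->fixed_size fixed counters */
	}
	return NULL; /* unknown PMU: caller would fall back to normal grouping */
}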
Signed-off-by: Weilin Wang <weilin.wang@intel.com>
---
 tools/perf/util/metricgroup.c | 85 +++++++++++++++++++++++++++++++++--
 tools/perf/util/metricgroup.h | 15 +++++++
 tools/perf/util/pmu.c         |  5 +++
 tools/perf/util/pmu.h         |  1 +
 4 files changed, 103 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index 6af8a7341..75257b68b 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -1507,6 +1507,27 @@ static int parse_counter(const char *counter,
 	return 0;
 }
 
+static void metricgroup__free_event_info(struct list_head
+					 *event_info_list)
+{
+	struct metricgroup__event_info *e, *tmp;
+
+	list_for_each_entry_safe(e, tmp, event_info_list, nd) {
+		list_del_init(&e->nd);
+		free(e);
+	}
+}
+
+static void metricgroup__free_pmu_info(struct list_head *pmu_info_list)
+{
+	struct metricgroup__pmu_counters *p, *tmp;
+
+	list_for_each_entry_safe(p, tmp, pmu_info_list, nd) {
+		list_del_init(&p->nd);
+		free(p);
+	}
+}
+
 static struct metricgroup__event_info *event_info__new(const char *name,
 						       const char *pmu_name,
 						       const char *counter,
@@ -1525,7 +1546,7 @@ static struct metricgroup__event_info *event_info__new(const char *name,
 
 	e->name = name;
 	e->free_counter = free_counter;
-	e->pmu_name = strdup(pmu_name);
+	e->pmu_name = pmu_name;
 	if (free_counter) {
 		ret = set_counter_bitmap(0, e->counters);
 		if (ret)
@@ -1560,6 +1581,8 @@ static int metricgroup__add_metric_event_callback(const struct pmu_event *pe,
 	struct metricgroup__add_metric_event_data *d = data;
 
 	if (!strcasecmp(pe->name, d->event_name)) {
+		if (!pe->counter)
+			return -EINVAL;
 		event = event_info__new(d->event_id, pe->pmu, pe->counter, /*free_counter=*/false);
 		if (!event)
 			return -ENOMEM;
@@ -1599,7 +1622,7 @@ static int get_metricgroup_events(const char *full_id,
 			.event_name = id,
 			.event_id = full_id,
 		};
-		ret = pmu_events_table_for_each_event(table,
+		ret = pmu_events_table__for_each_event(table, /*pmu=*/NULL,
 				metricgroup__add_metric_event_callback, &data);
 	}
 
@@ -1608,6 +1631,57 @@ static int get_metricgroup_events(const char *full_id,
 	return ret;
 }
 
+static struct metricgroup__pmu_counters *pmu_layout__new(const struct pmu_layout *pl)
+{
+	struct metricgroup__pmu_counters *l;
+
+	l = zalloc(sizeof(*l));
+
+	if (!l)
+		return NULL;
+
+	l->name = pl->pmu;
+	l->size = pl->size;
+	l->fixed_size = pl->fixed_size;
+	pr_debug("create new pmu_layout: [pmu]=%s, [gp_size]=%ld, [fixed_size]=%ld\n",
+		 l->name, l->size, l->fixed_size);
+	return l;
+}
+
+static int metricgroup__add_pmu_layout_callback(const struct pmu_layout *pl,
+						void *data)
+{
+	struct metricgroup__pmu_counters *pmu;
+	struct list_head *d = data;
+	int ret = 0;
+
+	pmu = pmu_layout__new(pl);
+	if (!pmu)
+		return -ENOMEM;
+	list_add(&pmu->nd, d);
+	return ret;
+}
+
+/**
+ * get_pmu_counter_layouts - Find counter info of the architecture from
+ * the pmu_layouts table
+ * @pmu_info_list: the list that the new counter info of a pmu is added to.
+ * @table: pmu_layouts table that is searched for counter info.
+ */
+static int get_pmu_counter_layouts(struct list_head *pmu_info_list,
+				   const struct pmu_layouts_table
+				   *table)
+{
+	LIST_HEAD(list);
+	int ret;
+
+	ret = pmu_layouts_table__for_each_layout(table,
+			metricgroup__add_pmu_layout_callback, &list);
+
+	list_splice(&list, pmu_info_list);
+	return ret;
+}
+
 /**
  * hw_aware_build_grouping - Build event groupings by reading counter
  * requirement of the events and counter available on the system from
@@ -1626,6 +1700,7 @@ static int hw_aware_build_grouping(struct expr_parse_ctx *ctx __maybe_unused,
 	LIST_HEAD(event_info_list);
 	size_t bkt;
 	const struct pmu_events_table *etable = pmu_events_table__find();
+	const struct pmu_layouts_table *ltable = pmu_layouts_table__find();
 
 #define RETURN_IF_NON_ZERO(x) do { if (x) return x; } while (0)
 	hashmap__for_each_entry(ctx->ids, cur, bkt) {
@@ -1635,9 +1710,13 @@ static int hw_aware_build_grouping(struct expr_parse_ctx *ctx __maybe_unused,
 
 		ret = get_metricgroup_events(id, etable, &event_info_list);
 		if (ret)
-			return ret;
+			goto err_out;
 	}
+	ret = get_pmu_counter_layouts(&pmu_info_list, ltable);
 
+err_out:
+	metricgroup__free_event_info(&event_info_list);
+	metricgroup__free_pmu_info(&pmu_info_list);
 	return ret;
 #undef RETURN_IF_NON_ZERO
 }
diff --git a/tools/perf/util/metricgroup.h b/tools/perf/util/metricgroup.h
index 3704545c9..802ca15e7 100644
--- a/tools/perf/util/metricgroup.h
+++ b/tools/perf/util/metricgroup.h
@@ -94,6 +94,21 @@ struct metricgroup__event_info {
 	DECLARE_BITMAP(counters, NR_COUNTERS);
 };
 
+/**
+ * A node is the counter availability of a pmu.
+ * This info is built up at the beginning from JSON file and
+ * used as a reference in metric grouping process.
+ */
+struct metricgroup__pmu_counters {
+	struct list_head nd;
+	/** The name of the pmu the event collected on. */
+	const char *name;
+	//DECLARE_BITMAP(counter_bits, NR_COUNTERS);
+	/** The number of gp counters in the pmu. */
+	size_t size;
+	size_t fixed_size;
+};
+
 /**
  * Each group is one node in the group string list.
  */
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index cde33e019..af4056a88 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -813,6 +813,11 @@ __weak const struct pmu_metrics_table *pmu_metrics_table__find(void)
 	return perf_pmu__find_metrics_table(NULL);
 }
 
+__weak const struct pmu_layouts_table *pmu_layouts_table__find(void)
+{
+	return perf_pmu__find_layouts_table(NULL);
+}
+
 /**
  * perf_pmu__match_ignoring_suffix - Does the pmu_name match tok ignoring any
  *                                   trailing suffix? The Suffix must be in form
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 6a4e170c6..3e9243e00 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -240,6 +240,7 @@ void pmu_add_cpu_aliases_table(struct perf_pmu *pmu,
 char *perf_pmu__getcpuid(struct perf_pmu *pmu);
 const struct pmu_events_table *pmu_events_table__find(void);
 const struct pmu_metrics_table *pmu_metrics_table__find(void);
+const struct pmu_layouts_table *pmu_layouts_table__find(void);
 
 int perf_pmu__convert_scale(const char *scale, char **end, double *sval);
 
-- 
2.39.3
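
Post-script for reviewers: the sketch below (again hypothetical, not part of
this patch) shows how the two pieces of state touched by this series, the
per-event counters bitmap (struct metricgroup__event_info) and the per-PMU
GP counter count (struct metricgroup__pmu_counters), might be combined in a
placement check during grouping. The helper name, the "used" bitmap, and the
policy are assumptions; the actual grouping logic is presumably introduced
in later patches of the series.

/*
 * Illustrative sketch only: can an event still be placed in a candidate
 * group? event_counters is the event's DECLARE_BITMAP(counters, NR_COUNTERS)
 * from metricgroup.h, used marks counters already taken in the group, and
 * gp_size is the PMU's general-purpose counter count from the layouts table.
 */
#include <linux/bitops.h>

static bool event_fits_in_group(const unsigned long *event_counters,
				const unsigned long *used,
				size_t gp_size)
{
	size_t i;

	/* look for a GP counter the event supports that is not taken yet */
	for (i = 0; i < gp_size && i < NR_COUNTERS; i++) {
		if (test_bit(i, event_counters) && !test_bit(i, used))
			return true;
	}
	return false;
}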