From: weilin.wang@intel.com
To: weilin.wang@intel.com, Ian Rogers, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Adrian Hunter, Kan Liang
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Perry Taylor, Samantha Alt, Caleb Biggers, Mark Rutland
Subject: [RFC PATCH 12/25] perf stat: Add more functions for hardware-grouping method
Date: Sun, 24 Sep 2023 23:18:11 -0700
Message-Id: <20230925061824.3818631-13-weilin.wang@intel.com>
In-Reply-To: <20230925061824.3818631-1-weilin.wang@intel.com>
References: <20230925061824.3818631-1-weilin.wang@intel.com>

From: Weilin Wang

Add a function to fill all bits of one counter bitmap, and functions to
create a new group when no counter is available in any of the existing
groups.

Signed-off-by: Weilin Wang
---
 tools/perf/util/metricgroup.c | 39 ++++++++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index 68d56087b..8d54e71bf 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -1702,6 +1702,19 @@ static int get_pmu_counter_layouts(struct list_head *pmu_info_list,
 	return ret;
 }
 
+static int fill_counter_bitmap(unsigned long *bitmap, int start, int size)
+{
+	int ret;
+	bitmap_zero(bitmap, NR_COUNTERS);
+
+	for (int pos = start; pos < start + size; pos++) {
+		ret = set_counter_bitmap(pos, bitmap);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 /**
  * Find if there is a counter available for event e in current_group. If a
  * counter is available, use this counter by fill the bit in the correct counter
@@ -1750,6 +1763,21 @@ static int _insert_event(struct metricgroup__event_info *e,
 	return 0;
 }
 
+/**
+ * Insert the new_group node at the end of the group list.
+ */
+static int insert_new_group(struct list_head *head,
+			    struct metricgroup__group *new_group,
+			    size_t size,
+			    size_t fixed_size)
+{
+	INIT_LIST_HEAD(&new_group->event_head);
+	fill_counter_bitmap(new_group->gp_counters, 0, size);
+	fill_counter_bitmap(new_group->fixed_counters, 0, fixed_size);
+	list_add_tail(&new_group->nd, head);
+	return 0;
+}
+
 /**
  * Insert event e into a group capable to include it
  *
@@ -1759,7 +1787,7 @@ static int insert_event_to_group(struct metricgroup__event_info *e,
 {
 	struct metricgroup__group *g;
 	int ret;
-	//struct list_head *head;
+	struct list_head *head;
 
 	list_for_each_entry(g, &pmu_group_head->group_head, nd) {
 		ret = find_and_set_counters(e, g);
@@ -1774,13 +1802,14 @@ static int insert_event_to_group(struct metricgroup__event_info *e,
 	 */
 	{
 		struct metricgroup__group *current_group = malloc(sizeof(struct metricgroup__group));
+
 		if (!current_group)
 			return -ENOMEM;
 		pr_debug("create_new_group for [event] %s\n", e->name);
 
-		//head = &pmu_group_head->group_head;
-		//ret = create_new_group(head, current_group, pmu_group_head->size,
-		//		pmu_group_head->fixed_size);
+		head = &pmu_group_head->group_head;
+		ret = insert_new_group(head, current_group, pmu_group_head->size,
+				       pmu_group_head->fixed_size);
 		if (ret)
 			return ret;
 		ret = find_and_set_counters(e, current_group);
@@ -1817,7 +1846,7 @@ static int assign_event_grouping(struct metricgroup__event_info *e,
 
 	pmu_group_head = malloc(sizeof(struct metricgroup__pmu_group_list));
 	INIT_LIST_HEAD(&pmu_group_head->group_head);
-	pr_debug("create new group for event %s in pmu %s ", e->name, e->pmu_name);
+	pr_debug("create new group for event %s in pmu %s\n", e->name, e->pmu_name);
 	pmu_group_head->pmu_name = e->pmu_name;
 	list_for_each_entry(p, pmu_info_list, nd) {
 		if (!strcasecmp(p->name, e->pmu_name)) {
-- 
2.39.3