Date: Tue, 18 Jul 2023 17:18:34 -0700
In-Reply-To: <20230719001836.198363-1-irogers@google.com>
Message-Id: <20230719001836.198363-2-irogers@google.com>
References: <20230719001836.198363-1-irogers@google.com>
Subject: [PATCH v1 1/3] perf parse-events: Extra care around force grouped events
From: Ian Rogers
To: Andi Kleen, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
 Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
 Adrian Hunter, Kan Liang, Zhengjun Xing,
 linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org

Perf metric (topdown) events on Intel Icelake+ machines require a
group; however, they may appear next to events that don't require a
group. Consider:

  cycles,slots,topdown-fe-bound

The cycles event needn't be grouped, but slots and topdown-fe-bound
need grouping.
Prior to this change, because slots and topdown-fe-bound need forced
grouping and all three events share the same PMU, slots and
topdown-fe-bound would be forced into a group with cycles. This is a
bug on two fronts: cycles wasn't supposed to be grouped, and cycles
can't be a group leader with a perf metric event. This change adds
recognition that cycles isn't force grouped and so shouldn't be force
grouped with slots and topdown-fe-bound.

Fixes: a90cc5a9eeab ("perf evsel: Don't let evsel__group_pmu_name() traverse unsorted group")
Signed-off-by: Ian Rogers
---
 tools/perf/util/parse-events.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 5dcfbf316bf6..f10760ac1781 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -2141,7 +2141,7 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
 	int idx = 0, unsorted_idx = -1;
 	struct evsel *pos, *cur_leader = NULL;
 	struct perf_evsel *cur_leaders_grp = NULL;
-	bool idx_changed = false;
+	bool idx_changed = false, cur_leader_force_grouped = false;
 	int orig_num_leaders = 0, num_leaders = 0;
 	int ret;
 
@@ -2182,7 +2182,7 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
 		const struct evsel *pos_leader = evsel__leader(pos);
 		const char *pos_pmu_name = pos->group_pmu_name;
 		const char *cur_leader_pmu_name, *pos_leader_pmu_name;
-		bool force_grouped = arch_evsel__must_be_in_group(pos);
+		bool pos_force_grouped = arch_evsel__must_be_in_group(pos);
 
 		/* Reset index and nr_members. */
 		if (pos->core.idx != idx)
@@ -2198,7 +2198,8 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
 			cur_leader = pos;
 
 		cur_leader_pmu_name = cur_leader->group_pmu_name;
-		if ((cur_leaders_grp != pos->core.leader && !force_grouped) ||
+		if ((cur_leaders_grp != pos->core.leader &&
+		     (!pos_force_grouped || !cur_leader_force_grouped)) ||
 		    strcmp(cur_leader_pmu_name, pos_pmu_name)) {
 			/* Event is for a different group/PMU than last. */
 			cur_leader = pos;
@@ -2208,9 +2209,14 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
 			 * group.
 			 */
 			cur_leaders_grp = pos->core.leader;
+			/*
+			 * Avoid forcing events into groups with events that
+			 * don't need to be in the group.
+			 */
+			cur_leader_force_grouped = pos_force_grouped;
 		}
 		pos_leader_pmu_name = pos_leader->group_pmu_name;
-		if (strcmp(pos_leader_pmu_name, pos_pmu_name) || force_grouped) {
+		if (strcmp(pos_leader_pmu_name, pos_pmu_name) || pos_force_grouped) {
 			/*
 			 * Event's PMU differs from its leader's. Groups can't
 			 * span PMUs, so update leader from the group/PMU
-- 
2.41.0.487.g6d72f3e995-goog