From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 1/6] perf stat: Convert perf_stat_evsel.res_stats array
Date: Mon, 26 Sep 2022 13:07:52 -0700
Message-Id: <20220926200757.1161448-2-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References: <20220926200757.1161448-1-namhyung@kernel.org>

It uses only one member, so there is no need to keep it as an array.
Signed-off-by: Namhyung Kim
Reviewed-by: James Clark
---
 tools/perf/util/stat-display.c |  2 +-
 tools/perf/util/stat.c         | 10 +++-------
 tools/perf/util/stat.h         |  2 +-
 3 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index b82844cb0ce7..234491f43c36 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -67,7 +67,7 @@ static void print_noise(struct perf_stat_config *config,
 		return;
 
 	ps = evsel->stats;
-	print_noise_pct(config, stddev_stats(&ps->res_stats[0]), avg);
+	print_noise_pct(config, stddev_stats(&ps->res_stats), avg);
 }
 
 static void print_cgroup(struct perf_stat_config *config, struct evsel *evsel)
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index ce5e9e372fc4..6bcd3dc32a71 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -132,12 +132,9 @@ static void perf_stat_evsel_id_init(struct evsel *evsel)
 
 static void evsel__reset_stat_priv(struct evsel *evsel)
 {
-	int i;
 	struct perf_stat_evsel *ps = evsel->stats;
 
-	for (i = 0; i < 3; i++)
-		init_stats(&ps->res_stats[i]);
-
+	init_stats(&ps->res_stats);
 	perf_stat_evsel_id_init(evsel);
 }
 
@@ -440,7 +437,7 @@ int perf_stat_process_counter(struct perf_stat_config *config,
 	struct perf_counts_values *aggr = &counter->counts->aggr;
 	struct perf_stat_evsel *ps = counter->stats;
 	u64 *count = counter->counts->aggr.values;
-	int i, ret;
+	int ret;
 
 	aggr->val = aggr->ena = aggr->run = 0;
 
@@ -458,8 +455,7 @@ int perf_stat_process_counter(struct perf_stat_config *config,
 	evsel__compute_deltas(counter, -1, -1, aggr);
 	perf_counts_values__scale(aggr, config->scale, &counter->counts->scaled);
 
-	for (i = 0; i < 3; i++)
-		update_stats(&ps->res_stats[i], count[i]);
+	update_stats(&ps->res_stats, *count);
 
 	if (verbose > 0) {
 		fprintf(config->output, "%s: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index 72713b344b79..3eba38a1a149 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -43,7 +43,7 @@ enum perf_stat_evsel_id {
 };
 
 struct perf_stat_evsel {
-	struct stats		res_stats[3];
+	struct stats		res_stats;
 	enum perf_stat_evsel_id	id;
 	u64			*group_data;
 };
-- 
2.37.3.998.g577e59143f-goog

From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 2/6] perf stat: Don't call perf_stat_evsel_id_init() repeatedly
Date: Mon, 26 Sep 2022 13:07:53 -0700
Message-Id: <20220926200757.1161448-3-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References: <20220926200757.1161448-1-namhyung@kernel.org>
evsel__reset_stat_priv() is called more than once if the user gave the -r option for multiple runs, but it doesn't need to re-initialize the id each time.

Signed-off-by: Namhyung Kim
Reviewed-by: James Clark
---
 tools/perf/util/stat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 6bcd3dc32a71..e1d3152ce664 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -135,7 +135,6 @@ static void evsel__reset_stat_priv(struct evsel *evsel)
 	struct perf_stat_evsel *ps = evsel->stats;
 
 	init_stats(&ps->res_stats);
-	perf_stat_evsel_id_init(evsel);
 }
 
 static int evsel__alloc_stat_priv(struct evsel *evsel)
@@ -143,6 +142,7 @@ static int evsel__alloc_stat_priv(struct evsel *evsel)
 	evsel->stats = zalloc(sizeof(struct perf_stat_evsel));
 	if (evsel->stats == NULL)
 		return -ENOMEM;
+	perf_stat_evsel_id_init(evsel);
 	evsel__reset_stat_priv(evsel);
 	return 0;
 }
-- 
2.37.3.998.g577e59143f-goog

From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 3/6] perf stat: Rename saved_value->cpu_map_idx
Date: Mon, 26 Sep 2022 13:07:54 -0700
Message-Id: <20220926200757.1161448-4-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References: <20220926200757.1161448-1-namhyung@kernel.org>

The cpu_map_idx field is just there to differentiate values from other entries; it doesn't need to be strictly a CPU map index. We can actually pass a thread map index or an aggr map index, so rename the field first. No functional change intended.

Signed-off-by: Namhyung Kim
---
 tools/perf/util/stat-shadow.c | 308 +++++++++++++++++-----------------
 1 file changed, 154 insertions(+), 154 deletions(-)

diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 9e1eddeff21b..99d05262055c 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -33,7 +33,7 @@ struct saved_value {
 	struct evsel	*evsel;
 	enum stat_type	type;
 	int		ctx;
-	int		cpu_map_idx;
+	int		map_idx;
 	struct cgroup	*cgrp;
 	struct runtime_stat *stat;
 	struct stats	stats;
@@ -48,8 +48,8 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry)
 					     rb_node);
 	const struct saved_value *b = entry;
 
-	if (a->cpu_map_idx != b->cpu_map_idx)
-		return a->cpu_map_idx - b->cpu_map_idx;
+	if (a->map_idx != b->map_idx)
+		return a->map_idx - b->map_idx;
 
 	/*
	 * Previously the rbtree was used to link generic metrics.
@@ -106,7 +106,7 @@ static void saved_value_delete(struct rblist *rblist __maybe_unused,
 }
 
 static struct saved_value *saved_value_lookup(struct evsel *evsel,
-					      int cpu_map_idx,
+					      int map_idx,
					      bool create,
					      enum stat_type type,
					      int ctx,
@@ -116,7 +116,7 @@ static struct saved_value *saved_value_lookup(struct evsel *evsel,
 	struct rblist *rblist;
 	struct rb_node *nd;
 	struct saved_value dm = {
-		.cpu_map_idx = cpu_map_idx,
+		.map_idx = map_idx,
 		.evsel = evsel,
 		.type = type,
 		.ctx = ctx,
@@ -215,10 +215,10 @@ struct runtime_stat_data {
 
 static void update_runtime_stat(struct runtime_stat *st,
				enum stat_type type,
-				int cpu_map_idx, u64 count,
+				int map_idx, u64 count,
				struct runtime_stat_data *rsd)
 {
-	struct saved_value *v = saved_value_lookup(NULL, cpu_map_idx, true, type,
+	struct saved_value *v = saved_value_lookup(NULL, map_idx, true, type,
						   rsd->ctx, st, rsd->cgrp);
 
 	if (v)
@@ -231,7 +231,7 @@ static void update_runtime_stat(struct runtime_stat *st,
  * instruction rates, etc:
  */
 void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
-				    int cpu_map_idx, struct runtime_stat *st)
+				    int map_idx, struct runtime_stat *st)
 {
 	u64 count_ns = count;
 	struct saved_value *v;
@@ -243,88 +243,88 @@ void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
 		count *= counter->scale;
 
 	if (evsel__is_clock(counter))
-		update_runtime_stat(st, STAT_NSECS, cpu_map_idx, count_ns, &rsd);
+		update_runtime_stat(st, STAT_NSECS, map_idx, count_ns, &rsd);
 	else if (evsel__match(counter, HARDWARE, HW_CPU_CYCLES))
-		update_runtime_stat(st, STAT_CYCLES, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_CYCLES, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, CYCLES_IN_TX))
-		update_runtime_stat(st, STAT_CYCLES_IN_TX, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_CYCLES_IN_TX, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TRANSACTION_START))
-		update_runtime_stat(st, STAT_TRANSACTION, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_TRANSACTION, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, ELISION_START))
-		update_runtime_stat(st, STAT_ELISION, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_ELISION, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS))
 		update_runtime_stat(st, STAT_TOPDOWN_TOTAL_SLOTS,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED))
 		update_runtime_stat(st, STAT_TOPDOWN_SLOTS_ISSUED,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED))
 		update_runtime_stat(st, STAT_TOPDOWN_SLOTS_RETIRED,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES))
 		update_runtime_stat(st, STAT_TOPDOWN_FETCH_BUBBLES,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES))
 		update_runtime_stat(st, STAT_TOPDOWN_RECOVERY_BUBBLES,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_RETIRING))
 		update_runtime_stat(st, STAT_TOPDOWN_RETIRING,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_BAD_SPEC))
 		update_runtime_stat(st, STAT_TOPDOWN_BAD_SPEC,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_FE_BOUND))
 		update_runtime_stat(st, STAT_TOPDOWN_FE_BOUND,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_BE_BOUND))
 		update_runtime_stat(st, STAT_TOPDOWN_BE_BOUND,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_HEAVY_OPS))
 		update_runtime_stat(st, STAT_TOPDOWN_HEAVY_OPS,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_BR_MISPREDICT))
 		update_runtime_stat(st, STAT_TOPDOWN_BR_MISPREDICT,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_LAT))
 		update_runtime_stat(st, STAT_TOPDOWN_FETCH_LAT,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, TOPDOWN_MEM_BOUND))
 		update_runtime_stat(st, STAT_TOPDOWN_MEM_BOUND,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND))
 		update_runtime_stat(st, STAT_STALLED_CYCLES_FRONT,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND))
 		update_runtime_stat(st, STAT_STALLED_CYCLES_BACK,
-				    cpu_map_idx, count, &rsd);
+				    map_idx, count, &rsd);
 	else if (evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS))
-		update_runtime_stat(st, STAT_BRANCHES, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_BRANCHES, map_idx, count, &rsd);
 	else if (evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES))
-		update_runtime_stat(st, STAT_CACHEREFS, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_CACHEREFS, map_idx, count, &rsd);
 	else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1D))
-		update_runtime_stat(st, STAT_L1_DCACHE, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_L1_DCACHE, map_idx, count, &rsd);
 	else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1I))
-		update_runtime_stat(st, STAT_L1_ICACHE, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_L1_ICACHE, map_idx, count, &rsd);
 	else if (evsel__match(counter, HW_CACHE, HW_CACHE_LL))
-		update_runtime_stat(st, STAT_LL_CACHE, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_LL_CACHE, map_idx, count, &rsd);
 	else if (evsel__match(counter, HW_CACHE, HW_CACHE_DTLB))
-		update_runtime_stat(st, STAT_DTLB_CACHE, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_DTLB_CACHE, map_idx, count, &rsd);
 	else if (evsel__match(counter, HW_CACHE, HW_CACHE_ITLB))
-		update_runtime_stat(st, STAT_ITLB_CACHE, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_ITLB_CACHE, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, SMI_NUM))
-		update_runtime_stat(st, STAT_SMI_NUM, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_SMI_NUM, map_idx, count, &rsd);
 	else if (perf_stat_evsel__is(counter, APERF))
-		update_runtime_stat(st, STAT_APERF, cpu_map_idx, count, &rsd);
+		update_runtime_stat(st, STAT_APERF, map_idx, count, &rsd);
 
 	if (counter->collect_stat) {
-		v = saved_value_lookup(counter, cpu_map_idx, true, STAT_NONE, 0, st,
+		v = saved_value_lookup(counter, map_idx, true, STAT_NONE, 0, st,
				       rsd.cgrp);
 		update_stats(&v->stats, count);
 		if (counter->metric_leader)
 			v->metric_total += count;
 	} else if (counter->metric_leader) {
 		v = saved_value_lookup(counter->metric_leader,
-				       cpu_map_idx, true, STAT_NONE, 0, st, rsd.cgrp);
+				       map_idx, true, STAT_NONE, 0, st, rsd.cgrp);
 		v->metric_total += count;
 		v->metric_other++;
 	}
@@ -466,12 +466,12 @@ void perf_stat__collect_metric_expr(struct evlist *evsel_list)
 }
 
 static double runtime_stat_avg(struct runtime_stat *st,
-			       enum stat_type type, int cpu_map_idx,
+			       enum stat_type type, int map_idx,
			       struct runtime_stat_data *rsd)
 {
 	struct saved_value *v;
 
-	v = saved_value_lookup(NULL, cpu_map_idx, false, type, rsd->ctx, st, rsd->cgrp);
+	v = saved_value_lookup(NULL, map_idx, false, type, rsd->ctx, st, rsd->cgrp);
 	if (!v)
 		return 0.0;
 
@@ -479,12 +479,12 @@ static double runtime_stat_avg(struct runtime_stat *st,
 }
 
 static double runtime_stat_n(struct runtime_stat *st,
-			     enum stat_type type, int cpu_map_idx,
+			     enum stat_type type, int map_idx,
			     struct runtime_stat_data *rsd)
 {
 	struct saved_value *v;
 
-	v = saved_value_lookup(NULL, cpu_map_idx, false, type, rsd->ctx, st, rsd->cgrp);
+	v = saved_value_lookup(NULL, map_idx, false, type, rsd->ctx, st, rsd->cgrp);
 	if (!v)
 		return 0.0;
 
@@ -492,7 +492,7 @@ static double runtime_stat_n(struct runtime_stat *st,
 }
 
 static void print_stalled_cycles_frontend(struct perf_stat_config *config,
-					  int cpu_map_idx, double avg,
+					  int map_idx, double avg,
					  struct perf_stat_output_ctx *out,
					  struct runtime_stat *st,
					  struct runtime_stat_data *rsd)
@@ -500,7 +500,7 @@ static void print_stalled_cycles_frontend(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_CYCLES, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -515,7 +515,7 @@ static void print_stalled_cycles_frontend(struct perf_stat_config *config,
 }
 
 static void print_stalled_cycles_backend(struct perf_stat_config *config,
-					 int cpu_map_idx, double avg,
+					 int map_idx, double avg,
					 struct perf_stat_output_ctx *out,
					 struct runtime_stat *st,
					 struct runtime_stat_data *rsd)
@@ -523,7 +523,7 @@ static void print_stalled_cycles_backend(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_CYCLES, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -534,7 +534,7 @@ static void print_stalled_cycles_backend(struct perf_stat_config *config,
 }
 
 static void print_branch_misses(struct perf_stat_config *config,
-				int cpu_map_idx, double avg,
+				int map_idx, double avg,
				struct perf_stat_output_ctx *out,
				struct runtime_stat *st,
				struct runtime_stat_data *rsd)
@@ -542,7 +542,7 @@ static void print_branch_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_BRANCHES, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_BRANCHES, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -553,7 +553,7 @@ static void print_branch_misses(struct perf_stat_config *config,
 }
 
 static void print_l1_dcache_misses(struct perf_stat_config *config,
-				   int cpu_map_idx, double avg,
+				   int map_idx, double avg,
				   struct perf_stat_output_ctx *out,
				   struct runtime_stat *st,
				   struct runtime_stat_data *rsd)
@@ -561,7 +561,7 @@ static void print_l1_dcache_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_L1_DCACHE, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_L1_DCACHE, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -572,7 +572,7 @@ static void print_l1_dcache_misses(struct perf_stat_config *config,
 }
 
 static void print_l1_icache_misses(struct perf_stat_config *config,
-				   int cpu_map_idx, double avg,
+				   int map_idx, double avg,
				   struct perf_stat_output_ctx *out,
				   struct runtime_stat *st,
				   struct runtime_stat_data *rsd)
@@ -580,7 +580,7 @@ static void print_l1_icache_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_L1_ICACHE, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_L1_ICACHE, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -590,7 +590,7 @@ static void print_l1_icache_misses(struct perf_stat_config *config,
 }
 
 static void print_dtlb_cache_misses(struct perf_stat_config *config,
-				    int cpu_map_idx, double avg,
+				    int map_idx, double avg,
				    struct perf_stat_output_ctx *out,
				    struct runtime_stat *st,
				    struct runtime_stat_data *rsd)
@@ -598,7 +598,7 @@ static void print_dtlb_cache_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_DTLB_CACHE, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_DTLB_CACHE, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -608,7 +608,7 @@ static void print_dtlb_cache_misses(struct perf_stat_config *config,
 }
 
 static void print_itlb_cache_misses(struct perf_stat_config *config,
-				    int cpu_map_idx, double avg,
+				    int map_idx, double avg,
				    struct perf_stat_output_ctx *out,
				    struct runtime_stat *st,
				    struct runtime_stat_data *rsd)
@@ -616,7 +616,7 @@ static void print_itlb_cache_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_ITLB_CACHE, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_ITLB_CACHE, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -626,7 +626,7 @@ static void print_itlb_cache_misses(struct perf_stat_config *config,
 }
 
 static void print_ll_cache_misses(struct perf_stat_config *config,
-				  int cpu_map_idx, double avg,
+				  int map_idx, double avg,
				  struct perf_stat_output_ctx *out,
				  struct runtime_stat *st,
				  struct runtime_stat_data *rsd)
@@ -634,7 +634,7 @@ static void print_ll_cache_misses(struct perf_stat_config *config,
 	double total, ratio = 0.0;
 	const char *color;
 
-	total = runtime_stat_avg(st, STAT_LL_CACHE, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_LL_CACHE, map_idx, rsd);
 
 	if (total)
 		ratio = avg / total * 100.0;
@@ -692,61 +692,61 @@ static double sanitize_val(double x)
 	return x;
 }
 
-static double td_total_slots(int cpu_map_idx, struct runtime_stat *st,
+static double td_total_slots(int map_idx, struct runtime_stat *st,
			     struct runtime_stat_data *rsd)
 {
-	return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, cpu_map_idx, rsd);
+	return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, map_idx, rsd);
 }
 
-static double td_bad_spec(int cpu_map_idx, struct runtime_stat *st,
+static double td_bad_spec(int map_idx, struct runtime_stat *st,
			  struct runtime_stat_data *rsd)
 {
 	double bad_spec = 0;
 	double total_slots;
 	double total;
 
-	total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, cpu_map_idx, rsd) -
-		runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, cpu_map_idx, rsd) +
-		runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, cpu_map_idx, rsd);
+	total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, map_idx, rsd) -
+		runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, map_idx, rsd) +
+		runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, map_idx, rsd);
 
-	total_slots = td_total_slots(cpu_map_idx, st, rsd);
+	total_slots = td_total_slots(map_idx, st, rsd);
 	if (total_slots)
 		bad_spec = total / total_slots;
 	return sanitize_val(bad_spec);
 }
 
-static double td_retiring(int cpu_map_idx, struct runtime_stat *st,
+static double td_retiring(int map_idx, struct runtime_stat *st,
			  struct runtime_stat_data *rsd)
 {
 	double retiring = 0;
-	double total_slots = td_total_slots(cpu_map_idx, st, rsd);
+	double total_slots = td_total_slots(map_idx, st, rsd);
 	double ret_slots = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED,
-					    cpu_map_idx, rsd);
+					    map_idx, rsd);
 
 	if (total_slots)
 		retiring = ret_slots / total_slots;
 	return retiring;
 }
 
-static double td_fe_bound(int cpu_map_idx, struct runtime_stat *st,
+static double td_fe_bound(int map_idx, struct runtime_stat *st,
			  struct runtime_stat_data *rsd)
 {
 	double fe_bound = 0;
-	double total_slots = td_total_slots(cpu_map_idx, st, rsd);
+	double total_slots = td_total_slots(map_idx, st, rsd);
 	double fetch_bub = runtime_stat_avg(st, STAT_TOPDOWN_FETCH_BUBBLES,
-					    cpu_map_idx, rsd);
+					    map_idx, rsd);
 
 	if (total_slots)
 		fe_bound = fetch_bub / total_slots;
 	return fe_bound;
 }
 
-static double td_be_bound(int cpu_map_idx, struct runtime_stat *st,
+static double td_be_bound(int map_idx, struct runtime_stat *st,
			  struct runtime_stat_data *rsd)
 {
-	double sum = (td_fe_bound(cpu_map_idx, st, rsd) +
-		      td_bad_spec(cpu_map_idx, st, rsd) +
-		      td_retiring(cpu_map_idx, st, rsd));
+	double sum = (td_fe_bound(map_idx, st, rsd) +
+		      td_bad_spec(map_idx, st, rsd) +
+		      td_retiring(map_idx, st, rsd));
 	if (sum == 0)
 		return 0;
 	return sanitize_val(1.0 - sum);
@@ -757,15 +757,15 @@ static double td_be_bound(int cpu_map_idx, struct runtime_stat *st,
  * the ratios we need to recreate the sum.
  */
 
-static double td_metric_ratio(int cpu_map_idx, enum stat_type type,
+static double td_metric_ratio(int map_idx, enum stat_type type,
			      struct runtime_stat *stat,
			      struct runtime_stat_data *rsd)
 {
-	double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu_map_idx, rsd) +
-		runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu_map_idx, rsd) +
-		runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu_map_idx, rsd) +
-		runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu_map_idx, rsd);
-	double d = runtime_stat_avg(stat, type, cpu_map_idx, rsd);
+	double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, map_idx, rsd) +
+		runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, map_idx, rsd) +
+		runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, map_idx, rsd) +
+		runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, map_idx, rsd);
+	double d = runtime_stat_avg(stat, type, map_idx, rsd);
 
 	if (sum)
 		return d / sum;
@@ -777,23 +777,23 @@ static double td_metric_ratio(int cpu_map_idx, enum stat_type type,
  * We allow two missing.
  */
 
-static bool full_td(int cpu_map_idx, struct runtime_stat *stat,
+static bool full_td(int map_idx, struct runtime_stat *stat,
		    struct runtime_stat_data *rsd)
 {
 	int c = 0;
 
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu_map_idx, rsd) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, map_idx, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu_map_idx, rsd) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, map_idx, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu_map_idx, rsd) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, map_idx, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu_map_idx, rsd) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, map_idx, rsd) > 0)
 		c++;
 	return c >= 2;
 }
 
-static void print_smi_cost(struct perf_stat_config *config, int cpu_map_idx,
+static void print_smi_cost(struct perf_stat_config *config, int map_idx,
			   struct perf_stat_output_ctx *out,
			   struct runtime_stat *st,
			   struct runtime_stat_data *rsd)
@@ -801,9 +801,9 @@ static void print_smi_cost(struct perf_stat_config *config, int cpu_map_idx,
 	double smi_num, aperf, cycles, cost = 0.0;
 	const char *color = NULL;
 
-	smi_num = runtime_stat_avg(st, STAT_SMI_NUM, cpu_map_idx, rsd);
-	aperf = runtime_stat_avg(st, STAT_APERF, cpu_map_idx, rsd);
-	cycles = runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, rsd);
+	smi_num = runtime_stat_avg(st, STAT_SMI_NUM, map_idx, rsd);
+	aperf = runtime_stat_avg(st, STAT_APERF, map_idx, rsd);
+	cycles = runtime_stat_avg(st, STAT_CYCLES, map_idx, rsd);
 
 	if ((cycles == 0) || (aperf == 0))
 		return;
@@ -820,7 +820,7 @@ static void print_smi_cost(struct perf_stat_config *config, int cpu_map_idx,
 static int prepare_metric(struct evsel **metric_events,
			  struct metric_ref *metric_refs,
			  struct expr_parse_ctx *pctx,
-			  int cpu_map_idx,
+			  int map_idx,
			  struct runtime_stat *st)
 {
 	double scale;
@@ -859,7 +859,7 @@ static int prepare_metric(struct evsel **metric_events,
				abort();
			}
		} else {
-			v = saved_value_lookup(metric_events[i], cpu_map_idx, false,
+			v = saved_value_lookup(metric_events[i], map_idx, false,
					       STAT_NONE, 0, st,
					       metric_events[i]->cgrp);
			if (!v)
@@ -897,7 +897,7 @@ static void generic_metric(struct perf_stat_config *config,
			   const char *metric_name,
			   const char *metric_unit,
			   int runtime,
-			   int cpu_map_idx,
+			   int map_idx,
			   struct perf_stat_output_ctx *out,
			   struct runtime_stat *st)
 {
@@ -915,7 +915,7 @@ static void generic_metric(struct perf_stat_config *config,
 	pctx->sctx.user_requested_cpu_list = strdup(config->user_requested_cpu_list);
 	pctx->sctx.runtime = runtime;
 	pctx->sctx.system_wide = config->system_wide;
-	i = prepare_metric(metric_events, metric_refs, pctx, cpu_map_idx, st);
+	i = prepare_metric(metric_events, metric_refs, pctx, map_idx, st);
 	if (i < 0) {
 		expr__ctx_free(pctx);
 		return;
@@ -960,7 +960,7 @@ static void generic_metric(struct perf_stat_config *config,
 	expr__ctx_free(pctx);
 }
 
-double test_generic_metric(struct metric_expr *mexp, int cpu_map_idx, struct runtime_stat *st)
+double test_generic_metric(struct metric_expr *mexp, int map_idx, struct runtime_stat *st)
 {
 	struct expr_parse_ctx *pctx;
 	double ratio = 0.0;
@@ -969,7 +969,7 @@ double test_generic_metric(struct metric_expr *mexp, int cpu_map_idx, struct run
 	if (!pctx)
 		return NAN;
 
-	if (prepare_metric(mexp->metric_events, mexp->metric_refs, pctx, cpu_map_idx, st) < 0)
+	if (prepare_metric(mexp->metric_events, mexp->metric_refs, pctx, map_idx, st) < 0)
		goto out;
 
 	if (expr__parse(&ratio, pctx, mexp->metric_expr))
@@ -982,7 +982,7 @@ double test_generic_metric(struct metric_expr *mexp, int cpu_map_idx, struct run
 
 void perf_stat__print_shadow_stats(struct perf_stat_config *config,
				   struct evsel *evsel,
-				   double avg, int cpu_map_idx,
+				   double avg, int map_idx,
				   struct perf_stat_output_ctx *out,
				   struct rblist *metric_events,
				   struct runtime_stat *st)
@@ -1001,7
+1001,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, if (config->iostat_run) { iostat_print_metric(config, evsel, out); } else if (evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) { - total =3D runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CYCLES, map_idx, &rsd); =20 if (total) { ratio =3D avg / total; @@ -1011,11 +1011,11 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, print_metric(config, ctxp, NULL, NULL, "insn per cycle", 0); } =20 - total =3D runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, cpu_map_idx, &= rsd); + total =3D runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, map_idx, &rsd); =20 total =3D max(total, runtime_stat_avg(st, STAT_STALLED_CYCLES_BACK, - cpu_map_idx, &rsd)); + map_idx, &rsd)); =20 if (total && avg) { out->new_line(config, ctxp); @@ -1025,8 +1025,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, ratio); } } else if (evsel__match(evsel, HARDWARE, HW_BRANCH_MISSES)) { - if (runtime_stat_n(st, STAT_BRANCHES, cpu_map_idx, &rsd) !=3D 0) - print_branch_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_BRANCHES, map_idx, &rsd) !=3D 0) + print_branch_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all branches", 0); } else if ( @@ -1035,8 +1035,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { =20 - if (runtime_stat_n(st, STAT_L1_DCACHE, cpu_map_idx, &rsd) !=3D 0) - print_l1_dcache_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_L1_DCACHE, map_idx, &rsd) !=3D 0) + print_l1_dcache_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all L1-dcache accesses", 0); } else if ( @@ -1045,8 +1045,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, 
((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { =20 - if (runtime_stat_n(st, STAT_L1_ICACHE, cpu_map_idx, &rsd) !=3D 0) - print_l1_icache_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_L1_ICACHE, map_idx, &rsd) !=3D 0) + print_l1_icache_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all L1-icache accesses", 0); } else if ( @@ -1055,8 +1055,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { =20 - if (runtime_stat_n(st, STAT_DTLB_CACHE, cpu_map_idx, &rsd) !=3D 0) - print_dtlb_cache_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_DTLB_CACHE, map_idx, &rsd) !=3D 0) + print_dtlb_cache_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all dTLB cache accesses", 0); } else if ( @@ -1065,8 +1065,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { =20 - if (runtime_stat_n(st, STAT_ITLB_CACHE, cpu_map_idx, &rsd) !=3D 0) - print_itlb_cache_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_ITLB_CACHE, map_idx, &rsd) !=3D 0) + print_itlb_cache_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all iTLB cache accesses", 0); } else if ( @@ -1075,27 +1075,27 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { =20 - if (runtime_stat_n(st, STAT_LL_CACHE, cpu_map_idx, &rsd) !=3D 0) - print_ll_cache_misses(config, cpu_map_idx, avg, out, st, &rsd); + if (runtime_stat_n(st, STAT_LL_CACHE, map_idx, &rsd) !=3D 0) + print_ll_cache_misses(config, map_idx, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, 
"of all LL-cache accesses", 0); } else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) { - total =3D runtime_stat_avg(st, STAT_CACHEREFS, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CACHEREFS, map_idx, &rsd); =20 if (total) ratio =3D avg * 100 / total; =20 - if (runtime_stat_n(st, STAT_CACHEREFS, cpu_map_idx, &rsd) !=3D 0) + if (runtime_stat_n(st, STAT_CACHEREFS, map_idx, &rsd) !=3D 0) print_metric(config, ctxp, NULL, "%8.3f %%", "of all cache refs", ratio); else print_metric(config, ctxp, NULL, NULL, "of all cache refs", 0); } else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) { - print_stalled_cycles_frontend(config, cpu_map_idx, avg, out, st, &rsd); + print_stalled_cycles_frontend(config, map_idx, avg, out, st, &rsd); } else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_BACKEND)) { - print_stalled_cycles_backend(config, cpu_map_idx, avg, out, st, &rsd); + print_stalled_cycles_backend(config, map_idx, avg, out, st, &rsd); } else if (evsel__match(evsel, HARDWARE, HW_CPU_CYCLES)) { - total =3D runtime_stat_avg(st, STAT_NSECS, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_NSECS, map_idx, &rsd); =20 if (total) { ratio =3D avg / total; @@ -1104,7 +1104,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, print_metric(config, ctxp, NULL, NULL, "Ghz", 0); } } else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX)) { - total =3D runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CYCLES, map_idx, &rsd); =20 if (total) print_metric(config, ctxp, NULL, @@ -1114,8 +1114,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, print_metric(config, ctxp, NULL, NULL, "transactional cycles", 0); } else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX_CP)) { - total =3D runtime_stat_avg(st, STAT_CYCLES, cpu_map_idx, &rsd); - total2 =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CYCLES, map_idx, &rsd); + 
total2 =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, map_idx, &rsd); =20 if (total2 < avg) total2 =3D avg; @@ -1125,19 +1125,19 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, else print_metric(config, ctxp, NULL, NULL, "aborted cycles", 0); } else if (perf_stat_evsel__is(evsel, TRANSACTION_START)) { - total =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, map_idx, &rsd); =20 if (avg) ratio =3D total / avg; =20 - if (runtime_stat_n(st, STAT_CYCLES_IN_TX, cpu_map_idx, &rsd) !=3D 0) + if (runtime_stat_n(st, STAT_CYCLES_IN_TX, map_idx, &rsd) !=3D 0) print_metric(config, ctxp, NULL, "%8.0f", "cycles / transaction", ratio); else print_metric(config, ctxp, NULL, NULL, "cycles / transaction", 0); } else if (perf_stat_evsel__is(evsel, ELISION_START)) { - total =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_CYCLES_IN_TX, map_idx, &rsd); =20 if (avg) ratio =3D total / avg; @@ -1150,28 +1150,28 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, else print_metric(config, ctxp, NULL, NULL, "CPUs utilized", 0); } else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) { - double fe_bound =3D td_fe_bound(cpu_map_idx, st, &rsd); + double fe_bound =3D td_fe_bound(map_idx, st, &rsd); =20 if (fe_bound > 0.2) color =3D PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "frontend bound", fe_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) { - double retiring =3D td_retiring(cpu_map_idx, st, &rsd); + double retiring =3D td_retiring(map_idx, st, &rsd); =20 if (retiring > 0.7) color =3D PERF_COLOR_GREEN; print_metric(config, ctxp, color, "%8.1f%%", "retiring", retiring * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) { - double bad_spec =3D td_bad_spec(cpu_map_idx, st, &rsd); + double bad_spec =3D td_bad_spec(map_idx, st, &rsd); =20 if (bad_spec > 0.1) color 
=3D PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "bad speculation", bad_spec * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) { - double be_bound =3D td_be_bound(cpu_map_idx, st, &rsd); + double be_bound =3D td_be_bound(map_idx, st, &rsd); const char *name =3D "backend bound"; static int have_recovery_bubbles =3D -1; =20 @@ -1184,14 +1184,14 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, =20 if (be_bound > 0.2) color =3D PERF_COLOR_RED; - if (td_total_slots(cpu_map_idx, st, &rsd) > 0) + if (td_total_slots(map_idx, st, &rsd) > 0) print_metric(config, ctxp, color, "%8.1f%%", name, be_bound * 100.); else print_metric(config, ctxp, NULL, NULL, name, 0); } else if (perf_stat_evsel__is(evsel, TOPDOWN_RETIRING) && - full_td(cpu_map_idx, st, &rsd)) { - double retiring =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd)) { + double retiring =3D td_metric_ratio(map_idx, STAT_TOPDOWN_RETIRING, st, &rsd); if (retiring > 0.7) @@ -1199,8 +1199,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, print_metric(config, ctxp, color, "%8.1f%%", "Retiring", retiring * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_FE_BOUND) && - full_td(cpu_map_idx, st, &rsd)) { - double fe_bound =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd)) { + double fe_bound =3D td_metric_ratio(map_idx, STAT_TOPDOWN_FE_BOUND, st, &rsd); if (fe_bound > 0.2) @@ -1208,8 +1208,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, print_metric(config, ctxp, color, "%8.1f%%", "Frontend Bound", fe_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_BE_BOUND) && - full_td(cpu_map_idx, st, &rsd)) { - double be_bound =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd)) { + double be_bound =3D td_metric_ratio(map_idx, STAT_TOPDOWN_BE_BOUND, st, &rsd); if (be_bound > 0.2) @@ -1217,8 +1217,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, 
print_metric(config, ctxp, color, "%8.1f%%", "Backend Bound", be_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_BAD_SPEC) && - full_td(cpu_map_idx, st, &rsd)) { - double bad_spec =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd)) { + double bad_spec =3D td_metric_ratio(map_idx, STAT_TOPDOWN_BAD_SPEC, st, &rsd); if (bad_spec > 0.1) @@ -1226,11 +1226,11 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, print_metric(config, ctxp, color, "%8.1f%%", "Bad Speculation", bad_spec * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_HEAVY_OPS) && - full_td(cpu_map_idx, st, &rsd) && (config->topdown_level > 1)) { - double retiring =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd) && (config->topdown_level > 1)) { + double retiring =3D td_metric_ratio(map_idx, STAT_TOPDOWN_RETIRING, st, &rsd); - double heavy_ops =3D td_metric_ratio(cpu_map_idx, + double heavy_ops =3D td_metric_ratio(map_idx, STAT_TOPDOWN_HEAVY_OPS, st, &rsd); double light_ops =3D retiring - heavy_ops; @@ -1246,11 +1246,11 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, print_metric(config, ctxp, color, "%8.1f%%", "Light Operations", light_ops * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_BR_MISPREDICT) && - full_td(cpu_map_idx, st, &rsd) && (config->topdown_level > 1)) { - double bad_spec =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd) && (config->topdown_level > 1)) { + double bad_spec =3D td_metric_ratio(map_idx, STAT_TOPDOWN_BAD_SPEC, st, &rsd); - double br_mis =3D td_metric_ratio(cpu_map_idx, + double br_mis =3D td_metric_ratio(map_idx, STAT_TOPDOWN_BR_MISPREDICT, st, &rsd); double m_clears =3D bad_spec - br_mis; @@ -1266,11 +1266,11 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, print_metric(config, ctxp, color, "%8.1f%%", "Machine Clears", m_clears * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_LAT) && - full_td(cpu_map_idx, st, &rsd) && 
(config->topdown_level > 1)) { - double fe_bound =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd) && (config->topdown_level > 1)) { + double fe_bound =3D td_metric_ratio(map_idx, STAT_TOPDOWN_FE_BOUND, st, &rsd); - double fetch_lat =3D td_metric_ratio(cpu_map_idx, + double fetch_lat =3D td_metric_ratio(map_idx, STAT_TOPDOWN_FETCH_LAT, st, &rsd); double fetch_bw =3D fe_bound - fetch_lat; @@ -1286,11 +1286,11 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, print_metric(config, ctxp, color, "%8.1f%%", "Fetch Bandwidth", fetch_bw * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_MEM_BOUND) && - full_td(cpu_map_idx, st, &rsd) && (config->topdown_level > 1)) { - double be_bound =3D td_metric_ratio(cpu_map_idx, + full_td(map_idx, st, &rsd) && (config->topdown_level > 1)) { + double be_bound =3D td_metric_ratio(map_idx, STAT_TOPDOWN_BE_BOUND, st, &rsd); - double mem_bound =3D td_metric_ratio(cpu_map_idx, + double mem_bound =3D td_metric_ratio(map_idx, STAT_TOPDOWN_MEM_BOUND, st, &rsd); double core_bound =3D be_bound - mem_bound; @@ -1308,12 +1308,12 @@ void perf_stat__print_shadow_stats(struct perf_stat= _config *config, } else if (evsel->metric_expr) { generic_metric(config, evsel->metric_expr, evsel->metric_events, NULL, evsel->name, evsel->metric_name, NULL, 1, - cpu_map_idx, out, st); - } else if (runtime_stat_n(st, STAT_NSECS, cpu_map_idx, &rsd) !=3D 0) { + map_idx, out, st); + } else if (runtime_stat_n(st, STAT_NSECS, map_idx, &rsd) !=3D 0) { char unit =3D ' '; char unit_buf[10] =3D "/sec"; =20 - total =3D runtime_stat_avg(st, STAT_NSECS, cpu_map_idx, &rsd); + total =3D runtime_stat_avg(st, STAT_NSECS, map_idx, &rsd); if (total) ratio =3D convert_unit_double(1000000000.0 * avg / total, &unit); =20 @@ -1321,7 +1321,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, snprintf(unit_buf, sizeof(unit_buf), "%c/sec", unit); print_metric(config, ctxp, NULL, "%8.3f", unit_buf, ratio); } else if 
(perf_stat_evsel__is(evsel, SMI_NUM)) { - print_smi_cost(config, cpu_map_idx, out, st, &rsd); + print_smi_cost(config, map_idx, out, st, &rsd); } else { num =3D 0; } @@ -1335,7 +1335,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_c= onfig *config, generic_metric(config, mexp->metric_expr, mexp->metric_events, mexp->metric_refs, evsel->name, mexp->metric_name, mexp->metric_unit, mexp->runtime, - cpu_map_idx, out, st); + map_idx, out, st); } } if (num =3D=3D 0) --=20 2.37.3.998.g577e59143f-goog From nobody Mon Apr 6 11:51:59 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBBB9C07E9D for ; Mon, 26 Sep 2022 20:08:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229532AbiIZUIT (ORCPT ); Mon, 26 Sep 2022 16:08:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230301AbiIZUIG (ORCPT ); Mon, 26 Sep 2022 16:08:06 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E33163F02; Mon, 26 Sep 2022 13:08:06 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id b21so7224481plz.7; Mon, 26 Sep 2022 13:08:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:from:to:cc:subject:date; bh=IOT7jzMOg4ykF0pG9M1eK7xPNIKdwWwmdCoPtWAbkrY=; b=eLNLCcy5SKVpDxbxxdUX9m817iFX58sa8dCbsC6MShx2isMKgi1bgIHt+xJA3C4hLl rC84H1CNscAqy+WgtFYJ6jyFlc/9mg58aNYd/+16sH1aev3+rWqKs2f6t8trek9VJboE 7hTNtzB3g2fQ87fnSZ0d49u5QwYYCbr74VyLdij0ujh+df1S6Os448m0zraGCBNbV9hQ 
From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 4/6] perf stat: Use thread map index for shadow stat
Date: Mon, 26 Sep 2022 13:07:55 -0700
Message-Id: <20220926200757.1161448-5-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References:
<20220926200757.1161448-1-namhyung@kernel.org>

When AGGR_THREAD is active, it aggregates the values for each thread.
Previously it used the cpu map index, which is invalid for AGGR_THREAD,
so it had to keep a separate set of runtime stats indexed by 0.  But it
can simply use rt_stat with the thread map index instead.

Rename first_shadow_cpu_map_idx() to first_shadow_map_idx() and make it
return the thread index under AGGR_THREAD.

Signed-off-by: Namhyung Kim
Reviewed-by: James Clark
---
 tools/perf/util/stat-display.c | 20 +++++++++-----------
 tools/perf/util/stat.c         |  8 ++------
 2 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 234491f43c36..570e2c04d47d 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -442,7 +442,7 @@ static void print_metric_header(struct perf_stat_config *config,
 		fprintf(os->fh, "%*s ", config->metric_only_len, unit);
 }
 
-static int first_shadow_cpu_map_idx(struct perf_stat_config *config,
+static int first_shadow_map_idx(struct perf_stat_config *config,
 				struct evsel *evsel, const struct aggr_cpu_id *id)
 {
 	struct perf_cpu_map *cpus = evsel__cpus(evsel);
@@ -452,6 +452,9 @@ static int first_shadow_cpu_map_idx(struct perf_stat_config *config,
 	if (config->aggr_mode == AGGR_NONE)
 		return perf_cpu_map__idx(cpus, id->cpu);
 
+	if (config->aggr_mode == AGGR_THREAD)
+		return id->thread;
+
 	if (!config->aggr_get_id)
 		return 0;
 
@@ -646,7 +649,7 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
 	}
 
 	perf_stat__print_shadow_stats(config, counter, uval,
-				first_shadow_cpu_map_idx(config, counter, &id),
+				first_shadow_map_idx(config, counter, &id),
 				&out, &config->metric_events, st);
 	if (!config->csv_output && !config->metric_only && !config->json_output) {
 		print_noise(config, counter, noise);
@@ -676,7 +679,7 @@ static void aggr_update_shadow(struct perf_stat_config *config,
 			val += perf_counts(counter->counts, idx, 0)->val;
 		}
 		perf_stat__update_shadow_stats(counter, val,
-				first_shadow_cpu_map_idx(config, counter, &id),
+				first_shadow_map_idx(config, counter, &id),
 				&rt_stat);
 	}
 }
@@ -979,14 +982,9 @@ static void print_aggr_thread(struct perf_stat_config *config,
 			fprintf(output, "%s", prefix);
 
 		id = buf[thread].id;
-		if (config->stats)
-			printout(config, id, 0, buf[thread].counter, buf[thread].uval,
-				 prefix, buf[thread].run, buf[thread].ena, 1.0,
-				 &config->stats[id.thread]);
-		else
-			printout(config, id, 0, buf[thread].counter, buf[thread].uval,
-				 prefix, buf[thread].run, buf[thread].ena, 1.0,
-				 &rt_stat);
+		printout(config, id, 0, buf[thread].counter, buf[thread].uval,
+			 prefix, buf[thread].run, buf[thread].ena, 1.0,
+			 &rt_stat);
 		fputc('\n', output);
 	}
 
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index e1d3152ce664..21137c9d5259 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -389,12 +389,8 @@ process_counter_values(struct perf_stat_config, struct evsel *evsel,
 		}
 
 		if (config->aggr_mode == AGGR_THREAD) {
-			if (config->stats)
-				perf_stat__update_shadow_stats(evsel,
-					count->val, 0, &config->stats[thread]);
-			else
-				perf_stat__update_shadow_stats(evsel,
-					count->val, 0, &rt_stat);
+			perf_stat__update_shadow_stats(evsel, count->val,
+						       thread, &rt_stat);
 		}
 		break;
 	case AGGR_GLOBAL:
-- 
2.37.3.998.g577e59143f-goog

From nobody Mon Apr 6 11:51:59 2026
From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 5/6] perf stat: Kill unused per-thread runtime stats
Date: Mon, 26 Sep 2022 13:07:56 -0700
Message-Id: <20220926200757.1161448-6-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References: <20220926200757.1161448-1-namhyung@kernel.org>

Now that AGGR_THREAD uses the global rt_stat, the per-thread runtime
stats are unused.  Let's get rid of them.
Signed-off-by: Namhyung Kim
Reviewed-by: James Clark
---
 tools/perf/builtin-stat.c | 54 ---------------------------------------
 tools/perf/util/stat.h    |  2 --
 2 files changed, 56 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index e05fe72c1d87..b86ebb25a799 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -292,13 +292,8 @@ static inline void diff_timespec(struct timespec *r, struct timespec *a,
 
 static void perf_stat__reset_stats(void)
 {
-	int i;
-
 	evlist__reset_stats(evsel_list);
 	perf_stat__reset_shadow_stats();
-
-	for (i = 0; i < stat_config.stats_num; i++)
-		perf_stat__reset_shadow_per_stat(&stat_config.stats[i]);
 }
 
 static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
@@ -489,46 +484,6 @@ static void read_counters(struct timespec *rs)
 	}
 }
 
-static int runtime_stat_new(struct perf_stat_config *config, int nthreads)
-{
-	int i;
-
-	config->stats = calloc(nthreads, sizeof(struct runtime_stat));
-	if (!config->stats)
-		return -1;
-
-	config->stats_num = nthreads;
-
-	for (i = 0; i < nthreads; i++)
-		runtime_stat__init(&config->stats[i]);
-
-	return 0;
-}
-
-static void runtime_stat_delete(struct perf_stat_config *config)
-{
-	int i;
-
-	if (!config->stats)
-		return;
-
-	for (i = 0; i < config->stats_num; i++)
-		runtime_stat__exit(&config->stats[i]);
-
-	zfree(&config->stats);
-}
-
-static void runtime_stat_reset(struct perf_stat_config *config)
-{
-	int i;
-
-	if (!config->stats)
-		return;
-
-	for (i = 0; i < config->stats_num; i++)
-		perf_stat__reset_shadow_per_stat(&config->stats[i]);
-}
-
 static void process_interval(void)
 {
 	struct timespec ts, rs;
@@ -537,7 +492,6 @@ static void process_interval(void)
 	diff_timespec(&rs, &ts, &ref_time);
 
 	perf_stat__reset_shadow_per_stat(&rt_stat);
-	runtime_stat_reset(&stat_config);
 	read_counters(&rs);
 
 	if (STAT_RECORD) {
@@ -1018,7 +972,6 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 
 		evlist__copy_prev_raw_counts(evsel_list);
 		evlist__reset_prev_raw_counts(evsel_list);
-		runtime_stat_reset(&stat_config);
 		perf_stat__reset_shadow_per_stat(&rt_stat);
 	} else {
 		update_stats(&walltime_nsecs_stats, t1 - t0);
@@ -2514,12 +2467,6 @@ int cmd_stat(int argc, const char **argv)
 	 */
 	if (stat_config.aggr_mode == AGGR_THREAD) {
 		thread_map__read_comms(evsel_list->core.threads);
-		if (target.system_wide) {
-			if (runtime_stat_new(&stat_config,
-					     perf_thread_map__nr(evsel_list->core.threads))) {
-				goto out;
-			}
-		}
 	}
 
 	if (stat_config.aggr_mode == AGGR_NODE)
@@ -2660,7 +2607,6 @@ int cmd_stat(int argc, const char **argv)
 	evlist__delete(evsel_list);
 
 	metricgroup__rblist_exit(&stat_config.metric_events);
-	runtime_stat_delete(&stat_config);
 	evlist__close_control(stat_config.ctl_fd, stat_config.ctl_fd_ack, &stat_config.ctl_fd_close);
 
 	return status;
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index 3eba38a1a149..43cb3f13d4d6 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -153,8 +153,6 @@ struct perf_stat_config {
 	int			 run_count;
 	int			 print_free_counters_hint;
 	int			 print_mixed_hw_group_error;
-	struct runtime_stat	*stats;
-	int			 stats_num;
 	const char		*csv_sep;
 	struct stats		*walltime_nsecs_stats;
 	struct rusage		 ru_data;
-- 
2.37.3.998.g577e59143f-goog

From nobody Mon Apr 6 11:51:59 2026
Sender: Namhyung Kim
From: Namhyung Kim
To: Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Ingo Molnar, Peter Zijlstra, LKML, Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Andi Kleen, Kan Liang, Leo Yan, Zhengjun Xing
Subject: [PATCH 6/6] perf stat: Don't compare runtime stat for shadow stats
Date: Mon, 26 Sep 2022 13:07:57 -0700
Message-Id: <20220926200757.1161448-7-namhyung@kernel.org>
In-Reply-To: <20220926200757.1161448-1-namhyung@kernel.org>
References: <20220926200757.1161448-1-namhyung@kernel.org>

Now it always uses the global rt_stat, so get rid of the field from
struct saved_value.  When both evsels are NULL, the remaining pointer
comparison already returns 0, so the NULL-handling block in
saved_value_cmp can be removed as well.

Signed-off-by: Namhyung Kim
Reviewed-by: James Clark
---
 tools/perf/util/stat-shadow.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 99d05262055c..700563306637 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -35,7 +35,6 @@ struct saved_value {
 	int ctx;
 	int map_idx;
 	struct cgroup *cgrp;
-	struct runtime_stat *stat;
 	struct stats stats;
 	u64 metric_total;
 	int metric_other;
@@ -67,16 +66,6 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry)
 	if (a->cgrp != b->cgrp)
 		return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1;
 
-	if (a->evsel == NULL && b->evsel == NULL) {
-		if (a->stat == b->stat)
-			return 0;
-
-		if ((char *)a->stat < (char *)b->stat)
-			return -1;
-
-		return 1;
-	}
-
 	if (a->evsel == b->evsel)
 		return 0;
 	if ((char *)a->evsel < (char *)b->evsel)
@@ -120,7 +109,6 @@ static struct saved_value *saved_value_lookup(struct evsel *evsel,
 		.evsel = evsel,
 		.type = type,
 		.ctx = ctx,
-		.stat = st,
 		.cgrp = cgrp,
 	};
 
-- 
2.37.3.998.g577e59143f-goog