From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:15 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-2-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 01/22] perf expr: Allow a double if expression
From: Ian Rogers <irogers@google.com>
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
    caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
    samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark,
    Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Some TMA metrics have double if expressions like:

( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
/ CPU_CLK_UNHALTED.REF_XCLK ) ) if #core_wide < 1 else
( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else
CPU_CLK_UNHALTED.THREAD

This currently fails to parse because the left-hand if expression needs
to be in parentheses. By allowing the if expression to have a
right-hand side that is itself an if expression, we can parse the
expression above with a left-to-right evaluation order that matches
languages like Python.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/expr.c | 4 ++++
 tools/perf/util/expr.y  | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/perf/tests/expr.c b/tools/perf/tests/expr.c
index 8bd719766814..6512f5e22045 100644
--- a/tools/perf/tests/expr.c
+++ b/tools/perf/tests/expr.c
@@ -95,6 +95,10 @@ static int test__expr(struct test_suite *t __maybe_unused, int subtest __maybe_u
 	ret |= test(ctx, "min(1,2) + 1", 2);
 	ret |= test(ctx, "max(1,2) + 1", 3);
 	ret |= test(ctx, "1+1 if 3*4 else 0", 2);
+	ret |= test(ctx, "100 if 1 else 200 if 1 else 300", 100);
+	ret |= test(ctx, "100 if 0 else 200 if 1 else 300", 200);
+	ret |= test(ctx, "100 if 1 else 200 if 0 else 300", 100);
+	ret |= test(ctx, "100 if 0 else 200 if 0 else 300", 300);
 	ret |= test(ctx, "1.1 + 2.1", 3.2);
 	ret |= test(ctx, ".1 + 2.", 2.1);
 	ret |= test(ctx, "d_ratio(1, 2)", 0.5);
diff --git a/tools/perf/util/expr.y b/tools/perf/util/expr.y
index a30b825adb7b..635e562350c5 100644
--- a/tools/perf/util/expr.y
+++ b/tools/perf/util/expr.y
@@ -156,7 +156,7 @@ start: if_expr
 	}
 	;
 
-if_expr: expr IF expr ELSE expr
+if_expr: expr IF expr ELSE if_expr
 	{
 		if (fpclassify($3.val) == FP_ZERO) {
 			/*
-- 
2.38.0.rc1.362.ged0d419d3c-goog
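
The grammar change makes the if/else form right-associative: "x if c1
else y if c2 else z" parses as "x if c1 else (y if c2 else z)", matching
Python's conditional expression and C's ternary operator. A minimal C
sketch of the evaluation order the new tests expect (illustrative only,
not part of the patch):

#include <assert.h>

/* "A if C else B" in perf's expr language corresponds to C's "C ? A : B".
 * Making if_expr right-recursive means the else branch may itself be an
 * if-expression, so the chain below needs no extra parentheses. */
static double if_else_chain(double c1, double c2)
{
	return c1 ? 100 : (c2 ? 200 : 300);
}

int main(void)
{
	assert(if_else_chain(1, 1) == 100); /* "100 if 1 else 200 if 1 else 300" */
	assert(if_else_chain(0, 1) == 200); /* "100 if 0 else 200 if 1 else 300" */
	assert(if_else_chain(1, 0) == 100); /* "100 if 1 else 200 if 0 else 300" */
	assert(if_else_chain(0, 0) == 300); /* "100 if 0 else 200 if 0 else 300" */
	return 0;
}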
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:16 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-3-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 02/22] perf expr: Remove jevents case workaround
From: Ian Rogers <irogers@google.com>
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
    caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
    samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark,
    Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

jevents.py no longer lowercases metrics, and altering the case can
cause hashmap lookups to fail, so remove the workaround.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/expr.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
index c6827900f8d3..aaacf514dc09 100644
--- a/tools/perf/util/expr.c
+++ b/tools/perf/util/expr.c
@@ -182,7 +182,7 @@ int expr__add_ref(struct expr_parse_ctx *ctx, struct metric_ref *ref)
 {
 	struct expr_id_data *data_ptr = NULL, *old_data = NULL;
 	char *old_key = NULL;
-	char *name, *p;
+	char *name;
 	int ret;
 
 	data_ptr = zalloc(sizeof(*data_ptr));
@@ -195,15 +195,6 @@ int expr__add_ref(struct expr_parse_ctx *ctx, struct metric_ref *ref)
 		return -ENOMEM;
 	}
 
-	/*
-	 * The jevents tool converts all metric expressions
-	 * to lowercase, including metric references, hence
-	 * we need to add lowercase name for metric, so it's
-	 * properly found.
-	 */
-	for (p = name; *p; p++)
-		*p = tolower(*p);
-
 	/*
 	 * Intentionally passing just const char pointers,
 	 * originally from 'struct pmu_event' object.
-- 
2.38.0.rc1.362.ged0d419d3c-goog
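
To spell out the failure mode the removal avoids: per the commit
message, the metric table is keyed case-sensitively, so storing a
lowercased copy of a mixed-case metric name makes a later lookup of the
original name miss. A toy C sketch of the mismatch (names and lookup
invented for illustration; this is not perf's actual hashmap API):

#include <stdio.h>
#include <string.h>

/* Toy case-sensitive table standing in for perf's hashmap. The old
 * workaround would have inserted the lowercased form. */
static const char *stored_keys[] = { "bad_speculation" };

static int lookup(const char *key)
{
	for (size_t i = 0; i < sizeof(stored_keys) / sizeof(stored_keys[0]); i++)
		if (strcmp(stored_keys[i], key) == 0)
			return 1;
	return 0;
}

int main(void)
{
	/* A metric expression referencing the original mixed case misses
	 * the lowercased entry once jevents stops lowercasing. */
	printf("%d\n", lookup("Bad_Speculation")); /* prints 0: not found */
	printf("%d\n", lookup("bad_speculation")); /* prints 1: found */
	return 0;
}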
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:17 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-4-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 03/22] perf metrics: Don't scale counts going into metrics
From: Ian Rogers <irogers@google.com>
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
    caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
    samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark,
    Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Counts are scaled prior to going into saved_value; reverse the scaling
so that metrics don't double-scale values.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/stat-shadow.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 9e1eddeff21b..b5cedd37588f 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -865,11 +865,16 @@ static int prepare_metric(struct evsel **metric_events,
 			if (!v)
 				break;
 			stats = &v->stats;
-			scale = 1.0;
+			/*
+			 * If an event was scaled during stat gathering, reverse
+			 * the scale before computing the metric.
+			 */
+			scale = 1.0 / metric_events[i]->scale;
+			source_count = evsel__source_count(metric_events[i]);
 
 			if (v->metric_other)
-				metric_total = v->metric_total;
+				metric_total = v->metric_total * scale;
 		}
 		n = strdup(evsel__metric_id(metric_events[i]));
 		if (!n)
-- 
2.38.0.rc1.362.ged0d419d3c-goog
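
The arithmetic behind the fix, as a worked example with invented
numbers: if an event's counts are multiplied by its unit scale before
being saved, a metric that consumes the saved value and then has
scaling applied again on output would be scaled twice; multiplying the
saved value by 1.0 / scale first restores the raw count. A minimal C
sketch (illustrative only, not the stat-shadow code):

#include <stdio.h>

int main(void)
{
	/* Hypothetical event with a unit scale applied at stat time. */
	double raw = 2000000.0;           /* count as read from the kernel */
	double unit_scale = 1e-6;         /* per-event scale factor */
	double saved = raw * unit_scale;  /* what stat stores: 2.0 */

	/* Undo the stat-time scaling so the metric sees the raw count and
	 * scaling is only applied once, when the result is printed. */
	double for_metric = saved * (1.0 / unit_scale);

	printf("saved=%g for_metric=%g\n", saved, for_metric); /* 2 and 2e+06 */
	return 0;
}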
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:18 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-5-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 04/22] perf vendor events: Update Intel skylakex
From: Ian Rogers <irogers@google.com>
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
    caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
    samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark,
    Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Events remain at v1.28, and the metrics are based on TMA 4.4 full.
Use the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Removal of ScaleUnit from uncore events by Zhengjun Xing.
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in
  a group named after their parent, allowing children of a metric to be
  easily measured using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the
  expression size and increase readability.
- Latest metrics from: https://github.com/intel/perfmon-metrics

Tested on a skylakex manually and with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok
 93: perf all metricgroups test                          : Ok
 94: perf all metrics test                               : Skip
 95: perf all PMU test                                   : Ok

Signed-off-by: Ian Rogers <irogers@google.com>
---
 .../arch/x86/skylakex/skx-metrics.json        | 1262 ++++++++++-------
 .../arch/x86/skylakex/uncore-memory.json      |   18 +-
 .../arch/x86/skylakex/uncore-other.json       |   19 +-
 3 files changed, 782 insertions(+), 517 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
index 6a6764e1504b..bc8e42554096 100644
--- a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
@@ -1,148 +1,726 @@
 [
     {
         "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Frontend_Bound",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
+        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
+        "MetricGroup": "PGO;TopdownL1;tma_L1_group",
+        "MetricName": "tma_frontend_bound",
+        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+        "ScaleUnit": "100%"
     },
     {
SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE= / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "(ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDAT= A_STALL\\,cmask\\=3D1\\,edge@) / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. 
Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES /= CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "9 * BACLEARS.ANY / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). 
Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. 
Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. 
SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNH= ALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * = CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * ((I= NT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES))= / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. 
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUN= D_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + = tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) = * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE= _ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. 
Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 11 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. 
Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (ME= M_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + cpu@L1D_PEND_MISS.FB_= FULL\\,cmask\\=3D1@)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.S= TALLS_L2_MISS) / CLKS)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STA= LLS_L3_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. 
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "((44 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRE= D.XSNP_HITM * (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OF= FCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.DEM= AND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + (44 * Average_Frequency) * MEM_L= OAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(44 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED= .XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - (OFFCORE_RESPONSE.DEMA= ND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT= .HITM_OTHER_CORE + OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FW= D)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CL= KS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(17 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT = * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. 
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_ACT= IVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bou= nd)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). 
This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "(59.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIR= ED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "(127 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. #link to NUMA article Sample = with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "((89.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETI= RED.REMOTE_HITM + (89.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRED.REM= OTE_FWD) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) /= CLKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
Sample= with: MEM_INST_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 11 * (1 - (MEM_INST_RETIRED.LO= CK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOA= DS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_R= EQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually have less impact on ou= t-of-order core performance; however; holding resources for a longer time can= lead to undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "((110 * Average_Frequency) * (OFFCORE_RESPONSE.DEMA= ND_RFO.L3_MISS.REMOTE_HITM + OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HITM= ) + (47.5 * Average_Frequency) * (OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_O= THER_CORE + OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ = + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. 
Sample w= ith: MEM_INST_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the TLB was missed by store accesses, hitting in the second-l= evel TLB (STLB)", + "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the STLB was missed by store accesses, performing a hardware page wal= k", + "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = the program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_P= ORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CLKS if (ARITH.DIV= IDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY)= ) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTI= L) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed to this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. 
For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_NONE / 2 if #SMT_on else= CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "PARTIAL_RAT_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK= _UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK= _UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCL= ES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "The Mixing_Vectors metric gives the percentag= e of injected blend uops out of all uops issued", + "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISS= UED.ANY", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percenta= ge of injected blend uops out of all uops issued. Usually a Mixing_Vectors = over 5% is worth investigating. 
Read more in Appendix B1 of the Optimizatio= ns Guide for this topic.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_1 - UOPS_EXECUTED.CO= RE_CYCLES_GE_2) / 2 if #SMT_on else EXE_ACTIVITY.1_PORTS_UTIL) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or oversubscription of a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_2 - UOPS_EXECUTED.CO= RE_CYCLES_GE_3) / 2 if #SMT_on else EXE_ACTIVITY.2_PORTS_UTIL) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). 
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else= UOPS_EXECUTED.CORE_CYCLES_GE_3) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; 
[= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Samp= le with: UOPS_DISPATCHED_PORT.PORT_7", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessarily mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. 
SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" }, { - "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALT= ED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * = CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETI= RED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES = / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts", - "MetricName": "Mispredictions" + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). 
Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.TH= READ", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_P= ACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / UOPS_RETIRED.RET= IRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. 
May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 512-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.512B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_512b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 512-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", + "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_= RETIRED.ANY", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_memory_operations", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions", + "MetricExpr": "tma_light_operations * UOPS_RETIRED.MACRO_FUSED / U= OPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fused_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring fused instructions -- where one uop can represent m= ultiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JC= C are commonly used examples.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", + "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_non_fused_branches", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. Can be used to exa= mine fusible conditional jumps that were not fused.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / UOPS_RETI= RED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_nop_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. Sample with: INST_RETIRED.NOP", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. 
May undercount due to FMA double counting", + "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_m= emory_operations + tma_fused_instructions + tma_non_fused_branches + tma_no= p_instructions))", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_other_light_ops", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS + UOPS_RETIRED.MACRO_FUS= ED - INST_RETIRED.ANY) / SLOTS", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that are decoded into two or up to= ([SNB+] four; [ADL+] five) uops", + "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer", + "MetricGroup": "TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_few_uops_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring instructions that are decoded into two or up t= o ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number o= f uops in such instructions.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. 
Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIR= ED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) = * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS= _NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts_SMT", - "MetricName": "Mispredictions_SMT" + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" }, { "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_= ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (= ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_H= IT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1= @ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS )= / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_AC= TIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_P= ORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * E= XE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS= _NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + = 4 * 
INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,= cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALLS_L3_MISS= / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTI= VITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_= HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (ME= M_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L= 1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVI= TY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THR= EAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MI= SS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_= ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1= _PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) *= EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UO= PS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY = + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (O= FFCORE_REQUESTS_BUFFER.SQ_FULL / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIV= ITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THR= EAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_= L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_ME= M_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EX= E_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTE= D.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * = (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS= _ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD= ))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_R= ETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ / CPU_CLK_UNHAL= TED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALL= S_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_boun= d / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_stor= e_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma= _l3_hit_latency + tma_sq_full))) + (tma_l1_bound / (tma_dram_bound + tma_l1= _bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (= tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_spli= t_loads + tma_store_fwd_blk)) ", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "Memory_Bandwidth" }, - { - "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= 
OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK= _UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALL= S_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 = + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RET= IRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_= L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #= ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE= _ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_= SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE= _THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTI= L) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MIS= C.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( = 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) )= * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_D= ATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALL= S_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - C= YCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RE= TIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )= ) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_= RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYC= LE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNH= ALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STA= LLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_A= NY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_A= CTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) /= (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( OFFCORE_REQUESTS_BUFFER= .SQ_FULL / 2 ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(( CYCLE_ACTIVITY.ST= ALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) )= ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MI= SS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY = + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTI= VITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.= THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.= REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)= ) * (1 - 
(IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / = 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK = ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4= * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_AC= TIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( ((L1D_PEND_MISS.PENDING / ( M= EM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB= _FULL\\,cmask\\=3D1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.S= TALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD = , 0 )) ) ", - "MetricGroup": "Mem;MemoryBW;Offcore_SMT", - "MetricName": "Memory_Bandwidth_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_= ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (= ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_H= IT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1= @ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS )= / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_AC= TIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_P= ORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * E= XE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS= _NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + = 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_R= D ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE= _REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREA= D)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_= ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALT= ED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT /= MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOA= D_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL= \\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.ST= ALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_= L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #(((= CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_AC= TIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLO= TS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTI= VITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 = * CPU_CLK_UNHALTED.THREAD))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / C= PU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 
1000000000 / duration_time)) - (3.5 *= ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000= 000 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CY= CLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNH= ALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.F= B_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (= MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.= FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTI= VITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STA= LLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL= + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_U= NHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORE= S)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - = ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.= THREAD))) ) )", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound = / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_= bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing = + tma_l3_hit_latency + tma_sq_full)) + (tma_l2_bound / (tma_dram_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)))", "MetricGroup": "Mem;MemoryLat;Offcore", "MetricName": "Memory_Latency" }, - { - "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK= _UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALL= S_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 = + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RET= IRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_= L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #= ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE= _ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_= SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE= _THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTI= L) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / 
CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MIS= C.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( = 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) )= * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WI= TH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cp= u@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHAL= TED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + = (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_C= LK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 += (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MIS= S.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVIT= Y.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREA= D) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / = (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.R= ETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALT= ED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_POR= TS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CO= RE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_TH= READ_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( I= NT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC)= * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THRE= AD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) = * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRE= D.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_M= ISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (= MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED= .L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT = / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ )= ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / = CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVI= TY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS= _UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 )= * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )= )) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (ID= Q_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + = CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UO= PS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) )))) ) )", - "MetricGroup": "Mem;MemoryLat;Offcore_SMT", - "MetricName": "Memory_Latency_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / 
(CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (max( (= CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK= _UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTI= VITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DE= LIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( 9 * c= pu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_LOAD_MISSES.WALK_ACTIVE = , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 )= ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYC= LE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_A= CTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.ST= ALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTA= L + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_= UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) -= ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED= .THREAD))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTL= B_STORE_MISSES.WALK_ACTIVE ) / CPU_CLK_UNHALTED.THREAD) / #(EXE_ACTIVITY.BO= UND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tm= a_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_= fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_boun= d + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + = tma_false_sharing + tma_split_stores + tma_store_latency))) ", "MetricGroup": "Mem;MemoryTLB;Offcore", "MetricName": "Memory_Data_TLBs" }, - { - "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - = CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYC= LE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVI= TY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + 
(UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) )",
-        "MetricGroup": "Mem;MemoryTLB;Offcore_SMT",
-        "MetricName": "Memory_Data_TLBs_SMT"
-    },
     {
         "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
-        "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * CPU_CLK_UNHALTED.THREAD))",
+        "MetricExpr": "100 * ((BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL)) / SLOTS)",
         "MetricGroup": "Ret",
         "MetricName": "Branching_Overhead"
     },
-    {
-        "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
-        "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
-        "MetricGroup": "Ret_SMT",
-        "MetricName": "Branching_Overhead_SMT"
-    },
     {
         "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
-        "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))",
+        "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
         "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB",
         "MetricName": "Big_Code"
     },
-    {
-        "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
-        "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
-        "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB_SMT",
-        "MetricName": "Big_Code_SMT"
-    },
     {
         "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
-        "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)))",
+        "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code",
         "MetricGroup": "Fed;FetchBW;Frontend",
         "MetricName": "Instruction_Fetch_BW"
     },
-    {
-        "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
-        "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))",
-        "MetricGroup": "Fed;FetchBW;Frontend_SMT",
-        "MetricName": "Instruction_Fetch_BW_SMT"
-    },
     {
         "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "INST_RETIRED.ANY / CLKS",
         "MetricGroup": "Ret;Summary",
         "MetricName": "IPC"
     },
@@ -158,6 +736,12 @@
         "MetricGroup": "Branches;Fed;FetchBW",
         "MetricName": "UpTB"
     },
+    {
+        "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+        "MetricExpr": "1 / IPC",
+        "MetricGroup": "Mem;Pipeline",
+        "MetricName": "CPI"
+    },
     {
         "BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
         "MetricExpr": "CPU_CLK_UNHALTED.THREAD",
@@ -166,16 +750,10 @@
     },
     {
         "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "TmaL1",
+        "MetricExpr": "4 * CORE_CLKS",
+        "MetricGroup": "tma_L1_group",
         "MetricName": "SLOTS"
     },
-    {
-        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "TmaL1_SMT",
-        "MetricName": "SLOTS_SMT"
-    },
     {
         "BriefDescription": "The ratio of Executed- by Issued-Uops",
         "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
@@ -185,63 +763,38 @@
     },
     {
         "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;SMT;TmaL1",
+        "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS",
+        "MetricGroup": "Ret;SMT;tma_L1_group",
         "MetricName": "CoreIPC"
     },
-    {
-        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;SMT;TmaL1_SMT",
-        "MetricName": "CoreIPC_SMT"
-    },
     {
         "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;Flops",
+        "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / CORE_CLKS",
+        "MetricGroup": "Flops;Ret",
        "MetricName": "FLOPc"
     },
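
The rewritten expressions above (IPC, CPI, SLOTS, CoreIPC, FLOPc) lean on perf's ability to reference one named metric from another, so the clock formula is written once (CLKS, CORE_CLKS) instead of being duplicated inline. The following Python sketch shows the idea of resolving such references by recursive substitution; the two-entry table, made-up counts and eval() use are illustrative assumptions, not perf's actual resolver (perf does this inside its expr parser):

  # Toy metric table referencing other metrics, like CLKS/IPC/CPI above.
  metrics = {
      "CLKS": "CPU_CLK_UNHALTED.THREAD",
      "IPC": "INST_RETIRED.ANY / CLKS",
      "CPI": "1 / IPC",
  }
  events = {"CPU_CLK_UNHALTED.THREAD": 2.0e9, "INST_RETIRED.ANY": 3.0e9}

  def evaluate(name):
      expr = metrics[name]
      # Substitute longer identifiers first so e.g. "CLKS" never clobbers
      # a longer name such as "CORE_CLKS" that contains it.
      for ident in sorted({**metrics, **events}, key=len, reverse=True):
          if ident in expr:
              val = events[ident] if ident in events else evaluate(ident)
              expr = expr.replace(ident, repr(val))
      return eval(expr)  # acceptable for this toy arithmetic-only grammar

  print(evaluate("IPC"))  # 1.5
  print(evaluate("CPI"))  # 0.666...
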
-    {
-        "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;Flops_SMT",
-        "MetricName": "FLOPc_SMT"
-    },
     {
         "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
+        "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)) / (2 * CORE_CLKS)",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "FP_Arith_Utilization",
         "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
     },
-    {
-        "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
-        "MetricGroup": "Cor;Flops;HPC_SMT",
-        "MetricName": "FP_Arith_Utilization_SMT",
-        "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common). SMT version; use when SMT is enabled and measuring per logical CPU."
-    },
     {
         "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
-        "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+        "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
         "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
         "MetricName": "ILP"
     },
     {
         "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
-        "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)))) / ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1 ) if 0 > 0.5 else 0",
+        "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else 0",
         "MetricGroup": "Cor;SMT",
         "MetricName": "Core_Bound_Likely"
     },
-    {
-        "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
-        "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))) / ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1 ) if (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 )) > 0.5 else 0",
-        "MetricGroup": "Cor;SMT_SMT",
-        "MetricName": "Core_Bound_Likely_SMT"
-    },
     {
         "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
-        "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS",
         "MetricGroup": "SMT",
         "MetricName": "CORE_CLKS"
     },
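
The new CORE_CLKS expression is exactly the double if/else form that the grammar change in patch 01 enables: the right-hand side of an if expression may itself be an if expression, giving left-to-right evaluation as in Python. A minimal Python sketch with made-up event counts (the #core_wide and #SMT_on literals become plain variables here, purely for illustration):

  # 'a if c1 else b if c2 else d' groups as 'a if c1 else (b if c2 else d)',
  # matching how the extended perf expr grammar parses CORE_CLKS.
  thread = 2.0e9       # CPU_CLK_UNHALTED.THREAD
  thread_any = 3.0e9   # CPU_CLK_UNHALTED.THREAD_ANY
  one_thread = 5.0e8   # CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
  ref_xclk = 2.0e7     # CPU_CLK_UNHALTED.REF_XCLK
  core_wide = 1        # stands in for the '#core_wide' literal
  smt_on = True        # stands in for the '#SMT_on' literal

  core_clks = ((thread / 2) * (1 + one_thread / ref_xclk)
               if core_wide < 1 else
               thread_any / 2 if smt_on else
               thread)  # the last arm is CLKS
  print(core_clks)      # 1.5e9: the '#SMT_on' arm is selected here
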
"INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.25= 6B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARI= TH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SI= NGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SIN= GLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -310,21 +863,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." 
}, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." @@ -336,9 +889,9 @@ "MetricName": "IpSWPF" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -373,16 +926,10 @@ }, { "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", - "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (DSB2MITE_SWITCHES.PENALTY_CYC= LES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS= _DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + ((IDQ_UOPS_NOT_DELIVERED.COR= E / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) * (( IDQ.ALL_MITE_CYCLES_A= NY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / CPU_CLK_UNHALTED.THREAD / 2) / #((= IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOP= S_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) = )", + "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_= branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + = tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tm= a_mite))", "MetricGroup": "DSBmiss;Fed", "MetricName": "DSB_Misses" }, - { - "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", - "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (DSB2MITE_SWITCHES.P= ENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYC= LES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_= CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + ((IDQ_UO= PS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_= CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ= _UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.TH= READ / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.RE= F_XCLK ) )))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOP= S ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) / 2) / #((IDQ_UOPS_NOT_DELIVERED.CO= RE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_TH= READ_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED= .CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + = CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / 
CPU_CLK_UNHALTED.REF_XCLK ) )))) )", - "MetricGroup": "DSBmiss;Fed_SMT", - "MetricName": "DSB_Misses_SMT" - }, { "BriefDescription": "Number of Instructions per non-speculative DS= B miss (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS", @@ -397,16 +944,10 @@ }, { "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.TH= READ))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_C= LK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.A= LL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU= _CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CO= RE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_= MISP_RETIRED.ALL_BRANCHES", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", "MetricGroup": "Bad;BrMispredicts", "MetricName": "Branch_Misprediction_Cost" }, - { - "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.AL= L_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT= _MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_= DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 )= * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )= )) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_= THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCH= ES", - "MetricGroup": "Bad;BrMispredicts_SMT", - "MetricName": "Branch_Misprediction_Cost_SMT" - }, { "BriefDescription": "Fraction of branches that are non-taken condi= tionals", "MetricExpr": "BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRA= NCHES", @@ -415,101 +956,95 @@ }, { "BriefDescription": "Fraction of branches that are taken condition= als", - "MetricExpr": "( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT= _TAKEN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_= TAKEN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches;CodeGen;PGO", "MetricName": "Cond_TK" }, { "BriefDescription": "Fraction of branches that are CALL or RET", - "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_= RETURN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_R= ETURN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": 
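
Branch_Misprediction_Cost is now composed from tma_* nodes: the tma_* terms are fractions of total SLOTS, so multiplying their sum by SLOTS and dividing by retired mispredictions yields slots wasted per misprediction. A worked Python example with made-up inputs (the values are assumptions purely to show the arithmetic):

  tma_branch_mispredicts = 0.08   # fraction of slots
  tma_fetch_latency = 0.10
  tma_mispredicts_resteers = 0.02
  fetch_lat_children = 0.06       # sum of the six resteer/switch/miss terms
  slots = 8.0e9
  mispredicts = 2.0e7             # BR_MISP_RETIRED.ALL_BRANCHES

  cost = ((tma_branch_mispredicts
           + tma_fetch_latency * tma_mispredicts_resteers / fetch_lat_children)
          * slots / mispredicts)
  print(round(cost, 1))           # ~45.3 slots per misprediction
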
"CallRet" }, { "BriefDescription": "Fraction of branches that are unconditional (= direct or indirect) jumps", - "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CON= DITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) / B= R_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.COND= ITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_= INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": "Jump" }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS = + MEM_LOAD_RETIRED.FB_HIT )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS += MEM_LOAD_RETIRED.FB_HIT)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI_Load" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": 
"CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "FB_HPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CPU_C= LK_UNHALTED.THREAD )", + "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING) / (2 * CORE_CLK= S)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * ( ( C= PU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / C= PU_CLK_UNHALTED.REF_XCLK ) ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -536,37 +1071,37 @@ }, { "BriefDescription": "Rate of silent evictions from the L2 cache pe= r Kilo instruction where the evicted lines are dropped (no writeback to L3 = or memory)", - "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_Silent_PKI" }, { "BriefDescription": "Rate of non silent evictions from the L2 cach= e per Kilo instruction", - "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_NonSilent_PKI" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", - "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / = duration_time)", + "MetricExpr": "L3_Cache_Access_BW", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "L3_Cache_Access_BW_1T" }, @@ -578,68 +1113,47 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": 
"(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE = + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.5= 12B_PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACK= ED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for baseline license level 0", - "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.TH= READ", + "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / CORE_CLKS if #S= MT_on else CORE_POWER.LVL0_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License0_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for baseline license level 0. This includes non= -AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes." }, - { - "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for baseline license level 0. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNH= ALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNH= ALTED.REF_XCLK ) )", - "MetricGroup": "Power_SMT", - "MetricName": "Power_License0_Utilization_SMT", - "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for baseline license level 0. This includes non= -AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes. SMT versio= n; use when SMT is enabled and measuring per logical CPU." 
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1",
-        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / CORE_CLKS if #SMT_on else CORE_POWER.LVL1_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License1_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1.  This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions."
     },
-    {
-        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Power_SMT",
-        "MetricName": "Power_License1_Utilization_SMT",
-        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1.  This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
-    },
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX)",
-        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / CORE_CLKS if #SMT_on else CORE_POWER.LVL2_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License2_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX).  This includes high current AVX 512-bit instructions."
     },
-    {
-        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Power_SMT",
-        "MetricName": "Power_License2_Utilization_SMT",
-        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX).  This includes high current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
-    },
     {
         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
-        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0",
         "MetricGroup": "SMT",
         "MetricName": "SMT_2T_Utilization"
     },
@@ -657,13 +1171,13 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time",
+        "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@) / 1000000000) / duration_time",
         "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
     {
         "BriefDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches",
-        "MetricExpr": "1000000000 * ( cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x35\\,umask\\=0x21\\,config\\=0x40433@ ) / ( cha_0@event\\=0x0@ / duration_time )",
+        "MetricExpr": "1000000000 * (cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x35\\,umask\\=0x21\\,config\\=0x40433@) / (Socket_CLKS / duration_time)",
         "MetricGroup": "Mem;MemoryLat;SoC",
         "MetricName": "MEM_Read_Latency"
     },
@@ -675,20 +1189,20 @@
     },
     {
         "BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches",
-        "MetricExpr": "1000000000 * ( UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS ) / imc_0@event\\=0x0@",
-        "MetricGroup": "Mem;MemoryLat;SoC;Server",
+        "MetricExpr": "1000000000 * (UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS) / imc_0@event\\=0x0@",
+        "MetricGroup": "Mem;MemoryLat;Server;SoC",
         "MetricName": "MEM_DRAM_Read_Latency"
     },
     {
         "BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
-        "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000000 / duration_time",
-        "MetricGroup": "IoBW;Mem;SoC;Server",
+        "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1000000000 / duration_time",
+        "MetricGroup": "IoBW;Mem;Server;SoC",
         "MetricName": "IO_Write_BW"
     },
     {
         "BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
-        "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 ) * 4 / 1000000000 / duration_time",
-        "MetricGroup": "IoBW;Mem;SoC;Server",
+        "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3) * 4 / 1000000000 / duration_time",
+        "MetricGroup": "IoBW;Mem;Server;SoC",
         "MetricName": "IO_Read_BW"
     },
     {
@@ -697,12 +1211,6 @@
         "MetricGroup": "SoC",
         "MetricName": "Socket_CLKS"
     },
-    {
-        "BriefDescription": "Uncore frequency per die [GHZ]",
-        "MetricExpr": "cha_0@event\\=0x0@ / #num_dies / duration_time / 1000000000",
-        "MetricGroup": "SoC",
-        "MetricName": "UNCORE_FREQ"
-    },
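
The DRAM_BW_Use formula above is simple once unpacked: each CAS command transfers one 64-byte cache line, so bandwidth in GB/s is 64 * (reads + writes) / 1e9 / seconds. A worked Python example with invented counter values (assumptions for illustration only):

  cas_count_read = 1.0e9   # uncore_imc@cas_count_read@
  cas_count_write = 5.0e8  # uncore_imc@cas_count_write@
  seconds = 2.0            # duration_time

  dram_bw_use = 64 * (cas_count_read + cas_count_write) / 1e9 / seconds
  print(dram_bw_use)       # 48.0 GB/s
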
"BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -752,11 +1260,10 @@ "MetricName": "C7_Pkg_Residency" }, { - "BriefDescription": "Percentage of time spent in the active CPU po= wer state C0", - "MetricExpr": "100 * CPU_CLK_UNHALTED.REF_TSC / TSC", - "MetricGroup": "", - "MetricName": "cpu_utilization_percent", - "ScaleUnit": "1%" + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" }, { "BriefDescription": "CPU operating frequency (in GHz)", @@ -765,13 +1272,6 @@ "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY", @@ -790,7 +1290,7 @@ "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { @@ -818,7 +1318,7 @@ "BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { @@ -849,58 +1349,79 @@ "MetricName": "llc_code_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) in nano seconds", + "MetricExpr": "( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043300000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043300000000@ ) / ( UNC_CHA_CLOCKTICKS / ( #num_cores / #num_package= s * #num_packages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency", + "ScaleUnit": "1ns" + }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to local m= emory in nano seconds", + "MetricExpr": "( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043200000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043200000000@ ) / ( UNC_CHA_CLOCKTICKS / ( #num_cores / #num_package= s * #num_packages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _local_requests", + "ScaleUnit": "1ns" + }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to remote = memory in nano seconds", + "MetricExpr": "( 1000000000 * ( 
cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043100000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043100000000@ ) / ( UNC_CHA_CLOCKTICKS / ( #num_cores / #num_package= s * #num_packages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _remote_requests", + "ScaleUnit": "1ns" + }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by a code fetch to the total number of completed ins= tructions. This implies it missed in the ITLB (Instruction TLB) and further= levels of TLB.", "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "itlb_2nd_level_mpi", + "MetricName": "itlb_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total n= umber of completed instructions. This implies it missed in the Instruction = Translation Lookaside Buffer (ITLB) and further levels of TLB.", "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY= ", "MetricGroup": "", - "MetricName": "itlb_2nd_level_large_page_mpi", + "MetricName": "itlb_large_page_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data loads to the total number of complete= d instructions. This implies it missed in the DTLB and further levels of TL= B.", "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_load_mpi", + "MetricName": "dtlb_load_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = 2 megabyte page sizes) caused by demand data loads to the total number of c= ompleted instructions. This implies it missed in the Data Translation Looka= side Buffer (DTLB) and further levels of TLB.", "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRE= D.ANY", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_2mb_large_page_load_mpi", + "MetricName": "dtlb_2mb_large_page_load_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data stores to the total number of complet= ed instructions. 
This implies it missed in the DTLB and further levels of T= LB.", "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY= ", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_store_mpi", + "MetricName": "dtlb_store_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", "MetricExpr": "100 * cha@unc_cha_tor_inserts.ia_miss\\,config1\\= =3D0x4043200000000@ / ( cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x404= 3200000000@ + cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x4043100000000= @ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", "MetricExpr": "100 * cha@unc_cha_tor_inserts.ia_miss\\,config1\\= =3D0x4043100000000@ / ( cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x404= 3200000000@ + cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x4043100000000= @ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Uncore operating frequency in GHz", - "MetricExpr": "( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOCK= TICKS) * #num_packages ) / 1000000000) / duration_time", + "MetricExpr": "( UNC_CHA_CLOCKTICKS / ( #num_cores / #num_packages= * #num_packages ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "uncore_frequency", "ScaleUnit": "1GHz" @@ -909,7 +1430,7 @@ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data t= ransmit bandwidth (MB/sec)", "MetricExpr": "( UNC_UPI_TxL_FLITS.ALL_DATA * (64 / 9.0) / 1000000= ) / duration_time", "MetricGroup": "", - "MetricName": "upi_data_transmit_bw_only_data", + "MetricName": "upi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { @@ -937,35 +1458,35 @@ "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "(( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO= _DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(( UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0 + UNC_I= IO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PA= RT2 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3 ) * 4 / 1000000) / duration_= time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", "MetricExpr": "100 * ( IDQ.DSB_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered 
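
The uncore_frequency change above swaps source_count(UNC_CHA_CLOCKTICKS) for #num_cores / #num_packages as the per-package CHA count; both are ways of averaging aggregate CHA clockticks down to a single CHA. A Python sketch of that arithmetic with invented values (the one-CHA-per-core equivalence is an assumption of this illustration, not a statement about every part):

  unc_cha_clockticks = 5.6e10  # summed over all CHAs on all packages
  num_cores = 56
  num_packages = 2
  seconds = 1.0                # duration_time

  chas_per_package = num_cores / num_packages   # assumes 1 CHA per core
  ghz = (unc_cha_clockticks / (chas_per_package * num_packages) / 1e9) / seconds
  print(ghz)                   # 1.0 GHz
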
from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MITE_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_= mite", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MS_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { @@ -988,250 +1509,5 @@ "MetricGroup": "", "MetricName": "llc_miss_remote_memory_bandwidth_read", "ScaleUnit": "1MB/s" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( (= CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREA= D ) ) ) )", - "MetricGroup": "TmaL1;PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOP= S_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on e= lse ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "Frontend;TmaL2;m_tma_frontend_bound_percent", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_= 16B.IFDATA_STALL\\,cmask\\=3D0x1\\,edge\\=3D0x1@ ) / ( CPU_CLK_UNHALTED.THR= EAD ) )", - "MetricGroup": "BigFoot;FetchLat;IcMiss;TmaL3;m_tma_fetch_latency_= percent", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ICACHE_64B.IFTAG_STALL / ( CPU_CLK_UNHALTED= .THREAD ) )", - "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TmaL3;m_tma_fetch_laten= cy_percent", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. 
Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( INT_MISC.CLEAR_RESTEER_CYCLES / ( CPU_CLK_U= NHALTED.THREAD ) + ( ( 9 ) * BACLEARS.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) )= ", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU_CL= K_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss;FetchLat;TmaL3;m_tma_fetch_latency_percent= ", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. #Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( ILD_STALL.LCP / ( CPU_CLK_UNHALTED.THREAD )= )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 2 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "FetchLat;MicroSeq;TmaL3;m_tma_fetch_latency_percen= t", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. 
In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) ) - ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (= ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UN= HALTED.THREAD ) ) ) ) )", - "MetricGroup": "FetchBW;Frontend;TmaL2;m_tma_frontend_bound_percen= t", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MI= TE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else = ( CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSBmiss;FetchBW;TmaL3;m_tma_fetch_bandwidth_percen= t", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB= _CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( = CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSB;FetchBW;TmaL3;m_tma_fetch_bandwidth_percent", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_S= LOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT= _MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 )= if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. 
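A recurring subexpression in the removed metrics above is the slots denominator: four issue slots per core cycle, where core cycles are halved from CPU_CLK_UNHALTED.THREAD_ANY under SMT. A minimal sketch of it, and of tma_fetch_bandwidth_percent as the frontend-bound minus fetch-latency difference, assuming raw counts as plain floats (illustrative names only):

  def slots(clk_thread: float, clk_thread_any: float, smt_on: bool) -> float:
      # 4 issue slots per core clock; THREAD_ANY counts both SMT threads,
      # so it is halved to approximate core clocks.
      return 4 * (clk_thread_any / 2 if smt_on else clk_thread)

  def fetch_bandwidth_pct(not_delivered: float, cycles_0_uops: float,
                          slots_: float) -> float:
      # frontend_bound minus fetch_latency, as in the expression above.
      return 100 * (not_delivered / slots_ - 4 * cycles_0_uops / slots_)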
These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "100 * ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( ( UOPS_ISSUED.ANY - ( U= OPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 )= if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHAL= TED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE= _SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else I= NT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2= ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( BR_MISP_RETIRED.= ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * = ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.= RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( = ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "BadSpec;MachineClears;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "100 * ( 1 - ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_= ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_= CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) )= ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. 
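tma_branch_mispredicts_percent and tma_machine_clears_percent above apportion the bad-speculation total by the ratio of their two triggering event counts. A sketch of just that split, with hypothetical argument names:

  # Split bad speculation between mispredicts and machine clears by
  # BR_MISP_RETIRED.ALL_BRANCHES versus MACHINE_CLEARS.COUNT.
  def split_bad_speculation(bad_spec: float, br_misp: float,
                            clears: float) -> tuple[float, float]:
      share = br_misp / (br_misp + clears)
      return bad_spec * share, bad_spec * (1 - share)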
This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "100 * ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACT= IVITY.BOUND_ON_STORES ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_= PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALT= ED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * EXE= _ACTIVITY.2_PORTS_UTIL ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( 1 - ( IDQ_U= OPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 )= * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY= _CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on el= se ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;m_tma_backend_bound_percent", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCL= E_ACTIVITY.STALLS_L1D_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_L= OAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( MEM_LOAD_RETI= RED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D0x1@ ) ) * ( ( CYCLE_ACTIVIT= Y.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.TH= READ ) ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACT= IVITY.STALLS_L3_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. 
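The cache-level bound metrics above difference stall counters so that each level only counts stalls not already explained by a deeper level; tma_l3_bound_percent, for instance, reduces to the following sketch (names illustrative):

  # Stalls attributed to L3: L2-miss stalls minus L3-miss stalls, as a
  # percentage of CPU_CLK_UNHALTED.THREAD.
  def l3_bound_pct(stalls_l2_miss: float, stalls_l3_miss: float,
                   clks: float) -> float:
      return 100 * (stalls_l2_miss - stalls_l3_miss) / clks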
Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( CYCLE_ACTIVITY.STALLS_L3_MISS / ( = CPU_CLK_UNHALTED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTI= VITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RETI= RED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) / ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( = MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D0x= 1@ ) ) * ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS= ) / ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( EXE_ACTIVITY.BOUND_ON_STORES / ( CPU_CLK_UN= HALTED.THREAD ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "100 * ( ( 1 - ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4= ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALT= ED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLE= S_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CP= U_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD )= ) ) ) - ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES= ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_PORTS_UTIL + ( ( UOPS= _RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) i= f #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * EXE_ACTIVITY.2_PORTS_UTI= L ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( 1 - ( IDQ_UOPS_NOT_DELIVERED.COR= E / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;Compute;m_tma_backend_bound_percent", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. 
Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( ARITH.DIVIDER_ACTIVE / ( CPU_CLK_UNHALTED.T= HREAD ) )", - "MetricGroup": "TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related). Two distinct categories can be attributed into this met= ric: (1) heavy data-dependency among contiguous instructions would manifest= in this metric - such cases are often referred to as low Instruction Level= Parallelism (ILP). (2) Contention on some hardware execution unit other th= an Divider. For example; when there are too many multiply operations.", - "MetricExpr": "100 * ( ( EXE_ACTIVITY.EXE_BOUND_0_PORTS + ( EXE_AC= TIVITY.1_PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_C= LK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) = ) ) * EXE_ACTIVITY.2_PORTS_UTIL ) ) / ( CPU_CLK_UNHALTED.THREAD ) if ( ARIT= H.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_ME= M_ANY ) ) else ( EXE_ACTIVITY.1_PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS = ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) * EXE_ACTIVITY.2_PORTS_UTIL ) / ( CPU_CLK_UNHALT= ED.THREAD ) )", - "MetricGroup": "PortsUtil;TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_ports_utilization_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "100 * ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. 
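tma_ports_utilization_percent above is one of the expressions that relies on the parser's "A if COND else B" form; stripped of the counter detail, it evaluates exactly like a Python conditional expression. A sketch under that reading, with the two alternative estimates abstracted into arguments:

  # Choose between two port-utilization estimates depending on whether
  # the divider was active for fewer cycles than the non-memory stalls.
  def ports_utilization(with_0_ports: float, without_0_ports: float,
                        divider_active: float, stalls_total: float,
                        stalls_mem: float) -> float:
      return (with_0_ports if divider_active < stalls_total - stalls_mem
              else without_0_ports)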
While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) *= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.T= HREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSE= D - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired). Note t= his metric's value may exceed its parent due to use of \"Uops\" CountDomain= and FMA double-counting.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD ) + ( ( FP_ARITH= _INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) / ( UOP= S_RETIRED.RETIRE_SLOTS ) ) + ( min( ( ( FP_ARITH_INST_RETIRED.128B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.25= 6B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST= _RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / = ( UOPS_RETIRED.RETIRE_SLOTS ) ) , ( 1 ) ) ) )", - "MetricGroup": "HPC;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fp_arith_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * MEM_INST_RETIRED.ANY = / INST_RETIRED.ANY )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_memory_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC= are commonly used examples.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * UOPS_RETIRED.MACRO_FU= SED / ( UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fused_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused. Non-conditi= onal branches like direct JMP or CALL would count here. 
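The packed-FP term of tma_fp_arith_percent above is clamped with min(..., 1) because FMA double counting can push the uop-based ratio past 100%. In isolation, with illustrative names:

  # Packed FP share of retirement slots, clamped for FMA double counting.
  def fp_packed_share(packed_fp_insts: float, retire_slots: float) -> float:
      return min(packed_fp_insts / retire_slots, 1.0)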
Can be used to exam= ine fusible conditional jumps that were not fused.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * ( BR_INST_RETIRED.ALL= _BRANCHES - UOPS_RETIRED.MACRO_FUSED ) / ( UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_non_fused_branches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions. Compilers often use NOPs f= or certain address alignments - e.g. start address of a function or loop bo= dy.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * INST_RETIRED.NOP / ( = UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_nop_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", - "MetricExpr": "100 * ( max( 0 , ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) = / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK= _UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED= .MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_A= NY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) - ( ( ( ( ( UO= PS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 )= if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * UOPS_EXECUTED.X87 / UO= PS_EXECUTED.THREAD ) + ( ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) + ( min( ( ( = FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKE= D_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED= .256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_I= NST_RETIRED.512B_PACKED_SINGLE ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) , ( 1 ) = ) ) ) + ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTE= D.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( = ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY= ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_= CLK_UNHALTED.THREAD ) ) ) ) ) * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY ) += ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_= RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( = ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) ) * UOPS_RETIRED.MACRO_FUSED / ( UOPS_RETIRED.RETIRE_S= LOTS ) ) + ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHA= LTED.THREAD_ANY / 2 ) if #SMT_on else ( 
CPU_CLK_UNHALTED.THREAD ) ) ) ) - (= ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.= ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( C= PU_CLK_UNHALTED.THREAD ) ) ) ) ) * ( BR_INST_RETIRED.ALL_BRANCHES - UOPS_RE= TIRED.MACRO_FUSED ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) + ( ( ( ( UOPS_RETIRE= D.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_= on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS= ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_= UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )= ) * INST_RETIRED.NOP / ( UOPS_RETIRED.RETIRE_SLOTS ) ) ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_other_light_ops_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETI= RED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that that are decoder into two or up to= ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of= uops in such instructions.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RE= TIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THR= EAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOP= S_RETIRED.RETIRE_SLOTS ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( = CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD= ) ) ) ) )", - "MetricGroup": "TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_few_uops_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). These cases can often b= e avoided.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_ISSU= ED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "MicroSeq;TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_microcode_sequencer_percent", - "ScaleUnit": "1%" } ] diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json b/t= ools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json index 0746fcf2ebd9..62941146e396 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json @@ -27,20 +27,19 @@ "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller. 
Derived f= rom unc_m_cas_count.rd", + "BriefDescription": "All DRAM Read CAS Commands issued (including = underfills)", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_READ", + "EventName": "UNC_M_CAS_COUNT.RD", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0x3", "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller", + "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.RD", + "EventName": "LLC_MISSES.MEM_READ", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0x3", @@ -56,20 +55,19 @@ "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller. Derived = from unc_m_cas_count.wr", + "BriefDescription": "All DRAM Write CAS commands issued", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_WRITE", + "EventName": "UNC_M_CAS_COUNT.WR", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0xC", "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller", + "BriefDescription": "write requests to memory controller. Derived = from unc_m_cas_count.wr", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.WR", + "EventName": "LLC_MISSES.MEM_WRITE", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0xC", diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json b/to= ols/perf/pmu-events/arch/x86/skylakex/uncore-other.json index f55aeadc630f..0d106fe7aae3 100644 --- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json +++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json @@ -1089,7 +1089,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", - "ScaleUnit": "4Bytes", "UMask": "0x01", "Unit": "IIO" }, @@ -1101,7 +1100,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", - "ScaleUnit": "4Bytes", "UMask": "0x01", "Unit": "IIO" }, @@ -1113,7 +1111,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", - "ScaleUnit": "4Bytes", "UMask": "0x01", "Unit": "IIO" }, @@ -1125,7 +1122,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", - "ScaleUnit": "4Bytes", "UMask": "0x01", "Unit": "IIO" }, @@ -1196,7 +1192,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x01", - "ScaleUnit": "4Bytes", "UMask": "0x04", "Unit": "IIO" }, @@ -1208,7 +1203,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x02", - "ScaleUnit": "4Bytes", "UMask": "0x04", "Unit": "IIO" }, @@ -1220,7 +1214,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x04", - "ScaleUnit": "4Bytes", "UMask": "0x04", "Unit": "IIO" }, @@ -1232,7 +1225,6 @@ "FCMask": "0x07", "PerPkg": "1", "PortMask": "0x08", - "ScaleUnit": "4Bytes", "UMask": "0x04", "Unit": "IIO" }, @@ -1974,20 +1966,19 @@ "Unit": "UPI LL" }, { - "BriefDescription": "UPI interconnect send bandwidth for payload. = Derived from unc_upi_txl_flits.all_data", + "BriefDescription": "Valid data FLITs transmitted via any slot", "Counter": "0,1,2,3", "EventCode": "0x2", - "EventName": "UPI_DATA_BANDWIDTH_TX", + "EventName": "UNC_UPI_TxL_FLITS.ALL_DATA", "PerPkg": "1", - "ScaleUnit": "7.11E-06Bytes", - "UMask": "0xf", + "UMask": "0x0F", "Unit": "UPI LL" }, { - "BriefDescription": "UPI interconnect send bandwidth for payload", + "BriefDescription": "UPI interconnect send bandwidth for payload. 
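The CAS-count swaps above keep the derived LLC_MISSES.* aliases on the "64Bytes" scale unit, that is, one 64-byte cache line per CAS command, which is all a bandwidth computation needs. A minimal sketch, assuming a raw count and a measured duration:

  # DRAM read bandwidth from a UNC_M_CAS_COUNT.RD-style count: 64 bytes
  # per CAS command, reported in MB/s.
  def dram_read_bw_mb_per_s(cas_rd: float, duration_s: float) -> float:
      return cas_rd * 64 / 1_000_000 / duration_s

The adjacent UPI bandwidth event appears to encode the same idea: its 7.11E-06 scale factor matches the 64/9 bytes carried per data FLIT, pre-divided by 10^6 so the result lands in MB.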
= Derived from unc_upi_txl_flits.all_data", "Counter": "0,1,2,3", "EventCode": "0x2", - "EventName": "UNC_UPI_TxL_FLITS.ALL_DATA", + "EventName": "UPI_DATA_BANDWIDTH_TX", "PerPkg": "1", "ScaleUnit": "7.11E-06Bytes", "UMask": "0xf", --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A709C433F5 for ; Sat, 1 Oct 2022 06:07:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229576AbiJAGH2 (ORCPT ); Sat, 1 Oct 2022 02:07:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50866 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229577AbiJAGHB (ORCPT ); Sat, 1 Oct 2022 02:07:01 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6301B1ABFC1 for ; Fri, 30 Sep 2022 23:06:56 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-349f88710b2so61759707b3.20 for ; Fri, 30 Sep 2022 23:06:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=6JJoXYIRaF09UvAIdB9a08R7rpmpIHtxfEjG5PbmTGo=; b=X1D6KiMPuvJt2uF5+7yInNFsKeIASiiUZPCF3q1rxUD04J96nYHnzBgSl0afF4hmQN vV287K/g/Culx9JF4yhDOeAJcXTWIeargwedGa9jXfXHJJfCSpcIT8kCmGK+tZarlwrt 4FIj7yPHhU1LgkkUpGFNf9hecCr50MvYZoqnue8Q6azE3nvYpg5B3RBJaBo4m3yAzb7l Smx9VCqGleIMK92efCEPKKKgcIhxdHLWl9i1yBuQYvYdnQFlsLo7y2cD41URDCDaUXNy R/sqtaGyG3LS/cGp8uNP7kQRJ7vYg9Cb5TDTAA6J15zXqsQ2PGmpdEMFaU7i5zRGa2VR qMzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:x-gm-message-state:from:to :cc:subject:date; bh=6JJoXYIRaF09UvAIdB9a08R7rpmpIHtxfEjG5PbmTGo=; b=aIMmrwPZZzg45iwlxMwVNQUBcelhQNglL4iiTcG7gZ5mlY4TGbw/Us3REVI6gKsp8P m7JNLbGHfqWFW3VvYQWNwTwVdwtDK7oxenze1bp3r9mR5/AiN4YEwVyGd8Lq9GYqRsWF I7YbV/cI/LBQ80i4oQmylk08tdjtz1aKCQZN1tFosZCWOpv1+Le+HvmXkCgOzluBoLLT Y9QoCYTzxJer3qc3zj8n9evgoa0/CZng/bHj42vzXdY5RmL0Zr6Gpjk2F700ayZ2Le1z MUred5+okyNRztqm2tjpaYkX9VQpzcBO+iY6RfwuNmqd+PM8f0G1J4IwGhDD6lFBIgmb xjxA== X-Gm-Message-State: ACrzQf3qtxMRZn0cigb8Ca0B9diwf+AfXMnbAJEnus42Kvnd1SXYkEey LrrqcwMII/iVFiVY8e+tjSTddch1jsAU X-Google-Smtp-Source: AMsMyM45U0cqiSilfRaERg4k7NNFdCHuwWVb7imfOLoS95hU3sa6G/4rWhHG4L6YeRhx7y8nv6ibFLOBMqxw X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a25:be84:0:b0:6ae:8eca:4fd8 with SMTP id i4-20020a25be84000000b006ae8eca4fd8mr11837232ybk.197.1664604415836; Fri, 30 Sep 2022 23:06:55 -0700 (PDT) Date: Fri, 30 Sep 2022 23:06:19 -0700 In-Reply-To: <20221001060636.2661983-1-irogers@google.com> Message-Id: <20221001060636.2661983-6-irogers@google.com> Mime-Version: 1.0 References: <20221001060636.2661983-1-irogers@google.com> X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog Subject: [PATCH v2 05/22] perf vendor events: Update Intel alderlake From: Ian Rogers To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, 
Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Stephane Eranian , Ian Rogers Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Events are updated to v1.15, the core metrics are based on TMA 4.4 full and the atom metrics on E-core TMA 2.2. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/downloa= d_and_gen.py with updates at: https://github.com/captain5050/event-converter-for-linux-perf Updates include: - Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound. - Addition of all 6 levels of TMA metrics. Previously metrics involving topdown events were dropped. Child metrics are placed in a group named after their parent allowing children of a metric to be easily measured using the metric name with a _group suffix. - ## and ##? operators are correctly expanded. - The locate-with column is added to the long description describing a sampling event. - Metrics are written in terms of other metrics to reduce the expression size and increase readability. - Update mapfile.csv CPUIDs to match 01.org. Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../arch/x86/alderlake/adl-metrics.json | 1353 ++++++++++++++++- .../pmu-events/arch/x86/alderlake/cache.json | 129 +- .../arch/x86/alderlake/frontend.json | 12 + .../pmu-events/arch/x86/alderlake/memory.json | 22 + .../pmu-events/arch/x86/alderlake/other.json | 22 + .../arch/x86/alderlake/pipeline.json | 14 +- tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +- 7 files changed, 1460 insertions(+), 94 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json b/to= ols/perf/pmu-events/arch/x86/alderlake/adl-metrics.json index 095dd8c7f161..e06d26ad5138 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json @@ -1,22 +1,852 @@ [ + { + "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", + "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UO= P_DROPPING / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. 
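Given the parent-named groups described in the commit message above, a parent metric and all of its children can be collected in one run; a hypothetical invocation, assuming the generated _group name:

  perf stat -M tma_frontend_bound_group -a -- sleep 1

Passing the plain metric name (-M tma_frontend_bound) measures just the parent.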
For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "(topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + to= pdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.= UOP_DROPPING / SLOTS)", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "ICACHE_DATA.STALLS / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_TAG.STALLS / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(tma_branch_mispredicts / tma_bad_speculation) * IN= T_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. 
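tma_mispredicts_resteers above reuses the level-2 split: resteer cycles are apportioned by the mispredict share of bad speculation. As a sketch with illustrative names:

  # INT_MISC.CLEAR_RESTEER_CYCLES scaled by the mispredict share of
  # bad speculation, normalized to clocks.
  def mispredicts_resteers(branch_mispredicts: float, bad_speculation: float,
                           clear_resteer_cycles: float, clks: float) -> float:
      return (branch_mispredicts / bad_speculation
              * clear_resteer_cycles / clks)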
Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (tma_branch_mispredicts / tma_bad_speculation)= ) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "DECODE.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. 
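tma_ms_switches above charges a fixed 3-cycle penalty per switch to the Microcode Sequencer; note the removed expression for the older core earlier in this series used a 2-cycle constant for the same metric. A parameterized sketch:

  # Fraction of cycles lost to MS switches, with the per-switch penalty
  # as a parameter (3 on this core, 2 on the older one above).
  def ms_switches_frac(ms_switches: float, clks: float,
                       penalty: int = 3) -> float:
      return penalty * ms_switches / clks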
Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / CORE_C= LKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu_core@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cp= u_core@INST_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / CORE_CLK= S / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. 
For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to LSD (Loop Stream Detector) unit", + "MetricExpr": "(LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / CORE_CLKS / 2= ", + "MetricGroup": "FetchBW;LSD;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_lsd", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to LSD (Loop Stream Detector) unit. = LSD typically does well sustaining Uop supply. However; in some rare cases= ; optimal uop-delivery could not be reached for small loops whose size (in = terms of number of uops) does not suit well the LSD structure.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", + "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + t= ma_retiring), 0)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "topdown\\-br\\-mispredict / (topdown\\-fe\\-bound += topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOT= S", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts= )", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
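The remainder-style terms above are explicitly clamped: bad speculation is whatever the other three level-1 categories leave over, floored at zero, and machine clears are the bad-speculation slots not explained by directly counted mispredicts. In isolation:

  # Remainder-style TMA terms with explicit clamping, per the
  # max()-based expressions above.
  def tma_bad_speculation(fe_bound: float, be_bound: float,
                          retiring: float) -> float:
      return max(1 - (fe_bound + be_bound + retiring), 0)

  def tma_machine_clears(bad_spec: float, branch_mispredicts: float) -> float:
      return max(0, bad_spec - branch_mispredicts)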
Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", + "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "topdown\\-mem\\-bound / (topdown\\-fe\\-bound + top= down\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.= STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. 
Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(7 * cpu_core@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\= \=3D1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - = MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. 
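tma_store_fwd_blk above assumes a flat 13-cycle cost per load blocked on store forwarding; reduced to its arithmetic, with illustrative names:

  # Cycles lost to blocked store-forwards as a fraction of clocks, at
  # 13 cycles per LD_BLOCKS.STORE_FORWARD event.
  def store_fwd_blk(store_forward_blocks: float, clks: float) -> float:
      return 13 * store_forward_blocks / clks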
For example; when the prior store is writing a smaller region than the load is reading.",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+        "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS",
+        "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_lock_latency",
+        "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross 64-byte cache line boundary",
+        "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_split_loads",
+        "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+        "MetricExpr": "L1D_PEND_MISS.FB_FULL / CLKS",
+        "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_fb_full",
+        "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.STALLS_L2_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
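The L2- and L3-bound metrics above are plain differences of stall-cycle counts scaled by clocks, so they are easy to sanity-check by hand. A minimal Python sketch, with invented counter values and CPU_CLK_UNHALTED.THREAD standing in for the CLKS alias used throughout this file:

# Illustrative only: event names come from the JSON above, the counts are
# made up.
counts = {
    "MEMORY_ACTIVITY.STALLS_L1D_MISS": 4.0e9,
    "MEMORY_ACTIVITY.STALLS_L2_MISS": 3.1e9,
    "MEMORY_ACTIVITY.STALLS_L3_MISS": 2.5e9,
    "CPU_CLK_UNHALTED.THREAD": 10.0e9,  # the CLKS alias
}
clks = counts["CPU_CLK_UNHALTED.THREAD"]
# Cycles stalled with a demand load that missed L1D but hit L2 ...
tma_l2_bound = (counts["MEMORY_ACTIVITY.STALLS_L1D_MISS"] -
                counts["MEMORY_ACTIVITY.STALLS_L2_MISS"]) / clks
# ... and cycles stalled with an L2 miss that was satisfied by L3.
tma_l3_bound = (counts["MEMORY_ACTIVITY.STALLS_L2_MISS"] -
                counts["MEMORY_ACTIVITY.STALLS_L3_MISS"]) / clks
print(f"tma_l2_bound = {tma_l2_bound:.1%}")  # 9.0%
print(f"tma_l3_bound = {tma_l3_bound:.1%}")  # 6.0%

The "ScaleUnit": "100%" field is what turns these fractions into the percentages perf prints, e.g. when the metric is requested with perf stat -M tma_l3_bound.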
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+        "MetricExpr": "((25 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + (24 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_contested_accesses",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+        "MetricExpr": "(24 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_data_sharing",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+        "MetricExpr": "(9 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_l3_hit_latency",
+        "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings.
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "(XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L3_MISS / CLKS)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu_core@OFFCORE_REQUE= STS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). 
This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write",
+        "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((MEM_STORE_RETIRED.L2_HIT * 10 * (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "(28 * Average_Frequency) * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
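Several of the store-bound expressions above clamp occupancy counts with min()/max() so a sub-metric cannot exceed the cycle budget it is attributed to. A minimal Python sketch of the tma_store_latency arithmetic, with invented counter values:

# Illustrative only: the formula mirrors the MetricExpr above, the counts
# are made up.
counts = {
    "MEM_STORE_RETIRED.L2_HIT": 2.0e8,
    "MEM_INST_RETIRED.LOCK_LOADS": 1.0e7,
    "MEM_INST_RETIRED.ALL_STORES": 1.0e9,
    "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO": 1.5e9,
    "CPU_CLK_UNHALTED.THREAD": 10.0e9,  # the CLKS alias
}
clks = counts["CPU_CLK_UNHALTED.THREAD"]
# Share of stores that are not lock loads:
nonlock = 1 - (counts["MEM_INST_RETIRED.LOCK_LOADS"] /
               counts["MEM_INST_RETIRED.ALL_STORES"])
# Cap the cycles-with-outstanding-RFO term at total thread clocks:
rfo_cycles = min(clks,
                 counts["OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO"])
tma_store_latency = (counts["MEM_STORE_RETIRED.L2_HIT"] * 10 * nonlock +
                     nonlock * rfo_cycles) / clks
print(f"tma_store_latency = {tma_store_latency:.1%}")  # ~34.6%

The min() against CPU_CLK_UNHALTED.THREAD keeps the outstanding-RFO term from pushing the reported fraction past the total cycle count, matching the clamping done in the JSON expression itself.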
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores",
+        "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_streaming_stores",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(7 * cpu_core@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+        "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_hit",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+        "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_miss",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck",
+        "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. 
FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(cpu_core@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x8= 0@ + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * cpu_core@EXE_ACTIVITY.2_PO= RTS_UTIL\\,umask\\=3D0xc@)) / CLKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVI= TY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS)) else (EXE_ACTIVITY.1_PORTS_= UTIL + tma_retiring * cpu_core@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=3D0xc@) = / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "cpu_core@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80= @ / CLKS + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_A= CTIVITY.BOUND_ON_LOADS) / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. 
Sample with: RESOURCE_STALLS.SCOREBOARD", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to PAUSE Instructions", + "MetricExpr": "CPU_CLK_UNHALTED.PAUSE / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_slow_pause", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.= PAUSE_INST", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to LFENCE Instructions.", + "MetricExpr": "13 * MISC2_RETIRED.LFENCE / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_memory_fence", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "The Mixing_Vectors metric gives the percentag= e of injected blend uops out of all uops issued", + "MetricExpr": "160 * ASSISTS.SSE_AVX_MIX / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percenta= ge of injected blend uops out of all uops issued. Usually a Mixing_Vectors = over 5% is worth investigating. Read more in Appendix B1 of the Optimizatio= ns Guide for this topic.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or over oversubs= cribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop. 
S= ample with: EXE_ACTIVITY.2_PORTS_UTIL", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 3 or more uops per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise). Sample with:= UOPS_EXECUTED.CYCLES_GE_3", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + = UOPS_DISPATCHED.PORT_5_11 + UOPS_DISPATCHED.PORT_6) / (5 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED.PORT_0", + "MetricExpr": "UOPS_DISPATCHED.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D.PORT_1", + "MetricExpr": "UOPS_DISPATCHED.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED.PORT_6", + "MetricExpr": "UOPS_DISPATCHED.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3_10", + "MetricExpr": "UOPS_DISPATCHED.PORT_2_3_10 / (3 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations Sample with: U= OPS_DISPATCHED.PORT_7_8", + "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_= 8) / (4 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. 
issued uops that eventually get retired",
+        "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_retiring",
+        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+        "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_light_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; a high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+        "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+        "MetricGroup": "HPC;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_fp_arith",
+        "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+        "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD",
+        "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_x87_use",
+        "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. 
See Tip under Tuning Hint.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents overall Integer (Int) = select operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b + tma_shu= ffles", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_int_operations", + "PublicDescription": "This metric represents overall Integer (Int)= select operations fraction the CPU has executed (retired). Vector/Matrix I= nt operations and shuffles are counted. 
Note this metric's value may exceed= its parent due to use of \"Uops\" CountDomain.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents 128-bit vector Integer= ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the= CPU has retired.", + "MetricExpr": "(INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128= ) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_int_opera= tions_group", + "MetricName": "tma_int_vector_128b", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents 256-bit vector Integer= ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the= CPU has retired.", + "MetricExpr": "(INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 = + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_int_opera= tions_group", + "MetricName": "tma_int_vector_256b", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents Shuffle (cross \"vecto= r lane\" data transfers) uops fraction the CPU has retired.", + "MetricExpr": "INT_VEC_RETIRED.SHUFFLES / (tma_retiring * SLOTS)", + "MetricGroup": "HPC;Pipeline;TopdownL4;tma_int_operations_group", + "MetricName": "tma_shuffles", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", + "MetricExpr": "tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_r= etiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_memory_operations", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.MACRO_FUSED / (= tma_retiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fused_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring fused instructions -- where one uop can represent m= ultiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JC= C are commonly used examples.", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", + "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - INST_RETIRED.MACRO_FUSED) / (tma_retiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_non_fused_branches", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. 
Can be used to examine fusible conditional jumps that were not fused.",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+        "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * SLOTS)",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_nop_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+        "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches + tma_nop_instructions))",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_other_light_ops",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "topdown\\-heavy\\-ops / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences. Sample with: UOPS_RETIRED.HEAVY",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+        "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
+        "MetricGroup": "TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_few_uops_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions.",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+        "MetricExpr": "UOPS_RETIRED.MS / SLOTS",
+        "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_microcode_sequencer",
+        "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
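A pattern worth noting in the retiring subtree above: the parent nodes come from the hardware top-down breakdown, while several children are derived by subtraction, clamped with max(0, ...) where double counting (e.g. FMA) could push a remainder negative. A minimal Python sketch, with invented values:

# Illustrative only: the formulas mirror the MetricExprs above, the
# fractions are made up.
tma_light_operations = 0.45
covered = {
    "tma_fp_arith": 0.10,
    "tma_int_operations": 0.05,
    "tma_memory_operations": 0.20,
    "tma_fused_instructions": 0.04,
    "tma_non_fused_branches": 0.03,
    "tma_nop_instructions": 0.01,
}
# Remaining light uops not attributed to any sibling node:
tma_other_light_ops = max(0, tma_light_operations - sum(covered.values()))

tma_heavy_operations = 0.12     # from the topdown\-heavy\-ops breakdown
tma_microcode_sequencer = 0.02  # UOPS_RETIRED.MS / SLOTS
tma_few_uops_instructions = tma_heavy_operations - tma_microcode_sequencer

print(f"tma_other_light_ops = {tma_other_light_ops:.1%}")              # 2.0%
print(f"tma_few_uops_instructions = {tma_few_uops_instructions:.1%}")  # 10.0%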
+    {
+        "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+        "MetricExpr": "100 * cpu_core@ASSISTS.ANY\\,umask\\=0x1B@ / SLOTS",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_assists",
+        "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Page Faults",
+        "MetricExpr": "99 * ASSISTS.PAGE_FAULT / SLOTS",
+        "MetricGroup": "TopdownL5;tma_assists_group",
+        "MetricName": "tma_page_faults",
+        "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Page Faults. A Page Fault may apply on first application access to a memory page. Note operating system handling of page faults accounts for the majority of its cost.",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists",
+        "MetricExpr": "30 * ASSISTS.FP / SLOTS",
+        "MetricGroup": "HPC;TopdownL5;tma_assists_group",
+        "MetricName": "tma_fp_assists",
+        "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called denormals).",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of slots the CPU retired uops as a result of handling SSE to AVX* or AVX* to SSE transition Assists. ",
+        "MetricExpr": "63 * ASSISTS.SSE_AVX_MIX / SLOTS",
+        "MetricGroup": "HPC;TopdownL5;tma_assists_group",
+        "MetricName": "tma_avx_assists",
+        "ScaleUnit": "100%",
+        "Unit": "cpu_core"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instructions",
+        "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_cisc",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instructions. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources. 
Sample with: FRONTEND_RETIRE= D.MS_FLOWS", + "ScaleUnit": "100%", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_boun= d / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_stor= e_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma= _l3_hit_latency + tma_sq_full))) + (tma_l1_bound / (tma_dram_bound + tma_l1= _bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (= tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_stor= e_fwd_blk)) ", + "MetricGroup": "Mem;MemoryBW;Offcore", + "MetricName": "Memory_Bandwidth", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound = / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_= bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing = + tma_l3_hit_latency + tma_sq_full)) + (tma_l2_bound / (tma_dram_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)))", + "MetricGroup": "Mem;MemoryLat;Offcore", + "MetricName": "Memory_Latency", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_= fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + (tma_s= tore_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound += tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing = + tma_split_stores + tma_store_latency + tma_streaming_stores))) ", + "MetricGroup": "Mem;MemoryTLB;Offcore", + "MetricName": "Memory_Data_TLBs", + "Unit": "cpu_core" + }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED= .NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 *= BR_INST_RETIRED.NEAR_CALL) ) / TOPDOWN.SLOTS)", + "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.= NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * = BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead", "Unit": "cpu_core" }, + { + "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; 
TLB and BTB= misses)", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", + "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", + "MetricName": "Big_Code", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", + "MetricGroup": "Fed;FetchBW;Frontend", + "MetricName": "Instruction_Fetch_BW", + "Unit": "cpu_core" + }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC", "Unit": "cpu_core" }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "(tma_retiring * SLOTS) / INST_RETIRED.ANY", + "MetricGroup": "Pipeline;Ret;Retire", + "MetricName": "UPI", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Instruction per taken branch", + "MetricExpr": "(tma_retiring * SLOTS) / BR_INST_RETIRED.NEAR_TAKEN= ", + "MetricGroup": "Branches;Fed;FetchBW", + "MetricName": "UpTB", + "Unit": "cpu_core" + }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI", "Unit": "cpu_core" }, @@ -30,14 +860,14 @@ { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", "MetricExpr": "TOPDOWN.SLOTS", - "MetricGroup": "TmaL1", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS", "Unit": "cpu_core" }, { "BriefDescription": "Fraction of Physical Core issue-slots utilize= d by this Logical Processor", - "MetricExpr": "TOPDOWN.SLOTS / ( TOPDOWN.SLOTS / 2 ) if #SMT_on el= se 1", - "MetricGroup": "SMT;TmaL1", + "MetricExpr": "SLOTS / (TOPDOWN.SLOTS / 2) if #SMT_on else 1", + "MetricGroup": "SMT;tma_L1_group", "MetricName": "Slots_Utilization", "Unit": "cpu_core" }, @@ -51,21 +881,21 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC", "Unit": "cpu_core" }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / = CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / CORE_C= LKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc", "Unit": "cpu_core" }, { "BriefDescription": 
"Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.= PORT_1 + FP_ARITH_DISPATCHED.PORT_5 ) / ( 2 * CPU_CLK_UNHALTED.DISTRIBUTED = )", + "MetricExpr": "(FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.P= ORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * CORE_CLKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n).", @@ -73,11 +903,18 @@ }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP", "Unit": "cpu_core" }, + { + "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", + "MetricGroup": "Cor;SMT", + "MetricName": "Core_Bound_Likely", + "Unit": "cpu_core" + }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED", @@ -129,14 +966,14 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B= _PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACK= ED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP", "Unit": "cpu_core" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. 
Approximated prior to BDW.", @@ -160,7 +997,7 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting.", @@ -168,7 +1005,7 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting.", @@ -182,12 +1019,19 @@ "Unit": "cpu_core" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions", "Unit": "cpu_core" }, + { + "BriefDescription": "Average number of Uops retired in cycles wher= e at least one uop has retired.", + "MetricExpr": "(tma_retiring * SLOTS) / cpu_core@UOPS_RETIRED.SLOT= S\\,cmask\\=3D1@", + "MetricGroup": "Pipeline;Ret", + "MetricName": "Retire", + "Unit": "cpu_core" + }, { "BriefDescription": "Estimated fraction of retirement-cycles deali= ng with repeat instructions", "MetricExpr": "INST_RETIRED.REP_ITERATION / cpu_core@UOPS_RETIRED.= SLOTS\\,cmask\\=3D1@", @@ -237,6 +1081,13 @@ "MetricName": "DSB_Switch_Cost", "Unit": "cpu_core" }, + { + "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", + "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_= branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + = tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tm= a_lsd + tma_mite))", + "MetricGroup": "DSBmiss;Fed", + "MetricName": "DSB_Misses", + "Unit": "cpu_core" + }, { "BriefDescription": "Number of Instructions per non-speculative DS= B miss (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS", @@ -251,6 +1102,13 @@ "MetricName": "IpMispredict", "Unit": "cpu_core" }, + { + "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", + "MetricGroup": "Bad;BrMispredicts", + "MetricName": "Branch_Misprediction_Cost", + "Unit": "cpu_core" + }, { "BriefDescription": "Fraction of 
branches that are non-taken condi= tionals", "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_B= RANCHES", @@ -267,7 +1125,7 @@ }, { "BriefDescription": "Fraction of branches that are CALL or RET", - "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_= RETURN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_R= ETURN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": "CallRet", "Unit": "cpu_core" @@ -281,7 +1139,7 @@ }, { "BriefDescription": "Fraction of branches of other types (not indi= vidually covered by other metrics in Info.Branches group)", - "MetricExpr": "1 - ( (BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRE= D.ALL_BRANCHES) + (BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHE= S) + (( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST= _RETIRED.ALL_BRANCHES) + ((BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.CON= D_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES) )", + "MetricExpr": "1 - (Cond_NT + Cond_TK + CallRet + Jump)", "MetricGroup": "Bad;Branches", "MetricName": "Other_Branches", "Unit": "cpu_core" @@ -296,77 +1154,77 @@ { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP", "Unit": "cpu_core" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI", "Unit": "cpu_core" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI_Load", "Unit": "cpu_core" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI", "Unit": "cpu_core" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All", "Unit": "cpu_core" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load", "Unit": "cpu_core" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All", "Unit": "cpu_core" }, { "BriefDescription": "L2 cache hits per kilo instruction for 
all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load", "Unit": "cpu_core" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI", "Unit": "cpu_core" }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "FB_HPKI", "Unit": "cpu_core" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING ) / ( 4 * CPU_CLK_UNHALTED.DISTRIB= UTED )", + "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * CORE_CLKS)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization", "Unit": "cpu_core" @@ -401,28 +1259,28 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T", "Unit": "cpu_core" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T", "Unit": "cpu_core" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T", "Unit": "cpu_core" }, { "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", - "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / = duration_time)", + "MetricExpr": "L3_Cache_Access_BW", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "L3_Cache_Access_BW_1T", "Unit": "cpu_core" @@ -436,14 +1294,14 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency", "Unit": "cpu_core" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) = / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= 
TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 10000= 00000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine.", @@ -451,7 +1309,7 @@ }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization", "Unit": "cpu_core" @@ -479,7 +1337,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use", "Unit": "cpu_core" @@ -500,41 +1358,408 @@ }, { "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to frontend stalls.", - "MetricExpr": "TOPDOWN_FE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE)", + "MetricExpr": "TOPDOWN_FE_BOUND.ALL / SLOTS", "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", + "MetricName": "tma_frontend_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to frontend bandwidth restrictions due to = decode, predecode, cisc, and other limitations.", + "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY / SLOTS", + "MetricGroup": "TopdownL2;tma_frontend_bound_group", + "MetricName": "tma_frontend_latency", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to instruction cache misses.", + "MetricExpr": "TOPDOWN_FE_BOUND.ICACHE / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_latency_group", + "MetricName": "tma_icache", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to Instruction Table Lookaside Buffer (ITL= B) misses.", + "MetricExpr": "TOPDOWN_FE_BOUND.ITLB / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_latency_group", + "MetricName": "tma_itlb", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to BACLEARS, which occurs when the Branch = Target Buffer (BTB) prediction or lack thereof, was corrected by a later br= anch predictor in the frontend", + "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_DETECT / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_latency_group", + "MetricName": "tma_branch_detect", + "PublicDescription": "Counts the number of issue slots that were = not delivered by the frontend due to BACLEARS, which occurs when the Branch= Target Buffer (BTB) prediction or lack thereof, was corrected by a later b= ranch predictor in the frontend. 
Includes BACLEARS due to all branch types = including conditional and unconditional jumps, returns, and indirect branch= es.", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to BTCLEARS, which occurs when the Branch = Target Buffer (BTB) predicts a taken branch.", + "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_RESTEER / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_latency_group", + "MetricName": "tma_branch_resteer", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to frontend bandwidth restrictions due to = decode, predecode, cisc, and other limitations.", + "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / SLOTS", + "MetricGroup": "TopdownL2;tma_frontend_bound_group", + "MetricName": "tma_frontend_bandwidth", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to the microcode sequencer (MS).", + "MetricExpr": "TOPDOWN_FE_BOUND.CISC / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_bandwidth_group", + "MetricName": "tma_cisc", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to decode stalls.", + "MetricExpr": "TOPDOWN_FE_BOUND.DECODE / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_bandwidth_group", + "MetricName": "tma_decode", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to wrong predecodes.", + "MetricExpr": "TOPDOWN_FE_BOUND.PREDECODE / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_bandwidth_group", + "MetricName": "tma_predecode", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot delivered by the frontend due to other common frontend stalls not catego= rized.", + "MetricExpr": "TOPDOWN_FE_BOUND.OTHER / SLOTS", + "MetricGroup": "TopdownL3;tma_frontend_bandwidth_group", + "MetricName": "tma_other_fb", + "ScaleUnit": "100%", "Unit": "cpu_atom" }, { "BriefDescription": "Counts the total number of issue slots that w= ere not consumed by the backend because allocation is stalled due to a misp= redicted jump or a machine clear", - "MetricExpr": "TOPDOWN_BAD_SPECULATION.ALL / (5 * CPU_CLK_UNHALTED= .CORE)", + "MetricExpr": "(SLOTS - (TOPDOWN_FE_BOUND.ALL + TOPDOWN_BE_BOUND.A= LL + TOPDOWN_RETIRING.ALL)) / SLOTS", "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", + "MetricName": "tma_bad_speculation", "PublicDescription": "Counts the total number of issue slots that = were not consumed by the backend because allocation is stalled due to a mis= predicted jump or a machine clear. Only issue slots wasted due to fast nuke= s such as memory ordering nukes are counted. Other nukes are not accounted = for. Counts all issue slots blocked during this recovery window including r= elevant microcode flows and while uops are not yet available in the instruc= tion queue (IQ). 
Also includes the issue slots that were consumed by the ba= ckend but were thrown away because they were younger than the mispredict or= machine clear.", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to branch mispredicts.", + "MetricExpr": "TOPDOWN_BAD_SPECULATION.MISPREDICT / SLOTS", + "MetricGroup": "TopdownL2;tma_bad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the total number of issue slots that w= ere not consumed by the backend because allocation is stalled due to a mach= ine clear (nuke) of any kind including memory ordering and memory disambigu= ation.", + "MetricExpr": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / SLOTS", + "MetricGroup": "TopdownL2;tma_bad_speculation_group", + "MetricName": "tma_machine_clears", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to a machine clear (slow nuke).", + "MetricExpr": "TOPDOWN_BAD_SPECULATION.NUKE / SLOTS", + "MetricGroup": "TopdownL3;tma_machine_clears_group", + "MetricName": "tma_nuke", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of machine clears relative = to the number of nuke slots due to SMC. ", + "MetricExpr": "tma_nuke * (MACHINE_CLEARS.SMC / MACHINE_CLEARS.SLO= W)", + "MetricGroup": "TopdownL4;tma_nuke_group", + "MetricName": "tma_smc", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of machine clears relative = to the number of nuke slots due to memory ordering. ", + "MetricExpr": "tma_nuke * (MACHINE_CLEARS.MEMORY_ORDERING / MACHIN= E_CLEARS.SLOW)", + "MetricGroup": "TopdownL4;tma_nuke_group", + "MetricName": "tma_memory_ordering", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of machine clears relative = to the number of nuke slots due to FP assists. ", + "MetricExpr": "tma_nuke * (MACHINE_CLEARS.FP_ASSIST / MACHINE_CLEA= RS.SLOW)", + "MetricGroup": "TopdownL4;tma_nuke_group", + "MetricName": "tma_fp_assist", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of machine clears relative = to the number of nuke slots due to memory disambiguation. ", + "MetricExpr": "tma_nuke * (MACHINE_CLEARS.DISAMBIGUATION / MACHINE= _CLEARS.SLOW)", + "MetricGroup": "TopdownL4;tma_nuke_group", + "MetricName": "tma_disambiguation", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of machine clears relative = to the number of nuke slots due to page faults. 
", + "MetricExpr": "tma_nuke * (MACHINE_CLEARS.PAGE_FAULT / MACHINE_CLE= ARS.SLOW)", + "MetricGroup": "TopdownL4;tma_nuke_group", + "MetricName": "tma_page_fault", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to a machine clear classified as a fast nuke= due to memory ordering, memory disambiguation and memory renaming.", + "MetricExpr": "TOPDOWN_BAD_SPECULATION.FASTNUKE / SLOTS", + "MetricGroup": "TopdownL3;tma_machine_clears_group", + "MetricName": "tma_fast_nuke", + "ScaleUnit": "100%", "Unit": "cpu_atom" }, { "BriefDescription": "Counts the total number of issue slots that = were not consumed by the backend due to backend stalls", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "TOPDOWN_BE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE)", + "MetricExpr": "TOPDOWN_BE_BOUND.ALL / SLOTS", "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", + "MetricName": "tma_backend_bound", "PublicDescription": "Counts the total number of issue slots that= were not consumed by the backend due to backend stalls. Note that uops mu= st be available for consumption in order for this event to count. If a uop= is not available (IQ is empty), this event will not count. The rest of t= hese subevents count backend stalls, in cycles, due to an outstanding reque= st which is memory bound vs core bound. The subevents are not slot based = events and therefore can not be precisely added or subtracted from the Back= end_Bound_Aux subevents which are slot based.", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles due to backend bo= und stalls that are core execution bound and not attributed to outstanding = demand load or store stalls. ", + "MetricExpr": "max(0, tma_backend_bound - tma_load_store_bound)", + "MetricGroup": "TopdownL2;tma_backend_bound_group", + "MetricName": "tma_core_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles the core is stall= ed due to stores or loads. 
", + "MetricExpr": "min((TOPDOWN_BE_BOUND.ALL / SLOTS), (LD_HEAD.ANY_AT= _RET / CLKS) + tma_store_bound)", + "MetricGroup": "TopdownL2;tma_backend_bound_group", + "MetricName": "tma_load_store_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles the core is stall= ed due to store buffer full.", + "MetricExpr": "tma_mem_scheduler * (MEM_SCHEDULER_BLOCK.ST_BUF / M= EM_SCHEDULER_BLOCK.ALL)", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": "tma_store_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles that the oldest l= oad of the load buffer is stalled at retirement due to a load block.", + "MetricExpr": "LD_HEAD.L1_BOUND_AT_RET / CLKS", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": "tma_l1_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles that the oldest l= oad of the load buffer is stalled at retirement due to a store forward bloc= k.", + "MetricExpr": "LD_HEAD.ST_ADDR_AT_RET / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles that the oldest l= oad of the load buffer is stalled at retirement due to a first level TLB mi= ss.", + "MetricExpr": "LD_HEAD.DTLB_MISS_AT_RET / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_stlb_hit", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles that the oldest l= oad of the load buffer is stalled at retirement due to a second level TLB m= iss requiring a page walk.", + "MetricExpr": "LD_HEAD.PGWALK_AT_RET / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_stlb_miss", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles that the oldest l= oad of the load buffer is stalled at retirement due to a number of other lo= ad blocks.", + "MetricExpr": "LD_HEAD.OTHER_AT_RET / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_other_l1", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles a core is stalled= due to a demand load which hit in the L2 Cache.", + "MetricExpr": "(MEM_BOUND_STALLS.LOAD_L2_HIT / CLKS) - (MEM_BOUND_= STALLS_AT_RET_CORRECTION * MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_BOUND_STALLS.= LOAD)", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": "tma_l2_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles a core is stalled= due to a demand load which hit in the Last Level Cache (LLC) or other core= with HITE/F/M.", + "MetricExpr": "(MEM_BOUND_STALLS.LOAD_LLC_HIT / CLKS) - (MEM_BOUND= _STALLS_AT_RET_CORRECTION * MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_BOUND_STALL= S.LOAD)", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": "tma_l3_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles the core is stall= ed due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).", + "MetricExpr": "(MEM_BOUND_STALLS.LOAD_DRAM_HIT / CLKS) - (MEM_BOUN= D_STALLS_AT_RET_CORRECTION * MEM_BOUND_STALLS.LOAD_DRAM_HIT / MEM_BOUND_STA= LLS.LOAD)", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": 
"tma_dram_bound", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles the core is stall= ed due to a demand load miss which hits in the L2, LLC, DRAM or MMIO (Non-D= RAM) but could not be correctly attributed or cycles in which the load miss= is waiting on a request buffer.", + "MetricExpr": "max(0, tma_load_store_bound - (tma_store_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound))", + "MetricGroup": "TopdownL3;tma_load_store_bound_group", + "MetricName": "tma_other_load_store", + "ScaleUnit": "100%", "Unit": "cpu_atom" }, { "BriefDescription": "Counts the total number of issue slots that = were not consumed by the backend due to backend stalls", - "MetricExpr": "(TOPDOWN_BE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE)= )", + "MetricExpr": "tma_backend_bound", "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound_Aux", + "MetricName": "tma_backend_bound_aux", "PublicDescription": "Counts the total number of issue slots that= were not consumed by the backend due to backend stalls. Note that UOPS mu= st be available for consumption in order for this event to count. If a uop= is not available (IQ is empty), this event will not count. All of these s= ubevents count backend stalls, in slots, due to a resource limitation. Th= ese are not cycle based events and therefore can not be precisely added or = subtracted from the Backend_Bound subevents which are cycle based. These s= ubevents are supplementary to Backend_Bound and can be used to analyze resu= lts from a resource perspective at allocation. ", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the total number of issue slots that = were not consumed by the backend due to backend stalls", + "MetricExpr": "tma_backend_bound", + "MetricGroup": "TopdownL2;tma_backend_bound_aux_group", + "MetricName": "tma_resource_bound", + "PublicDescription": "Counts the total number of issue slots that= were not consumed by the backend due to backend stalls. Note that uops mu= st be available for consumption in order for this event to count. If a uop= is not available (IQ is empty), this event will not count. 
", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to memory reservation stalls in which a sche= duler is not able to accept uops.", + "MetricExpr": "TOPDOWN_BE_BOUND.MEM_SCHEDULER / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_mem_scheduler", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles, relative to the = number of mem_scheduler slots, in which uops are blocked due to store buffe= r full", + "MetricExpr": "tma_mem_scheduler * (MEM_SCHEDULER_BLOCK.ST_BUF / M= EM_SCHEDULER_BLOCK.ALL)", + "MetricGroup": "TopdownL4;tma_mem_scheduler_group", + "MetricName": "tma_st_buffer", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles, relative to the = number of mem_scheduler slots, in which uops are blocked due to load buffer= full", + "MetricExpr": "tma_mem_scheduler * MEM_SCHEDULER_BLOCK.LD_BUF / ME= M_SCHEDULER_BLOCK.ALL", + "MetricGroup": "TopdownL4;tma_mem_scheduler_group", + "MetricName": "tma_ld_buffer", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cycles, relative to the = number of mem_scheduler slots, in which uops are blocked due to RSV full re= lative ", + "MetricExpr": "tma_mem_scheduler * MEM_SCHEDULER_BLOCK.RSV / MEM_S= CHEDULER_BLOCK.ALL", + "MetricGroup": "TopdownL4;tma_mem_scheduler_group", + "MetricName": "tma_rsv", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to IEC or FPC RAT stalls, which can be due t= o FIQ or IEC reservation stalls in which the integer, floating point or SIM= D scheduler is not able to accept uops.", + "MetricExpr": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_non_mem_scheduler", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to the physical register file unable to acce= pt an entry (marble stalls).", + "MetricExpr": "TOPDOWN_BE_BOUND.REGISTER / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_register", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to the reorder buffer being full (ROB stalls= ).", + "MetricExpr": "TOPDOWN_BE_BOUND.REORDER_BUFFER / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_reorder_buffer", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to certain allocation restrictions.", + "MetricExpr": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_alloc_restriction", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of issue slots that were n= ot consumed by the backend due to scoreboards from the instruction queue (I= Q), jump execution unit (JEU), or microcode sequencer (MS).", + "MetricExpr": "TOPDOWN_BE_BOUND.SERIALIZATION / SLOTS", + "MetricGroup": "TopdownL3;tma_resource_bound_group", + "MetricName": "tma_serialization", + "ScaleUnit": "100%", "Unit": "cpu_atom" 
}, { "BriefDescription": "Counts the number of issue slots that result = in retirement slots. ", - "MetricExpr": "TOPDOWN_RETIRING.ALL / (5 * CPU_CLK_UNHALTED.CORE)", + "MetricExpr": "TOPDOWN_RETIRING.ALL / SLOTS", "MetricGroup": "TopdownL1", - "MetricName": "Retiring", + "MetricName": "tma_retiring", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of uops that are not from t= he microsequencer. ", + "MetricExpr": "(TOPDOWN_RETIRING.ALL - UOPS_RETIRED.MS) / SLOTS", + "MetricGroup": "TopdownL2;tma_retiring_group", + "MetricName": "tma_base", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of floating point operation= s per uop with all default weighting.", + "MetricExpr": "UOPS_RETIRED.FPDIV / SLOTS", + "MetricGroup": "TopdownL3;tma_base_group", + "MetricName": "tma_fp_uops", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of uops retired excluding m= s and fp div uops.", + "MetricExpr": "(TOPDOWN_RETIRING.ALL - UOPS_RETIRED.MS - UOPS_RETI= RED.FPDIV) / SLOTS", + "MetricGroup": "TopdownL3;tma_base_group", + "MetricName": "tma_other_ret", + "ScaleUnit": "100%", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of uops that are from the c= omplex flows issued by the micro-sequencer (MS)", + "MetricExpr": "UOPS_RETIRED.MS / SLOTS", + "MetricGroup": "TopdownL2;tma_retiring_group", + "MetricName": "tma_ms_uops", + "PublicDescription": "Counts the number of uops that are from the = complex flows issued by the micro-sequencer (MS). This includes uops from = flows due to complex instructions, faults, assists, and inserted flows.", + "ScaleUnit": "100%", "Unit": "cpu_atom" }, { @@ -551,19 +1776,19 @@ }, { "BriefDescription": "", - "MetricExpr": "5 * CPU_CLK_UNHALTED.CORE", + "MetricExpr": "5 * CLKS", "MetricName": "SLOTS", "Unit": "cpu_atom" }, { "BriefDescription": "Instructions Per Cycle", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.CORE", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricName": "IPC", "Unit": "cpu_atom" }, { "BriefDescription": "Cycles Per Instruction", - "MetricExpr": "CPU_CLK_UNHALTED.CORE / INST_RETIRED.ANY", + "MetricExpr": "CLKS / INST_RETIRED.ANY", "MetricName": "CPI", "Unit": "cpu_atom" }, @@ -623,7 +1848,7 @@ }, { "BriefDescription": "Instructions per Far Branch", - "MetricExpr": "INST_RETIRED.ANY / ( BR_INST_RETIRED.FAR_BRANCH / 2= )", + "MetricExpr": "INST_RETIRED.ANY / (BR_INST_RETIRED.FAR_BRANCH / 2)= ", "MetricName": "IpFarBranch", "Unit": "cpu_atom" }, @@ -665,7 +1890,7 @@ }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.CORE / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricName": "Turbo_Utilization", "Unit": "cpu_atom" }, @@ -681,12 +1906,6 @@ "MetricName": "CPU_Utilization", "Unit": "cpu_atom" }, - { - "BriefDescription": "Estimated Pause cost. 
In percent", - "MetricExpr": "100 * SERIALIZATION.NON_C01_MS_SCB / (5 * CPU_CLK_U= NHALTED.CORE)", - "MetricName": "Estimated_Pause_Cost", - "Unit": "cpu_atom" - }, { "BriefDescription": "Cycle cost per L2 hit", "MetricExpr": "MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_LOAD_UOPS_RETIRE= D.L2_HIT", @@ -707,19 +1926,19 @@ }, { "BriefDescription": "Percent of instruction miss cost that hit in = the L2", - "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_L2_HIT / ( MEM_BOUND_= STALLS.IFETCH )", + "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_L2_HIT / (MEM_BOUND_S= TALLS.IFETCH)", "MetricName": "Inst_Miss_Cost_L2Hit_Percent", "Unit": "cpu_atom" }, { "BriefDescription": "Percent of instruction miss cost that hit in = the L3", - "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_LLC_HIT / ( MEM_BOUND= _STALLS.IFETCH )", + "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_LLC_HIT / (MEM_BOUND_= STALLS.IFETCH)", "MetricName": "Inst_Miss_Cost_L3Hit_Percent", "Unit": "cpu_atom" }, { "BriefDescription": "Percent of instruction miss cost that hit in = DRAM", - "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_DRAM_HIT / ( MEM_BOUN= D_STALLS.IFETCH )", + "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_DRAM_HIT / (MEM_BOUND= _STALLS.IFETCH)", "MetricName": "Inst_Miss_Cost_DRAMHit_Percent", "Unit": "cpu_atom" }, diff --git a/tools/perf/pmu-events/arch/x86/alderlake/cache.json b/tools/pe= rf/pmu-events/arch/x86/alderlake/cache.json index 887dce4dfeba..2cc62d2779d2 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/cache.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/cache.json @@ -1,4 +1,28 @@ [ + { + "BriefDescription": "Counts the number of cacheable memory request= s that miss in the LLC. Counts on a per core basis.", + "CollectPEBSRecord": "2", + "Counter": "0,1,2,3,4,5", + "EventCode": "0x2e", + "EventName": "LONGEST_LAT_CACHE.MISS", + "PEBScounters": "0,1,2,3,4,5", + "SampleAfterValue": "200003", + "Speculative": "1", + "UMask": "0x41", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts the number of cacheable memory request= s that access the LLC. Counts on a per core basis.", + "CollectPEBSRecord": "2", + "Counter": "0,1,2,3,4,5", + "EventCode": "0x2e", + "EventName": "LONGEST_LAT_CACHE.REFERENCE", + "PEBScounters": "0,1,2,3,4,5", + "SampleAfterValue": "200003", + "Speculative": "1", + "UMask": "0x4f", + "Unit": "cpu_atom" + }, { "BriefDescription": "Counts the number of cycles the core is stall= ed due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM o= r MMIO (Non-DRAM).", "CollectPEBSRecord": "2", @@ -210,8 +234,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 128 cycles as defi= ned in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128", @@ -219,7 +243,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x80", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -227,8 +251,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 16 cycles as defin= ed in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). 
Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16", @@ -236,7 +260,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x10", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -244,8 +268,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 256 cycles as defi= ned in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256", @@ -253,7 +277,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x100", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -261,8 +285,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 32 cycles as defin= ed in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32", @@ -270,7 +294,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x20", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -278,8 +302,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 4 cycles as define= d in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4", @@ -287,7 +311,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x4", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -295,8 +319,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 512 cycles as defi= ned in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512", @@ -304,7 +328,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x200", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -312,8 +336,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 64 cycles as defin= ed in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). 
Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64", @@ -321,7 +345,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x40", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -329,8 +353,8 @@ }, { "BriefDescription": "Counts the number of tagged loads with an ins= truction latency that exceeds or equals the threshold of 8 cycles as define= d in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.", - "CollectPEBSRecord": "3", - "Counter": "0,1,2,3,4,5", + "CollectPEBSRecord": "2", + "Counter": "0,1", "Data_LA": "1", "EventCode": "0xd0", "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8", @@ -338,7 +362,7 @@ "MSRIndex": "0x3F6", "MSRValue": "0x8", "PEBS": "2", - "PEBScounters": "0,1,2,3,4,5", + "PEBScounters": "0,1", "SampleAfterValue": "1000003", "TakenAlone": "1", "UMask": "0x5", @@ -359,7 +383,7 @@ }, { "BriefDescription": "Counts the number of stores uops retired. Cou= nts with or without PEBS enabled.", - "CollectPEBSRecord": "3", + "CollectPEBSRecord": "2", "Counter": "0,1,2,3,4,5", "Data_LA": "1", "EventCode": "0xd0", @@ -371,6 +395,61 @@ "UMask": "0x6", "Unit": "cpu_atom" }, + { + "BriefDescription": "Counts demand data reads that were supplied b= y the L3 cache.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_DATA_RD.L3_HIT", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x3F803C0001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts demand data reads that were supplied b= y the L3 cache where a snoop was sent, the snoop hit, and modified data was= forwarded.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x10003C0001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts demand data reads that were supplied b= y the L3 cache where a snoop was sent, the snoop hit, but no data was forwa= rded.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x4003C0001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts demand data reads that were supplied b= y the L3 cache where a snoop was sent, the snoop hit, and non-modified data= was forwarded.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x8003C0001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, + { + "BriefDescription": "Counts demand reads for ownership (RFO) and s= oftware prefetches for exclusive ownership (PREFETCHW) that were supplied b= y the L3 cache.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_RFO.L3_HIT", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x3F803C0002", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, { "BriefDescription": "Counts demand reads for ownership (RFO) and s= oftware prefetches for exclusive ownership (PREFETCHW) that were supplied b= y the L3 cache where a snoop was sent, the snoop hit, and modified data was= forwarded.", 
"Counter": "0,1,2,3,4,5", diff --git a/tools/perf/pmu-events/arch/x86/alderlake/frontend.json b/tools= /perf/pmu-events/arch/x86/alderlake/frontend.json index 2cfa70b2d5e1..da1a7ba0e568 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/frontend.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/frontend.json @@ -47,6 +47,18 @@ "UMask": "0x1", "Unit": "cpu_core" }, + { + "BriefDescription": "Cycles the Microcode Sequencer is busy.", + "CollectPEBSRecord": "2", + "Counter": "0,1,2,3", + "EventCode": "0x87", + "EventName": "DECODE.MS_BUSY", + "PEBScounters": "0,1,2,3", + "SampleAfterValue": "500009", + "Speculative": "1", + "UMask": "0x2", + "Unit": "cpu_core" + }, { "BriefDescription": "DSB-to-MITE switch true penalty cycles.", "CollectPEBSRecord": "2", diff --git a/tools/perf/pmu-events/arch/x86/alderlake/memory.json b/tools/p= erf/pmu-events/arch/x86/alderlake/memory.json index 586fb961e46d..f894e4a0212b 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/memory.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/memory.json @@ -82,6 +82,17 @@ "UMask": "0x1", "Unit": "cpu_atom" }, + { + "BriefDescription": "Counts demand data reads that were not suppli= ed by the L3 cache.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x3F84400001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, { "BriefDescription": "Counts demand reads for ownership (RFO) and s= oftware prefetches for exclusive ownership (PREFETCHW) that were not suppli= ed by the L3 cache.", "Counter": "0,1,2,3,4,5", @@ -93,6 +104,17 @@ "UMask": "0x1", "Unit": "cpu_atom" }, + { + "BriefDescription": "Counts demand reads for ownership (RFO) and s= oftware prefetches for exclusive ownership (PREFETCHW) that were not suppli= ed by the L3 cache.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x3F84400002", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, { "BriefDescription": "Execution stalls while L3 cache miss demand l= oad is outstanding.", "CollectPEBSRecord": "2", diff --git a/tools/perf/pmu-events/arch/x86/alderlake/other.json b/tools/pe= rf/pmu-events/arch/x86/alderlake/other.json index 67a9c13cc71d..c49d8ce27310 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/other.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/other.json @@ -1,4 +1,15 @@ [ + { + "BriefDescription": "Counts modified writebacks from L1 cache and = L2 cache that have any type of response.", + "Counter": "0,1,2,3,4,5", + "EventCode": "0xB7", + "EventName": "OCR.COREWB_M.ANY_RESPONSE", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x10008", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_atom" + }, { "BriefDescription": "Counts demand data reads that have any type o= f response.", "Counter": "0,1,2,3,4,5", @@ -103,6 +114,17 @@ "UMask": "0x1", "Unit": "cpu_core" }, + { + "BriefDescription": "Counts demand data reads that were supplied b= y DRAM.", + "Counter": "0,1,2,3,4,5,6,7", + "EventCode": "0x2A,0x2B", + "EventName": "OCR.DEMAND_DATA_RD.DRAM", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x184000001", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_core" + }, { "BriefDescription": "Counts demand read for ownership (RFO) reques= ts and software prefetches for exclusive ownership (PREFETCHW) that have an= y type of response.", "Counter": "0,1,2,3,4,5,6,7", diff --git 
a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json b/tools= /perf/pmu-events/arch/x86/alderlake/pipeline.json index d02e078a90c9..1a137f7f8b7e 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json @@ -330,6 +330,18 @@ "UMask": "0x3", "Unit": "cpu_atom" }, + { + "BriefDescription": "Counts the number of unhalted reference clock= cycles at TSC frequency.", + "CollectPEBSRecord": "2", + "Counter": "0,1,2,3,4,5", + "EventCode": "0x3c", + "EventName": "CPU_CLK_UNHALTED.REF_TSC_P", + "PEBScounters": "0,1,2,3,4,5", + "SampleAfterValue": "2000003", + "Speculative": "1", + "UMask": "0x1", + "Unit": "cpu_atom" + }, { "BriefDescription": "Counts the number of unhalted core clock cycl= es. (Fixed event)", "CollectPEBSRecord": "2", @@ -874,7 +886,7 @@ "PEBScounters": "0,1,2,3,4,5,6,7", "SampleAfterValue": "100003", "Speculative": "1", - "UMask": "0x1f", + "UMask": "0x1b", "Unit": "cpu_core" }, { diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 7f2d777fd97f..bc873a1e84e1 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -1,5 +1,5 @@ Family-model,Version,Filename,EventType -GenuineIntel-6-9[7A],v1.13,alderlake,core +GenuineIntel-6-(97|9A|B7|BA|BE|BF),v1.15,alderlake,core GenuineIntel-6-(1C|26|27|35|36),v4,bonnell,core GenuineIntel-6-(3D|47),v26,broadwell,core GenuineIntel-6-56,v23,broadwellde,core
--=20
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:20 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-7-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 06/22] perf vendor events: Update Intel broadwell
From: Ian Rogers
To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian , Ian Rogers
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Events remain at v26, and the metrics are based on TMA 4.4 full.

Generated using the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped, as the #SMT_on and #EBS_Mode literals
  are correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing the children of a metric to be
  measured easily using the parent's name with a _group suffix (see the
  example after this list).
- The ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability.
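For instance, with the _group naming scheme above, the level 2 children
of a level 1 metric can all be collected by passing the parent's group
name to perf stat. This is an illustrative invocation (the exact metric
set and output depend on the CPU and perf version):

  # Measure tma_frontend_bound's children (tma_fetch_latency,
  # tma_fetch_bandwidth) system-wide for one second.
  $ perf stat -M tma_frontend_bound_group -a -- sleep 1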
Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../arch/x86/broadwell/bdw-metrics.json | 679 +++++++++++++++--- 1 file changed, 565 insertions(+), 114 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json b/to= ols/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json index d65afe3d0b06..c220b1cf1740 100644 --- a/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json +++ b/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json @@ -1,64 +1,552 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. 
Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+        "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / SLOTS",
+        "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_latency",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
+        "MetricExpr": "ICACHE.IFDATA_STALL / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_icache_misses",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+        "MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_itlb_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+        "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_branch_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. ",
+        "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_mispredicts_resteers",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. ",
+        "MetricExpr": "MACHINE_CLEARS.COUNT * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_clears_resteers",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+        "MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers",
+        "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_unknown_branches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (First fetch or hitting BPU capacity limit). Sample with: BACLEARS.ANY",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+        "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS",
+        "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_dsb_switches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivers higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+        "MetricExpr": "ILD_STALL.LCP / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_lcp",
+        "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+        "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS",
+        "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_ms_switches",
+        "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+        "MetricExpr": "tma_frontend_bound - tma_fetch_latency",
+        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_bandwidth",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+        "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / CORE_CLKS / 2",
+        "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_mite",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+        "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / CORE_CLKS / 2",
+        "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_dsb",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+        "ScaleUnit": "100%"
     },
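The tma_fetch_latency / tma_fetch_bandwidth pair above splits the level-1 frontend-bound slots into two level-2 buckets. A rough Python sketch of the arithmetic, with made-up counter values; tma_frontend_bound's own expression is not part of this hunk, so IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS is assumed here:

    # Illustrative counter values only; real numbers come from perf.
    CORE_CLKS = 1_000_000
    SLOTS = 4 * CORE_CLKS                            # 4-wide machine, as in the SLOTS metric
    IDQ_UOPS_NOT_DELIVERED_CORE = 600_000            # undelivered issue slots
    IDQ_UOPS_NOT_DELIVERED_CYCLES_0_UOPS = 100_000   # cycles with no uops delivered at all

    tma_frontend_bound = IDQ_UOPS_NOT_DELIVERED_CORE / SLOTS      # assumed L1 expression
    tma_fetch_latency = 4 * IDQ_UOPS_NOT_DELIVERED_CYCLES_0_UOPS / SLOTS
    tma_fetch_bandwidth = tma_frontend_bound - tma_fetch_latency  # the L2 remainder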
+ "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURCE_STALLS.S= B) / (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UO= PS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_= GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else R= ESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. 
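A point worth noting in the tma_bad_speculation expression above is the embedded conditional: #SMT_on picks the per-core recovery-cycles event (halved, since it counts for both hardware threads) over the per-thread one. In rough Python form, with event names shortened and all inputs illustrative:

    def tma_bad_speculation(uops_issued, uops_retired_slots, recovery_cycles_any,
                            recovery_cycles, slots, smt_on):
        # (INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES
        recovery = recovery_cycles_any / 2 if smt_on else recovery_cycles
        return (uops_issued - uops_retired_slots + 4 * recovery) / slots

    bad_spec = tma_bad_speculation(3_000_000, 2_500_000, 200_000, 0,
                                   slots=4_000_000, smt_on=True)
    # tma_branch_mispredicts then takes the BR_MISP_RETIRED share of bad_spec,
    # and tma_machine_clears is simply the remainder (bad_spec - branch_mispredicts).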
     {
         "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
-        "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Backend_Bound",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
+        "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+        "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURCE_STALLS.SB) / (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB)) * tma_backend_bound",
+        "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_memory_bound",
+        "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+        "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / CLKS, 0)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l1_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+        "MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_dtlb_load",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+        "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_store_fwd_blk",
+        "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+        "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / CLKS",
+        "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_lock_latency",
+        "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross 64-byte cache line boundary",
+        "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_split_loads",
+        "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
+        "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_4k_aliasing",
+        "PublicDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset. False match is possible; which incurs a few cycles of load re-issue. However; the short re-issue duration is often hidden by the out-of-order core and HW optimizations; hence a user may safely ignore a high value of this metric unless it manages to propagate up into parent nodes of the hierarchy (e.g. to L1_Bound).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+        "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CLKS",
+        "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_fb_full",
+        "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2_MISS / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+        "MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_contested_accesses",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+        "MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / CLKS",
+        "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_data_sharing",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+        "MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / CLKS",
+        "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_l3_hit_latency",
+        "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+        "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_sq_full",
+        "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). The Super Queue is used for requests to access the L2 cache or to go out to the Uncore.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+        "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS))) * CYCLE_ACTIVITY.STALLS_L2_MISS / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_dram_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS",
+        "ScaleUnit": "100%"
+    },
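tma_l3_bound and tma_dram_bound above split the same pool of L2-miss stall cycles (CYCLE_ACTIVITY.STALLS_L2_MISS) using complementary weights, with retired L3 misses weighted 7x relative to L3 hits, presumably to reflect their much higher latency. A small sketch of that split, with illustrative counts only:

    def l3_hit_fraction(l3_hit, l3_miss):
        # MEM_LOAD_UOPS_RETIRED.L3_HIT / (L3_HIT + 7 * L3_MISS)
        return l3_hit / (l3_hit + 7 * l3_miss)

    stalls_l2_miss = 300_000    # CYCLE_ACTIVITY.STALLS_L2_MISS (made up)
    clks = 1_000_000            # CPU_CLK_UNHALTED.THREAD (made up)
    frac = l3_hit_fraction(l3_hit=90_000, l3_miss=10_000)    # ~0.5625
    tma_l3_bound = frac * stalls_l2_miss / clks              # hit-weighted share
    tma_dram_bound = (1 - frac) * stalls_l2_miss / clks      # miss-weighted share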
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_bandwidth",
+        "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_latency",
+        "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write",
+        "MetricExpr": "RESOURCE_STALLS.SB / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_UOPS_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%"
     },
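The two DTLB metrics (tma_dtlb_load earlier and tma_dtlb_store just above) share one cost model: a fixed charge per second-level-TLB hit, the measured page-walk duration, and a fixed charge per completed walk. A hedged Python rendering (the constants are the ones in the expressions; the counter values are invented) - in practice these metrics would be requested by name from perf stat's -M option rather than computed by hand:

    def dtlb_stall_fraction(stlb_hits, walk_duration_cycles, walks_completed, clks):
        # 8 * ...STLB_HIT + cpu@...WALK_DURATION,cmask=1@ + 7 * ...WALK_COMPLETED,
        # all divided by CLKS
        return (8 * stlb_hits + walk_duration_cycles + 7 * walks_completed) / clks

    tma_dtlb_store = dtlb_stall_fraction(stlb_hits=5_000, walk_duration_cycles=40_000,
                                         walks_completed=2_000, clks=1_000_000)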
     {
-        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Backend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck",
+        "MetricExpr": "tma_backend_bound - tma_memory_bound",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+        "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS",
+        "MetricGroup": "TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_divider",
+        "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+        "MetricExpr": "((CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_ports_utilization",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@) / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_0",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_1",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_2",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
+        "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_3m",
+        "ScaleUnit": "100%"
+    },
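The tma_ports_utilized_* family above leans on cmask'd events: cpu@UOPS_EXECUTED.CORE,cmask=N@ counts cycles in which the whole core executed at least N uops, and the count is halved under SMT since the core-level event fires for both logical CPUs. Roughly, in Python, for the 3-or-more bucket (all values illustrative):

    def tma_ports_utilized_3m(core_cycles_ge_3_uops, thread_cycles_ge_3_uops,
                              core_clks, smt_on):
        # ((cpu@UOPS_EXECUTED.CORE,cmask=3@ / 2) if #SMT_on
        #  else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS
        cycles = core_cycles_ge_3_uops / 2 if smt_on else thread_cycles_ge_3_uops
        return cycles / core_clks

    print(tma_ports_utilized_3m(400_000, 250_000, 1_000_000, smt_on=True))  # 0.2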
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+        "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_alu_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch) Sample with: UOPS_DISPATCHED_PORT.PORT_0",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS",
+        "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_0",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_1",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_1",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU) Sample with: UOPS_DISPATCHED.PORT_5",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_5",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_6",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations Sample with: UOPS_DISPATCHED.PORT_2_3",
+        "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_load_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_load_op_utilization_group",
+        "MetricName": "tma_port_2",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; [ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_load_op_utilization_group",
+        "MetricName": "tma_port_3",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_store_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DISPATCHED_PORT.PORT_4",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_store_op_utilization_group",
+        "MetricName": "tma_port_4",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Sample with: UOPS_DISPATCHED_PORT.PORT_7",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_store_op_utilization_group",
+        "MetricName": "tma_port_7",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
-        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Retiring",
-        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. "
+        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_retiring",
+        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.RETIRE_SLOTS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+        "MetricExpr": "tma_retiring - tma_heavy_operations",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_light_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Retiring_SMT",
-        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+        "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+        "MetricGroup": "HPC;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_fp_arith",
+        "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+        "MetricExpr": "INST_RETIRED.X87 * UPI / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_x87_use",
+        "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_scalar",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_vector",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_128b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_256b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "tma_microcode_sequencer",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+        "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS",
+        "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_microcode_sequencer",
+        "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+        "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_assists",
+        "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+        "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_cisc",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources.",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "INST_RETIRED.ANY / CLKS",
         "MetricGroup": "Ret;Summary",
         "MetricName": "IPC"
     },
@@ -76,8 +564,8 @@
     },
     {
         "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
-        "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "Pipeline;Mem",
+        "MetricExpr": "1 / IPC",
+        "MetricGroup": "Mem;Pipeline",
        "MetricName": "CPI"
     },
     {
@@ -88,16 +576,10 @@
     },
     {
         "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "TmaL1",
+        "MetricExpr": "4 * CORE_CLKS",
+        "MetricGroup": "tma_L1_group",
         "MetricName": "SLOTS"
     },
-    {
-        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "TmaL1_SMT",
-        "MetricName": "SLOTS_SMT"
-    },
     {
         "BriefDescription": "The ratio of Executed- by Issued-Uops",
         "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
@@ -107,51 +589,32 @@
     },
     {
         "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;SMT;TmaL1",
+        "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS",
+        "MetricGroup": "Ret;SMT;tma_L1_group",
         "MetricName": "CoreIPC"
     },
-    {
-        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;SMT;TmaL1_SMT",
-        "MetricName": "CoreIPC_SMT"
-    },
     {
         "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;Flops",
+        "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / CORE_CLKS",
+        "MetricGroup": "Flops;Ret",
        "MetricName": "FLOPc"
     },
-    {
-        "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;Flops_SMT",
-        "MetricName": "FLOPc_SMT"
-    },
     {
         "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
+        "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)) / (2 * CORE_CLKS)",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "FP_Arith_Utilization",
         "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
     },
-    {
-        "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
-        "MetricGroup": "Cor;Flops;HPC_SMT",
-        "MetricName": "FP_Arith_Utilization_SMT",
-        "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common). SMT version; use when SMT is enabled and measuring per logical CPU."
-    },
     {
         "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
-        "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 ) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
+        "MetricExpr": "UOPS_EXECUTED.THREAD / ((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
         "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
         "MetricName": "ILP"
     },
     {
         "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
-        "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+        "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS",
         "MetricGroup": "SMT",
         "MetricName": "CORE_CLKS"
     },
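The new CORE_CLKS expression above is one of the chained conditionals this file now relies on: a if c1 else b if c2 else c groups as a if c1 else (b if c2 else c), exactly as Python's conditional expression does. Written out in Python (CLKS appears to stand for CPU_CLK_UNHALTED.THREAD, judging by the IPC change earlier in this series of hunks):

    def core_clks(thread, one_thread_active, ref_xclk, thread_any, core_wide, smt_on):
        if core_wide < 1:    # first "if": scale per-logical-CPU clocks up to core clocks
            return (thread / 2) * (1 + one_thread_active / ref_xclk)
        elif smt_on:         # second "if", only reached when core_wide >= 1
            return thread_any / 2
        else:
            return thread    # CLKS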
}, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -252,7 +715,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -264,83 +727,71 @@ }, { "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.TH= READ))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_C= LK_UNHALTED.THREAD)) * (BR_MISP_RETIRED.ALL_BRANCHES * (12 * ( BR_MISP_RETI= RED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY ) / CPU_CLK_UNHALTED= .THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS= .ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_= CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.A= LL_BRANCHES", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", "MetricGroup": "Bad;BrMispredicts", "MetricName": "Branch_Misprediction_Cost" }, - { - "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (BR_MISP_RETIRED.ALL= _BRANCHES * (12 * ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + B= ACLEARS.ANY ) / CPU_CLK_UNHALTED.THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES += MACHINE_CLEARS.COUNT + BACLEARS.ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCL= ES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= 
LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES", - "MetricGroup": "Bad;BrMispredicts_SMT", - "MetricName": "Branch_Misprediction_Cost_SMT" - }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cp= u@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cpu@DTLB_STORE_MISSES.WAL= K_DURATION\\,cmask\\=3D1@ + 7 * ( DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_L= 
OAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) ) / CPU_CLK_UNHALT= ED.THREAD", + "MetricExpr": "(cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cpu= @DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cpu@DTLB_STORE_MISSES.WALK= _DURATION\\,cmask\\=3D1@ + 7 * (DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOA= D_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED)) / CORE_CLKS", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cp= u@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=3D1@ + cpu@DTLB_STORE_MISSES.WAL= K_DURATION\\,cmask\\=3D1@ + 7 * ( DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_L= OAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) ) / ( ( CPU_CLK_UN= HALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UN= HALTED.REF_XCLK ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -361,19 +812,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -391,26 +842,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) = / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 10000= 00000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." 
}, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -428,7 +879,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, --=20 2.38.0.rc1.362.ged0d419d3c-goog
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:21 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-8-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 07/22] perf vendor events: Update Intel broadwellx
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark, Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Events remain at v19, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Uncore event updates by Zhengjun Xing.
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a group named after their parent, allowing the children of a metric to be easily measured using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a sampling event.
- Metrics are written in terms of other metrics to reduce the expression size and increase readability.
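For example, with this update tma_fetch_bandwidth is written simply as
"tma_frontend_bound - tma_fetch_latency", and the six children of
tma_fetch_latency (tma_icache_misses, tma_itlb_misses, tma_branch_resteers,
tma_dsb_switches, tma_lcp and tma_ms_switches) all carry the
tma_fetch_latency_group group name. A hypothetical invocation, assuming a
perf binary built with these vendor files, would measure them as a unit:

  # Illustrative only: expand the tma_fetch_latency_group metric group
  # system-wide for one second.
  $ perf stat -M tma_fetch_latency_group -a sleep 1

perf resolves the group name to its member metrics and programs the events
that their expressions reference.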
- Latest metrics from: https://github.com/intel/perfmon-metrics Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../arch/x86/broadwellx/bdx-metrics.json | 965 +++++++++++------- .../arch/x86/broadwellx/uncore-cache.json | 10 +- .../x86/broadwellx/uncore-interconnect.json | 18 +- .../arch/x86/broadwellx/uncore-memory.json | 18 +- 4 files changed, 638 insertions(+), 373 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json b/t= ools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json index a3a15ee52841..e89fa536ca03 100644 --- a/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json @@ -1,64 +1,576 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. 
Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE= / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", + "MetricExpr": "ICACHE.IFDATA_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_D= URATION\\,cmask\\=3D1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage. 
", + "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers = / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears. ", + "MetricExpr": "MACHINE_CLEARS.COUNT * tma_branch_resteers / (BR_MI= SP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tm= a_clears_resteers", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). 
Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." 
+ "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURCE_STALLS.S= B) / (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UO= PS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_= GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else R= ESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. 
This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_= RETIRED.HIT_LFB_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISS= ES.WALK_DURATION\\,cmask\\=3D1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / CL= KS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. 
For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL= _STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES= _WITH_DEMAND_RFO) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.ST= ALLS_L2_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. 
Sample wi= th: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETI= RED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2= _MISS / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 = + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD= _UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOP= S_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_= LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE= _DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_R= ETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + = mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_U= OPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_= L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LO= AD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_D= RAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RET= IRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + = mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_U= OPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_= L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LO= AD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_D= RAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RET= IRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. 
Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "41 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + mem_load_= uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIR= ED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RE= TIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_L= 3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM= _LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMO= TE_FWD))) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS= _RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS))) * CYCLE_ACTIVITY.STA= LLS_L2_MISS / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). 
The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM * (= 1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LO= AD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_U= OPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + ME= M_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMO= TE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS= _RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_L3= _MISS_RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. 
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "310 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM * = (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_L= OAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_= UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + M= EM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REM= OTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MIS= S_RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. #link to NUMA article Sample = with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "(200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM *= (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_= LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD= _UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + = MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.RE= MOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MI= SS_RETIRED.REMOTE_FWD))) + 180 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD = * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM= _LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOA= D_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) += MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.R= EMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_M= ISS_RETIRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_UOPS_L3_MI= SS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+ "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+ "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+ "MetricName": "tma_store_latency",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+ "MetricExpr": "(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / CLKS",
+ "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+ "MetricName": "tma_false_sharing",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents rate of split store accesses",
+ "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS",
+ "MetricGroup": "TopdownL4;tma_store_bound_group",
+ "MetricName": "tma_split_stores",
+ "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+ "MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / CLKS",
+ "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+ "MetricName": "tma_dtlb_store",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were the bottleneck",
+ "MetricExpr": "tma_backend_bound - tma_memory_bound",
+ "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were the bottleneck.
Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLE= S_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else U= OPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_l= atency > 0.1) else RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - CYCLE_ACTIVIT= Y.STALLS_MEM_ANY) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=3D1@) / 2 i= f #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - RS_EVENTS.EMPTY_CYCLES if (tm= a_fetch_latency > 0.1) else 0) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). 
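The expression above leans on the chained if/else form of the metric grammar, which evaluates left to right the way Python's conditional expressions do. A toy rendering with stand-in numbers (this is one plausible reading of the grouping, and the event names are abbreviated):

    smt_on = False
    core_clks = 500_000.0
    stalls_total = 120_000.0        # CYCLE_ACTIVITY.STALLS_TOTAL
    rs_empty = 30_000.0             # RS_EVENTS.EMPTY_CYCLES
    uops_core_inv_cmask1 = 0.0      # cpu@UOPS_EXECUTED.CORE,inv,cmask=1@
    tma_fetch_latency = 0.15        # assumed value of the sibling metric
    numerator = (uops_core_inv_cmask1 / 2 if smt_on
                 else stalls_total - rs_empty if tma_fetch_latency > 0.1
                 else 0)
    tma_ports_utilized_0 = numerator / core_clks
    print(tma_ports_utilized_0)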
Long-latency instructions like divides may contribute to this metric.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / CORE_CLKS",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_1",
+ "PublicDescription": "This metric represents fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_2",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).
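The 1-uop and 2-uop buckets above are built by differencing cumulative cmask counts: CYCLES_GE_k counts cycles with at least k uops executed, so exactly-k cycles fall out by subtraction. A small sketch with invented counts:

    core_clks = 500_000.0
    ge1, ge2, ge3 = 400_000.0, 250_000.0, 90_000.0  # CYCLES_GE_{1,2,3}_UOPS_EXEC
    # "at least 1" minus "at least 2" leaves cycles with exactly 1 uop, etc.
    tma_ports_utilized_1 = (ge1 - ge2) / core_clks  # non-SMT form
    tma_ports_utilized_2 = (ge2 - ge3) / core_clks  # non-SMT form
    print(tma_ports_utilized_1, tma_ports_utilized_2)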
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ / 2) if #SM= T_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and 
Store-address; [ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3",
+ "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS",
+ "MetricGroup": "TopdownL6;tma_load_op_utilization_group",
+ "MetricName": "tma_port_3",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations",
+ "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS",
+ "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+ "MetricName": "tma_store_op_utilization",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DISPATCHED_PORT.PORT_4",
+ "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS",
+ "MetricGroup": "TopdownL6;tma_store_op_utilization_group",
+ "MetricName": "tma_port_4",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Sample with: UOPS_DISPATCHED_PORT.PORT_7",
+ "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS",
+ "MetricGroup": "TopdownL6;tma_store_op_utilization_group",
+ "MetricName": "tma_port_7",
+ "ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
- "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)",
- "MetricGroup": "TopdownL1",
- "MetricName": "Retiring",
- "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. "
+ "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS",
+ "MetricGroup": "TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.RETIRE_SLOTS",
+ "ScaleUnit": "100%"
},
{
- "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired.
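For orientation, the renamed tma_retiring above is simply retired uop slots over total issue slots; on this 4-wide pipeline SLOTS is four per core clock. A sketch with placeholder counts:

    core_clks = 500_000.0
    slots = 4 * core_clks             # the SLOTS metric defined further down
    retire_slots = 1_400_000.0        # UOPS_RETIRED.RETIRE_SLOTS
    tma_retiring = retire_slots / slots
    print(f"tma_retiring = {tma_retiring:.2%}")  # ScaleUnit 100% -> percentage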
SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "INST_RETIRED.X87 * UPI / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. 
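The x87 approximation above converts retired x87 instructions into a slot fraction by multiplying with uops-per-instruction. Hypothetical numbers:

    instr_retired = 1_200_000.0       # INST_RETIRED.ANY
    retire_slots = 1_400_000.0        # UOPS_RETIRED.RETIRE_SLOTS
    x87_instr = 2_000.0               # INST_RETIRED.X87
    upi = retire_slots / instr_retired  # the UPI metric
    tma_x87_use = x87_instr * upi / retire_slots
    print(tma_x87_use)  # equals x87_instr / instr_retired after cancellation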
See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. 
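The vector-width breakdown above partitions retired FP-arith uops into 128-bit and 256-bit buckets, each normalized by retired uop slots. With invented counts:

    retire_slots = 1_400_000.0
    packed_128 = 50_000.0   # 128B_PACKED_SINGLE + 128B_PACKED_DOUBLE
    packed_256 = 30_000.0   # 256B_PACKED_SINGLE + 256B_PACKED_DOUBLE
    tma_fp_vector_128b = packed_128 / retire_slots
    tma_fp_vector_256b = packed_256 / retire_slots
    # The two buckets sum to the parent tma_fp_vector by construction.
    tma_fp_vector = (packed_128 + packed_256) / retire_slots
    assert abs(tma_fp_vector_128b + tma_fp_vector_256b - tma_fp_vector) < 1e-12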
This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
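The MS fraction above scales IDQ.MS_UOPS by the fraction of issued uops that actually retire before normalizing to slots, so wasted speculative uops are not charged to the sequencer. A sketch with placeholder values:

    slots = 2_000_000.0
    uops_issued = 1_500_000.0         # UOPS_ISSUED.ANY
    retire_slots = 1_400_000.0        # UOPS_RETIRED.RETIRE_SLOTS
    ms_uops = 40_000.0                # IDQ.MS_UOPS
    retire_fraction = retire_slots / uops_issued
    tma_microcode_sequencer = retire_fraction * ms_uops / slots
    print(tma_microcode_sequencer)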
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -74,6 +586,12 @@ "MetricGroup": "Branches;Fed;FetchBW", "MetricName": "UpTB" }, + { + "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", + "MetricName": "CPI" + }, { "BriefDescription": "Per-Logical Processor actual clocks when the = Logical Processor is active.", "MetricExpr": "CPU_CLK_UNHALTED.THREAD", @@ -82,16 +600,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY", @@ -101,51 +613,32 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / = CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / CORE_C= LKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / = ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIV= E / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 
execution units (regardless of precision or vector-width)",
- "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
+ "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)) / (2 * CORE_CLKS)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "FP_Arith_Utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
- {
- "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). SMT version; use when SMT is enabled and measuring per logical CPU.",
- "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
- "MetricGroup": "Cor;Flops;HPC_SMT",
- "MetricName": "FP_Arith_Utilization_SMT",
- "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common). SMT version; use when SMT is enabled and measuring per logical CPU."
- }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ / 2 ) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((cpu@UOPS_EXECUTED.CORE\\,c= mask\\=3D1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -187,13 +680,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B= _PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACK= ED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -214,22 +707,22 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." 
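The reworked CORE_CLKS above folds the old _SMT variant into a single chained conditional, evaluated left to right the way Python's conditional expressions are. A stand-in rendering with hypothetical counts:

    core_wide, smt_on = 0, True       # placeholders for #core_wide and #SMT_on
    clks = 1_000_000.0                # CPU_CLK_UNHALTED.THREAD
    thread_any = 1_800_000.0          # CPU_CLK_UNHALTED.THREAD_ANY
    one_thread_active = 200_000.0     # CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
    ref_xclk = 400_000.0              # CPU_CLK_UNHALTED.REF_XCLK
    core_clks = ((clks / 2) * (1 + one_thread_active / ref_xclk)
                 if core_wide < 1
                 else thread_any / 2 if smt_on
                 else clks)
    print(core_clks)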
}, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -246,7 +739,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -258,83 +751,71 @@ }, { "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.TH= READ))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_C= LK_UNHALTED.THREAD)) * (BR_MISP_RETIRED.ALL_BRANCHES * (12 * ( BR_MISP_RETI= RED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY ) / CPU_CLK_UNHALTED= .THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS= .ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_= CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.A= LL_BRANCHES", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", "MetricGroup": "Bad;BrMispredicts", "MetricName": "Branch_Misprediction_Cost" }, - { - "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (BR_MISP_RETIRED.ALL= _BRANCHES * (12 * ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + B= ACLEARS.ANY ) / CPU_CLK_UNHALTED.THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES += MACHINE_CLEARS.COUNT + BACLEARS.ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCL= ES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= 
LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES", - "MetricGroup": "Bad;BrMispredicts_SMT", - "MetricName": "Branch_Misprediction_Cost_SMT" - }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7 * ( DTLB_STORE_MISSES.WALK_= COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) = ) / ( 2 
* CPU_CLK_UNHALTED.THREAD )", + "MetricExpr": "(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_= DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7 * (DTLB_STORE_MISSES.WALK_CO= MPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED)) / = (2 * CORE_CLKS)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7 * ( DTLB_STORE_MISSES.WALK_= COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) = ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_TH= READ_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -355,19 +836,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -385,26 +866,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) = / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 10000= 00000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." 
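The GFLOPs expression above weights each FP-arith event by the number of FLOPs a single uop of that kind performs. With placeholder counts over a 1-second window:

    duration_time = 1.0               # seconds
    scalar = 100_000.0                # SCALAR_SINGLE + SCALAR_DOUBLE (1 FLOP/uop)
    pd128 = 20_000.0                  # 128B_PACKED_DOUBLE (2 FLOPs/uop)
    ps128 = 10_000.0                  # 128B_PACKED_SINGLE (4 FLOPs/uop)
    pd256 = 10_000.0                  # 256B_PACKED_DOUBLE (4 FLOPs/uop)
    ps256 = 5_000.0                   # 256B_PACKED_SINGLE (8 FLOPs/uop)
    flops = 1 * scalar + 2 * pd128 + 4 * (ps128 + pd256) + 8 * ps256
    gflops = flops / 1e9 / duration_time
    print(f"GFLOPs = {gflops:.6f}")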
}, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -422,13 +903,13 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, { "BriefDescription": "Average latency of data read request to exter= nal memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches= ", - "MetricExpr": "1000000000 * ( cbox@event\\=3D0x36\\,umask\\=3D0x3\= \,filter_opc\\=3D0x182@ / cbox@event\\=3D0x35\\,umask\\=3D0x3\\,filter_opc\= \=3D0x182@ ) / ( cbox_0@event\\=3D0x0@ / duration_time )", + "MetricExpr": "1000000000 * (cbox@event\\=3D0x36\\,umask\\=3D0x3\\= ,filter_opc\\=3D0x182@ / cbox@event\\=3D0x35\\,umask\\=3D0x3\\,filter_opc\\= =3D0x182@) / (Socket_CLKS / duration_time)", "MetricGroup": "Mem;MemoryLat;SoC", "MetricName": "MEM_Read_Latency" }, @@ -444,12 +925,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "cbox_0@event\\=3D0x0@ / #num_dies / duration_time /= 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -498,20 +973,19 @@ "MetricGroup": "Power", "MetricName": "C7_Pkg_Residency" }, + { + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" + }, { "BriefDescription": "CPU operating frequency (in GHz)", - "MetricExpr": "( CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TS= C * #SYSTEM_TSC_FREQ ) / 1000000000", + "MetricExpr": "(( CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_T= SC * #SYSTEM_TSC_FREQ ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", "MetricExpr": "MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY", @@ -530,7 +1004,7 @@ "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= 
tructions", "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { @@ -558,7 +1032,7 @@ "BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { @@ -591,21 +1065,21 @@ }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) in nano seconds", - "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_OPCO= DE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_op= c\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( source_count(UNC_C_CLOCKTICKS) * #n= um_packages ) ) ) * duration_time", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_OPCO= DE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_op= c\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_packages * #num_p= ackages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to local m= emory in nano seconds", - "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_LOCA= L_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,fil= ter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( source_count(UNC_C_CLOCKTICKS= ) * #num_packages ) ) ) * duration_time", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_LOCA= L_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE= \\,filter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_packa= ges * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _local_requests", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to remote = memory in nano seconds", - "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_REMO= TE_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,fi= lter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( source_count(UNC_C_CLOCKTICK= S) * #num_packages ) ) ) * duration_time", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_REMO= TE_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCO= DE\\,filter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_pac= kages * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _remote_requests", "ScaleUnit": "1ns" @@ -640,21 +1114,21 @@ }, { "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", - "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_o= pc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182= @ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182@ )", + "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,fi= 
lter_opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_o= pc\\=3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=3D= 0x182@ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", - "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_o= pc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182= @ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182@ )", + "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,f= ilter_opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_= opc\\=3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\= =3D0x182@ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Uncore operating frequency in GHz", - "MetricExpr": "UNC_C_CLOCKTICKS / ( source_count(UNC_C_CLOCKTICKS)= * #num_packages ) / 1000000000", + "MetricExpr": "( UNC_C_CLOCKTICKS / ( #num_cores / #num_packages *= #num_packages ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "uncore_frequency", "ScaleUnit": "1GHz" @@ -663,7 +1137,7 @@ "BriefDescription": "Intel(R) Quick Path Interconnect (QPI) data t= ransmit bandwidth (MB/sec)", "MetricExpr": "( UNC_Q_TxL_FLITS_G0.DATA * 8 / 1000000) / duration= _time", "MetricGroup": "", - "MetricName": "qpi_data_transmit_bw_only_data", + "MetricName": "qpi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { @@ -691,245 +1165,42 @@ "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0x= 19e@ * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(( cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0= x1c8\\,filter_tid\\=3D0x3e@ + cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\= =3D0x180\\,filter_tid\\=3D0x3e@ ) * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", "MetricExpr": "100 * ( IDQ.DSB_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MITE_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_= mite", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode 
sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MS_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from loop stream detector(LSD)= as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( LSD.UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_loop_stream_detector_ls= d", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( (= CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREA= D ) ) ) )", - "MetricGroup": "TmaL1;PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOP= S_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on e= lse ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "Frontend;TmaL2;m_tma_frontend_bound_percent", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ICACHE.IFDATA_STALL / ( CPU_CLK_UNHALTED.TH= READ ) )", - "MetricGroup": "BigFoot;FetchLat;IcMiss;TmaL3;m_tma_fetch_latency_= percent", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ( 14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISS= ES.WALK_DURATION\\,cmask\\=3D0x1@ + 7 * ITLB_MISSES.WALK_COMPLETED ) / ( CP= U_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TmaL3;m_tma_fetch_laten= cy_percent", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. 
Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( ( 12 ) * ( BR_MISP_RETIRED.ALL_BRANCHES + M= ACHINE_CLEARS.COUNT + BACLEARS.ANY ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU_CL= K_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss;FetchLat;TmaL3;m_tma_fetch_latency_percent= ", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. #Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( ILD_STALL.LCP / ( CPU_CLK_UNHALTED.THREAD )= )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 2 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "FetchLat;MicroSeq;TmaL3;m_tma_fetch_latency_percen= t", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) ) - ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (= ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UN= HALTED.THREAD ) ) ) ) )", - "MetricGroup": "FetchBW;Frontend;TmaL2;m_tma_frontend_bound_percen= t", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. 
For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MI= TE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else = ( CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSBmiss;FetchBW;TmaL3;m_tma_fetch_bandwidth_percen= t", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB= _CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( = CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSB;FetchBW;TmaL3;m_tma_fetch_bandwidth_percent", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_S= LOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT= _MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 )= if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "100 * ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( ( UOPS_ISSUED.ANY - ( U= OPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 )= if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHAL= TED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. 
Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE= _SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else I= NT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2= ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( BR_MISP_RETIRED.= ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * = ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.= RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( = ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "BadSpec;MachineClears;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "100 * ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4= ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALT= ED.THREAD ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + (= 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECO= VERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_o= n else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_RETIRED.RETIRE_SLOTS ) = / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK= _UNHALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. 
This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "100 * ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURC= E_STALLS.SB ) / ( ( CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1= _UOP_EXEC - ( UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if ( ( INST_RETIRED.ANY /= ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_= EXEC ) - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYC= LES_0_UOPS_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if = #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RESOURCE_= STALLS.SB ) ) ) * ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( ( CPU= _CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) = ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( I= NT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES = ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU= _CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * = ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.TH= READ ) ) ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;m_tma_backend_bound_percent", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCL= E_ACTIVITY.STALLS_L1D_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. 
L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( MEM_LOAD_UOPS_RETIRED.L3_HIT / ( MEM_LOAD= _UOPS_RETIRED.L3_HIT + ( 7 ) * MEM_LOAD_UOPS_RETIRED.L3_MISS ) ) * CYCLE_AC= TIVITY.STALLS_L2_MISS / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( 1 - ( MEM_LOAD_UOPS_RETIRED.L3_HIT= / ( MEM_LOAD_UOPS_RETIRED.L3_HIT + ( 7 ) * MEM_LOAD_UOPS_RETIRED.L3_MISS )= ) ) * CYCLE_ACTIVITY.STALLS_L2_MISS / ( CPU_CLK_UNHALTED.THREAD ) ) , ( 1 = ) ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( RESOURCE_STALLS.SB / ( CPU_CLK_UNHALTED.THR= EAD ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. 
FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "100 * ( ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( (= 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHA= LTED.THREAD ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) += ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RE= COVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT= _on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_RETIRED.RETIRE_SLOTS = ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) ) ) - ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + RESO= URCE_STALLS.SB ) / ( ( CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_G= E_1_UOP_EXEC - ( UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if ( ( INST_RETIRED.AN= Y / ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else UOPS_EXECUTED.CYCLES_GE_2_UO= PS_EXEC ) - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.= CYCLES_0_UOPS_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RESOUR= CE_STALLS.SB ) ) ) * ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( ( = CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD= ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( = ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCL= ES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( = CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;Compute;m_tma_backend_bound_percent", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( ARITH.FPU_DIV_ACTIVE / ( ( CPU_CLK_UNHALTED= .THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) )", - "MetricGroup": "TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related). Two distinct categories can be attributed into this met= ric: (1) heavy data-dependency among contiguous instructions would manifest= in this metric - such cases are often referred to as low Instruction Level= Parallelism (ILP). (2) Contention on some hardware execution unit other th= an Divider. 
For example; when there are too many multiply operations.", - "MetricExpr": "100 * ( ( ( ( CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EX= ECUTED.CYCLES_GE_1_UOP_EXEC - ( UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if ( ( = INST_RETIRED.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else UOPS_EXECUTED= .CYCLES_GE_2_UOPS_EXEC ) - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * IDQ_UOPS= _NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.TH= READ_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) el= se 0 ) + RESOURCE_STALLS.SB ) ) - RESOURCE_STALLS.SB - CYCLE_ACTIVITY.STALL= S_MEM_ANY ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "PortsUtil;TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_ports_utilization_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "100 * ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) *= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.T= HREAD ) ) ) ) - ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_ISSUED.ANY ) * I= DQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on els= e ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired). 
Note t= his metric's value may exceed its parent due to use of \"Uops\" CountDomain= and FMA double-counting.", - "MetricExpr": "100 * ( ( INST_RETIRED.X87 * ( ( UOPS_RETIRED.RETIR= E_SLOTS ) / INST_RETIRED.ANY ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) + ( ( FP_A= RITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) / (= UOPS_RETIRED.RETIRE_SLOTS ) ) + ( min( ( ( FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRE= D.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / ( UOPS_= RETIRED.RETIRE_SLOTS ) ) , ( 1 ) ) ) )", - "MetricGroup": "HPC;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fp_arith_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_IS= SUED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). These cases can often b= e avoided.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_ISSU= ED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "MicroSeq;TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_microcode_sequencer_percent", + "MetricName": "percent_uops_delivered_from_loop_stream_detector", "ScaleUnit": "1%" } ] diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json b/= tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json index abee6f773c1f..449fa723d0aa 100644 --- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json +++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json @@ -947,21 +947,19 @@ "Unit": "CBO" }, { - "BriefDescription": "LLC misses - demand and prefetch data reads -= excludes LLC prefetches. Derived from unc_c_tor_inserts.miss_opcode", + "BriefDescription": "TOR Inserts; Miss Opcode Match", "Counter": "0,1,2,3", "EventCode": "0x35", - "EventName": "LLC_MISSES.DATA_READ", - "Filter": "filter_opc=3D0x182", + "EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0x3", "Unit": "CBO" }, { - "BriefDescription": "LLC misses - demand and prefetch data reads -= excludes LLC prefetches", + "BriefDescription": "LLC misses - demand and prefetch data reads -= excludes LLC prefetches. 
Derived from unc_c_tor_inserts.miss_opcode", "Counter": "0,1,2,3", "EventCode": "0x35", - "EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE", + "EventName": "LLC_MISSES.DATA_READ", "Filter": "filter_opc=3D0x182", "PerPkg": "1", "ScaleUnit": "64Bytes", diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.= json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json index 071ce45620d2..cb1916f52607 100644 --- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json +++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json @@ -685,36 +685,34 @@ "Unit": "QPI LL" }, { - "BriefDescription": "Number of data flits transmitted . Derived fr= om unc_q_txl_flits_g0.data", + "BriefDescription": "Flits Transferred - Group 0; Data Tx Flits", "Counter": "0,1,2,3", - "EventName": "QPI_DATA_BANDWIDTH_TX", + "EventName": "UNC_Q_TxL_FLITS_G0.DATA", "PerPkg": "1", - "ScaleUnit": "8Bytes", "UMask": "0x2", "Unit": "QPI LL" }, { - "BriefDescription": "Number of data flits transmitted ", + "BriefDescription": "Number of data flits transmitted . Derived fr= om unc_q_txl_flits_g0.data", "Counter": "0,1,2,3", - "EventName": "UNC_Q_TxL_FLITS_G0.DATA", + "EventName": "QPI_DATA_BANDWIDTH_TX", "PerPkg": "1", "ScaleUnit": "8Bytes", "UMask": "0x2", "Unit": "QPI LL" }, { - "BriefDescription": "Number of non data (control) flits transmitte= d . Derived from unc_q_txl_flits_g0.non_data", + "BriefDescription": "Flits Transferred - Group 0; Non-Data protoco= l Tx Flits", "Counter": "0,1,2,3", - "EventName": "QPI_CTL_BANDWIDTH_TX", + "EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA", "PerPkg": "1", - "ScaleUnit": "8Bytes", "UMask": "0x4", "Unit": "QPI LL" }, { - "BriefDescription": "Number of non data (control) flits transmitte= d ", + "BriefDescription": "Number of non data (control) flits transmitte= d . Derived from unc_q_txl_flits_g0.non_data", "Counter": "0,1,2,3", - "EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA", + "EventName": "QPI_CTL_BANDWIDTH_TX", "PerPkg": "1", "ScaleUnit": "8Bytes", "UMask": "0x4", diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json b= /tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json index 302e956a82ed..05fab7d2723e 100644 --- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json @@ -72,20 +72,19 @@ "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", + "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM Re= ads (RD_CAS + Underfills)", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_READ", + "EventName": "UNC_M_CAS_COUNT.RD", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0x3", "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller", + "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.RD", + "EventName": "LLC_MISSES.MEM_READ", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0x3", @@ -110,20 +109,19 @@ "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller. 
Derived = from unc_m_cas_count.wr", + "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR= _CAS (both Modes)", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_WRITE", + "EventName": "UNC_M_CAS_COUNT.WR", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0xC", "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller", + "BriefDescription": "write requests to memory controller. Derived = from unc_m_cas_count.wr", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.WR", + "EventName": "LLC_MISSES.MEM_WRITE", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0xC",
--=20
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:22 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-9-irogers@google.com>
Subject: [PATCH v2 08/22] perf vendor events: Update Intel cascadelakex
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark, Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Events remain at v1.16; the metrics are now based on TMA 4.4 (full).

Generated using the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Removal of ScaleUnit from uncore events, by Zhengjun Xing.
- Renaming of the topdown TMA metrics from Frontend_Bound to
  tma_frontend_bound.
- Dropping of the _SMT suffix metrics, as #SMT_On and #EBS_Mode are now
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, so all children of a metric can be
  measured together using the metric name with a _group suffix.
- Correct expansion of the ## and ##? operators.
- Addition of the locate-with column (naming a suitable sampling event)
  to the long description.
- Metrics written in terms of other metrics, to reduce expression size
  and increase readability.
- Latest metrics from: https://github.com/intel/perfmon-metrics

Tested with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/cascadelakex/clx-metrics.json   | 1285 ++++++++++-------
 .../arch/x86/cascadelakex/uncore-memory.json |   18 +-
 .../arch/x86/cascadelakex/uncore-other.json  |   10 +-
 3 files changed, 787 insertions(+), 526 deletions(-)
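For illustration (a sketch, assuming a Cascade Lake server running a perf
build that carries these metrics), a level-1 metric can be measured by
name, and all of its child metrics together via the parent-named group:

  $ perf stat -M tma_frontend_bound -a -- sleep 1
  $ perf stat -M tma_frontend_bound_group -a -- sleep 1

Metrics may also be referenced by name inside other metric expressions,
as in the tma_fetch_bandwidth expression
("tma_frontend_bound - tma_fetch_latency") in the diff below.

diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json b= /tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json index 46613504b816..81de1149297d 100644 --- a/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json @@ -1,148 +1,742 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend.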
Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE= / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. 
Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "(ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDAT= A_STALL\\,cmask\\=3D1\\,edge@) / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES /= CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. 
Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "9 * BACLEARS.ANY / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. 
Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." 
+ "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNH= ALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * = CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * ((I= NT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES))= / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUN= D_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + = tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) = * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. 
The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE= _ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. 
For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 11 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). 
Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+        "MetricExpr": "((44 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) + (44 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_contested_accesses",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. 
Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(44 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED= .XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - (OCR.DEMAND_DATA_RD.L3= _HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEM= AND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT /= MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(17 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT = * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_AC= TIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bo= und) - tma_pmm_bound)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. 
Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "(59.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIR= ED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "(127 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. 
#link to NUMA article Sample = with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "((89.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETI= RED.REMOTE_HITM + (89.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRED.REM= OTE_FWD) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) /= CLKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a", + "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM= * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM= _LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD= _RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_HITM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) = / ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM_LOAD_L3_MISS_RETIRED.LOCAL_D= RAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LO= AD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + (MEM_LOAD_R= ETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) + (25 * (MEM_LOAD_RETIRED.LOC= AL_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 33 *= (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS))))))) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CY= CLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma= _l2_bound)) if (1000000 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_R= ETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS) else 0)", + "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_memory_b= ound_group", + "MetricName": "tma_pmm_bound", + "PublicDescription": "This metric roughly estimates (based on idle= latencies) how often the CPU was stalled on accesses to external 3D-Xpoint= (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Mem= ory Module. ", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. 
Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((L2_RQSTS.RFO_HIT * 11 * (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "((110 * Average_Frequency) * (OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM + OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM) + (47.5 * Average_Frequency) * (OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE + OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. 
SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK= _UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK= _UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCL= ES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the TLB was missed by store accesses, hitting in the second-l= evel TLB (STLB)", + "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the STLB was missed by store accesses, performing a hardware page wal= k", + "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. 
Sample with: ARITH.DIVIDER_ACTIVE",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+        "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CLKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY)) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_ports_utilization",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_NONE / 2 if #SMT_on else CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_0",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
+        "MetricExpr": "PARTIAL_RAT_STALLS.SCOREBOARD / CLKS",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_0_group",
+        "MetricName": "tma_serializing_operation",
+        "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
+        "MetricExpr": "40 * ROB_MISC_EVENTS.PAUSE_INST / CLKS",
+        "MetricGroup": "TopdownL6;tma_serializing_operation_group",
+        "MetricName": "tma_slow_pause",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+        "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_0_group",
+        "MetricName": "tma_mixing_vectors",
+        "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic.",
+        "ScaleUnit": "100%"
+    },
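Several of these expressions branch on #SMT_on, as in tma_ports_utilized_0 above. A minimal Python sketch of how that conditional selects the numerator (the counts are invented for illustration only):

    # Hypothetical counter values; #SMT_on picks which numerator is used.
    smt_on = True
    UOPS_EXECUTED_CORE_CYCLES_NONE = 30_000  # core cycles with no uops executed
    STALLS_TOTAL = 25_000                    # CYCLE_ACTIVITY.STALLS_TOTAL
    STALLS_MEM_ANY = 12_000                  # CYCLE_ACTIVITY.STALLS_MEM_ANY
    CORE_CLKS = 100_000

    # With SMT, the core-level count is halved to get a per-thread value;
    # without SMT, non-memory stall cycles are used instead.
    numer = (UOPS_EXECUTED_CORE_CYCLES_NONE / 2 if smt_on
             else STALLS_TOTAL - STALLS_MEM_ANY)
    tma_ports_utilized_0 = numer / CORE_CLKS
    print(f"tma_ports_utilized_0 = {tma_ports_utilized_0:.2%}")  # 15.00% when smt_on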
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_1 - UOPS_EXECUTED.CORE_CYCLES_GE_2) / 2 if #SMT_on else EXE_ACTIVITY.1_PORTS_UTIL) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_1",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_2 - UOPS_EXECUTED.CORE_CYCLES_GE_3) / 2 if #SMT_on else EXE_ACTIVITY.2_PORTS_UTIL) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_2",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). 
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else= UOPS_EXECUTED.CORE_CYCLES_GE_3) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; 
[= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Samp= le with: UOPS_DISPATCHED_PORT.PORT_7", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. 
SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" }, { - "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALT= ED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * = CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETI= RED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES = / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts", - "MetricName": "Mispredictions" + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). 
Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+        "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD",
+        "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_x87_use",
+        "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_scalar",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_vector",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_128b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_256b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
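For the tma_fp_vector family above, the expression is a plain sum of the packed FP retirement events over retire slots. A minimal Python sketch with invented counts (and the FMA caveat from the descriptions):

    # Hypothetical retired-uop counts per FP_ARITH_INST_RETIRED sub-event.
    fp_packed = {
        "128B_PACKED_DOUBLE": 1_000, "128B_PACKED_SINGLE": 2_000,
        "256B_PACKED_DOUBLE": 3_000, "256B_PACKED_SINGLE": 4_000,
        "512B_PACKED_DOUBLE":   500, "512B_PACKED_SINGLE":   500,
    }
    RETIRE_SLOTS = 200_000  # UOPS_RETIRED.RETIRE_SLOTS

    tma_fp_vector = sum(fp_packed.values()) / RETIRE_SLOTS
    # An FMA counts as one uop but two FP operations, so this may overcount.
    print(f"tma_fp_vector = {tma_fp_vector:.2%}")  # 5.50% with the numbers above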
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_512b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+        "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_memory_operations",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
+        "MetricExpr": "tma_light_operations * UOPS_RETIRED.MACRO_FUSED / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_fused_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC are commonly used examples.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
+        "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_non_fused_branches",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+        "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / UOPS_RETIRED.RETIRE_SLOTS",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_nop_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+        "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches + tma_nop_instructions))",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_other_light_ops",
+        "ScaleUnit": "100%"
+    },
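The tma_heavy_operations expression that follows is an identity worth a worked example: RETIRE_SLOTS counts retired uops, so subtracting one slot per retired instruction (INST_RETIRED.ANY) leaves the extra uops of multi-uop instructions, and MACRO_FUSED is added back because a fused pair retires two instructions in a single slot. A minimal Python sketch with invented counts:

    # Hypothetical counts: slots beyond one-per-instruction indicate heavy ops.
    RETIRE_SLOTS = 120_000  # retired uops (slots)
    MACRO_FUSED  =  10_000  # fused pairs: one slot, two instructions
    INST_RETIRED = 110_000  # INST_RETIRED.ANY
    SLOTS        = 400_000  # pipeline slots in the window

    tma_heavy_operations = (RETIRE_SLOTS + MACRO_FUSED - INST_RETIRED) / SLOTS
    print(f"tma_heavy_operations = {tma_heavy_operations:.2%}")  # 5.00% here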
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY) / SLOTS",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+        "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
+        "MetricGroup": "TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_few_uops_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+        "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS",
+        "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_microcode_sequencer",
+        "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+        "MetricExpr": "100 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / SLOTS",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_assists",
+        "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. 
Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIR= ED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) = * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS= _NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts_SMT", - "MetricName": "Mispredictions_SMT" + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" }, { "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( ( (CYCL= E_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STA= LLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) -= (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB= _HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\= =3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MI= SS ) / CPU_CLK_UNHALTED.THREAD))) - ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETI= RED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOT= E_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (ME= 
M_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_L= OAD_RETIRED.L1_MISS) )) ) ) / ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRA= M * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( = (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (M= EM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_R= ETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_M= ISS) )) ) ) + ( 25 * ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) = ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYC= LE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNH= ALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HI= T / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_= LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_F= ULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY= .STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) if ( 1000000 * ( MEM_LOAD_= L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRE= D.L1_MISS ) else 0 ) ) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTI= VITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DE= LIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_C= LK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\= =3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #( (CYCLE_ACTIVITY.STALLS_L3_MISS / CP= U_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.= STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT *= ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOA= D_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MIS= S) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.ST= ALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))= ) - ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_L= OAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MIS= S_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1= _MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HI= TM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) / ( = ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_H= IT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_= DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM= _LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOA= D_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_= LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) + ( 25 * ( MEM_LOAD_= RETIRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * ( 1 + (MEM_LOAD_RETI= RED.FB_HIT / 
MEM_LOAD_RETIRED.L1_MISS) ) ) ) ) ) ) * (CYCLE_ACTIVITY.STALLS= _L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CY= CLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RET= IRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_R= ETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCL= E_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHA= LTED.THREAD))) ) if ( 1000000 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM= _LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) ) + ( (( = CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_U= NHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_O= N_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (U= OPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_= PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED= .CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.R= ECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (OFFCORE_REQUESTS_= BUFFER.SQ_FULL / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MI= SS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (ma= x( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU= _CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTI= VITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_POR= TS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE= _ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_N= OT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 = * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( ((L1D_= PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT ))= * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ / CPU_CLK_UNHALTED.THREAD) / #(= max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / C= PU_CLK_UNHALTED.THREAD , 0 )) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)= ) + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_= bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_a= ccesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + (tma_l1_= bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_= pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_= load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk= )) ", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "Memory_Bandwidth" }, - { - "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / 
CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_C= LK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STA= LLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( = 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_R= ETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) = )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) -= ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_R= ETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM = * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) / ( ( 1= 9 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT = / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRA= M * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LO= AD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_R= ETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOA= D_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) + ( 25 * ( MEM_LOAD_RET= IRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)= ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3= _MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE= _ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRE= D.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / = ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_A= CTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTE= D.THREAD))) ) if ( 1000000 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LO= AD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) / #((( CYCLE= _ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY= .STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_= ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( C= PU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / C= PU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVER= Y_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,= cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #( (CYCLE_ACTIVITY.STALLS_L3_MI= SS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L= 2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / 
MEM_LOAD_RETIRED.L1_MISS) )) / ( (= MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED= .L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTI= VITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.T= HREAD))) - ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 += (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD= _L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RET= IRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_R= ETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.RE= MOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) )= ) / ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIR= ED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED= .LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / = MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 = + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) + ( 25 * ( ME= M_LOAD_RETIRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRE= D.L1_MISS) ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * ( 1 + (MEM_LO= AD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) ) ) ) ) * (CYCLE_ACTIVITY= .STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MI= SS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_L= OAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_M= ISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * = (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_C= LK_UNHALTED.THREAD))) ) if ( 1000000 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PM= M + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) ) = + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CP= U_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * = ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) = * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_U= OPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU= _CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_= ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_= UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_= UNHALTED.REF_XCLK ) )))) ) * ( (( OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 ) / (= ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE= / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYC= LE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (max( ( CYC= LE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNH= ALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOU= ND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL = + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1= + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * E= XE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) 
* (1 - (IDQ_UOPS= _NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CL= K_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISS= UED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNH= ALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNH= ALTED.REF_XCLK ) )))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1= _MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D= 1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CY= CLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ", - "MetricGroup": "Mem;MemoryBW;Offcore_SMT", - "MetricName": "Memory_Bandwidth_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( ( (CYCL= E_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STA= LLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) -= (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB= _HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\= =3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MI= SS ) / CPU_CLK_UNHALTED.THREAD))) - ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETI= RED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOT= E_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (ME= M_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_L= OAD_RETIRED.L1_MISS) )) ) ) / ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRA= M * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( = (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (M= EM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_R= ETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_M= ISS) )) ) ) + ( 25 * ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) = ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYC= LE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNH= ALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HI= T / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_= LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_F= ULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY= .STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) if ( 1000000 * ( MEM_LOAD_= L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRE= D.L1_MISS ) else 0 ) ) / #((( 
CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTI= VITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DE= LIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_C= LK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD ) / C= PU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUES= TS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD)) / #= ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIV= ITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.TH= READ) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_= LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RET= IRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cm= ask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_= L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) - ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MI= SS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.= L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.= REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) = + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / = MEM_LOAD_RETIRED.L1_MISS) )) ) ) / ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOT= E_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10= * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT = / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1= + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED= .L1_MISS) )) ) ) + ( 25 * ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RE= TIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) + 33 * ( MEM_LOAD_L3_MISS_RETI= RED.REMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)= ) ) ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (= ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CL= K_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + = (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS= .FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACT= IVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) if ( 1000000 * ( MEM_= LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_R= ETIRED.L1_MISS ) else 0 ) ) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_= ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.= STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TO= TAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CL= K_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_ST= ORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD))= - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALT= ED.THREAD))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.= REF_TSC) * msr@tsc@ / 1000000000 / 
duration_time)) - (3.5 * ((CPU_CLK_UNHAL= TED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_t= ime)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOA= D_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STA= LLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) = + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD= _RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\= \=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_M= ISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EX= E_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY= .1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD))= * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_= UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.AN= Y + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) )", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) = + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bo= und + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contes= ted_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + (tma= _l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)))", "MetricGroup": "Mem;MemoryLat;Offcore", "MetricName": "Memory_Latency" }, - { - "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_C= LK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STA= LLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( = 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_R= ETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) = )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) -= ( ( ( 1 - ( ( 19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_R= ETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM = * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) 
)) ) ) / ( ( 1= 9 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT = / MEM_LOAD_RETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRA= M * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LO= AD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_R= ETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOA= D_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) + ( 25 * ( MEM_LOAD_RET= IRED.LOCAL_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)= ) ) + 33 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED= .FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3= _MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE= _ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRE= D.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / = ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_A= CTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTE= D.THREAD))) ) if ( 1000000 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LO= AD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) / #((( CYCLE= _ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY= .STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_= ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( C= PU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / C= PU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVER= Y_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_R= D ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE= _REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREA= D)) / #( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCL= E_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHA= LTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT= / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_L= OAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FU= LL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.= STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) - ( ( ( 1 - ( ( 19 * (MEM_LOA= D_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_R= ETIRED.L1_MISS) )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (M= EM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_R= ETIRED.REMOTE_FWD * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MI= SS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB= _HIT / MEM_LOAD_RETIRED.L1_MISS) )) ) ) / ( ( 19 * (MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_DRAM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) = )) + 10 * ( (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FW= D * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + 
(MEM_LO= AD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_= RETIRED.L1_MISS) )) ) ) + ( 25 * ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + (MEM_= LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ) ) + 33 * ( MEM_LOAD_L3_MI= SS_RETIRED.REMOTE_PMM * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L= 1_MISS) ) ) ) ) ) ) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THR= EAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) /= CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_R= ETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT *= ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PE= ND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CY= CLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) if ( 1000000 *= ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM= _LOAD_RETIRED.L1_MISS ) else 0 ) ) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS -= CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_AC= TIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.ST= ALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 *= ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTI= VE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACT= IVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_C= YCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_= UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( (20.= 5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000= 000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHAL= TED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * MEM_LOAD_RETIRED= .L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / = CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVI= TY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L= 2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (= MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED= .L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTI= VITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.T= HREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES= ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETI= RED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.= 2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVER= ED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.O= NE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 = * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) )))) ) )", - "MetricGroup": "Mem;MemoryLat;Offcore_SMT", - "MetricName": "Memory_Latency_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / 
(CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (max( (= CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK= _UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTI= VITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DE= LIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( 9 * c= pu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_LOAD_MISSES.WALK_ACTIVE = , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 )= ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYC= LE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_A= CTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.ST= ALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTA= L + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_= UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) -= ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED= .THREAD))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTL= B_STORE_MISSES.WALK_ACTIVE ) / CPU_CLK_UNHALTED.THREAD) / #(EXE_ACTIVITY.BO= UND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_= 4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_lo= ads + tma_store_fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bou= nd + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma= _dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_= store_latency))) ", "MetricGroup": "Mem;MemoryTLB;Offcore", "MetricName": "Memory_Data_TLBs" }, - { - "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - = CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYC= LE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVI= TY.STALLS_TOTAL + 
(EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS /= (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EX= E_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( (= CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE /= CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU= _CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (m= in( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_LOAD_MISSES.WAL= K_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_M= ISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_= ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) += ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_AC= TIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.ST= ALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 *= ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTI= VE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACT= IVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_C= YCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_= UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( 9 * = cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_STORE_MISSES.WALK_ACTI= VE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREA= D_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(EXE_ACTIVITY.BOUND_ON_STORES = / CPU_CLK_UNHALTED.THREAD) ) ) ", - "MetricGroup": "Mem;MemoryTLB;Offcore_SMT", - "MetricName": "Memory_Data_TLBs_SMT" - }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_= RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITI= ONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 = * CPU_CLK_UNHALTED.THREAD))", + "MetricExpr": "100 * ((BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_R= ETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITION= AL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead" }, - { - "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_= RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITI= ONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 = * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACT= IVE / CPU_CLK_UNHALTED.REF_XCLK ) )))", - "MetricGroup": "Ret_SMT", - "MetricName": "Branching_Overhead_SMT" - }, { "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", - "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / 
CPU_= CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDA= TA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS= .ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "Big_Code" }, - { - "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", - "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.O= NE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_ST= ALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACH= E_16B.IFDATA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 = * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.= CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + C= PU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))", - "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB_SMT", - "MetricName": "Big_Code_SMT" - }, { "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", - "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK= _UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE /= (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MIS= P_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_C= YCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UO= PS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) - (100 * (4 * IDQ_UOPS_NOT= _DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (I= CACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STA= LL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHA= LTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_U= OPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))= )", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", "MetricGroup": "Fed;FetchBW;Frontend", "MetricName": "Instruction_Fetch_BW" }, - { - "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", - "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DE= LIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.= ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL= _BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_= MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_D= ELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) = * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))= ) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( = ( 
CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNH= ALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STAL= L\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / = CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.O= NE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))", - "MetricGroup": "Fed;FetchBW;Frontend_SMT", - "MetricName": "Instruction_Fetch_BW_SMT" - }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -158,6 +752,12 @@ "MetricGroup": "Branches;Fed;FetchBW", "MetricName": "UpTB" }, + { + "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", + "MetricName": "CPI" + }, { "BriefDescription": "Per-Logical Processor actual clocks when the = Logical Processor is active.", "MetricExpr": "CPU_CLK_UNHALTED.THREAD", @@ -166,16 +766,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY", @@ -185,63 +779,38 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512= B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARI= TH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKE= D_SINGLE) / CORE_CLKS", + "MetricGroup": 
"Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512= B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.5= 12B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU= _CLK_UNHALTED.THREAD )", + "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_IN= ST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_= ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.51= 2B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)) / (2 * CORE_C= LKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." }, - { - "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width). SMT versi= on; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.5= 12B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * ( (= CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE /= CPU_CLK_UNHALTED.REF_XCLK ) ) )", - "MetricGroup": "Cor;Flops;HPC_SMT", - "MetricName": "FP_Arith_Utilization_SMT", - "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n). SMT version; use when SMT is enabled and measuring per logical CPU." 
- }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", - "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU= _CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES )= / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE= _ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.= 1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) = * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_U= OPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY= + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)))) / ((EX= E_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.R= ETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL)) = / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STAL= LS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTI= L + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIV= ITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (IDQ_UOPS_NOT_DELIVER= ED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC= .RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.ST= ALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTA= L + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_= UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) -= ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED= .THREAD)))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL= + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVI= TY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( C= YCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_AC= TIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.TH= READ)) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1 ) if = 0 > 0.5 else 0", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", "MetricGroup": "Cor;SMT", "MetricName": "Core_Bound_Likely" }, - { - "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", - "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( (= CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE /= CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU= _CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ((( CYC= LE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVI= TY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS /= (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + 
CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EX= E_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( (= CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE /= CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU= _CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))) / ((EXE= _ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTE= D.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORT= S_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTI= VITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_= PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) ))) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) ))))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACT= IVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED= .THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED= .REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if = ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STA= LLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOT= S / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THR= EAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) /= CPU_CLK_UNHALTED.THREAD) else 1 ) if (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTI= VE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 )) > 0.5 else 0", - "MetricGroup": "Cor;SMT_SMT", - "MetricName": "Core_Bound_Likely_SMT" - }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else= CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -283,13 +852,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": 
"INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.25= 6B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARI= TH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SI= NGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SIN= GLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -310,21 +879,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." 
}, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." @@ -336,9 +905,9 @@ "MetricName": "IpSWPF" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -373,16 +942,10 @@ }, { "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", - "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (DSB2MITE_SWITCHES.PENALTY_CYC= LES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS= _DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + ((IDQ_UOPS_NOT_DELIVERED.COR= E / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) * (( IDQ.ALL_MITE_CYCLES_A= NY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / CPU_CLK_UNHALTED.THREAD / 2) / #((= IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOP= S_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) = )", + "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_= branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + = tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tm= a_mite))", "MetricGroup": "DSBmiss;Fed", "MetricName": "DSB_Misses" }, - { - "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", - "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (DSB2MITE_SWITCHES.P= ENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYC= LES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_= CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + ((IDQ_UO= PS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_= CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ= _UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.TH= READ / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.RE= F_XCLK ) )))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOP= S ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) / 2) / #((IDQ_UOPS_NOT_DELIVERED.CO= RE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_TH= READ_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED= .CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + = CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / 
CPU_CLK_UNHALTED.REF_XCLK ) )))) )", - "MetricGroup": "DSBmiss;Fed_SMT", - "MetricName": "DSB_Misses_SMT" - }, { "BriefDescription": "Number of Instructions per non-speculative DS= B miss (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS", @@ -397,16 +960,10 @@ }, { "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.TH= READ))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_C= LK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.A= LL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU= _CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CO= RE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_= MISP_RETIRED.ALL_BRANCHES", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", "MetricGroup": "Bad;BrMispredicts", "MetricName": "Branch_Misprediction_Cost" }, - { - "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", - "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIR= ED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.AL= L_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT= _MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_= DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 )= * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )= )) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_= THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCH= ES", - "MetricGroup": "Bad;BrMispredicts_SMT", - "MetricName": "Branch_Misprediction_Cost_SMT" - }, { "BriefDescription": "Fraction of branches that are non-taken condi= tionals", "MetricExpr": "BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRA= NCHES", @@ -415,101 +972,95 @@ }, { "BriefDescription": "Fraction of branches that are taken condition= als", - "MetricExpr": "( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT= _TAKEN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_= TAKEN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches;CodeGen;PGO", "MetricName": "Cond_TK" }, { "BriefDescription": "Fraction of branches that are CALL or RET", - "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_= RETURN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_R= ETURN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": 
"CallRet" }, { "BriefDescription": "Fraction of branches that are unconditional (= direct or indirect) jumps", - "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CON= DITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) / B= R_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.COND= ITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_= INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": "Jump" }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS = + MEM_LOAD_RETIRED.FB_HIT )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS += MEM_LOAD_RETIRED.FB_HIT)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI_Load" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": 
"CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "FB_HPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CPU_C= LK_UNHALTED.THREAD )", + "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING) / (2 * CORE_CLK= S)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * ( ( C= PU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / C= PU_CLK_UNHALTED.REF_XCLK ) ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -536,37 +1087,37 @@ }, { "BriefDescription": "Rate of silent evictions from the L2 cache pe= r Kilo instruction where the evicted lines are dropped (no writeback to L3 = or memory)", - "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_Silent_PKI" }, { "BriefDescription": "Rate of non silent evictions from the L2 cach= e per Kilo instruction", - "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_NonSilent_PKI" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", - "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / = duration_time)", + "MetricExpr": "L3_Cache_Access_BW", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "L3_Cache_Access_BW_1T" }, @@ -578,68 +1129,47 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": 
"(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE = + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.5= 12B_PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACK= ED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for baseline license level 0", - "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.TH= READ", + "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / CORE_CLKS if #S= MT_on else CORE_POWER.LVL0_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License0_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for baseline license level 0. This includes non= -AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes." }, - { - "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for baseline license level 0. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNH= ALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNH= ALTED.REF_XCLK ) )", - "MetricGroup": "Power_SMT", - "MetricName": "Power_License0_Utilization_SMT", - "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for baseline license level 0. This includes non= -AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes. SMT versio= n; use when SMT is enabled and measuring per logical CPU." 
- }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 1", - "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.TH= READ", + "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / CORE_CLKS if #S= MT_on else CORE_POWER.LVL1_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License1_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 1. This includes high current= AVX 256-bit instructions as well as low current AVX 512-bit instructions." }, - { - "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 1. SMT version; use when SMT is= enabled and measuring per logical CPU.", - "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNH= ALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNH= ALTED.REF_XCLK ) )", - "MetricGroup": "Power_SMT", - "MetricName": "Power_License1_Utilization_SMT", - "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 1. This includes high current= AVX 256-bit instructions as well as low current AVX 512-bit instructions. = SMT version; use when SMT is enabled and measuring per logical CPU." - }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 2 (introduced in SKX)", - "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.TH= READ", + "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / CORE_CLKS if #S= MT_on else CORE_POWER.LVL2_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License2_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 2 (introduced in SKX). This i= ncludes high current AVX 512-bit instructions." }, - { - "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 2 (introduced in SKX). SMT vers= ion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNH= ALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNH= ALTED.REF_XCLK ) )", - "MetricGroup": "Power_SMT", - "MetricName": "Power_License2_Utilization_SMT", - "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 2 (introduced in SKX). This i= ncludes high current AVX 512-bit instructions. SMT version; use when SMT is= enabled and measuring per logical CPU." 
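The replacement expressions above fold each removed _SMT metric into a single "... if #SMT_on else ..." conditional. A rough sketch of how such a conditional evaluates, with a hypothetical license_utilization helper and placeholder counts:

  # CORE_POWER.LVLn_TURBO_LICENSE / 2 / CORE_CLKS if #SMT_on
  # else CORE_POWER.LVLn_TURBO_LICENSE / CORE_CLKS
  def license_utilization(license_cycles, core_clks, smt_on):
      if smt_on:
          return license_cycles / 2 / core_clks  # halve the per-thread count
      return license_cycles / core_clks

  print(license_utilization(1.2e9, 2.0e9, smt_on=False))  # 0.6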
- }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -657,13 +1187,13 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, { "BriefDescription": "Average latency of data read request to exter= nal memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches= ", - "MetricExpr": "1000000000 * ( cha@event\\=3D0x36\\,umask\\=3D0x21\= \,config\\=3D0x40433@ / cha@event\\=3D0x35\\,umask\\=3D0x21\\,config\\=3D0x= 40433@ ) / ( cha_0@event\\=3D0x0@ / duration_time )", + "MetricExpr": "1000000000 * (cha@event\\=3D0x36\\,umask\\=3D0x21\\= ,config\\=3D0x40433@ / cha@event\\=3D0x35\\,umask\\=3D0x21\\,config\\=3D0x4= 0433@) / (Socket_CLKS / duration_time)", "MetricGroup": "Mem;MemoryLat;SoC", "MetricName": "MEM_Read_Latency" }, @@ -675,38 +1205,38 @@ }, { "BriefDescription": "Average latency of data read request to exter= nal 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2= data-read prefetches", - "MetricExpr": "( 1000000000 * ( imc@event\\=3D0xe0\\,umask\\=3D0x1= @ / imc@event\\=3D0xe3@ ) / imc_0@event\\=3D0x0@ )", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": "(1000000000 * (imc@event\\=3D0xe0\\,umask\\=3D0x1@ = / imc@event\\=3D0xe3@) / imc_0@event\\=3D0x0@)", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_PMM_Read_Latency" }, { "BriefDescription": "Average latency of data read request to exter= nal DRAM memory [in nanoseconds]. 
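The read-latency metrics in this hunk share one shape: queue occupancy over inserts gives average cycles per request, normalized by a clock rate. A hedged sketch of that arithmetic; read_latency_ns and all input values are illustrative only, not a perf API:

  def read_latency_ns(occupancy, inserts, clk_count, seconds):
      cycles_per_request = occupancy / inserts  # Little's-law style average
      clk_hz = clk_count / seconds              # e.g. Socket_CLKS / duration_time
      return 1e9 * cycles_per_request / clk_hz

  print(read_latency_ns(5.0e8, 5.0e6, 2.0e9, 1.0))  # 50.0 ns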
Accounts for demand loads and L1/L2 data-= read prefetches", - "MetricExpr": "1000000000 * ( UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSE= RTS ) / imc_0@event\\=3D0x0@", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": "1000000000 * (UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSER= TS) / imc_0@event\\=3D0x0@", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_DRAM_Read_Latency" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [= GB / sec]", - "MetricExpr": "( ( 64 * imc@event\\=3D0xe3@ / 1000000000 ) / durat= ion_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * imc@event\\=3D0xe3@ / 1000000000) / duration= _time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Read_BW" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes = [GB / sec]", - "MetricExpr": "( ( 64 * imc@event\\=3D0xe7@ / 1000000000 ) / durat= ion_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * imc@event\\=3D0xe7@ / 1000000000) / duration= _time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Write_BW" }, { "BriefDescription": "Average IO (network or disk) Bandwidth Use fo= r Writes [GB / sec]", - "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_= DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + U= NC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000000 / duration_time", - "MetricGroup": "IoBW;Mem;SoC;Server", + "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_D= ATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UN= C_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1000000000 / duration_time", + "MetricGroup": "IoBW;Mem;Server;SoC", "MetricName": "IO_Write_BW" }, { "BriefDescription": "Average IO (network or disk) Bandwidth Use fo= r Reads [GB / sec]", - "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO= _DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 = + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 ) * 4 / 1000000000 / duration_tim= e", - "MetricGroup": "IoBW;Mem;SoC;Server", + "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_= DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 += UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3) * 4 / 1000000000 / duration_time", + "MetricGroup": "IoBW;Mem;Server;SoC", "MetricName": "IO_Read_BW" }, { @@ -715,12 +1245,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "cha_0@event\\=3D0x0@ / #num_dies / duration_time / = 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -770,26 +1294,18 @@ "MetricName": "C7_Pkg_Residency" }, { - "BriefDescription": "Percentage of time spent in the active CPU po= wer state C0", - "MetricExpr": "100 * CPU_CLK_UNHALTED.REF_TSC / TSC", - "MetricGroup": "", - "MetricName": "cpu_utilization_percent", - "ScaleUnit": "1%" + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" }, { "BriefDescription": "CPU operating frequency (in GHz)", - "MetricExpr": "( CPU_CLK_UNHALTED.THREAD / 
CPU_CLK_UNHALTED.REF_TS= C * #SYSTEM_TSC_FREQ ) / 1000000000", + "MetricExpr": "(( CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_T= SC * #SYSTEM_TSC_FREQ ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY", @@ -808,7 +1324,7 @@ "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { @@ -836,7 +1352,7 @@ "BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { @@ -869,21 +1385,21 @@ }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) in nano seconds", - "MetricExpr": "( ( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_mis= s\\,config1\\=3D0x4043300000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config= 1\\=3D0x4043300000000@ ) / ( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CL= OCKTICKS) * #num_packages ) ) ) * duration_time )", + "MetricExpr": "( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043300000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043300000000@ ) / ( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOC= KTICKS) * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to local m= emory in nano seconds", - "MetricExpr": "( ( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_mis= s\\,config1\\=3D0x4043200000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config= 1\\=3D0x4043200000000@ ) / ( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CL= OCKTICKS) * #num_packages ) ) ) * duration_time )", + "MetricExpr": "( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043200000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043200000000@ ) / ( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOC= KTICKS) * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _local_requests", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to remote = memory in nano seconds", - "MetricExpr": "( ( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_mis= s\\,config1\\=3D0x4043100000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config= 1\\=3D0x4043100000000@ ) / ( UNC_CHA_CLOCKTICKS / ( 
source_count(UNC_CHA_CL= OCKTICKS) * #num_packages ) ) ) * duration_time )", + "MetricExpr": "( 1000000000 * ( cha@unc_cha_tor_occupancy.ia_miss\= \,config1\\=3D0x4043100000000@ / cha@unc_cha_tor_inserts.ia_miss\\,config1\= \=3D0x4043100000000@ ) / ( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOC= KTICKS) * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _remote_requests", "ScaleUnit": "1ns" @@ -892,54 +1408,54 @@ "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by a code fetch to the total number of completed ins= tructions. This implies it missed in the ITLB (Instruction TLB) and further= levels of TLB.", "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "itlb_2nd_level_mpi", + "MetricName": "itlb_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total n= umber of completed instructions. This implies it missed in the Instruction = Translation Lookaside Buffer (ITLB) and further levels of TLB.", "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY= ", "MetricGroup": "", - "MetricName": "itlb_2nd_level_large_page_mpi", + "MetricName": "itlb_large_page_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data loads to the total number of complete= d instructions. This implies it missed in the DTLB and further levels of TL= B.", "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_load_mpi", + "MetricName": "dtlb_load_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = 2 megabyte page sizes) caused by demand data loads to the total number of c= ompleted instructions. This implies it missed in the Data Translation Looka= side Buffer (DTLB) and further levels of TLB.", "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRE= D.ANY", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_2mb_large_page_load_mpi", + "MetricName": "dtlb_2mb_large_page_load_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data stores to the total number of complet= ed instructions. 
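The *_mpi renames in this and the surrounding hunks are all the same per-instruction ratio; scaling by 1000 gives the MPKI metrics seen earlier in the file. A small sketch with placeholder counts:

  def mpi(misses, instructions):
      return misses / instructions

  print(mpi(2.0e6, 1.0e9))         # 0.002 per_instr
  print(1000 * mpi(2.0e6, 1.0e9))  # 2.0 when expressed per kilo-instruction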
This implies it missed in the DTLB and further levels of T= LB.", "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY= ", "MetricGroup": "", - "MetricName": "dtlb_2nd_level_store_mpi", + "MetricName": "dtlb_store_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", "MetricExpr": "100 * cha@unc_cha_tor_inserts.ia_miss\\,config1\\= =3D0x4043200000000@ / ( cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x404= 3200000000@ + cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x4043100000000= @ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", "MetricExpr": "100 * cha@unc_cha_tor_inserts.ia_miss\\,config1\\= =3D0x4043100000000@ / ( cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x404= 3200000000@ + cha@unc_cha_tor_inserts.ia_miss\\,config1\\=3D0x4043100000000= @ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Uncore operating frequency in GHz", - "MetricExpr": "UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOCKTI= CKS) * #num_packages ) / 1000000000", + "MetricExpr": "( UNC_CHA_CLOCKTICKS / ( source_count(UNC_CHA_CLOCK= TICKS) * #num_packages ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "uncore_frequency", "ScaleUnit": "1GHz" @@ -948,7 +1464,7 @@ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data t= ransmit bandwidth (MB/sec)", "MetricExpr": "( UNC_UPI_TxL_FLITS.ALL_DATA * (64 / 9.0) / 1000000= ) / duration_time", "MetricGroup": "", - "MetricName": "upi_data_transmit_bw_only_data", + "MetricName": "upi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { @@ -997,35 +1513,35 @@ "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "(( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO= _DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + = UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(( UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0 + UNC_I= IO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PA= RT2 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3 ) * 4 / 1000000) / duration_= time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", "MetricExpr": "100 * ( IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_UO= PS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { 
"BriefDescription": "Uops delivered from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MITE_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_U= OPS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_= mite", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MS_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_UOP= S + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { @@ -1050,255 +1566,10 @@ "ScaleUnit": "1MB/s" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( (= CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREA= D ) ) ) )", - "MetricGroup": "TmaL1;PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. 
In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOP= S_DELIV.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on e= lse ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "Frontend;TmaL2;m_tma_frontend_bound_percent", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_= 16B.IFDATA_STALL\\,cmask\\=3D0x1\\,edge\\=3D0x1@ ) / ( CPU_CLK_UNHALTED.THR= EAD ) )", - "MetricGroup": "BigFoot;FetchLat;IcMiss;TmaL3;m_tma_fetch_latency_= percent", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ICACHE_64B.IFTAG_STALL / ( CPU_CLK_UNHALTED= .THREAD ) )", - "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TmaL3;m_tma_fetch_laten= cy_percent", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( INT_MISC.CLEAR_RESTEER_CYCLES / ( CPU_CLK_U= NHALTED.THREAD ) + ( ( 9 ) * BACLEARS.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) )= ", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU_CL= K_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss;FetchLat;TmaL3;m_tma_fetch_latency_percent= ", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. #Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( ILD_STALL.LCP / ( CPU_CLK_UNHALTED.THREAD )= )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. 
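Several of these fetch-latency children are plain cycle fractions with a fixed penalty charged per event; a sketch using the 2-cycle MS-switch cost from the expression below (counts are made up):

  # 100 * ( 2 * IDQ.MS_SWITCHES / CPU_CLK_UNHALTED.THREAD )
  def ms_switches_percent(ms_switches, clks, penalty=2):
      return 100 * penalty * ms_switches / clks

  print(ms_switches_percent(5.0e6, 2.0e9))  # 0.5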
Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 2 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "FetchLat;MicroSeq;TmaL3;m_tma_fetch_latency_percen= t", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) ) - ( ( 4 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (= ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UN= HALTED.THREAD ) ) ) ) )", - "MetricGroup": "FetchBW;Frontend;TmaL2;m_tma_frontend_bound_percen= t", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MI= TE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else = ( CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSBmiss;FetchBW;TmaL3;m_tma_fetch_bandwidth_percen= t", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB= _CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( = CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSB;FetchBW;TmaL3;m_tma_fetch_bandwidth_percent", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. 
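A rough sketch of the bad-speculation slot count that this category and its two children build on; values are illustrative, and upstream the recovery cycles are already halved when SMT is on:

  # ( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS
  #   + 4 * INT_MISC.RECOVERY_CYCLES ) / ( 4 * CORE_CLKS )
  def bad_speculation(issued, retired, recovery_cycles, core_clks):
      slots = 4 * core_clks
      return (issued - retired + 4 * recovery_cycles) / slots

  print(bad_speculation(6.0e9, 5.0e9, 1.0e8, 2.0e9))  # 0.175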
Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_S= LOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT= _MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 )= if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "100 * ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( ( UOPS_ISSUED.ANY - ( U= OPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 )= if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHAL= TED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE= _SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else I= NT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2= ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( BR_MISP_RETIRED.= ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * = ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.= RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( = ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "BadSpec;MachineClears;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. 
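The four level-1 categories are constructed to sum to one, which the backend-bound expression below makes explicit by subtracting the other shares from 1. A sketch of that identity with toy counter values (tma_level1 is an invented helper):

  def tma_level1(not_delivered, issued, retired, recovery, core_clks):
      slots = 4 * core_clks
      frontend = not_delivered / slots
      bad_spec = (issued - retired + 4 * recovery) / slots
      retiring = retired / slots
      backend = 1 - frontend - (issued + 4 * recovery) / slots
      return frontend, bad_spec, retiring, backend

  print(sum(tma_level1(1.0e9, 6.0e9, 5.0e9, 1.0e8, 2.0e9)))  # 1.0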
Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "100 * ( 1 - ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_= ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_= CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) )= ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "100 * ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACT= IVITY.BOUND_ON_STORES ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_= PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALT= ED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * EXE= _ACTIVITY.2_PORTS_UTIL ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( 1 - ( IDQ_U= OPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 )= * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY= _CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on el= se ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;m_tma_backend_bound_percent", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCL= E_ACTIVITY.STALLS_L1D_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. 
L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_L= OAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( MEM_LOAD_RETI= RED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D0x1@ ) ) * ( ( CYCLE_ACTIVIT= Y.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.TH= READ ) ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACT= IVITY.STALLS_L3_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( ( CYCLE_ACTIVITY.STALLS_L3_MISS / = ( CPU_CLK_UNHALTED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RE= TIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS= ) ) ) ) / ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / = ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D= 0x1@ ) ) * ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MI= SS ) / ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( min( ( ( ( ( 1 - ( ( ( 19 * ( = MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( = MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_= DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) )= + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT = / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HI= TM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) )= ) / ( ( 19 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RET= IRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MIS= S_RETIRED.LOCAL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED= .L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD= _RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_R= ETIRED.REMOTE_HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L= 1_MISS ) ) ) ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LO= AD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOA= D_L3_MISS_RETIRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD= _RETIRED.L1_MISS ) ) ) ) ) ) ) ) ) ) * ( CYCLE_ACTIVITY.STALLS_L3_MISS / ( = CPU_CLK_UNHALTED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTI= VITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RETI= RED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) / ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( = MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 
cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D0x= 1@ ) ) * ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS= ) / ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1= _MISS ) else 0 ) ) , ( 1 ) ) ) ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Memo= ry Module. ", - "MetricExpr": "100 * ( min( ( ( ( ( 1 - ( ( ( 19 * ( MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + = ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD= _L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_= RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + ( = MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) ) / ( ( 19 *= ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT /= ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOC= AL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) = ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_H= IT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE= _HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) = ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB= _HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOAD_L3_MISS_RET= IRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_M= ISS ) ) ) ) ) ) ) ) ) ) * ( CYCLE_ACTIVITY.STALLS_L3_MISS / ( CPU_CLK_UNHAL= TED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L= 2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RETIRED.L2_HIT * = ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( = MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D0x1@ ) ) * ( ( = CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CL= K_UNHALTED.THREAD ) ) ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else = 0 ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound;Server;TmaL3mem;TmaL3;m_tma_memory_bou= nd_percent", - "MetricName": "tma_pmm_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( EXE_ACTIVITY.BOUND_ON_STORES / ( CPU_CLK_UN= HALTED.THREAD ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. 
Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "100 * ( ( 1 - ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4= ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALT= ED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLE= S_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CP= U_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD )= ) ) ) - ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES= ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_PORTS_UTIL + ( ( UOPS= _RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) i= f #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * EXE_ACTIVITY.2_PORTS_UTI= L ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( 1 - ( IDQ_UOPS_NOT_DELIVERED.COR= E / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) - ( UOPS_ISSUED.ANY + ( 4 ) * ( ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;Compute;m_tma_backend_bound_percent", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( ARITH.DIVIDER_ACTIVE / ( CPU_CLK_UNHALTED.T= HREAD ) )", - "MetricGroup": "TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related). Two distinct categories can be attributed into this met= ric: (1) heavy data-dependency among contiguous instructions would manifest= in this metric - such cases are often referred to as low Instruction Level= Parallelism (ILP). (2) Contention on some hardware execution unit other th= an Divider. For example; when there are too many multiply operations.", - "MetricExpr": "100 * ( ( EXE_ACTIVITY.EXE_BOUND_0_PORTS + ( EXE_AC= TIVITY.1_PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_C= LK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) = ) ) * EXE_ACTIVITY.2_PORTS_UTIL ) ) / ( CPU_CLK_UNHALTED.THREAD ) if ( ARIT= H.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_ME= M_ANY ) ) else ( EXE_ACTIVITY.1_PORTS_UTIL + ( ( UOPS_RETIRED.RETIRE_SLOTS = ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) * EXE_ACTIVITY.2_PORTS_UTIL ) / ( CPU_CLK_UNHALT= ED.THREAD ) )", - "MetricGroup": "PortsUtil;TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_ports_utilization_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. 
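A sketch of the retiring split used by the next several metrics: heavy operations are approximated from retire slots, macro-fused uops and instruction counts (function name and input values are hypothetical):

  # heavy = ( RETIRE_SLOTS + MACRO_FUSED - INST_RETIRED.ANY ) / slots
  def retiring_split(retire_slots, macro_fused, instructions, core_clks):
      slots = 4 * core_clks
      retiring = retire_slots / slots
      heavy = (retire_slots + macro_fused - instructions) / slots
      return retiring - heavy, heavy  # (light, heavy)

  print(retiring_split(6.0e9, 5.0e8, 5.5e9, 2.0e9))  # (0.625, 0.125)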
Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "100 * ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) *= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.T= HREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSE= D - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired). 
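The FP-arith fraction below reuses the element-count weighting of the GFLOPs metric earlier in the file; a sketch with invented event counts, where scalar ops count one FLOP and packed events are weighted by vector width:

  def flops(scalar, pk128d, pk128s, pk256d, pk256s, pk512d, pk512s):
      return (1 * scalar + 2 * pk128d + 4 * (pk128s + pk256d)
              + 8 * (pk256s + pk512d) + 16 * pk512s)

  print(flops(1.0e9, 0, 2.5e8, 0, 0, 0, 0) / 1e9)  # 2.0 GFLOP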
Note t= his metric's value may exceed its parent due to use of \"Uops\" CountDomain= and FMA double-counting.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD ) + ( ( FP_ARITH= _INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) / ( UOP= S_RETIRED.RETIRE_SLOTS ) ) + ( min( ( ( FP_ARITH_INST_RETIRED.128B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.25= 6B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST= _RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / = ( UOPS_RETIRED.RETIRE_SLOTS ) ) , ( 1 ) ) ) )", - "MetricGroup": "HPC;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fp_arith_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * MEM_INST_RETIRED.ANY = / INST_RETIRED.ANY )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_memory_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC= are commonly used examples.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * UOPS_RETIRED.MACRO_FU= SED / ( UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fused_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused. Non-conditi= onal branches like direct JMP or CALL would count here. Can be used to exam= ine fusible conditional jumps that were not fused.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * ( BR_INST_RETIRED.ALL= _BRANCHES - UOPS_RETIRED.MACRO_FUSED ) / ( UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_non_fused_branches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions. Compilers often use NOPs f= or certain address alignments - e.g. 
start address of a function or loop bo= dy.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED= .THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FU= SED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) = if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) * INST_RETIRED.NOP / ( = UOPS_RETIRED.RETIRE_SLOTS ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_nop_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", - "MetricExpr": "100 * ( max( 0 , ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) = / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK= _UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED= .MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_A= NY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) - ( ( ( ( ( UO= PS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 )= if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) * UOPS_EXECUTED.X87 / UO= PS_EXECUTED.THREAD ) + ( ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) + ( min( ( ( = FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKE= D_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED= .256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_I= NST_RETIRED.512B_PACKED_SINGLE ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) , ( 1 ) = ) ) ) + ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTE= D.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( = ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY= ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_= CLK_UNHALTED.THREAD ) ) ) ) ) * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY ) += ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_= RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( = ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) ) * UOPS_RETIRED.MACRO_FUSED / ( UOPS_RETIRED.RETIRE_S= LOTS ) ) + ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHA= LTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - (= ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.= ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( C= PU_CLK_UNHALTED.THREAD ) ) ) ) ) * ( BR_INST_RETIRED.ALL_BRANCHES - UOPS_RE= TIRED.MACRO_FUSED ) / ( UOPS_RETIRED.RETIRE_SLOTS ) ) + ( ( ( ( UOPS_RETIRE= D.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_= on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOPS_RETIRED.RETIRE_SLOTS= ) + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_= UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )= ) * INST_RETIRED.NOP / ( UOPS_RETIRED.RETIRE_SLOTS ) ) ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_other_light_ops_percent", - "ScaleUnit": "1%" - }, - { - 
"BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RETI= RED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that that are decoder into two or up to= ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of= uops in such instructions.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) + UOPS_RE= TIRED.MACRO_FUSED - INST_RETIRED.ANY ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THR= EAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( ( UOP= S_RETIRED.RETIRE_SLOTS ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( = CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD= ) ) ) ) )", - "MetricGroup": "TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_few_uops_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). These cases can often b= e avoided.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_ISSU= ED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if= #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "MicroSeq;TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_microcode_sequencer_percent", + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to LSD (Loop Stream Detector) unit. = LSD typically does well sustaining Uop supply. However; in some rare cases;= optimal uop-delivery could not be reached for small loops whose size (in t= erms of number of uops) does not suit well the LSD structure.", + "MetricExpr": "100 * ( ( LSD.CYCLES_ACTIVE - LSD.CYCLES_4_UOPS ) /= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.T= HREAD ) ) / 2 )", + "MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandw= idth_group", + "MetricName": "tma_lsd", "ScaleUnit": "1%" } ] diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json= b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json index 6facfb244cd3..326b674045c6 100644 --- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json @@ -27,20 +27,19 @@ "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller. 
Derived from unc_m_cas_count.rd",
+        "BriefDescription": "All DRAM Read CAS Commands issued (including underfills)",
         "Counter": "0,1,2,3",
         "EventCode": "0x4",
-        "EventName": "LLC_MISSES.MEM_READ",
+        "EventName": "UNC_M_CAS_COUNT.RD",
         "PerPkg": "1",
-        "ScaleUnit": "64Bytes",
         "UMask": "0x3",
         "Unit": "iMC"
     },
     {
-        "BriefDescription": "read requests to memory controller",
+        "BriefDescription": "read requests to memory controller. Derived from unc_m_cas_count.rd",
         "Counter": "0,1,2,3",
         "EventCode": "0x4",
-        "EventName": "UNC_M_CAS_COUNT.RD",
+        "EventName": "LLC_MISSES.MEM_READ",
         "PerPkg": "1",
         "ScaleUnit": "64Bytes",
         "UMask": "0x3",
@@ -56,20 +55,19 @@
         "Unit": "iMC"
     },
     {
-        "BriefDescription": "write requests to memory controller. Derived from unc_m_cas_count.wr",
+        "BriefDescription": "All DRAM Write CAS commands issued",
         "Counter": "0,1,2,3",
         "EventCode": "0x4",
-        "EventName": "LLC_MISSES.MEM_WRITE",
+        "EventName": "UNC_M_CAS_COUNT.WR",
         "PerPkg": "1",
-        "ScaleUnit": "64Bytes",
         "UMask": "0xC",
         "Unit": "iMC"
     },
     {
-        "BriefDescription": "write requests to memory controller",
+        "BriefDescription": "write requests to memory controller. Derived from unc_m_cas_count.wr",
         "Counter": "0,1,2,3",
         "EventCode": "0x4",
-        "EventName": "UNC_M_CAS_COUNT.WR",
+        "EventName": "LLC_MISSES.MEM_WRITE",
         "PerPkg": "1",
         "ScaleUnit": "64Bytes",
         "UMask": "0xC",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
index a29bba230f49..e10530c21ef8 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-other.json
@@ -1477,7 +1477,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x01",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x01",
         "Unit": "IIO"
     },
@@ -1489,7 +1488,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x02",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x01",
         "Unit": "IIO"
     },
@@ -1501,7 +1499,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x04",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x01",
         "Unit": "IIO"
     },
@@ -1513,7 +1510,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x08",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x01",
         "Unit": "IIO"
     },
@@ -1584,7 +1580,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x01",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x04",
         "Unit": "IIO"
     },
@@ -1596,7 +1591,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x02",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x04",
         "Unit": "IIO"
     },
@@ -1608,7 +1602,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x04",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x04",
         "Unit": "IIO"
     },
@@ -1620,7 +1613,6 @@
         "FCMask": "0x07",
         "PerPkg": "1",
         "PortMask": "0x08",
-        "ScaleUnit": "4Bytes",
         "UMask": "0x04",
         "Unit": "IIO"
     },
@@ -2254,7 +2246,7 @@
         "Unit": "UPI LL"
     },
     {
-        "BriefDescription": "FLITs received which bypassed the Slot0 Receive Buffer",
+        "BriefDescription": "FLITs received which bypassed the Slot0 Recieve Buffer",
         "Counter": "0,1,2,3",
         "EventCode": "0x31",
         "EventName": "UNC_UPI_RxL_BYPASSED.SLOT2",
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:23 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-10-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 09/22] perf vendor events: Update elkhartlake cpuids
From: Ian Rogers

Add cpuid that was added to
https://download.01.org/perfmon/mapfile.csv
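For context, the first column of mapfile.csv is matched against the running
CPU's identifier as a regular expression (existing rows such as
GenuineIntel-6-(3D|47) already rely on this), so the single elkhartlake row
below now covers two model numbers. A minimal sketch of that matching,
using Python's re module as a stand-in for perf's matcher; the snippet is
illustrative only and not part of the patch:

import re

# The updated mapfile pattern: model 0x96 or 0x9C of family 6.
pattern = re.compile(r"GenuineIntel-6-9[6C]")

for cpuid in ("GenuineIntel-6-96", "GenuineIntel-6-9C", "GenuineIntel-6-7A"):
    # fullmatch also mirrors the old behaviour: the previous pattern
    # "GenuineIntel-6-96" could only ever match model 0x96.
    print(cpuid, "->", "elkhartlake" if pattern.fullmatch(cpuid) else "no match")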
Signed-off-by: Ian Rogers
---
 tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index bc873a1e84e1..6c8188404db8 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -5,7 +5,7 @@ GenuineIntel-6-(3D|47),v26,broadwell,core
 GenuineIntel-6-56,v23,broadwellde,core
 GenuineIntel-6-4F,v19,broadwellx,core
 GenuineIntel-6-55-[56789ABCDEF],v1.16,cascadelakex,core
-GenuineIntel-6-96,v1.03,elkhartlake,core
+GenuineIntel-6-9[6C],v1.03,elkhartlake,core
 GenuineIntel-6-5[CF],v13,goldmont,core
 GenuineIntel-6-7A,v1.01,goldmontplus,core
 GenuineIntel-6-(3C|45|46),v31,haswell,core
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:24 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-11-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 10/22] perf vendor events: Update Intel haswell
From: Ian Rogers

Events are updated to v32; the core metrics are based on TMA 4.4 full.

Use the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing children of a metric to be
  easily measured using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability, as sketched below.

Tested with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok
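To make that last bullet concrete, here is a rough Python sketch of the
substitution it describes. The expand() helper is hypothetical (it is not
perf's metric resolver), but the three expressions are copied from the
hsw-metrics.json hunk below:

import re

# Hypothetical resolver, for illustration only: a MetricExpr may name other
# metrics, and each referenced name is replaced by that metric's own
# (parenthesized) expression until only raw events remain.
metrics = {
    "tma_frontend_bound": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
    "tma_fetch_latency": "4 * min(CPU_CLK_UNHALTED.THREAD, "
                         "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS",
    "tma_fetch_bandwidth": "tma_frontend_bound - tma_fetch_latency",
}

def expand(expr):
    def repl(match):
        name = match.group(0)
        return "(" + expand(metrics[name]) + ")" if name in metrics else name
    # Metric and event names: a leading letter or underscore, then word
    # characters or dots (e.g. IDQ_UOPS_NOT_DELIVERED.CORE).
    return re.sub(r"[A-Za-z_][A-Za-z0-9_.]*", repl, expr)

print(expand(metrics["tma_fetch_bandwidth"]))

Writing tma_fetch_bandwidth as "tma_frontend_bound - tma_fetch_latency"
keeps the JSON short and readable, while the full event-level expression
can still be recovered mechanically as above.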
"0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -172,7 +172,7 @@ "UMask": "0x30" }, { - "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequenser (MS) is busy.", + "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequencer (MS) is busy.", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -182,7 +182,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequenser (MS) is b= usy.", + "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is b= usy.", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -193,7 +193,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequenser (MS) is busy", + "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -203,7 +203,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -224,7 +224,7 @@ "UMask": "0x30" }, { - "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", diff --git a/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json b/tool= s/perf/pmu-events/arch/x86/haswell/hsw-metrics.json index 75dc6dd9a7bc..6cb6603efbd8 100644 --- a/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json +++ b/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json @@ -1,64 +1,490 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. 
For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIV= ERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. 
Sample with: RS_EVENTS.EMPTY_END", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", + "MetricExpr": "ICACHE.IFDATA_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURAT= ION) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. 
#Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. 
For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." 
+ "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. 
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@= UOPS_EXECUTED.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.COR= E\\,cmask\\=3D2@) / 2 - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1)= else RESOURCE_STALLS.SB) if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYC= LE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cp= u@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.C= ORE\\,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) el= se RESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.ST= ALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_= RETIRED.HIT_LFB_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). 
This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL= _STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES= _WITH_DEMAND_RFO) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. 
to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.REQUEST_= FB_FULL\\,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY= .STALLS_L2_PENDING) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETI= RED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2= _PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 = + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD= _UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOP= S_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_= LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS= * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + ME= M_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LO= AD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) = + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. 
Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + = mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_U= OPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_= L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LO= AD_UOPS_RETIRED.L3_MISS))) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + mem_load_= uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIR= ED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RE= TIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_R= ETIRED.L3_MISS))) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). 
The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS= _RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS))) * CYCLE_ACTIVITY.STA= LLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOC= K_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOAD= S / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_RE= QUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_= CORE / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. 
Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES= .WALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "10 * ARITH.DIVIDER_UOPS / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. 
Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_EXECUTED.= CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D= 2@) / 2 - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE= _STALLS.SB) if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CY= CLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_EXECUTE= D.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_S= TALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVIT= Y.STALLS_LDM_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=3D1@) / 2 i= f #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECU= TE) - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CL= KS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D2@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or over oversubs= cribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. 
walking a linked list) - looking at the= assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D3@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ / 2) if #SM= T_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= 
PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Samp= le with: UOPS_DISPATCHED_PORT.PORT_7", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. 
Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "INST_RETIRED.X87 * UPI / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. 
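[Worked example: the retiring-side metrics above compose by subtraction -- tma_retiring is retire slots over total slots, and tma_light_operations is the remainder after tma_heavy_operations (which on this core equals tma_microcode_sequencer). A sketch with placeholder values:

  # Placeholder counts standing in for the events/metrics named above.
  retire_slots = 6_000_000    # UOPS_RETIRED.RETIRE_SLOTS
  slots = 8_000_000           # SLOTS (4 * CORE_CLKS)
  tma_retiring = retire_slots / slots              # 0.75 -> 75%
  tma_heavy_operations = 0.05                      # placeholder value
  tma_light_operations = tma_retiring - tma_heavy_operations
  print(tma_retiring, tma_light_operations)        # 0.75 0.7
]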
See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -76,8 +502,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -88,37 +514,25 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "( UOPS_EXECUTED.CORE / 2 / (( cpu@UOPS_EXECUTED.COR= E\\,cmask\\=3D1@ / 2 ) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1= @) ) if #SMT_on else UOPS_EXECUTED.CORE / (( cpu@UOPS_EXECUTED.CORE\\,cmask= \\=3D1@ / 2 ) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@)", + "MetricExpr": "(UOPS_EXECUTED.CORE / 2 / ((cpu@UOPS_EXECUTED.CORE\= \,cmask\\=3D1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@))= if #SMT_on else UOPS_EXECUTED.CORE / ((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D= 1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -159,9 +573,9 @@ "MetricName": "BpTkBranch" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" 
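[Worked example: the CORE_CLKS expression above chains two if/else forms. Chained conditionals in this expression language group as `a if c1 else (b if c2 else c)`, the same left-to-right reading Python uses. A sketch with invented counts; core_wide and smt_on are plain variables standing in for the #core_wide and #SMT_on literals, which have no direct Python equivalent:

  thread_clks = 1_000_000       # CPU_CLK_UNHALTED.THREAD (CLKS)
  thread_any = 1_900_000        # CPU_CLK_UNHALTED.THREAD_ANY
  one_thread_active = 200_000   # CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
  ref_xclk = 400_000            # CPU_CLK_UNHALTED.REF_XCLK
  core_wide, smt_on = 0, True   # stand-ins for #core_wide and #SMT_on

  # Mirrors: A if #core_wide < 1 else (B if #SMT_on else CLKS)
  core_clks = ((thread_clks / 2) * (1 + one_thread_active / ref_xclk)
               if core_wide < 1
               else (thread_any / 2) if smt_on else thread_clks)
  print(core_clks)  # 750000.0 with these inputs

With #core_wide < 1 the first arm is taken; otherwise #SMT_on selects between the halved per-core THREAD_ANY count and plain CLKS.]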
}, { @@ -172,7 +586,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -184,47 +598,41 @@ }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_= DURATION + DTLB_STORE_MISSES.WALK_DURATION) / CORE_CLKS", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -245,19 +653,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 
1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -275,19 +683,19 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -305,7 +713,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 6c8188404db8..63a0e98fd116 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -8,7 +8,7 @@ GenuineIntel-6-55-[56789ABCDEF],v1.16,cascadelakex,core GenuineIntel-6-9[6C],v1.03,elkhartlake,core GenuineIntel-6-5[CF],v13,goldmont,core GenuineIntel-6-7A,v1.01,goldmontplus,core -GenuineIntel-6-(3C|45|46),v31,haswell,core +GenuineIntel-6-(3C|45|46),v32,haswell,core GenuineIntel-6-3F,v25,haswellx,core GenuineIntel-6-(7D|7E|A7),v1.14,icelake,core GenuineIntel-6-6[AC],v1.15,icelakex,core --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7784CC433F5 for ; Sat, 1 Oct 2022 06:09:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229639AbiJAGJA (ORCPT ); Sat, 1 Oct 2022 02:09:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229530AbiJAGHZ (ORCPT ); Sat, 1 Oct 2022 02:07:25 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DD7D130F10 for ; Fri, 30 Sep 2022 23:07:12 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 
00721157ae682-3579d28ffd6so9937057b3.18 for ; Fri, 30 Sep 2022 23:07:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=WBi/zjqY1H2qu2T4heI4XM/6PHRlY2MLte2wHLsjygA=; b=m48WypvlBuZeQxWyCFym2YzL6zfXIViBv5zyjwt786UIPK0SLyCJCKI1gUNapwSv0Q /srcgMroTlAiXFn7CFzs9zoQXNmpYivc1exqvI73pgiRaQL6sDeaHdP3yNAOI0ShSeBv lYp5KNeWa7MxRE+WDe6FvgWOUqfwSRJqP3mivdEXNhyhKhRs7z8c0S3uqTIhykJv7o/s s8zp4pfe8FwKhyhQPoiStr1IBbvyh172mbH8RFTKKQOYuIfMZcuaXX4w1J3utfjQ4hiX g4RkO48OEY9Gc/1Y3jtNuLEoiX6mWushWWIB5RkxPSFRm68yxhFMEgr6ZiK1p07XfkcH E7cw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:x-gm-message-state:from:to :cc:subject:date; bh=WBi/zjqY1H2qu2T4heI4XM/6PHRlY2MLte2wHLsjygA=; b=0+vQaDQ9IZq7pdZ9IWRo2NEheTm/Kp54MlYxC/ToLzBCn/iBeICxTkxAkUhzQRgSdy LV/4rSfqJAQrvvdV7r8f2O6Mw+6mBR2aD2j3x0Gjkuk2Z1eHzz5TvL2TIRk62ih7R8cw vfS9PzZ3m91GT0lKyHo1qjFyBHIi6toVTNDM27IBvEC0Cl82eR3BoIhyHKZXl/C+cPrl FvnWnwa1XnZomsPp9JUXIrabKVBWu3P4RGNKnxHtrrIif2MqbyUDnRWVdU8gj6IlVI+O RmJ+1//wVu6I1r2exBNSC+uFmsPKDqoevatYMn9GfWokV9DQpohZVcYWeD0qKS+wbzEe oD8w== X-Gm-Message-State: ACrzQf3POJ/VL5ZA9MTwEVCvZSbgGza5BHgKjlchDgCtnVULGUOfdUit gycC0CDeGqnby97uhgEaQam7O1PLtY0D X-Google-Smtp-Source: AMsMyM6XsYCsCyfOk27nhTtn8U5BXBD0ylaKfvoSZOebL+SQlA5Bv93Vb+ctaX8kb+SHWEAMKgK9AK5wNkoW X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a25:d1d0:0:b0:6bd:cad:9354 with SMTP id i199-20020a25d1d0000000b006bd0cad9354mr4213799ybg.454.1664604431788; Fri, 30 Sep 2022 23:07:11 -0700 (PDT) Date: Fri, 30 Sep 2022 23:06:25 -0700 In-Reply-To: <20221001060636.2661983-1-irogers@google.com> Message-Id: <20221001060636.2661983-12-irogers@google.com> Mime-Version: 1.0 References: <20221001060636.2661983-1-irogers@google.com> X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog Subject: [PATCH v2 11/22] perf vendor events: Update Intel haswellx From: Ian Rogers To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Stephane Eranian , Ian Rogers Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Events are updated to v26, the core metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/downloa= d_and_gen.py with updates at: https://github.com/captain5050/event-converter-for-linux-perf Updates include: - Uncore event updates by Zhengjun Xing . - Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound. - _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are correctly expanded in the single main metric. - Addition of all 6 levels of TMA metrics. Previously metrics involving topdown events were dropped. 
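[Usage note: with the series applied, each named metric can be requested directly; for example, assuming a perf binary built from this tree:

  perf stat -M tma_frontend_bound -a sleep 1

and, as described next, a parent metric name with a _group suffix measures its children together.]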
Child metrics are placed in a group named after their parent allowing children of a metric to be easily measured using the metric name with a _group suffix. - ## and ##? operators are correctly expanded. - The locate-with column is added to the long description describing a sampling event. - Metrics are written in terms of other metrics to reduce the expression size and increase readability. - Latest metrics from: https://github.com/intel/perfmon-metrics Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../pmu-events/arch/x86/haswellx/cache.json | 2 +- .../arch/x86/haswellx/frontend.json | 12 +- .../arch/x86/haswellx/hsx-metrics.json | 919 +++++++++++------- .../x86/haswellx/uncore-interconnect.json | 18 +- .../arch/x86/haswellx/uncore-memory.json | 18 +- tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +- 6 files changed, 615 insertions(+), 356 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/haswellx/cache.json b/tools/per= f/pmu-events/arch/x86/haswellx/cache.json index 7557a203a1b6..427c949bed6e 100644 --- a/tools/perf/pmu-events/arch/x86/haswellx/cache.json +++ b/tools/perf/pmu-events/arch/x86/haswellx/cache.json @@ -691,7 +691,7 @@ "UMask": "0x8" }, { - "BriefDescription": "Cacheable and noncachaeble code read requests= ", + "BriefDescription": "Cacheable and noncacheable code read requests= ", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0xB0", diff --git a/tools/perf/pmu-events/arch/x86/haswellx/frontend.json b/tools/= perf/pmu-events/arch/x86/haswellx/frontend.json index c45a09abe5d3..18a993297108 100644 --- a/tools/perf/pmu-events/arch/x86/haswellx/frontend.json +++ b/tools/perf/pmu-events/arch/x86/haswellx/frontend.json @@ -161,7 +161,7 @@ "UMask": "0x4" }, { - "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -172,7 +172,7 @@ "UMask": "0x30" }, { - "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequenser (MS) is busy.", + "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequencer (MS) is busy.", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -182,7 +182,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequenser (MS) is b= usy.", + "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is b= usy.", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", @@ -193,7 +193,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequenser (MS) is busy", + "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequencer 
(MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -203,7 +203,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -224,7 +224,7 @@ "UMask": "0x30" }, { - "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", diff --git a/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json b/too= ls/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json index d31d76db9d84..2cd86750986a 100644 --- a/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json @@ -1,64 +1,514 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. 
SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIV= ERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", + "MetricExpr": "ICACHE.IFDATA_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURAT= ION) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. 
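[Worked example: the fixed factor of 12 in tma_branch_resteers above acts as an estimated per-event refetch penalty, charged to each mispredict, machine clear, and BAClear. A sketch with invented counts:

  br_misp = 40_000        # BR_MISP_RETIRED.ALL_BRANCHES
  machine_clears = 2_000  # MACHINE_CLEARS.COUNT
  baclears = 8_000        # BACLEARS.ANY
  clks = 10_000_000       # CLKS
  tma_branch_resteers = 12 * (br_misp + machine_clears + baclears) / clks
  print(f"{tma_branch_resteers:.1%}")  # -> 6.0% with these inputs
]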
Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. 
In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. 
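[Worked example: the rewritten tma_bad_speculation above folds the old SMT variant into a single expression via an inline #SMT_on conditional, which is how the dropped _SMT metrics become redundant. A sketch of the arithmetic with placeholder counts (smt_on stands in for #SMT_on):

  uops_issued = 4_000_000    # UOPS_ISSUED.ANY
  uops_retired = 3_600_000   # UOPS_RETIRED.RETIRE_SLOTS
  recovery_any = 100_000     # INT_MISC.RECOVERY_CYCLES_ANY
  recovery = 60_000          # INT_MISC.RECOVERY_CYCLES
  smt_on = True              # stand-in for #SMT_on
  slots = 8_000_000          # SLOTS
  tma_bad_speculation = (uops_issued - uops_retired
      + 4 * ((recovery_any / 2) if smt_on else recovery)) / slots
  print(f"{tma_bad_speculation:.2%}")  # -> 7.50% with these inputs
]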
SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. 
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@= UOPS_EXECUTED.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.COR= E\\,cmask\\=3D2@) / 2 - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1)= else RESOURCE_STALLS.SB) if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYC= LE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cp= u@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.C= ORE\\,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) el= se RESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.ST= ALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. 
Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_= RETIRED.HIT_LFB_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL= _STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES= _WITH_DEMAND_RFO) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. 
Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.REQUEST_= FB_FULL\\,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY= .STALLS_L2_PENDING) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETI= RED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2= _PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. 
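A sketch of how the tma_l3_bound expression above apportions L2-miss stall cycles: L3 misses are weighted 7x relative to L3 hits (a fixed model weight in the expression), and the resulting hit share scales CYCLE_ACTIVITY.STALLS_L2_PENDING. Function name is illustrative.

def tma_l3_bound(l3_hit, l3_miss, stalls_l2_pending, clks, miss_weight=7):
    # Fraction of L2-pending stalls attributable to L3 hits rather than misses.
    hit_fraction = l3_hit / (l3_hit + miss_weight * l3_miss)
    return hit_fraction * stalls_l2_pending / clks
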
Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 = + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD= _UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOP= S_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_= LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE= _DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_R= ETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + = mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_U= OPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_= L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LO= AD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_D= RAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RET= IRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + = mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_U= OPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_= L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LO= AD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_D= RAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RET= IRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. 
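The contested-access and data-sharing expressions above both reuse the same correction term: loads satisfied from a line-fill buffer (LFB) are folded back in, in proportion to the retired load sources. A sketch of that shared factor, with an illustrative name:

def lfb_scaling(hit_lfb, retired_load_sources):
    """retired_load_sources = sum of the L2/L3/XSNP/DRAM source counts
    that appear in the denominator of the expressions above."""
    return 1.0 + hit_lfb / retired_load_sources

# Each weighted snoop term is then: cost * event_count * lfb_scaling(...) / clks.
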
Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "41 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + mem_load_= uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIR= ED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RE= TIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_L= 3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM= _LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMO= TE_FWD))) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS= _RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS))) * CYCLE_ACTIVITY.STA= LLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. 
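A sketch of the DRAM bandwidth-versus-latency split used by tma_mem_bandwidth above and tma_mem_latency just below, as the cmask values in the expressions suggest: cycles with six or more outstanding demand data reads count as bandwidth-limited, and the remaining cycles with at least one outstanding read count as latency-limited. Names are illustrative.

def split_dram_stalls(cycles_ge_6_reads, cycles_ge_1_read, clks):
    mem_bandwidth = min(clks, cycles_ge_6_reads) / clks
    # Latency bucket is what remains after the bandwidth bucket is removed.
    mem_latency = min(clks, cycles_ge_1_read) / clks - mem_bandwidth
    return mem_bandwidth, mem_latency
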
This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM * (= 1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LO= AD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_U= OPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + ME= M_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMO= TE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS= _RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_L3= _MISS_RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "310 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM * = (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_L= OAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_= UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + M= EM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REM= OTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MIS= S_RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. #link to NUMA article Sample = with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. 
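A sketch of the fixed NUMA latency weights in the tma_local_dram and tma_remote_dram expressions above: local-DRAM loads are costed at 200 cycles and remote-DRAM loads at 310 cycles in this model, so a remote miss is roughly 1.5x as expensive per retired load. Function and argument names are illustrative.

LOCAL_DRAM_COST, REMOTE_DRAM_COST = 200, 310

def dram_stall_fraction(local_loads, remote_loads, lfb_scale, clks):
    return (LOCAL_DRAM_COST * local_loads +
            REMOTE_DRAM_COST * remote_loads) * lfb_scale / clks
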
SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "(200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM *= (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_= LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD= _UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + = MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.RE= MOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MI= SS_RETIRED.REMOTE_FWD))) + 180 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD = * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM= _LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOA= D_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) += MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.R= EMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_M= ISS_RETIRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_UOPS_L3_MI= SS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. 
Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOC= K_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOAD= S / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_RE= QUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_= HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES= .WALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. 
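A sketch of the tma_false_sharing cost model above: per the expression, a HITM snoop served from a remote socket is costed at 200 cycles and a HITM from another core in the same socket at 60 cycles (fixed model weights). Function name is illustrative.

def tma_false_sharing(remote_hitm, local_hitm_other_core, clks):
    return (200 * remote_hitm + 60 * local_hitm_other_core) / clks
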
Sample w= ith: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "10 * ARITH.DIVIDER_UOPS / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_EXECUTED.= CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D= 2@) / 2 - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE= _STALLS.SB) if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CY= CLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_EXECUTE= D.CORE\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_S= TALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVIT= Y.STALLS_LDM_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. 
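tma_core_bound above is a pure residual: whatever backend-bound slots are not explained by the memory subsystem are attributed to the core. A one-line sketch, with illustrative naming:

def tma_core_bound(tma_backend_bound, tma_memory_bound):
    # Non-memory backend stalls: compute-resource shortages and data dependencies.
    return tma_backend_bound - tma_memory_bound
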
For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=3D1@) / 2 i= f #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECU= TE) - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CL= KS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D2@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or over oversubs= cribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D3@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). 
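The tma_ports_utilized_N expressions above all rely on cmask differencing: a counter programmed with cmask=N counts cycles where *at least* N uops executed, so the cycles with *exactly* N uops is a difference of two such counts. A sketch, assuming a mapping from cmask value to cycle count:

def cycles_exactly_n(uops_cmask, n):
    """uops_cmask[n] = cycles where >= n uops executed (counter with cmask=n)."""
    return uops_cmask[n] - uops_cmask[n + 1]
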
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ / 2) if #SM= T_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads 
and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Samp= le with: UOPS_DISPATCHED_PORT.PORT_7", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. 
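A sketch of tma_retiring above on this 4-wide machine: the pipeline offers four issue slots per core clock, and retiring is the fraction of those slots filled by uops that eventually retire. Names are illustrative.

def tma_retiring(retire_slots, core_clks, pipeline_width=4):
    slots = pipeline_width * core_clks  # matches SLOTS = 4 * CORE_CLKS below
    return retire_slots / slots
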
Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "INST_RETIRED.X87 * UPI / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." 
+ "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -74,6 +524,12 @@ "MetricGroup": "Branches;Fed;FetchBW", "MetricName": "UpTB" }, + { + "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", + "MetricName": "CPI" + }, { "BriefDescription": "Per-Logical Processor actual clocks when the = Logical Processor is active.", "MetricExpr": "CPU_CLK_UNHALTED.THREAD", @@ -82,37 +538,25 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "( UOPS_EXECUTED.CORE / 2 / (( cpu@UOPS_EXECUTED.COR= E\\,cmask\\=3D1@ / 2 ) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1= @) ) if #SMT_on else UOPS_EXECUTED.CORE / (( cpu@UOPS_EXECUTED.CORE\\,cmask= \\=3D1@ / 2 ) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@)", + "MetricExpr": "(UOPS_EXECUTED.CORE / 2 / ((cpu@UOPS_EXECUTED.CORE\= \,cmask\\=3D1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@))= if #SMT_on else UOPS_EXECUTED.CORE / ((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D= 1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -153,9 +597,9 @@ "MetricName": "BpTkBranch" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": 
"INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -166,7 +610,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -178,47 +622,41 @@ }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_= DURATION + DTLB_STORE_MISSES.WALK_DURATION) / CORE_CLKS", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -239,19 +677,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { 
"BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -269,19 +707,19 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -299,13 +737,13 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, { "BriefDescription": "Average latency of data read request to exter= nal memory (in nanoseconds). 
Accounts for demand loads and L1/L2 prefetches= ", - "MetricExpr": "1000000000 * ( cbox@event\\=3D0x36\\,umask\\=3D0x3\= \,filter_opc\\=3D0x182@ / cbox@event\\=3D0x35\\,umask\\=3D0x3\\,filter_opc\= \=3D0x182@ ) / ( cbox_0@event\\=3D0x0@ / duration_time )", + "MetricExpr": "1000000000 * (cbox@event\\=3D0x36\\,umask\\=3D0x3\\= ,filter_opc\\=3D0x182@ / cbox@event\\=3D0x35\\,umask\\=3D0x3\\,filter_opc\\= =3D0x182@) / (Socket_CLKS / duration_time)", "MetricGroup": "Mem;MemoryLat;SoC", "MetricName": "MEM_Read_Latency" }, @@ -321,12 +759,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "cbox_0@event\\=3D0x0@ / #num_dies / duration_time /= 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -375,403 +807,234 @@ "MetricGroup": "Power", "MetricName": "C7_Pkg_Residency" }, + { + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" + }, { "BriefDescription": "CPU operating frequency (in GHz)", - "MetricExpr": "( CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_= TSC * #SYSTEM_TSC_FREQ ) / 1000000000", + "MetricExpr": "(( CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_T= SC * #SYSTEM_TSC_FREQ ) / 1000000000) / duration_time", "MetricGroup": "", "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": " CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY ", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", - "MetricExpr": " MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY ", + "MetricExpr": "MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "loads_per_instr", "ScaleUnit": "1per_instr" }, { "BriefDescription": "The ratio of number of completed memory store= instructions to the total number completed instructions", - "MetricExpr": " MEM_UOPS_RETIRED.ALL_STORES / INST_RETIRED.ANY ", + "MetricExpr": "MEM_UOPS_RETIRED.ALL_STORES / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "stores_per_instr", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= tructions", - "MetricExpr": " L1D.REPLACEMENT / INST_RETIRED.ANY ", + "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of demand load requests hitti= ng in L1 data cache to the total number of completed instructions", - "MetricExpr": " MEM_LOAD_UOPS_RETIRED.L1_HIT / INST_RETIRED.ANY = ", + "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L1_HIT / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "l1d_demand_data_read_hits_per_instr", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of code read requests missing= in L1 
instruction cache (includes prefetches) to the total number of compl= eted instructions", - "MetricExpr": " L2_RQSTS.ALL_CODE_RD / INST_RETIRED.ANY ", + "MetricExpr": "L2_RQSTS.ALL_CODE_RD / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "l1_i_code_read_misses_with_prefetches_per_instr", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed demand load requ= ests hitting in L2 cache to the total number of completed instructions", - "MetricExpr": " MEM_LOAD_UOPS_RETIRED.L2_HIT / INST_RETIRED.ANY = ", + "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L2_HIT / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "l2_demand_data_read_hits_per_instr", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", - "MetricExpr": " L2_LINES_IN.ALL / INST_RETIRED.ANY ", + "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed data read reques= t missing L2 cache to the total number of completed instructions", - "MetricExpr": " MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY= ", + "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "l2_demand_data_read_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of code read request missing = L2 cache to the total number of completed instructions", - "MetricExpr": " L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY ", + "MetricExpr": "L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "l2_demand_code_mpi", "ScaleUnit": "1per_instr" }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) in nano seconds", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_OPCO= DE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_op= c\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_packages * #num_p= ackages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency", + "ScaleUnit": "1ns" + }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to local m= emory in nano seconds", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_LOCA= L_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE= \\,filter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_packa= ges * #num_packages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _local_requests", + "ScaleUnit": "1ns" + }, + { + "BriefDescription": "Average latency of a last level cache (LLC) d= emand and prefetch data read miss (read memory access) addressed to remote = memory in nano seconds", + "MetricExpr": "( 1000000000 * ( cbox@UNC_C_TOR_OCCUPANCY.MISS_REMO= TE_OPCODE\\,filter_opc\\=3D0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCO= DE\\,filter_opc\\=3D0x182@ ) / ( UNC_C_CLOCKTICKS / ( #num_cores / #num_pac= kages * #num_packages ) ) ) * duration_time", + "MetricGroup": "", + "MetricName": "llc_data_read_demand_plus_prefetch_miss_latency_for= _remote_requests", + "ScaleUnit": "1ns" + }, { "BriefDescription": "Ratio of number of completed 
page walks (for = all page sizes) caused by a code fetch to the total number of completed ins= tructions. This implies it missed in the ITLB (Instruction TLB) and further= levels of TLB.", - "MetricExpr": " ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY ", + "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "itlb_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total n= umber of completed instructions. This implies it missed in the Instruction = Translation Lookaside Buffer (ITLB) and further levels of TLB.", - "MetricExpr": " ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.= ANY ", + "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY= ", "MetricGroup": "", "MetricName": "itlb_large_page_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data loads to the total number of complete= d instructions. This implies it missed in the DTLB and further levels of TL= B.", - "MetricExpr": " DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.A= NY ", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "dtlb_load_mpi", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of completed page walks (for = all page sizes) caused by demand data stores to the total number of complet= ed instructions. This implies it missed in the DTLB and further levels of T= LB.", - "MetricExpr": " DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.= ANY ", + "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY= ", "MetricGroup": "", "MetricName": "dtlb_store_mpi", "ScaleUnit": "1per_instr" }, + { + "BriefDescription": "Uncore operating frequency in GHz", + "MetricExpr": "( UNC_C_CLOCKTICKS / ( #num_cores / #num_packages *= #num_packages ) / 1000000000) / duration_time", + "MetricGroup": "", + "MetricName": "uncore_frequency", + "ScaleUnit": "1GHz" + }, { "BriefDescription": "Intel(R) Quick Path Interconnect (QPI) data t= ransmit bandwidth (MB/sec)", - "MetricExpr": "( UNC_Q_TxL_FLITS_G0.DATA * 8 / 1000000) / duratio= n_time", + "MetricExpr": "( UNC_Q_TxL_FLITS_G0.DATA * 8 / 1000000) / duration= _time", "MetricGroup": "", - "MetricName": "qpi_data_transmit_bw_only_data", + "MetricName": "qpi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { "BriefDescription": "DDR memory read bandwidth (MB/sec)", - "MetricExpr": "( UNC_M_CAS_COUNT.RD * 64 / 1000000) / duration_ti= me", + "MetricExpr": "( UNC_M_CAS_COUNT.RD * 64 / 1000000) / duration_tim= e", "MetricGroup": "", "MetricName": "memory_bandwidth_read", "ScaleUnit": "1MB/s" }, { "BriefDescription": "DDR memory write bandwidth (MB/sec)", - "MetricExpr": "( UNC_M_CAS_COUNT.WR * 64 / 1000000) / duration_ti= me", + "MetricExpr": "( UNC_M_CAS_COUNT.WR * 64 / 1000000) / duration_tim= e", "MetricGroup": "", "MetricName": "memory_bandwidth_write", "ScaleUnit": "1MB/s" }, { "BriefDescription": "DDR memory bandwidth (MB/sec)", - "MetricExpr": "(( UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR ) * 64= / 1000000) / duration_time", + "MetricExpr": "(( UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR ) * 64 /= 1000000) / duration_time", "MetricGroup": "", "MetricName": "memory_bandwidth_total", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", - "MetricExpr": "( 
cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0x= 19e@ * 64 / 1000000) / duration_time", + "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0x= 19e@ * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", - "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0x= 1c8\\,filter_tid\\=3D0x3e@ * 64 / 1000000) / duration_time", + "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=3D0x= 1c8\\,filter_tid\\=3D0x3e@ * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", - "MetricExpr": "100 * ( IDQ.DSB_UOPS / UOPS_ISSUED.ANY )", + "MetricExpr": "100 * ( IDQ.DSB_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", - "MetricExpr": "100 * ( IDQ.MITE_UOPS / UOPS_ISSUED.ANY )", + "MetricExpr": "100 * ( IDQ.MITE_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_mi= te", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", - "MetricExpr": "100 * ( IDQ.MS_UOPS / UOPS_ISSUED.ANY )", + "MetricExpr": "100 * ( IDQ.MS_UOPS / UOPS_ISSUED.ANY )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from loop stream detector(LSD)= as a percent of total uops delivered to Instruction Decode Queue", - "MetricExpr": "100 * ( UOPS_ISSUED.ANY - IDQ.MITE_UOPS - IDQ.M= S_UOPS - IDQ.DSB_UOPS ) / UOPS_ISSUED.ANY ", + "MetricExpr": "100 * ( UOPS_ISSUED.ANY - IDQ.MITE_UOPS - IDQ.MS_UO= PS - IDQ.DSB_UOPS ) / UOPS_ISSUED.ANY", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_loop_stream_detector_lsd", + "MetricName": "percent_uops_delivered_from_loop_stream_detector", "ScaleUnit": "1%" }, { "BriefDescription": "Ratio of number of data read requests missing= last level core cache (includes demand w/ prefetches) to the total number = of completed instructions", - "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\= =3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x192@ ) = / INST_RETIRED.ANY ", + "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\= =3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x192@ ) / = INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "llc_data_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Ratio of number of code read requests missing= last level core cache (includes demand w/ prefetches) to the total number = of completed 
instructions", - "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\= =3D0x181@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x191@ ) = / INST_RETIRED.ANY ", + "MetricExpr": "( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\= =3D0x181@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x191@ ) / = INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "llc_code_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", - "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_= opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x1= 82@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182@ )", + "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,fi= lter_opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_o= pc\\=3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=3D= 0x182@ )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", - "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_= opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x1= 82@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\\,filter_opc\\=3D0x182@ )", - "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTE= D.THREAD ) ) ) )", - "MetricGroup": "TmaL1, PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. 
In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( 4 ) * ( min( CPU_CLK_UNHALTED.THREAD , = IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) / ( ( 4 ) * ( ( CPU_= CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD= ) ) ) )", - "MetricGroup": "Frontend, TmaL2", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ICACHE.IFDATA_STALL / ( CPU_CLK_UNHALTED= .THREAD ) )", - "MetricGroup": "BigFoot, FetchLat, IcMiss", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ( 14 * ITLB_MISSES.STLB_HIT + ITLB_MISSE= S.WALK_DURATION ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "BigFoot, FetchLat, MemoryTLB", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( ( 12 ) * ( BR_MISP_RETIRED.ALL_BRANCHES += MACHINE_CLEARS.COUNT + BACLEARS.ANY ) / ( CPU_CLK_UNHALTED.THREAD ) = )", - "MetricGroup": "FetchLat", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU= _CLK_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss, FetchLat", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. #Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( ILD_STALL.LCP / ( CPU_CLK_UNHALTED.THREA= D ) )", - "MetricGroup": "FetchLat", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. 
The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 2 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHA= LTED.THREAD ) )", - "MetricGroup": "FetchLat, MicroSeq", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 ) *= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHAL= TED.THREAD ) ) ) ) - ( ( 4 ) * ( min( CPU_CLK_UNHALTED.THREAD , IDQ_UOP= S_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHA= LTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) = ) )", - "MetricGroup": "FetchBW, Frontend, TmaL2", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL= _MITE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_o= n else ( CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSBmiss, FetchBW", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_= DSB_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on = else ( CPU_CLK_UNHALTED.THREAD ) ) / 2 )", - "MetricGroup": "DSB, FetchBW", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIR= E_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on = else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. 
These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "100 * ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MI= SP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( ( UOPS_ISSUED.AN= Y - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLE= S_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * (= ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTE= D.THREAD ) ) ) ) )", - "MetricGroup": "BadSpec, BrMispredicts, TmaL2", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RET= IRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on= else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THR= EAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) - ( ( = BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHIN= E_CLEARS.COUNT ) ) * ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOTS = ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else IN= T_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY /= 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "BadSpec, MachineClears, TmaL2", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "100 * ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( (= 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK= _UNHALTED.THREAD ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_= SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on el= se INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_= ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOP= S_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2= ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. 
This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "100 * ( ( ( ( min( CPU_CLK_UNHALTED.THREAD , CYC= LE_ACTIVITY.STALLS_LDM_PENDING ) ) + RESOURCE_STALLS.SB ) / ( ( ( min( = CPU_CLK_UNHALTED.THREAD , CYCLE_ACTIVITY.CYCLES_NO_EXECUTE ) ) + ( cpu@= UOPS_EXECUTED.CORE\\,cmask\\=3D0x1@ - ( cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D0x3@ if ( ( INST_RETIRED.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 = ) else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x2@ ) ) / 2 - ( RS_EVENTS.EMP= TY_CYCLES if ( ( ( 4 ) * ( min( CPU_CLK_UNHALTED.THREAD , IDQ_UOPS_NOT_= DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.T= HREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.= 1 ) else 0 ) + RESOURCE_STALLS.SB ) if #SMT_on else ( ( min( CPU_CLK_U= NHALTED.THREAD , CYCLE_ACTIVITY.CYCLES_NO_EXECUTE ) ) + cpu@UOPS_EXECUT= ED.CORE\\,cmask\\=3D0x1@ - ( cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x3@ if = ( ( INST_RETIRED.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else cpu@= UOPS_EXECUTED.CORE\\,cmask\\=3D0x2@ ) - ( RS_EVENTS.EMPTY_CYCLES if ( ( = ( 4 ) * ( min( CPU_CLK_UNHALTED.THREAD , IDQ_UOPS_NOT_DELIVERED.CYCLES_0= _UOPS_DELIV.CORE ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) i= f #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RE= SOURCE_STALLS.SB ) ) ) * ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIRE_SLOT= S ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on else = INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY = / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_RE= TIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) i= f #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) ) )", - "MetricGroup": "Backend, TmaL2", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( ( min( CPU_CLK_UNHALTED.THREAD , = CYCLE_ACTIVITY.STALLS_LDM_PENDING ) ) - CYCLE_ACTIVITY.STALLS_L1D_PENDING= ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses, MemoryBound, TmaL3mem", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. 
L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( CYCLE_ACTIVITY.STALLS_L1D_PENDING - CY= CLE_ACTIVITY.STALLS_L2_PENDING ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses, MemoryBound, TmaL3mem", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( MEM_LOAD_UOPS_RETIRED.L3_HIT / ( MEM_L= OAD_UOPS_RETIRED.L3_HIT + ( 7 ) * MEM_LOAD_UOPS_RETIRED.L3_MISS ) ) * C= YCLE_ACTIVITY.STALLS_L2_PENDING / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses, MemoryBound, TmaL3mem", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( 1 - ( MEM_LOAD_UOPS_RETIRED.L3_HI= T / ( MEM_LOAD_UOPS_RETIRED.L3_HIT + ( 7 ) * MEM_LOAD_UOPS_RETIRED.L3_M= ISS ) ) ) * CYCLE_ACTIVITY.STALLS_L2_PENDING / ( CPU_CLK_UNHALTED.THREA= D ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound, TmaL3mem", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( RESOURCE_STALLS.SB / ( CPU_CLK_UNHALTED.= THREAD ) )", - "MetricGroup": "MemoryBound, TmaL3mem", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. 
FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "100 * ( ( 1 - ( ( IDQ_UOPS_NOT_DELIVERED.CORE / (= ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_C= LK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_ISSUED.ANY - ( UOPS_RETIRED.RETIR= E_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) if #SMT_on = else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREA= D_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( U= OPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY /= 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) ) - ( ( ( ( mi= n( CPU_CLK_UNHALTED.THREAD , CYCLE_ACTIVITY.STALLS_LDM_PENDING ) ) + R= ESOURCE_STALLS.SB ) / ( ( ( min( CPU_CLK_UNHALTED.THREAD , CYCLE_ACTIVI= TY.CYCLES_NO_EXECUTE ) ) + ( cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x1@ - (= cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x3@ if ( ( INST_RETIRED.ANY / ( C= PU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D0x2@ ) ) / 2 - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * ( min( CPU_CL= K_UNHALTED.THREAD , IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) = / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CP= U_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RESOURCE_STALLS.SB ) if= #SMT_on else ( ( min( CPU_CLK_UNHALTED.THREAD , CYCLE_ACTIVITY.CYCLES= _NO_EXECUTE ) ) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x1@ - ( cpu@UOPS_= EXECUTED.CORE\\,cmask\\=3D0x3@ if ( ( INST_RETIRED.ANY / ( CPU_CLK_UNHA= LTED.THREAD ) ) > 1.8 ) else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x2@ ) -= ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * ( min( CPU_CLK_UNHALTED.THREAD = , IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) / ( ( 4 ) * ( ( C= PU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) ) > 0.1 ) else 0 ) + RESOURCE_STALLS.SB ) ) ) * ( 1 - ( ( IDQ= _UOPS_NOT_DELIVERED.CORE / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2= ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) + ( ( UOPS_ISSUED= .ANY - ( UOPS_RETIRED.RETIRE_SLOTS ) + ( 4 ) * ( ( INT_MISC.RECOVERY_CY= CLES_ANY / 2 ) if #SMT_on else INT_MISC.RECOVERY_CYCLES ) ) / ( ( 4 ) = * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHA= LTED.THREAD ) ) ) ) + ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) * ( ( C= PU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THR= EAD ) ) ) ) ) ) ) )", - "MetricGroup": "Backend, TmaL2, Compute", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( 10 * ARITH.DIVIDER_UOPS / ( ( CPU_CLK_UN= HALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) = )", + "MetricExpr": "100 * cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,f= ilter_opc\\=3D0x182@ / ( cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_= opc\\=3D0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\= =3D0x182@ )", "MetricGroup": "", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related). 
Two distinct categories can be attributed into this met= ric: (1) heavy data-dependency among contiguous instructions would manifest= in this metric - such cases are often referred to as low Instruction Level= Parallelism (ILP). (2) Contention on some hardware execution unit other th= an Divider. For example; when there are too many multiply operations.", - "MetricExpr": "100 * ( ( ( ( ( min( CPU_CLK_UNHALTED.THREAD , C= YCLE_ACTIVITY.CYCLES_NO_EXECUTE ) ) + ( cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D0x1@ - ( cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x3@ if ( ( INST_RETIRED= .ANY / ( CPU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else cpu@UOPS_EXECUTED.COR= E\\,cmask\\=3D0x2@ ) ) / 2 - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * ( m= in( CPU_CLK_UNHALTED.THREAD , IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV= .CORE ) ) / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on = else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RESOURCE_STAL= LS.SB ) if #SMT_on else ( ( min( CPU_CLK_UNHALTED.THREAD , CYCLE_ACTI= VITY.CYCLES_NO_EXECUTE ) ) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x1@ - (= cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D0x3@ if ( ( INST_RETIRED.ANY / ( C= PU_CLK_UNHALTED.THREAD ) ) > 1.8 ) else cpu@UOPS_EXECUTED.CORE\\,cmask\\= =3D0x2@ ) - ( RS_EVENTS.EMPTY_CYCLES if ( ( ( 4 ) * ( min( CPU_CLK_UNHA= LTED.THREAD , IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE ) ) / ( ( = 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_= UNHALTED.THREAD ) ) ) ) > 0.1 ) else 0 ) + RESOURCE_STALLS.SB ) ) - RES= OURCE_STALLS.SB - ( min( CPU_CLK_UNHALTED.THREAD , CYCLE_ACTIVITY.STALL= S_LDM_PENDING ) ) ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "PortsUtil", - "MetricName": "tma_ports_utilization_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "100 * ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 ) *= ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNHAL= TED.THREAD ) ) ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. 
While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / ( ( 4 )= * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else ( CPU_CLK_UNH= ALTED.THREAD ) ) ) ) - ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_ISSUE= D.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY / 2 = ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) ) )", - "MetricGroup": "Retire, TmaL2", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "100 * ( ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS= _ISSUED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY= / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) ) )", - "MetricGroup": "Retire, TmaL2", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). These cases can often b= e avoided.", - "MetricExpr": "100 * ( ( ( UOPS_RETIRED.RETIRE_SLOTS ) / UOPS_I= SSUED.ANY ) * IDQ.MS_UOPS / ( ( 4 ) * ( ( CPU_CLK_UNHALTED.THREAD_ANY = / 2 ) if #SMT_on else ( CPU_CLK_UNHALTED.THREAD ) ) ) )", - "MetricGroup": "MicroSeq", - "MetricName": "tma_microcode_sequencer_percent", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" } ] diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.js= on b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json index 3e48ff3516b0..eb0a05fbb704 100644 --- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json +++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json @@ -981,36 +981,34 @@ "Unit": "QPI LL" }, { - "BriefDescription": "Number of data flits transmitted . Derived fr= om unc_q_txl_flits_g0.data", + "BriefDescription": "Flits Transferred - Group 0; Data Tx Flits", "Counter": "0,1,2,3", - "EventName": "QPI_DATA_BANDWIDTH_TX", + "EventName": "UNC_Q_TxL_FLITS_G0.DATA", "PerPkg": "1", - "ScaleUnit": "8Bytes", "UMask": "0x2", "Unit": "QPI LL" }, { - "BriefDescription": "Number of data flits transmitted ", + "BriefDescription": "Number of data flits transmitted . Derived fr= om unc_q_txl_flits_g0.data", "Counter": "0,1,2,3", - "EventName": "UNC_Q_TxL_FLITS_G0.DATA", + "EventName": "QPI_DATA_BANDWIDTH_TX", "PerPkg": "1", "ScaleUnit": "8Bytes", "UMask": "0x2", "Unit": "QPI LL" }, { - "BriefDescription": "Number of non data (control) flits transmitte= d . Derived from unc_q_txl_flits_g0.non_data", + "BriefDescription": "Flits Transferred - Group 0; Non-Data protoco= l Tx Flits", "Counter": "0,1,2,3", - "EventName": "QPI_CTL_BANDWIDTH_TX", + "EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA", "PerPkg": "1", - "ScaleUnit": "8Bytes", "UMask": "0x4", "Unit": "QPI LL" }, { - "BriefDescription": "Number of non data (control) flits transmitte= d ", + "BriefDescription": "Number of non data (control) flits transmitte= d . 
Derived from unc_q_txl_flits_g0.non_data", "Counter": "0,1,2,3", - "EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA", + "EventName": "QPI_CTL_BANDWIDTH_TX", "PerPkg": "1", "ScaleUnit": "8Bytes", "UMask": "0x4", diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json b/t= ools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json index db3418db312e..c003daa9ed8c 100644 --- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json @@ -72,20 +72,19 @@ "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", + "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM Re= ads (RD_CAS + Underfills)", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_READ", + "EventName": "UNC_M_CAS_COUNT.RD", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0x3", "Unit": "iMC" }, { - "BriefDescription": "read requests to memory controller", + "BriefDescription": "read requests to memory controller. Derived f= rom unc_m_cas_count.rd", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.RD", + "EventName": "LLC_MISSES.MEM_READ", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0x3", @@ -110,20 +109,19 @@ "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller. Derived = from unc_m_cas_count.wr", + "BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR= _CAS (both Modes)", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "LLC_MISSES.MEM_WRITE", + "EventName": "UNC_M_CAS_COUNT.WR", "PerPkg": "1", - "ScaleUnit": "64Bytes", "UMask": "0xC", "Unit": "iMC" }, { - "BriefDescription": "write requests to memory controller", + "BriefDescription": "write requests to memory controller. 
Derived = from unc_m_cas_count.wr", "Counter": "0,1,2,3", "EventCode": "0x4", - "EventName": "UNC_M_CAS_COUNT.WR", + "EventName": "LLC_MISSES.MEM_WRITE", "PerPkg": "1", "ScaleUnit": "64Bytes", "UMask": "0xC", diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 63a0e98fd116..ddc9fc8b7171 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -9,7 +9,7 @@ GenuineIntel-6-9[6C],v1.03,elkhartlake,core GenuineIntel-6-5[CF],v13,goldmont,core GenuineIntel-6-7A,v1.01,goldmontplus,core GenuineIntel-6-(3C|45|46),v32,haswell,core -GenuineIntel-6-3F,v25,haswellx,core +GenuineIntel-6-3F,v26,haswellx,core GenuineIntel-6-(7D|7E|A7),v1.14,icelake,core GenuineIntel-6-6[AC],v1.15,icelakex,core GenuineIntel-6-3A,v22,ivybridge,core --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 
Date: Fri, 30 Sep 2022 23:06:26 -0700 In-Reply-To: <20221001060636.2661983-1-irogers@google.com> Message-Id: <20221001060636.2661983-13-irogers@google.com> Mime-Version: 1.0 References: <20221001060636.2661983-1-irogers@google.com> X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog Subject: [PATCH v2 12/22] perf vendor events: Update Intel icelake From: Ian Rogers To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Stephane Eranian , Ian Rogers Content-Transfer-Encoding: quoted-printable List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Events are updated to v1.15, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py with updates at: https://github.com/captain5050/event-converter-for-linux-perf Updates include: - Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound. - Addition of all 6 levels of TMA metrics. Previously metrics involving topdown events were dropped. Child metrics are placed in a group named after their parent allowing children of a metric to be easily measured using the metric name with a _group suffix. - ## and ##? operators are correctly expanded. - The locate-with column is added to the long description describing a sampling event. - Metrics are written in terms of other metrics to reduce the expression size and increase readability. Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../pmu-events/arch/x86/icelake/cache.json | 6 +- .../arch/x86/icelake/icl-metrics.json | 808 +++++++++++++++++- .../pmu-events/arch/x86/icelake/pipeline.json | 2 +- tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +- 4 files changed, 766 insertions(+), 52 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/icelake/cache.json b/tools/perf= /pmu-events/arch/x86/icelake/cache.json index b4f28f24ee63..0f6b918484d5 100644 --- a/tools/perf/pmu-events/arch/x86/icelake/cache.json +++ b/tools/perf/pmu-events/arch/x86/icelake/cache.json @@ -18,13 +18,13 @@ "EventCode": "0x48", "EventName": "L1D_PEND_MISS.FB_FULL", "PEBScounters": "0,1,2,3", - "PublicDescription": "Counts number of cycles a demand request has= waited due to L1D Fill Buffer (FB) unavailablability. Demand requests incl= ude cacheable/uncacheable demand load, store, lock or SW prefetch accesses.= ", + "PublicDescription": "Counts number of cycles a demand request has= waited due to L1D Fill Buffer (FB) unavailability. 
Demand requests include= cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", "SampleAfterValue": "1000003", "Speculative": "1", "UMask": "0x2" }, { - "BriefDescription": "Number of phases a demand request has waited = due to L1D Fill Buffer (FB) unavailablability.", + "BriefDescription": "Number of phases a demand request has waited = due to L1D Fill Buffer (FB) unavailability.", "CollectPEBSRecord": "2", "Counter": "0,1,2,3", "CounterMask": "1", @@ -32,7 +32,7 @@ "EventCode": "0x48", "EventName": "L1D_PEND_MISS.FB_FULL_PERIODS", "PEBScounters": "0,1,2,3", - "PublicDescription": "Counts number of phases a demand request has= waited due to L1D Fill Buffer (FB) unavailablability. Demand requests incl= ude cacheable/uncacheable demand load, store, lock or SW prefetch accesses.= ", + "PublicDescription": "Counts number of phases a demand request has= waited due to L1D Fill Buffer (FB) unavailability. Demand requests include= cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", "SampleAfterValue": "1000003", "Speculative": "1", "UMask": "0x2" diff --git a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json b/tool= s/perf/pmu-events/arch/x86/icelake/icl-metrics.json index f0356d66a927..3b5ef09eb8ef 100644 --- a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json @@ -1,26 +1,716 @@ [ + { + "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", + "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UO= P_DROPPING / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "(5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.COR= E - INT_MISC.UOP_DROPPING) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. 
Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "ICACHE_16B.IFDATA_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES /= CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. 
Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "10 * BACLEARS.ANY / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. 
Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / CORE_C= LKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re (only) 4 uops were delivered by the MITE pipeline", + "MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=3D4@ - cpu@IDQ.MITE_UO= PS\\,cmask\\=3D5@) / CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_mite_4wide", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / CORE_CLK= S / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to LSD (Loop Stream Detector) unit", + "MetricExpr": "(LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / CORE_CLKS / 2= ", + "MetricGroup": "FetchBW;LSD;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_lsd", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to LSD (Loop Stream Detector) unit. 
= LSD typically does well sustaining Uop supply. However; in some rare cases= ; optimal uop-delivery could not be reached for small loops whose size (in = terms of number of uops) does not suit well the LSD structure.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", + "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + t= ma_retiring), 0)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts= )", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", + "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + (5 * cpu@IN= T_MISC.RECOVERY_CYCLES\\,cmask\\=3D1\\,edge@) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. 
+    {
+        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
+        "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + (5 * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=1\\,edge@) / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+        "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * tma_backend_bound",
+        "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_memory_bound",
+        "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincide with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+        "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / CLKS, 0)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l1_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+        "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_dtlb_load",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses, that later on hit in second-level TLB (STLB)",
+        "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group",
+        "MetricName": "tma_load_stlb_hit",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk",
+        "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group",
+        "MetricName": "tma_load_stlb_miss",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+        "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_store_fwd_blk",
+        "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+        "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS",
+        "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_lock_latency",
+        "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
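tma_dtlb_load above is a clamped cost model: 7 cycles are charged per first-level miss that hit the STLB, plus the cycles a hardware page walk was in flight, capped by the memory stall cycles actually observed; the two STLB children then split that cost. A small Python sketch (all counts invented for illustration):

def dtlb_load_breakdown(stlb_hits, walk_active, cycles_mem_any, cycles_l1d_miss, clks):
    raw_cost = 7 * stlb_hits + walk_active            # modeled TLB cycles
    cap = max(cycles_mem_any - cycles_l1d_miss, 0)    # observed stall budget
    dtlb_load = min(raw_cost, cap) / clks             # tma_dtlb_load
    stlb_miss = walk_active / clks                    # tma_load_stlb_miss
    return dtlb_load, dtlb_load - stlb_miss, stlb_miss  # last two: hit, miss

# e.g. -> (0.019, 0.014, 0.005)
print(dtlb_load_breakdown(2000, 5000, 60000, 30000, 1000000))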
+    {
+        "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache line boundary",
+        "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_split_loads",
+        "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
+        "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_4k_aliasing",
+        "PublicDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset. False match is possible; which incurs a few cycles of load re-issue. However; the short re-issue duration is often hidden by the out-of-order core and HW optimizations; hence a user may safely ignore a high value of this metric unless it manages to propagate up into parent nodes of the hierarchy (e.g. to L1_Bound).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+        "MetricExpr": "L1D_PEND_MISS.FB_FULL / CLKS",
+        "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_fb_full",
+        "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints at approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + L1D_PEND_MISS.FB_FULL_PERIODS)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+        "MetricExpr": "((29 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM + (23.5 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_contested_accesses",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+        "MetricExpr": "(23.5 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_data_sharing",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+        "MetricExpr": "(9 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS",
+        "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_l3_hit_latency",
+        "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
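The three siblings above share one pattern: a fixed per-event cost (the 29, 23.5 and 9 factors, frequency-adjusted via Average_Frequency) times the event count, with a small credit for misses that overlap in the fill buffers. A sketch of that shape, with the weights taken from the expressions above and all counts invented:

def snoop_cost(cost_factor, avg_frequency, hits, fb_hit, l1_miss, clks):
    overlap = 1 + (fb_hit / l1_miss) / 2   # fill-buffer overlap credit, as in the exprs
    return cost_factor * avg_frequency * hits * overlap / clks

# tma_l3_hit_latency-style term: unloaded L3 hits, factor 9, Average_Frequency 1.2
print(snoop_cost(9, 1.2, hits=10000, fb_hit=3000, l1_miss=6000, clks=1000000))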
+    {
+        "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+        "MetricExpr": "L1D_PEND_MISS.L2_STALL / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_sq_full",
+        "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). The Super Queue is used for requests to access the L2 cache or to go out to the Uncore.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bound)",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_dram_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_bandwidth",
+        "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_latency",
+        "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write",
+        "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((L2_RQSTS.RFO_HIT * 10 * (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "(32.5 * Average_Frequency) * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores",
+        "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_streaming_stores",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+        "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_hit",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+        "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_miss",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck",
+        "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+        "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS",
+        "MetricGroup": "TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_divider",
+        "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+        "MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CLKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY)) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_ports_utilization",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / CLKS + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_0",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
+        "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / CLKS",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_0_group",
+        "MetricName": "tma_serializing_operation",
+        "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
+        "MetricExpr": "140 * MISC_RETIRED.PAUSE_INST / CLKS",
+        "MetricGroup": "TopdownL6;tma_serializing_operation_group",
+        "MetricName": "tma_slow_pause",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+        "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_0_group",
+        "MetricName": "tma_mixing_vectors",
+        "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic.",
+        "ScaleUnit": "100%"
+    },
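Several of the leaf nodes above are plain fixed-penalty models: a documented cycle cost per event divided by total cycles, e.g. 140 cycles per retired PAUSE for tma_slow_pause (and 13 per blocked store-forward earlier). In Python terms, with an invented event count:

def fixed_penalty_fraction(events, cycles_per_event, clks):
    # e.g. tma_slow_pause = 140 * MISC_RETIRED.PAUSE_INST / CLKS
    return events * cycles_per_event / clks

# 1000 PAUSEs at 140 cycles each over 10M cycles -> 0.014, i.e. 1.4% of cycles
print(fixed_penalty_fraction(1000, 140, 10000000))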
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_1",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_2",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_3m",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+        "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_alu_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch) Sample with: UOPS_DISPATCHED.PORT_0",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_0 / CORE_CLKS",
+        "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_0",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHED.PORT_1",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_1 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_1",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU) Sample with: UOPS_DISPATCHED.PORT_5",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_5 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_5",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU) Sample with: UOPS_DISPATCHED.PORT_6",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_6 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_6",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations Sample with: UOPS_DISPATCHED.PORT_2_3",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_2_3 / (2 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_load_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations Sample with: UOPS_DISPATCHED.PORT_7_8",
+        "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_store_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
+        "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_retiring",
+        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+        "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_light_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+        "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+        "MetricGroup": "HPC;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_fp_arith",
+        "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+        "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD",
+        "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_x87_use",
+        "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) / (tma_retiring * SLOTS)",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_scalar",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * SLOTS)",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_vector",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE) / (tma_retiring * SLOTS)",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_128b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * SLOTS)",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_256b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors",
+        "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * SLOTS)",
+        "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group",
+        "MetricName": "tma_fp_vector_512b",
+        "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+        "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_memory_operations",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.",
+        "MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * SLOTS)",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_branch_instructions",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+        "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * SLOTS)",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_nop_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+        "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions + tma_nop_instructions))",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_other_light_ops",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "tma_microcode_sequencer + tma_retiring * (UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=1@) / IDQ.MITE_UOPS",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+        "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
+        "MetricGroup": "TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_few_uops_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+        "MetricExpr": "((tma_retiring * SLOTS) / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS",
+        "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_microcode_sequencer",
+        "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+        "MetricExpr": "100 * ASSISTS.ANY / SLOTS",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_assists",
+        "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+        "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+        "MetricGroup": "TopdownL4;tma_microcode_sequencer_group",
+        "MetricName": "tma_cisc",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+        "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+        "MetricGroup": "Bad;BadSpec;BrMispredicts",
+        "MetricName": "Mispredictions"
+    },
+    {
+        "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
+        "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) ",
+        "MetricGroup": "Mem;MemoryBW;Offcore",
+        "MetricName": "Memory_Bandwidth"
+    },
+    {
+        "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
+        "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + (tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)))",
+        "MetricGroup": "Mem;MemoryLat;Offcore",
+        "MetricName": "Memory_Latency"
+    },
+    {
+        "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+        "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores))) ",
+        "MetricGroup": "Mem;MemoryTLB;Offcore",
+        "MetricName": "Memory_Data_TLBs"
+    },
     {
         "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
-        "MetricExpr": "100 * (( BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) ) / TOPDOWN.SLOTS)",
+        "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL)) / SLOTS)",
         "MetricGroup": "Ret",
         "MetricName": "Branching_Overhead"
     },
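The Mispredictions/Memory_* aggregates above all use the same drill-down arithmetic: walk the TMA tree from a parent node to a leaf, multiplying by the leaf's share among its siblings at each level. A generic Python sketch of that weighting (node values invented, names abbreviated):

def weighted_leaf_cost(parent_fraction, *levels):
    # levels: (chosen_node, {sibling_name: value, ...}) per tree level
    cost = parent_fraction
    for pick, siblings in levels:
        cost *= siblings[pick] / sum(siblings.values())
    return 100 * cost

# DRAM-bandwidth term of Memory_Bandwidth: 40% memory bound, DRAM is half of
# that, bandwidth is three quarters of DRAM -> 15 (percent of slots)
print(weighted_leaf_cost(
    0.4,
    ("dram", {"dram": 0.2, "l1": 0.05, "l2": 0.03, "l3": 0.07, "store": 0.05}),
    ("bw", {"bw": 0.15, "lat": 0.05})))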
- "MetricExpr": "100 * (( 5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE - INT_MISC.UOP_DROPPING ) / TOPDOWN.SLOTS) * ( (ICACHE_64B.IFTAG_= STALL / CPU_CLK_UNHALTED.THREAD) + (ICACHE_16B.IFDATA_STALL / CPU_CLK_UNHAL= TED.THREAD) + (10 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(( 5 * IDQ= _UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING ) / TO= PDOWN.SLOTS)", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "Big_Code" }, + { + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", + "MetricGroup": "Fed;FetchBW;Frontend", + "MetricName": "Instruction_Fetch_BW" + }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "(tma_retiring * SLOTS) / INST_RETIRED.ANY", + "MetricGroup": "Pipeline;Ret;Retire", + "MetricName": "UPI" + }, + { + "BriefDescription": "Instruction per taken branch", + "MetricExpr": "(tma_retiring * SLOTS) / BR_INST_RETIRED.NEAR_TAKEN= ", + "MetricGroup": "Branches;Fed;FetchBW", + "MetricName": "UpTB" + }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -32,13 +722,13 @@ { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", "MetricExpr": "TOPDOWN.SLOTS", - "MetricGroup": "TmaL1", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, { "BriefDescription": "Fraction of Physical Core issue-slots utilize= d by this Logical Processor", - "MetricExpr": "TOPDOWN.SLOTS / ( TOPDOWN.SLOTS / 2 ) if #SMT_on el= se 1", - "MetricGroup": "SMT;TmaL1", + "MetricExpr": "SLOTS / (TOPDOWN.SLOTS / 2) if #SMT_on else 1", + "MetricGroup": "SMT;tma_L1_group", "MetricName": "Slots_Utilization" }, { @@ -50,29 +740,35 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512= B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * 
FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARI= TH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKE= D_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.5= 12B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU= _CLK_UNHALTED.DISTRIBUTED )", + "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_IN= ST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_= ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.51= 2B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)) / (2 * CORE_C= LKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, + { + "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", + "MetricGroup": "Cor;SMT", + "MetricName": "Core_Bound_Likely" + }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED", @@ -117,13 +813,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.25= 6B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARI= TH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE)", 
"MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SI= NGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SIN= GLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -144,21 +840,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." 
@@ -170,11 +866,17 @@
         "MetricName": "IpSWPF"
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
+    {
+        "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
+        "MetricExpr": "(tma_retiring * SLOTS) / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+        "MetricGroup": "Pipeline;Ret",
+        "MetricName": "Retire"
+    },
     {
         "BriefDescription": "",
         "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
@@ -205,6 +907,12 @@
         "MetricGroup": "DSBmiss",
         "MetricName": "DSB_Switch_Cost"
     },
+    {
+        "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
+        "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))",
+        "MetricGroup": "DSBmiss;Fed",
+        "MetricName": "DSB_Misses"
+    },
     {
         "BriefDescription": "Number of Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
         "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
@@ -217,6 +925,12 @@
         "MetricGroup": "Bad;BadSpec;BrMispredicts",
         "MetricName": "IpMispredict"
     },
+    {
+        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+        "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_RETIRED.ALL_BRANCHES",
+        "MetricGroup": "Bad;BrMispredicts",
+        "MetricName": "Branch_Misprediction_Cost"
+    },
     {
         "BriefDescription": "Fraction of branches that are non-taken conditionals",
         "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES",
@@ -231,7 +945,7 @@
     },
     {
         "BriefDescription": "Fraction of branches that are CALL or RET",
-        "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES",
+        "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;Branches",
         "MetricName": "CallRet"
     },
@@ -243,74 +957,74 @@
     },
     {
         "BriefDescription": "Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)",
-        "MetricExpr": "1 - ( (BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES) + (BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHES) + (( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES) + ((BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES) )",
+        "MetricExpr": "1 - (Cond_NT + Cond_TK + CallRet + Jump)",
         "MetricGroup": "Bad;Branches",
         "MetricName": "Other_Branches"
     },
     {
         "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
-        "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )",
+        "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
         "MetricGroup": "Mem;MemoryBound;MemoryLat",
         "MetricName": "Load_Miss_Real_Latency"
     },
     {
         "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
         "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
-        "MetricGroup": "Mem;MemoryBound;MemoryBW",
+        "MetricGroup": "Mem;MemoryBW;MemoryBound",
         "MetricName": "MLP"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;Backend;CacheMisses",
+        "MetricGroup": "Backend;CacheMisses;Mem",
         "MetricName": "L2MPKI"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
-        "MetricExpr": "1000 * ( ( OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQUESTS.DEMAND_DATA_RD ) + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS ) / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses;Offcore",
+        "MetricExpr": "1000 * ((OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQUESTS.DEMAND_DATA_RD) + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS) / Instructions",
+        "MetricGroup": "CacheMisses;Mem;Offcore",
         "MetricName": "L2MPKI_All"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_Load"
     },
     {
         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
        "MetricName": "L3MPKI"
     },
     {
         "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "FB_HPKI"
     },
     {
         "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
         "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.DISTRIBUTED )",
+        "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (2 * CORE_CLKS)",
         "MetricGroup": "Mem;MemoryTLB",
         "MetricName": "Page_Walks_Utilization"
     },
@@ -340,25 +1054,25 @@
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
-        "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)",
+        "MetricExpr": "L1D_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L1D_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
-        "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)",
+        "MetricExpr": "L2_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L2_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L3_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Access_BW",
         "MetricGroup": "Mem;MemoryBW;Offcore",
         "MetricName": "L3_Cache_Access_BW_1T"
     },
@@ -370,40 +1084,40 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
-        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.DISTRIBUTED",
+        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License0_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes."
}, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 1", - "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.DI= STRIBUTED", + "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License1_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 1. This includes high current= AVX 256-bit instructions as well as low current AVX 512-bit instructions." }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 2 (introduced in SKX)", - "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.DI= STRIBUTED", + "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License2_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 2 (introduced in SKX). This i= ncludes high current AVX 512-bit instructions." @@ -428,7 +1142,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, diff --git a/tools/perf/pmu-events/arch/x86/icelake/pipeline.json b/tools/p= erf/pmu-events/arch/x86/icelake/pipeline.json index a017a4727050..c74a7369cff3 100644 --- a/tools/perf/pmu-events/arch/x86/icelake/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/icelake/pipeline.json @@ -167,7 +167,7 @@ "UMask": "0x10" }, { - "BriefDescription": "number of branch instructions retired that we= re mispredicted and taken. 
Non PEBS", + "BriefDescription": "number of branch instructions retired that we= re mispredicted and taken.", "CollectPEBSRecord": "2", "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index ddc9fc8b7171..1f5dbe176b3a 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -10,7 +10,7 @@ GenuineIntel-6-5[CF],v13,goldmont,core GenuineIntel-6-7A,v1.01,goldmontplus,core GenuineIntel-6-(3C|45|46),v32,haswell,core GenuineIntel-6-3F,v26,haswellx,core -GenuineIntel-6-(7D|7E|A7),v1.14,icelake,core +GenuineIntel-6-(7D|7E|A7),v1.15,icelake,core GenuineIntel-6-6[AC],v1.15,icelakex,core GenuineIntel-6-3A,v22,ivybridge,core GenuineIntel-6-3E,v21,ivytown,core --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E592AC433F5 for ; Sat, 1 Oct 2022 06:09:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229635AbiJAGJL (ORCPT ); Sat, 1 Oct 2022 02:09:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229597AbiJAGH3 (ORCPT ); Sat, 1 Oct 2022 02:07:29 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DA3F81AD78E for ; Fri, 30 Sep 2022 23:07:17 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-349423f04dbso61611167b3.13 for ; Fri, 30 Sep 2022 23:07:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=2BqQj3GTCt7BQHpTKnyjfGWWh4j7821k0PNeR8Si53g=; b=CkeGDiuxWoesYS61+l+Wk1lYeoTwkITQfix6UdVRzPkjG7SiWOzaw1CgJ4dPobumuD JI7S9VlqX/Vw4kOyj4m7ztS+gxgrc+9P9GQwrsGOvjfkJdQlLJFPKk0dxzg/rpMRy9Jr DjLe6FaAcLwCM9J2gwjx+tC16GFoyuOf9BfcgpUYbMzlOvew+waRslPqvKBXOxXipA8Z 57+4NNlzC7tp85cCsvfNRC01oBL+GrtcJH4JMUB02lJ/a3fv5qoAUzW4hY1xyrrij8ik jDmGsF/LPRFV2ldN0y17eJ//7GC1d99N/a52+au64a6ZSMXJJaN1rwhg3hMWVVVAHB/E CvZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:x-gm-message-state:from:to :cc:subject:date; bh=2BqQj3GTCt7BQHpTKnyjfGWWh4j7821k0PNeR8Si53g=; b=T0XaqgMpX6/PXBv8qCToZsOlMIatcglCi97FrU9sRK9uzpo81rAytjDS6HpsJv+U12 hktRCHs8r4HimJdQGNQctaaZfdG089RpfPOD+GeJGTyKnOHN8bfFDqGVTmiWUoJ7Hmtk 8N/wY32+bMzlrrZY19AHqUKxNRDpzI/6HrTihMW5nu52HxMrkuQuD01jcDT4E0FBjz4O QoEnPBHFC/VFPBpmxnideeB9Y+XlYVKSGWhw0FQ8wu8RGYzEtAhifveYYNySv4dIeo27 rk1lW+MNcNqOmsHP77mYjIj75a9NGu4HZXHXEBHgDvW4bcTKjlC0Kh7AHh9nL58QcRjW XMEQ== X-Gm-Message-State: ACrzQf1QuAMDev4MzLkWOijf8u7PMTzrhe9eWFKnxLUonNOKgKepLUd3 +brAy/cuUHSS/Rl5x9WC5zI5JokgD5Eo X-Google-Smtp-Source: AMsMyM4JVIhUPEbVivwugY0PL/7ioHVj3gm1YMSI+YrbwhWWdMtRqi0eaznErawIxJs4rlMa9TwPQ+67AIke X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a81:698a:0:b0:356:4bc:9ebf with SMTP id e132-20020a81698a000000b0035604bc9ebfmr7034587ywc.53.1664604437149; Fri, 30 
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:27 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-14-irogers@google.com>
Mime-Version: 1.0
References: <20221001060636.2661983-1-irogers@google.com>
X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog
Subject: [PATCH v2 13/22] perf vendor events: Update Intel icelakex
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
        caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
        samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
        Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
        Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry,
        James Clark, Kajol Jain, Thomas Richter, Miaoqian Lin,
        Florian Fischer, linux-perf-users@vger.kernel.org,
        linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Events are updated to v1.16; the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
 - Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
 - Addition of all 6 levels of TMA metrics. Previously metrics involving
   topdown events were dropped. Child metrics are placed in a group named
   after their parent, allowing the children of a metric to be easily
   measured using the metric name with a _group suffix.
 - ## and ##? operators are correctly expanded.
 - The locate-with column is added to the long description describing a
   sampling event.
 - Metrics are written in terms of other metrics to reduce the expression
   size and increase readability.
 - Latest metrics from:
   https://github.com/intel/perfmon-metrics

Tested with 'perf test':
 10: PMU events                                            :
 10.1: PMU event table sanity                              : Ok
 10.2: PMU event map aliases                               : Ok
 10.3: Parsing of PMU event table metrics                  : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs   : Ok

Signed-off-by: Ian Rogers
---
 .../pmu-events/arch/x86/icelakex/cache.json   |    6 +-
 .../arch/x86/icelakex/icx-metrics.json        | 1155 ++++++++++++-----
 .../arch/x86/icelakex/pipeline.json           |    2 +-
 .../arch/x86/icelakex/uncore-other.json       |    2 +-
 tools/perf/pmu-events/arch/x86/mapfile.csv    |    2 +-
 5 files changed, 833 insertions(+), 334 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/icelakex/cache.json b/tools/perf/pmu-events/arch/x86/icelakex/cache.json
index 775190bdd063..e4035b3e55ca 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/cache.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/cache.json
@@ -18,13 +18,13 @@
         "EventCode": "0x48",
         "EventName": "L1D_PEND_MISS.FB_FULL",
         "PEBScounters": "0,1,2,3",
-        "PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailablability.
Demand requests include= cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", "SampleAfterValue": "1000003", "Speculative": "1", "UMask": "0x2" }, { - "BriefDescription": "Number of phases a demand request has waited = due to L1D Fill Buffer (FB) unavailablability.", + "BriefDescription": "Number of phases a demand request has waited = due to L1D Fill Buffer (FB) unavailability.", "CollectPEBSRecord": "2", "Counter": "0,1,2,3", "CounterMask": "1", @@ -32,7 +32,7 @@ "EventCode": "0x48", "EventName": "L1D_PEND_MISS.FB_FULL_PERIODS", "PEBScounters": "0,1,2,3", - "PublicDescription": "Counts number of phases a demand request has= waited due to L1D Fill Buffer (FB) unavailablability. Demand requests incl= ude cacheable/uncacheable demand load, store, lock or SW prefetch accesses.= ", + "PublicDescription": "Counts number of phases a demand request has= waited due to L1D Fill Buffer (FB) unavailability. Demand requests include= cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", "SampleAfterValue": "1000003", "Speculative": "1", "UMask": "0x2" diff --git a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json b/too= ls/perf/pmu-events/arch/x86/icelakex/icx-metrics.json index e905458b34b8..b52afc34a169 100644 --- a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json @@ -1,22 +1,742 @@ [ + { + "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", + "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UO= P_DROPPING / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "(5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.COR= E - INT_MISC.UOP_DROPPING) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. 
Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "ICACHE_16B.IFDATA_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES /= CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. 
Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "10 * BACLEARS.ANY / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. 
Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / CORE_C= LKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re (only) 4 uops were delivered by the MITE pipeline", + "MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=3D4@ - cpu@IDQ.MITE_UO= PS\\,cmask\\=3D5@) / CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_mite_4wide", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / CORE_CLK= S / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", + "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + t= ma_retiring), 0)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. 
This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts= )", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", + "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + (5 * cpu@IN= T_MISC.RECOVERY_CYCLES\\,cmask\\=3D1\\,edge@) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. 
Sample with: TOPDOWN.BACKEND_BOUND_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUN= D_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + = tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) = * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE= _ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. 
Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. 
Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "L1D_PEND_MISS.FB_FULL / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (ME= M_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + L1D_PEND_MISS.FB_FULL= _PERIODS)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MI= SS) / CLKS)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STA= LLS_L3_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. 
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "((44 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRE= D.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L= 3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + (43.5 = * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_= RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(43.5 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIR= ED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - (OCR.DEMAND_DATA_RD.= L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA= _RD.L3_HIT.SNOOP_HIT_WITH_FWD)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOA= D_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(19 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT = * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. 
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "L1D_PEND_MISS.L2_STALL / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_AC= TIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bo= und) - tma_pmm_bound)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). 
This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "(43.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIR= ED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "(108 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. #link to NUMA article Sample = with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "((97 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_HITM + (97 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRED.REMOTE_= FWD) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLK= S", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. 
#link to NUMA article Sample with: MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a", + "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM= * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM= _LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD= _RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_HITM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) = / ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM_LOAD_L3_MISS_RETIRED.LOCAL_D= RAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LO= AD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + (MEM_LOAD_R= ETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) + (25 * (MEM_LOAD_RETIRED.LOC= AL_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 33 *= (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS))))))) * (CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CY= CLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma= _l2_bound)) if (1000000 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_R= ETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS) else 0)", + "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_memory_b= ound_group", + "MetricName": "tma_pmm_bound", + "PublicDescription": "This metric roughly estimates (based on idle= latencies) how often the CPU was stalled on accesses to external 3D-Xpoint= (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Mem= ory Module. ", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. Sample= with: MEM_INST_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 10 * (1 - (MEM_INST_RETIRED.LO= CK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOA= DS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_R= EQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. 
Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "(48 * Average_Frequency) * OCR.DEMAND_RFO.L3_HIT.SN= OOP_HITM / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.= L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to Streaming store memory accesses; Streaming store optimize out a = read request required by RFO stores", + "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_streaming_stores", + "PublicDescription": "This metric estimates how often CPU was stal= led due to Streaming store memory accesses; Streaming store optimize out a= read request required by RFO stores. Even though store accesses do not typ= ically stall out-of-order CPUs; there are few cases where stores can lead t= o actual stalls. This metric will be flagged should Streaming stores be a b= ottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ = + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. 
Sample w= ith: MEM_INST_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the TLB was missed by store accesses, hitting in the second-l= evel TLB (STLB)", + "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the STLB was missed by store accesses, performing a hardware page wal= k", + "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ + = (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / C= LKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVIT= Y.STALLS_MEM_ANY)) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACT= IVITY.2_PORTS_UTIL) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. 
For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\,umask\=0x80@ / CLKS + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions", + "MetricExpr": "37 * MISC_RETIRED.PAUSE_INST / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_slow_pause", + "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued", + "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors value over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. 
Sample with: EXE_ACTIVITY.1_PORTS_UTIL", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop. S= ample with: EXE_ACTIVITY.2_PORTS_UTIL", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 3 or more uops per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise). Sample with:= UOPS_EXECUTED.CYCLES_GE_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + = UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED.PORT_0", + "MetricExpr": "UOPS_DISPATCHED.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D.PORT_1", + "MetricExpr": "UOPS_DISPATCHED.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED.PORT_6", + "MetricExpr": "UOPS_DISPATCHED.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + 
"MetricExpr": "UOPS_DISPATCHED.PORT_2_3 / (2 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations Sample with: U= OPS_DISPATCHED.PORT_7_8", + "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_= 8) / (4 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", + "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdow= n\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). 
Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.TH= READ", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_P= ACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * = SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. 
May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 512-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_512b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 512-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", + "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_= RETIRED.ANY", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_memory_operations", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions.", + "MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES= / (tma_retiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_branch_instructions", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_reti= ring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_nop_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. Sample with: INST_RETIRED.NOP", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", + "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_m= emory_operations + tma_branch_instructions + tma_nop_instructions))", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_other_light_ops", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer + tma_retiring * (UOPS_DECO= DED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=3D1@) / IDQ.MITE_UOPS", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. 
This highly-correlates with the uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops", + "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer", + "MetricGroup": "TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_few_uops_instructions", + "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "((tma_retiring * SLOTS) / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists", + "MetricExpr": "100 * ASSISTS.ANY / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originating from CISC (complex instruction set computer) instructions", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originating from CISC (complex instruction set computer) instructions. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write, for example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" + }, + { + "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)= ) + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_= bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_a= ccesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + (tma_l1_= bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_= pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_= load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk= )) ", + "MetricGroup": "Mem;MemoryBW;Offcore", + "MetricName": "Memory_Bandwidth" + }, + { + "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) = + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bo= und + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contes= ted_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + (tma= _l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)))", + "MetricGroup": "Mem;MemoryLat;Offcore", + "MetricName": "Memory_Latency" + }, + { + "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_= 4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_lo= ads + tma_store_fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bou= nd + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma= _dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_= store_latency + tma_streaming_stores))) ", + "MetricGroup": "Mem;MemoryTLB;Offcore", + "MetricName": "Memory_Data_TLBs" + }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED= .NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 *= BR_INST_RETIRED.NEAR_CALL) ) / TOPDOWN.SLOTS)", + "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.= NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * = BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead" }, { "BriefDescription": 
"Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", - "MetricExpr": "100 * (( 5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE - INT_MISC.UOP_DROPPING ) / TOPDOWN.SLOTS) * ( (ICACHE_64B.IFTAG_= STALL / CPU_CLK_UNHALTED.THREAD) + (ICACHE_16B.IFDATA_STALL / CPU_CLK_UNHAL= TED.THREAD) + (10 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(( 5 * IDQ= _UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING ) / TO= PDOWN.SLOTS)", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "Big_Code" }, + { + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", + "MetricGroup": "Fed;FetchBW;Frontend", + "MetricName": "Instruction_Fetch_BW" + }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "(tma_retiring * SLOTS) / INST_RETIRED.ANY", + "MetricGroup": "Pipeline;Ret;Retire", + "MetricName": "UPI" + }, + { + "BriefDescription": "Instruction per taken branch", + "MetricExpr": "(tma_retiring * SLOTS) / BR_INST_RETIRED.NEAR_TAKEN= ", + "MetricGroup": "Branches;Fed;FetchBW", + "MetricName": "UpTB" + }, + { + "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", + "MetricName": "CPI" + }, { "BriefDescription": "Per-Logical Processor actual clocks when the = Logical Processor is active.", "MetricExpr": "CPU_CLK_UNHALTED.THREAD", @@ -26,13 +746,13 @@ { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", "MetricExpr": "TOPDOWN.SLOTS", - "MetricGroup": "TmaL1", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, { "BriefDescription": "Fraction of Physical Core issue-slots utilize= d by this Logical Processor", - "MetricExpr": "TOPDOWN.SLOTS / ( TOPDOWN.SLOTS / 2 ) if #SMT_on el= se 1", - "MetricGroup": "SMT;TmaL1", + "MetricExpr": "SLOTS / (TOPDOWN.SLOTS / 2) if #SMT_on else 1", + "MetricGroup": "SMT;tma_L1_group", "MetricName": "Slots_Utilization" }, { @@ -44,29 +764,35 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512= B_PACKED_SINGLE ) / 
CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARI= TH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKE= D_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.5= 12B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU= _CLK_UNHALTED.DISTRIBUTED )", + "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_IN= ST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_= ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.51= 2B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)) / (2 * CORE_C= LKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." 
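
To make the vector-width weighting in the FLOPc expression above concrete: each packed-FP retirement event is scaled by the number of FP elements an instruction of that width carries (2 doubles or 4 singles per 128 bits, up to 16 singles per 512 bits). Below is a minimal standalone C sketch; the counter values are invented for illustration, and only the 1/2/4/8/16 weights and the CORE_CLKS divisor come from the expression itself:

#include <stdio.h>

/* Hypothetical counter readings; real values come from perf. */
int main(void)
{
	double scalar_single = 1000, scalar_double = 500;
	double p128d = 200, p128s = 100, p256d = 50, p256s = 25;
	double p512d = 10, p512s = 5;
	double core_clks = 100000;

	/* Weight each packed event by FP elements per instruction. */
	double flops = 1 * (scalar_single + scalar_double)
		     + 2 * p128d
		     + 4 * (p128s + p256d)
		     + 8 * (p256s + p512d)
		     + 16 * p512s;

	printf("FLOPc = %f\n", flops / core_clks);
	return 0;
}

Note an FMA retires as one instruction but performs two FP operations, which is why the surrounding descriptions repeatedly warn about FMA double counting.
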
}, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, + { + "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", + "MetricGroup": "Cor;SMT", + "MetricName": "Core_Bound_Likely" + }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED", @@ -111,13 +837,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.25= 6B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARI= TH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SI= NGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SIN= GLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." 
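
The new Core_Bound_Likely expression above uses the metric language's Python-style "A if cond else B" form, where the trailing condition binds last. A rough C rendering of that evaluation order is below; the input values are hypothetical and the code mirrors the semantics rather than reusing perf's expr implementation:

#include <stdio.h>

int main(void)
{
	/* Hypothetical metric inputs. */
	double tma_core_bound = 0.2;
	double tma_ports_utilization = 0.5;
	double smt_2t_utilization = 0.8;

	/* "(1 - a / b if a < b else 1) if smt > 0.5 else 0":
	 * the outer condition is tested first, so the whole
	 * left-hand expression only matters when SMT utilization
	 * is above one half. */
	double core_bound_likely =
		smt_2t_utilization > 0.5
			? (tma_core_bound < tma_ports_utilization
				? 1 - tma_core_bound / tma_ports_utilization
				: 1)
			: 0;

	printf("Core_Bound_Likely = %f\n", core_bound_likely);
	return 0;
}
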
@@ -138,21 +864,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." 
@@ -164,11 +890,17 @@ "MetricName": "IpSWPF" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, + { + "BriefDescription": "Average number of Uops retired in cycles wher= e at least one uop has retired.", + "MetricExpr": "(tma_retiring * SLOTS) / cpu@UOPS_RETIRED.SLOTS\\,c= mask\\=3D1@", + "MetricGroup": "Pipeline;Ret", + "MetricName": "Retire" + }, { "BriefDescription": "", "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,c= mask\\=3D1@", @@ -193,6 +925,12 @@ "MetricGroup": "DSBmiss", "MetricName": "DSB_Switch_Cost" }, + { + "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", + "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_= branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + = tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tm= a_mite))", + "MetricGroup": "DSBmiss;Fed", + "MetricName": "DSB_Misses" + }, { "BriefDescription": "Number of Instructions per non-speculative DS= B miss (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS", @@ -205,6 +943,12 @@ "MetricGroup": "Bad;BadSpec;BrMispredicts", "MetricName": "IpMispredict" }, + { + "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", + "MetricGroup": "Bad;BrMispredicts", + "MetricName": "Branch_Misprediction_Cost" + }, { "BriefDescription": "Fraction of branches that are non-taken condi= tionals", "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_B= RANCHES", @@ -219,7 +963,7 @@ }, { "BriefDescription": "Fraction of branches that are CALL or RET", - "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_= RETURN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_R= ETURN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": "CallRet" }, @@ -231,74 +975,74 @@ }, { "BriefDescription": "Fraction of branches of other types (not indi= vidually covered by other metrics in Info.Branches group)", - "MetricExpr": "1 - ( (BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRE= D.ALL_BRANCHES) + (BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHE= S) + (( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST= _RETIRED.ALL_BRANCHES) + ((BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.CON= D_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES) )", + "MetricExpr": "1 - (Cond_NT + Cond_TK + CallRet + Jump)", "MetricGroup": "Bad;Branches", "MetricName": "Other_Branches" }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS = + MEM_LOAD_RETIRED.FB_HIT )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS += MEM_LOAD_RETIRED.FB_HIT)", "MetricGroup": 
"Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI_Load" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", - "MetricExpr": "1000 * ( ( OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_R= EQUESTS.DEMAND_DATA_RD ) + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS ) = / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricExpr": "1000 * ((OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQ= UESTS.DEMAND_DATA_RD) + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS) / In= structions", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "FB_HPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.DISTRIB= UTED )", + "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING) / (2 * CORE_CLKS)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, @@ -328,37 +1072,37 @@ }, { "BriefDescription": "Rate 
of silent evictions from the L2 cache pe= r Kilo instruction where the evicted lines are dropped (no writeback to L3 = or memory)", - "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_Silent_PKI" }, { "BriefDescription": "Rate of non silent evictions from the L2 cach= e per Kilo instruction", - "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_NonSilent_PKI" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", - "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / = duration_time)", + "MetricExpr": "L3_Cache_Access_BW", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "L3_Cache_Access_BW_1T" }, @@ -370,40 +1114,40 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_= DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE = + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.5= 12B_PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUB= LE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACK= ED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." 
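
For the DRAM_BW_Use expression above, each CAS command transfers one 64-byte cache line, so the formula reduces to bytes over wall-clock seconds. A small C sketch with invented counter readings:

#include <stdio.h>

int main(void)
{
	/* Hypothetical uncore IMC counter readings. */
	double cas_count_read  = 3.0e9;
	double cas_count_write = 1.0e9;
	double duration_time   = 2.0;	/* seconds */

	/* Each CAS moves one 64-byte line; divide by 1e9 for GB
	 * and by elapsed time for GB/s. */
	double gb = 64 * (cas_count_read + cas_count_write) / 1e9;

	printf("DRAM_BW_Use = %.2f GB/s\n", gb / duration_time);
	return 0;
}
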
}, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for baseline license level 0", - "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.DI= STRIBUTED", + "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License0_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for baseline license level 0. This includes non= -AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes." }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 1", - "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.DI= STRIBUTED", + "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License1_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 1. This includes high current= AVX 256-bit instructions as well as low current AVX 512-bit instructions." }, { "BriefDescription": "Fraction of Core cycles where the core was ru= nning with power-delivery for license level 2 (introduced in SKX)", - "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.DI= STRIBUTED", + "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CORE_CLKS", "MetricGroup": "Power", "MetricName": "Power_License2_Utilization", "PublicDescription": "Fraction of Core cycles where the core was r= unning with power-delivery for license level 2 (introduced in SKX). This i= ncludes high current AVX 512-bit instructions." @@ -428,13 +1172,13 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, { "BriefDescription": "Average latency of data read request to exter= nal memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches= ", - "MetricExpr": "1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / = UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( cha_0@event\\=3D0x0@ / duration_time = )", + "MetricExpr": "1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / U= NC_CHA_TOR_INSERTS.IA_MISS_DRD) / (Socket_CLKS / duration_time)", "MetricGroup": "Mem;MemoryLat;SoC", "MetricName": "MEM_Read_Latency" }, @@ -446,38 +1190,38 @@ }, { "BriefDescription": "Average latency of data read request to exter= nal 3D X-Point memory [in nanoseconds]. 
Accounts for demand loads and L1/L2= data-read prefetches", - "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM ) / cha_0@event\\=3D0x0@ )", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": "(1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PM= M / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / cha_0@event\\=3D0x0@)", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_PMM_Read_Latency" }, { "BriefDescription": "Average latency of data read request to exter= nal DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-= read prefetches", - "MetricExpr": " 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_D= DR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / cha_0@event\\=3D0x0@", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": " 1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DD= R / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / cha_0@event\\=3D0x0@", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_DRAM_Read_Latency" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [= GB / sec]", - "MetricExpr": "( ( 64 * imc@event\\=3D0xe3@ / 1000000000 ) / durat= ion_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * imc@event\\=3D0xe3@ / 1000000000) / duration= _time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Read_BW" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes = [GB / sec]", - "MetricExpr": "( ( 64 * imc@event\\=3D0xe7@ / 1000000000 ) / durat= ion_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * imc@event\\=3D0xe7@ / 1000000000) / duration= _time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Write_BW" }, { "BriefDescription": "Average IO (network or disk) Bandwidth Use fo= r Writes [GB / sec]", "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1000000000 /= duration_time", - "MetricGroup": "IoBW;Mem;SoC;Server", + "MetricGroup": "IoBW;Mem;Server;SoC", "MetricName": "IO_Write_BW" }, { "BriefDescription": "Average IO (network or disk) Bandwidth Use fo= r Reads [GB / sec]", - "MetricExpr": "( UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INS= ERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_= INSERTS.IO_MISS_ITOMCACHENEAR ) * 64 / 1000000000 / duration_time", - "MetricGroup": "IoBW;Mem;SoC;Server", + "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSE= RTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_I= NSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1000000000 / duration_time", + "MetricGroup": "IoBW;Mem;Server;SoC", "MetricName": "IO_Read_BW" }, { @@ -486,12 +1230,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "cha_0@event\\=3D0x0@ / #num_dies / duration_time / = 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -523,11 +1261,10 @@ "MetricName": "C6_Pkg_Residency" }, { - "BriefDescription": "Percentage of time spent in the active CPU po= wer state C0", - "MetricExpr": "100 * CPU_CLK_UNHALTED.REF_TSC / TSC", - "MetricGroup": "", - "MetricName": "cpu_utilization_percent", - "ScaleUnit": "1%" + "BriefDescription": "Uncore 
frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" }, { "BriefDescription": "CPU operating frequency (in GHz)", @@ -536,13 +1273,6 @@ "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY", @@ -561,7 +1291,7 @@ "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { @@ -589,7 +1319,7 @@ "BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { @@ -615,42 +1345,42 @@ }, { "BriefDescription": "Ratio of number of code read requests missing= last level core cache (includes demand w/ prefetches) to the total number = of completed instructions", - "MetricExpr": "( UNC_CHA_TOR_INSERTS.IA_MISS_CRD ) / INST_RETIRED.= ANY", + "MetricExpr": "( UNC_CHA_TOR_INSERTS.IA_MISS_CRD + UNC_CHA_TOR_INS= ERTS.IA_MISS_CRD_PREF ) / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "llc_code_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) in nano seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D / UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( UNC_CHA_CLOCKTICKS / ( source_cou= nt(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages ) ) ) * duration_time= )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD = / UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( UNC_CHA_CLOCKTICKS / ( source_count= (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to local memory in nano= seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL ) / ( UNC_CHA_CLOCKTICKS / = ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages ) )= ) * duration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL ) / ( UNC_CHA_CLOCKTICKS / ( = source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages ) ) )= * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency_for_local_request= s", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read 
memory access) addressed to remote memory in nan= o seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE ) / ( UNC_CHA_CLOCKTICKS = / ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages = ) ) ) * duration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE ) / ( UNC_CHA_CLOCKTICKS / = ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages ) = ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency_for_remote_reques= ts", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to Intel(R) Optane(TM) = Persistent Memory(PMEM) in nano seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM ) / ( UNC_CHA_CLOCKTICKS / ( so= urce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages ) ) ) * d= uration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM ) / ( UNC_CHA_CLOCKTICKS / ( sour= ce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages ) ) ) * dur= ation_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_to_pmem_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to DRAM in nano seconds= ", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / ( UNC_CHA_CLOCKTICKS / ( so= urce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages ) ) ) * d= uration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / ( UNC_CHA_CLOCKTICKS / ( sour= ce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages ) ) ) * dur= ation_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_to_dram_latency", "ScaleUnit": "1ns" @@ -694,14 +1424,14 @@ "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", "MetricExpr": "100 * ( UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC= _CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL ) / ( UNC_CHA_TOR_INSERTS.IA_MISS_D= RD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS= .IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", "MetricExpr": "100 * ( UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UN= C_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE ) / ( UNC_CHA_TOR_INSERTS.IA_MISS= _DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSER= TS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" }, { @@ -715,7 +1445,7 @@ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data t= ransmit 
bandwidth (MB/sec)", "MetricExpr": "( UNC_UPI_TxL_FLITS.ALL_DATA * (64 / 9.0) / 1000000= ) / duration_time", "MetricGroup": "", - "MetricName": "upi_data_transmit_bw_only_data", + "MetricName": "upi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { @@ -764,35 +1494,35 @@ "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "(( UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR + UNC_CHA_TO= R_INSERTS.IO_MISS_PCIRDCUR ) * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(( UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_IN= SERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR= _INSERTS.IO_MISS_ITOMCACHENEAR ) * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", "MetricExpr": "100 * ( IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_UO= PS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MITE_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_U= OPS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_= mite", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MS_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_UOP= S + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { @@ -824,241 +1554,10 @@ "ScaleUnit": "1MB/s" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. 
For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( topdown\\-fe\\-bound / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) - I= NT_MISC.UOP_DROPPING / ( slots ) )", - "MetricGroup": "TmaL1;PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( ( 5 ) * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE - INT_MISC.UOP_DROPPING ) / ( slots ) )", - "MetricGroup": "Frontend;TmaL2;m_tma_frontend_bound_percent", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ICACHE_16B.IFDATA_STALL / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "BigFoot;FetchLat;IcMiss;TmaL3;m_tma_fetch_latency_= percent", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ICACHE_64B.IFTAG_STALL / ( CPU_CLK_UNHALTED= .THREAD ) )", - "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TmaL3;m_tma_fetch_laten= cy_percent", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( INT_MISC.CLEAR_RESTEER_CYCLES / ( CPU_CLK_U= NHALTED.THREAD ) + ( ( 10 ) * BACLEARS.ANY / ( CPU_CLK_UNHALTED.THREAD ) ) = )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU_CL= K_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss;FetchLat;TmaL3;m_tma_fetch_latency_percent= ", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. 
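The fetch-latency expression above reduces to a fixed-width model; a hedged restatement of the same arithmetic (Icelake treated as a width-5 machine in these formulas, event names shortened):

  def tma_fetch_latency_percent(e):
      # Each cycle in which the IDQ delivered zero uops costs five issue
      # slots, less any uops the front-end dropped.
      return 100 * (5 * e["IDQ_UOPS_NOT_DELIVERED_CYCLES_0"]
                    - e["INT_MISC_UOP_DROPPING"]) / e["slots"]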
#Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( ILD_STALL.LCP / ( CPU_CLK_UNHALTED.THREAD )= )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 3 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "FetchLat;MicroSeq;TmaL3;m_tma_fetch_latency_percen= t", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( max( 0 , ( topdown\\-fe\\-bound / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) - ( ( ( 5 ) * IDQ_UOPS_NOT_DE= LIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING ) / ( slots ) ) ) = )", - "MetricGroup": "FetchBW;Frontend;TmaL2;m_tma_frontend_bound_percen= t", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK = ) / ( CPU_CLK_UNHALTED.DISTRIBUTED ) / 2 )", - "MetricGroup": "DSBmiss;FetchBW;TmaL3;m_tma_fetch_bandwidth_percen= t", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK ) = / ( CPU_CLK_UNHALTED.DISTRIBUTED ) / 2 )", - "MetricGroup": "DSB;FetchBW;TmaL3;m_tma_fetch_bandwidth_percent", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. 
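The two children of bad speculation in the expressions that follow split the parent by a simple count ratio; the same logic in sketch form (illustrative only):

  def split_bad_speculation(bad_spec, br_misp, machine_clears):
      # Mispredicts take a share of the parent proportional to
      # BR_MISP_RETIRED.ALL_BRANCHES vs MACHINE_CLEARS.COUNT; machine
      # clears get the remainder.
      share = br_misp / (br_misp + machine_clears)
      return bad_spec * share, bad_spec * (1 - share)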
Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( max( 1 - ( ( topdown\\-fe\\-bound / ( topdo= wn\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\= \-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( topdown\\-be\\-bound / = ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdow= n\\-be\\-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=3D0x1\\= ,edge\\=3D0x1@ ) / ( slots ) ) + ( topdown\\-retiring / ( topdown\\-fe\\-bo= und + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) = ) , 0 ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "100 * ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( max( 1 - ( ( topdown\\-= fe\\-bound / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-reti= ring + topdown\\-be\\-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( top= down\\-be\\-bound / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown= \\-retiring + topdown\\-be\\-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCL= ES\\,cmask\\=3D0x1\\,edge\\=3D0x1@ ) / ( slots ) ) + ( topdown\\-retiring /= ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdo= wn\\-be\\-bound ) ) ) , 0 ) ) )", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. 
Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( max( 0 , ( max( 1 - ( ( topdown\\-fe\\-boun= d / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + to= pdown\\-be\\-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( topdown\\-be= \\-bound / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiri= ng + topdown\\-be\\-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmas= k\\=3D0x1\\,edge\\=3D0x1@ ) / ( slots ) ) + ( topdown\\-retiring / ( topdow= n\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\= -bound ) ) ) , 0 ) ) - ( ( BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED= .ALL_BRANCHES + MACHINE_CLEARS.COUNT ) ) * ( max( 1 - ( ( topdown\\-fe\\-bo= und / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + = topdown\\-be\\-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( topdown\\-= be\\-bound / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-reti= ring + topdown\\-be\\-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cm= ask\\=3D0x1\\,edge\\=3D0x1@ ) / ( slots ) ) + ( topdown\\-retiring / ( topd= own\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be= \\-bound ) ) ) , 0 ) ) ) ) )", - "MetricGroup": "BadSpec;MachineClears;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "100 * ( topdown\\-be\\-bound / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) + (= ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=3D0x1\\,edge\\=3D0x1@ ) / (= slots ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. 
This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "100 * ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACT= IVITY.BOUND_ON_STORES ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_= PORTS_UTIL + ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\= \-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * EXE_ACTIVITY.2_POR= TS_UTIL ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( topdown\\-be\\-bound / ( t= opdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\= -be\\-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=3D0x1\\,ed= ge\\=3D0x1@ ) / ( slots ) ) )", - "MetricGroup": "Backend;TmaL2;m_tma_backend_bound_percent", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCL= E_ACTIVITY.STALLS_L1D_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_L= OAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( MEM_LOAD_RETI= RED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) + L1D_PEND_MISS.FB_FULL_PERIODS ) ) * ( ( CYCLE_ACTIVITY.STALLS_L1D_= MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACT= IVITY.STALLS_L3_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. 
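The L1/L2/L3 bound metrics above form a cascade of stall-counter differences; a minimal sketch of that cascade, assuming a dict e of counter totals with shortened keys (the real L2 term is additionally weighted by a fill-buffer factor, omitted here):

  def cache_bound_cascade(e):
      clks = e["CPU_CLK_UNHALTED_THREAD"]
      l1 = max(e["STALLS_MEM_ANY"] - e["STALLS_L1D_MISS"], 0) / clks
      l2 = (e["STALLS_L1D_MISS"] - e["STALLS_L2_MISS"]) / clks
      l3 = (e["STALLS_L2_MISS"] - e["STALLS_L3_MISS"]) / clks
      return l1, l2, l3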
Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( ( CYCLE_ACTIVITY.STALLS_L3_MISS / = ( CPU_CLK_UNHALTED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RE= TIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS= ) ) ) ) / ( ( MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / = ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + L1D_PEND_MISS.FB_FULL_PERIODS ) ) * ( = ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_= CLK_UNHALTED.THREAD ) ) ) ) - ( min( ( ( ( ( 1 - ( ( ( 19 * ( MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + = ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD= _L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_= RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + ( = MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) ) / ( ( 19 *= ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT /= ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOC= AL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) = ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_H= IT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE= _HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) = ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB= _HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOAD_L3_MISS_RET= IRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_M= ISS ) ) ) ) ) ) ) ) ) ) * ( CYCLE_ACTIVITY.STALLS_L3_MISS / ( CPU_CLK_UNHAL= TED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L= 2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RETIRED.L2_HIT * = ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( = MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + L1D_PEND_MISS.FB_FULL_PERIODS ) ) * ( ( CYCLE_ACTIVIT= Y.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.TH= READ ) ) ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + M= EM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) , ( 1 )= ) ) ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Memo= ry Module. 
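The long DRAM/PMM expressions here mostly repeat one core ratio; stripped to that ratio, with the per-source load-count keys shortened for readability (weights 19/10 for DRAM-class and 25/33 for PMM idle latencies are taken from the expression itself), a sketch:

  def pmm_share(e):
      # Every retired-load count is inflated by the same fill-buffer factor.
      fb = 1 + e["FB_HIT"] / e["L1_MISS"]
      dram = fb * (19 * e["REMOTE_DRAM"]
                   + 10 * (e["LOCAL_DRAM"] + e["REMOTE_FWD"] + e["REMOTE_HITM"]))
      pmm = fb * (25 * e["LOCAL_PMM"] + 33 * e["REMOTE_PMM"])
      return pmm / (dram + pmm)   # tma_pmm_bound scales L3-miss stalls by this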
", - "MetricExpr": "100 * ( min( ( ( ( ( 1 - ( ( ( 19 * ( MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + = ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD= _L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_= RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + ( = MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) ) / ( ( 19 *= ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT /= ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOC= AL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) = ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_H= IT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE= _HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) = ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB= _HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOAD_L3_MISS_RET= IRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_M= ISS ) ) ) ) ) ) ) ) ) ) * ( CYCLE_ACTIVITY.STALLS_L3_MISS / ( CPU_CLK_UNHAL= TED.THREAD ) + ( ( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L= 2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) ) - ( ( ( MEM_LOAD_RETIRED.L2_HIT * = ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) / ( ( = MEM_LOAD_RETIRED.L2_HIT * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + L1D_PEND_MISS.FB_FULL_PERIODS ) ) * ( ( CYCLE_ACTIVIT= Y.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.TH= READ ) ) ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + M= EM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) , ( 1 )= ) )", - "MetricGroup": "MemoryBound;Server;TmaL3mem;TmaL3;m_tma_memory_bou= nd_percent", - "MetricName": "tma_pmm_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( EXE_ACTIVITY.BOUND_ON_STORES / ( CPU_CLK_UN= HALTED.THREAD ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. 
FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "100 * ( max( 0 , ( topdown\\-be\\-bound / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=3D0x1\\,edge\\= =3D0x1@ ) / ( slots ) ) - ( ( ( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVIT= Y.BOUND_ON_STORES ) / ( CYCLE_ACTIVITY.STALLS_TOTAL + ( EXE_ACTIVITY.1_PORT= S_UTIL + ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-sp= ec + topdown\\-retiring + topdown\\-be\\-bound ) ) * EXE_ACTIVITY.2_PORTS_U= TIL ) + EXE_ACTIVITY.BOUND_ON_STORES ) ) * ( topdown\\-be\\-bound / ( topdo= wn\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\= \-bound ) + ( ( 5 ) * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=3D0x1\\,edge\\= =3D0x1@ ) / ( slots ) ) ) ) )", - "MetricGroup": "Backend;TmaL2;Compute;m_tma_backend_bound_percent", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( ARITH.DIVIDER_ACTIVE / ( CPU_CLK_UNHALTED.T= HREAD ) )", - "MetricGroup": "TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "( 100 * ( topdown\\-retiring / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) )= + ( 0 * slots )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. 
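The light/heavy operations expressions that follow share one split; a hedged sketch under the same assumptions (retiring passed in as a fraction, simplified keys, with "UOPS_DECODED_DEC0_CMASK1" standing in for the cmask=1 flavor of the event):

  def heavy_light_split(e, retiring):
      # Heavy = microcode-sequencer uops plus instructions decoded into
      # more than one uop on decoder 0; light is the rest of retiring.
      slots = e["slots"]
      ms = (retiring * slots / e["UOPS_ISSUED_ANY"]) * e["IDQ_MS_UOPS"] / slots
      multi = retiring * ((e["UOPS_DECODED_DEC0"] - e["UOPS_DECODED_DEC0_CMASK1"])
                          / e["IDQ_MITE_UOPS"])
      heavy = ms + multi
      return heavy, max(retiring - heavy, 0)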
While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "100 * ( max( 0 , ( topdown\\-retiring / ( topdown\\= -fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bo= und ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\= -bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) /= UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / ( t= opdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\= -be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=3D0= x1@ ) / IDQ.MITE_UOPS ) ) )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired). Note t= his metric's value may exceed its parent due to use of \"Uops\" CountDomain= and FMA double-counting.", - "MetricExpr": "100 * ( ( ( topdown\\-retiring / ( topdown\\-fe\\-b= ound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) )= * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD ) + ( ( FP_ARITH_INST_RETIRED.S= CALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) / ( ( topdown\\-retiri= ng / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + t= opdown\\-be\\-bound ) ) * ( slots ) ) ) + ( min( ( ( FP_ARITH_INST_RETIRED.= 128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_IN= ST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKE= D_SINGLE ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) , = ( 1 ) ) ) )", - "MetricGroup": "HPC;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fp_arith_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown= \\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) )= / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / (= topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown= \\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\= =3D0x1@ ) / IDQ.MITE_UOPS ) ) ) * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY )= ", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_memory_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown= \\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) )= / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / (= topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + 
topdown= \\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\= =3D0x1@ ) / IDQ.MITE_UOPS ) ) ) * BR_INST_RETIRED.ALL_BRANCHES / ( ( topdow= n\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-re= tiring + topdown\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_branch_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions. Compilers often use NOPs f= or certain address alignments - e.g. start address of a function or loop bo= dy.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown= \\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) )= / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / (= topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown= \\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\= =3D0x1@ ) / IDQ.MITE_UOPS ) ) ) * INST_RETIRED.NOP / ( ( topdown\\-retiring= / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + top= down\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_nop_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", - "MetricExpr": "100 * ( max( 0 , ( max( 0 , ( topdown\\-retiring / = ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdow= n\\-be\\-bound ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound = + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( = slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-ret= iring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring = + topdown\\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,= cmask\\=3D0x1@ ) / IDQ.MITE_UOPS ) ) ) - ( ( ( ( topdown\\-retiring / ( top= down\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-b= e\\-bound ) ) * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD ) + ( ( FP_ARITH_I= NST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) / ( ( top= down\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\= -retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) + ( min( ( ( FP_ARITH_= INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE = + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIR= ED.512B_PACKED_SINGLE ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound += topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( s= lots ) ) ) , ( 1 ) ) ) ) + ( ( max( 0 , ( topdown\\-retiring / ( topdown\\-= fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bou= nd ) ) - ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-= bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) / = UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / ( to= pdown\\-fe\\-bound + topdown\\-bad\\-spec + 
topdown\\-retiring + topdown\\-= be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=3D0x= 1@ ) / IDQ.MITE_UOPS ) ) ) * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY ) + ( = ( max( 0 , ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-= spec + topdown\\-retiring + topdown\\-be\\-bound ) ) - ( ( ( ( ( topdown\\-= retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiri= ng + topdown\\-be\\-bound ) ) * ( slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UO= PS / ( slots ) ) + ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\= \-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( UOPS_DECOD= ED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=3D0x1@ ) / IDQ.MITE_UOPS ) ) ) * = BR_INST_RETIRED.ALL_BRANCHES / ( ( topdown\\-retiring / ( topdown\\-fe\\-bo= und + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) = * ( slots ) ) ) + ( ( max( 0 , ( topdown\\-retiring / ( topdown\\-fe\\-boun= d + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) - = ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spe= c + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) / UOPS_ISSU= ED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topdown\\-retiring / ( topdown\\-f= e\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-boun= d ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=3D0x1@ ) / ID= Q.MITE_UOPS ) ) ) * INST_RETIRED.NOP / ( ( topdown\\-retiring / ( topdown\\= -fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bo= und ) ) * ( slots ) ) ) ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_other_light_ops_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "100 * ( ( ( ( ( topdown\\-retiring / ( topdown\\-fe= \\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound= ) ) * ( slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( topd= own\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-= retiring + topdown\\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECODE= D.DEC0\\,cmask\\=3D0x1@ ) / IDQ.MITE_UOPS )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that that are decoder into two or up to= ([SNB+] four; [ADL+] five) uops. 
This highly-correlates with the number of= uops in such instructions.", - "MetricExpr": "100 * ( ( ( ( ( ( topdown\\-retiring / ( topdown\\-= fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bou= nd ) ) * ( slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) ) + ( to= pdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\= \-retiring + topdown\\-be\\-bound ) ) * ( UOPS_DECODED.DEC0 - cpu@UOPS_DECO= DED.DEC0\\,cmask\\=3D0x1@ ) / IDQ.MITE_UOPS ) - ( ( ( ( topdown\\-retiring = / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topd= own\\-be\\-bound ) ) * ( slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( sl= ots ) ) )", - "MetricGroup": "TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_few_uops_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). These cases can often b= e avoided.", - "MetricExpr": "100 * ( ( ( ( topdown\\-retiring / ( topdown\\-fe\\= -bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound )= ) * ( slots ) ) / UOPS_ISSUED.ANY ) * IDQ.MS_UOPS / ( slots ) )", - "MetricGroup": "MicroSeq;TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_microcode_sequencer_percent", + "BriefDescription": "%", + "MetricExpr": "100 * ( ( LSD.CYCLES_ACTIVE - LSD.CYCLES_OK ) / ( C= PU_CLK_UNHALTED.DISTRIBUTED ) / 2 )", + "MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandw= idth_group", + "MetricName": "tma_lsd", "ScaleUnit": "1%" } ] diff --git a/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json b/tools/= perf/pmu-events/arch/x86/icelakex/pipeline.json index 396868f70004..52fba238bf1f 100644 --- a/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json @@ -167,7 +167,7 @@ "UMask": "0x10" }, { - "BriefDescription": "number of branch instructions retired that we= re mispredicted and taken. 
Non PEBS", + "BriefDescription": "number of branch instructions retired that we= re mispredicted and taken.", "CollectPEBSRecord": "2", "Counter": "0,1,2,3,4,5,6,7", "EventCode": "0xc5", diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-other.json b/to= ols/perf/pmu-events/arch/x86/icelakex/uncore-other.json index 7783aa2ef5d1..03e99b8aed93 100644 --- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-other.json +++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-other.json @@ -11779,7 +11779,7 @@ "Unit": "M3UPI" }, { - "BriefDescription": "Flit Gen - Header 1 : Acumullate", + "BriefDescription": "Flit Gen - Header 1 : Accumulate", "Counter": "0,1,2,3", "CounterType": "PGMABLE", "EventCode": "0x51", diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 1f5dbe176b3a..84535179d128 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -11,7 +11,7 @@ GenuineIntel-6-7A,v1.01,goldmontplus,core GenuineIntel-6-(3C|45|46),v32,haswell,core GenuineIntel-6-3F,v26,haswellx,core GenuineIntel-6-(7D|7E|A7),v1.15,icelake,core -GenuineIntel-6-6[AC],v1.15,icelakex,core +GenuineIntel-6-6[AC],v1.16,icelakex,core GenuineIntel-6-3A,v22,ivybridge,core GenuineIntel-6-3E,v21,ivytown,core GenuineIntel-6-2D,v21,jaketown,core --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3EA3C433FE for ; Sat, 1 Oct 2022 06:09:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229685AbiJAGJQ (ORCPT ); Sat, 1 Oct 2022 02:09:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229605AbiJAGHb (ORCPT ); Sat, 1 Oct 2022 02:07:31 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8BC781AD798 for ; Fri, 30 Sep 2022 23:07:20 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id f3-20020a056902038300b00696588a0e87so5603687ybs.3 for ; Fri, 30 Sep 2022 23:07:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=zieR1IljZ0inG/IQxlkWrbFw7J/PKY8MhZNsZCPvuUs=; b=j+gCBhN8eQpm5/eqPYy3ZOVeJcweUz9wYWUInHNQo/+Up917MngPrCU0R5fRzaO6+i 1I/y+U8YX1hfFZ4chjGnpgDfMSU6s2yLqvfu5O8fvbHJ7dFyC9eiT/DtxCFYNJECUQtX +OiVh9JPqzADXHfqiE+runIKfN3IpuYFmOYxaaBZWaU4lahzCdshTeDXqLg+Ym4rlWR2 zZVTQEMI35yyo82r7ZkNaXuhbKZYj0cYobxlSJsqmWX5rEe5+gMGdLHirhHKPpKH1elv gJt4fyrmoybbj6FFP45unQfKQptM3XW/WOovcgu/j3swgb7fLgQskCtjr8CBtL755eHy lhFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:x-gm-message-state:from:to :cc:subject:date; bh=zieR1IljZ0inG/IQxlkWrbFw7J/PKY8MhZNsZCPvuUs=; b=gkxPduluECafkMmEcNhuG28Lo7RBrho54mmJ/Ofel/gYIB+Jz3tO0jXvMC3m2W16n4 wCjrTyL77D6uM4qAf3bLZ68nTgrLxbU/BsT5u9l21lVSz6migGoSY/NDom3X5h+AwLBU 5LXcgup/4GlsGpc+4bHVinB8K//BzIssFM2aEmpizBzfFogExmjZ0ij4bDZ3kqlOeL0e k3imSlj14/balaukhPLda7MouWKMX9zhSGrMkAz3b5mkfATwwGLMv7/LLZKfeNCOd46/ 
jo1mQL3zyTULKGqCGWWx2saoBgAGrkOqZkh1txSOLQMBWHZrJ7EfWL6WwYnAZ4O7rm7a b1qw== X-Gm-Message-State: ACrzQf3LHz/p8eXDOdtK++0nO6cM90yb//asBGrHFISLRG4PFO+OQQr/ Rv/9NJnXkh3Eby/6X4Z2t0radXIjaZCr X-Google-Smtp-Source: AMsMyM4mpD7Fn7l5Z2oJoXQeYvVbBsWZBWoCu0Z0SsPR36IfwQMOucKNU25U5HJqzWPtBgc2a8x1xbDNKdlu X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a05:6902:70f:b0:6af:281e:39ad with SMTP id k15-20020a056902070f00b006af281e39admr11825116ybt.136.1664604439768; Fri, 30 Sep 2022 23:07:19 -0700 (PDT) Date: Fri, 30 Sep 2022 23:06:28 -0700 In-Reply-To: <20221001060636.2661983-1-irogers@google.com> Message-Id: <20221001060636.2661983-15-irogers@google.com> Mime-Version: 1.0 References: <20221001060636.2661983-1-irogers@google.com> X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog Subject: [PATCH v2 14/22] perf vendor events: Update Intel ivybridge From: Ian Rogers To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Stephane Eranian , Ian Rogers Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Events remain at v22, and the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/downloa= d_and_gen.py with updates at: https://github.com/captain5050/event-converter-for-linux-perf Updates include: - Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound. - _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are correctly expanded in the single main metric. - Addition of all 6 levels of TMA metrics. Child metrics are placed in a group named after their parent allowing children of a metric to be easily measured using the metric name with a _group suffix. - ## and ##? operators are correctly expanded. - The locate-with column is added to the long description describing a sampling event. - Metrics are written in terms of other metrics to reduce the expression size and increase readability. Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../arch/x86/ivybridge/ivb-metrics.json | 594 +++++++++++++++--- 1 file changed, 503 insertions(+), 91 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json b/to= ols/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json index 3f48e75f8a86..63db3397af0f 100644 --- a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json +++ b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json @@ -1,64 +1,500 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. 
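One way to picture the "metrics written in terms of other metrics" point from the changelog above; purely illustrative, with made-up shortened expressions and a naive resolver (no cycle handling), not how perf's expr parser actually works:

  metrics = {
      "tma_frontend_bound": "IDQ_UOPS_NOT_DELIVERED_CORE / SLOTS",
      "tma_fetch_latency": "4 * IDQ_UOPS_NOT_DELIVERED_CYCLES_0 / SLOTS",
      "tma_fetch_bandwidth": "tma_frontend_bound - tma_fetch_latency",
  }

  def evaluate(name, events):
      env = dict(events)
      for other in metrics:  # resolve any metric referenced by name
          if other != name and other in metrics[name]:
              env[other] = evaluate(other, events)
      return eval(metrics[name], {}, env)

  print(evaluate("tma_fetch_bandwidth",
                 {"IDQ_UOPS_NOT_DELIVERED_CORE": 8e5,
                  "IDQ_UOPS_NOT_DELIVERED_CYCLES_0": 1e5, "SLOTS": 4e6}))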
Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIV= ERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. 
For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", + "MetricExpr": "ICACHE.IFETCH_STALL / CLKS - tma_itlb_misses", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURAT= ION) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. 
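The IvyBridge fetch-latency children above bake in fixed cycle costs; the same arithmetic in sketch form (assumed dict e of counter totals, shortened keys; the 12-cycle constants come straight from the expressions):

  def ivb_fetch_latency_parts(e):
      # 12 cycles per ITLB miss that hits the STLB, and 12 cycles per
      # front-end resteer event (mispredict, machine clear or BACLEAR).
      clks = e["CPU_CLK_UNHALTED_THREAD"]
      itlb = (12 * e["ITLB_MISSES_STLB_HIT"] + e["ITLB_MISSES_WALK_DURATION"]) / clks
      icache = e["ICACHE_IFETCH_STALL"] / clks - itlb
      resteers = 12 * (e["BR_MISP_RETIRED_ALL_BRANCHES"]
                       + e["MACHINE_CLEARS_COUNT"] + e["BACLEARS_ANY"]) / clks
      return icache, itlb, resteers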
#Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. 
For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." 
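The *_SMT twin being deleted here is subsumed by one #SMT_on-conditional expression; the folded logic in sketch form (assumed event dict e, shortened keys):

  def tma_bad_speculation(e, smt_on):
      # The recovery-cycles term switches on SMT instead of keeping a
      # second *_SMT metric, matching the replacement expression above.
      recovery = (e["INT_MISC_RECOVERY_CYCLES_ANY"] / 2 if smt_on
                  else e["INT_MISC_RECOVERY_CYCLES"])
      return (e["UOPS_ISSUED_ANY"] - e["UOPS_RETIRED_RETIRE_SLOTS"]
              + 4 * recovery) / e["SLOTS"]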
+ "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXE= CUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_U= OPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURC= E_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. 
Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.ST= ALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_= RETIRED.HIT_LFB_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. 
For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL= _STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES= _WITH_DEMAND_RFO) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "13 * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . 
However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY= .STALLS_L2_PENDING) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RET= IRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.STALLS= _L2_PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "(60 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM * (1= + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOA= D_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_= UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) += MEM_LOAD_UOPS_RETIRED.LLC_MISS))) + 43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XS= NP_MISS * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_H= IT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT= + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.= XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.LLC_MISS)))) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. 
Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT * (1 += mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_= UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UO= PS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) + M= EM_LOAD_UOPS_RETIRED.LLC_MISS))) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.LLC_HIT * (1 + mem_load= _uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETI= RED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HI= T_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_U= OPS_RETIRED.LLC_MISS))) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). 
The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOP= S_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS))) * CYCLE_ACTIVITY.= STALLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOC= K_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOAD= S / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_RE= QUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER= _CORE / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(7 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES= .WALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. 
Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_G= E_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_= EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB) -= RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LD= M_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=3D1@) / 2 i= f #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECU= TE) - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CL= KS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). 
Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1= _UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or over oversubs= cribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2= _UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). 
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ / 2) if #SM= T_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5) / (3 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / 
CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. 
Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / U= OPS_EXECUTED.THREAD", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EX= E.SSE_SCALAR_DOUBLE) / UOPS_EXECUTED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EX= E.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE= ) / UOPS_EXECUTED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. 
The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -76,8 +512,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -88,16 +524,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY", @@ -107,37 +537,25 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_O= PS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP= _COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_= 256.PACKED_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CL= K_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ / 2 ) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((cpu@UOPS_EXECUTED.CORE\\,c= 
mask\\=3D1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -179,15 +597,15 @@ }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "1 / ( ((FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE) / UOPS_EXECUTED.THREAD) + ((FP_COMP_OPS_EXE.SSE= _PACKED_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SIN= GLE + SIMD_FP_256.PACKED_DOUBLE) / UOPS_EXECUTED.THREAD) )", + "MetricExpr": "1 / (tma_fp_scalar + tma_fp_vector)", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -204,7 +622,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -216,47 +634,41 @@ }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. 
Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.LLC_MISS / INST_RETIRE= D.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_= DURATION + DTLB_STORE_MISSES.WALK_DURATION) / CORE_CLKS", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -277,19 +689,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -307,26 +719,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + 
FP_CO= MP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 = * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * S= IMD_FP_256.PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_= OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (F= P_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP= _256.PACKED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -344,7 +756,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94AEEC43217 for ; Sat, 1 Oct 2022 06:09:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229470AbiJAGJT (ORCPT ); Sat, 1 Oct 2022 02:09:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51656 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229615AbiJAGHc (ORCPT ); Sat, 1 Oct 2022 02:07:32 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C7FDF1A598B for ; Fri, 30 Sep 2022 23:07:22 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id a2-20020a5b0002000000b006b48689da76so5608576ybp.16 for ; Fri, 30 Sep 2022 23:07:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=dLUz/Ts7Fk8657K2blvINPjVFBnsySekpY1d7WHVMRg=; b=o9Qy/goQZlFn0pTvN5Gr+HyGnbgbxshEWb9muWnA2EmkIPVvZBHyvp9hABdwVVZRnN KXDC3f9EY2N0mPVyEATuCTtjFsS7oCOnjK6vmtkOYCUOagLUx/RYC2sX96RynCmDVsqs rrkH/iEF+bnuDDi97VGUWTeuf0S6pPYlOTke6H8/Pf1orZstAhxoCFxp0lPcWTU5MDzF T/U20CX4sUebhgBkC24o1Vv8YrW9pE8eoJZ508bDrBaKj27dmnVL3lYsllttabG37EjT Kv9KySMxtM75Xk6inZrQEvfz+SRNJ2Z/CVDCJxLwOoZ+vxxcQnW6+lFH3OaX/bni+A3w ACvg== X-Google-DKIM-Signature: v=1; 
Date: Fri, 30 Sep 2022 23:06:29 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-16-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 15/22] perf vendor events: Update Intel ivytown
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark, Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Events are updated to v22; the core metrics are based on TMA 4.4 full.
Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing the children of a metric to be
  easily measured using the metric name with a _group suffix (see the
  example commands after this list).
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability.
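As an illustration of the naming scheme (editor's sketch, not part of this
patch; it assumes a perf build that has these vendor events installed and
an Ivy Town CPU), a renamed top-level metric and its children can be
measured with perf stat's -M option:

  $ perf stat -M tma_frontend_bound -a -- sleep 1
  $ perf stat -M tma_frontend_bound_group -a -- sleep 1

The first command measures the single level-1 metric; the second names the
parent's group and so measures its child (level-2) metrics together.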
Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers --- .../pmu-events/arch/x86/ivytown/cache.json | 4 +- .../arch/x86/ivytown/floating-point.json | 2 +- .../pmu-events/arch/x86/ivytown/frontend.json | 18 +- .../arch/x86/ivytown/ivt-metrics.json | 630 +++++++++++++++--- .../arch/x86/ivytown/uncore-cache.json | 58 +- .../arch/x86/ivytown/uncore-interconnect.json | 84 +-- .../arch/x86/ivytown/uncore-memory.json | 2 +- .../arch/x86/ivytown/uncore-other.json | 6 +- .../arch/x86/ivytown/uncore-power.json | 8 +- tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +- 10 files changed, 625 insertions(+), 189 deletions(-) diff --git a/tools/perf/pmu-events/arch/x86/ivytown/cache.json b/tools/perf= /pmu-events/arch/x86/ivytown/cache.json index 27576d53b347..d95b98c83914 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/cache.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/cache.json @@ -21,7 +21,7 @@ "UMask": "0x2" }, { - "BriefDescription": "L1D miss oustandings duration in cycles", + "BriefDescription": "L1D miss outstanding duration in cycles", "Counter": "2", "CounterHTOff": "2", "EventCode": "0x48", @@ -658,7 +658,7 @@ "UMask": "0x8" }, { - "BriefDescription": "Cacheable and noncachaeble code read requests= ", + "BriefDescription": "Cacheable and noncacheable code read requests= ", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0xB0", diff --git a/tools/perf/pmu-events/arch/x86/ivytown/floating-point.json b/t= ools/perf/pmu-events/arch/x86/ivytown/floating-point.json index 4c2ac010cf55..88891cba54ec 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/floating-point.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/floating-point.json @@ -91,7 +91,7 @@ "UMask": "0x20" }, { - "BriefDescription": "Number of FP Computational Uops Executed this= cycle. The number of FADD, FSUB, FCOM, FMULs, integer MULsand IMULs, FDIVs= , FPREMs, FSQRTS, integer DIVs, and IDIVs. This event does not distinguish = an FADD used in the middle of a transcendental flow from a s", + "BriefDescription": "Number of FP Computational Uops Executed this= cycle. The number of FADD, FSUB, FCOM, FMULs, integer MULs and IMULs, FDIV= s, FPREMs, FSQRTS, integer DIVs, and IDIVs. 
This event does not distinguish= an FADD used in the middle of a transcendental flow from a s", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x10", diff --git a/tools/perf/pmu-events/arch/x86/ivytown/frontend.json b/tools/p= erf/pmu-events/arch/x86/ivytown/frontend.json index 2b1a82dd86ab..0a295c4e093d 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/frontend.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/frontend.json @@ -176,41 +176,41 @@ "UMask": "0x4" }, { - "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Cycles when uops are being delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MS_CYCLES", - "PublicDescription": "Cycles when uops are being delivered to Inst= ruction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy.", + "PublicDescription": "Cycles when uops are being delivered to Inst= ruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy.", "SampleAfterValue": "2000003", "UMask": "0x30" }, { - "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequenser (MS) is busy", + "BriefDescription": "Cycles when uops initiated by Decode Stream B= uffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mic= rocode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EventCode": "0x79", "EventName": "IDQ.MS_DSB_CYCLES", - "PublicDescription": "Cycles when uops initiated by Decode Stream = Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mi= crocode Sequenser (MS) is busy.", + "PublicDescription": "Cycles when uops initiated by Decode Stream = Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Mi= crocode Sequencer (MS) is busy.", "SampleAfterValue": "2000003", "UMask": "0x10" }, { - "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequenser (MS) is b= usy", + "BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) = initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is b= usy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "CounterMask": "1", "EdgeDetect": "1", "EventCode": "0x79", "EventName": "IDQ.MS_DSB_OCCUR", - "PublicDescription": "Deliveries to Instruction Decode Queue (IDQ)= initiated by Decode Stream Buffer (DSB) while Microcode Sequenser (MS) is = busy.", + "PublicDescription": "Deliveries to Instruction Decode Queue (IDQ)= initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is = busy.", "SampleAfterValue": "2000003", "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequenser (MS) is busy", + "BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) = that are being delivered to Instruction Decode Queue (IDQ) while Microcode = Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -220,7 +220,7 @@ "UMask": "0x10" }, { - "BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequenser (MS) is busy", + 
"BriefDescription": "Uops initiated by MITE and delivered to Instr= uction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", @@ -242,7 +242,7 @@ "UMask": "0x30" }, { - "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequenser (MS) is busy", + "BriefDescription": "Uops delivered to Instruction Decode Queue (I= DQ) while Microcode Sequencer (MS) is busy", "Counter": "0,1,2,3", "CounterHTOff": "0,1,2,3,4,5,6,7", "EventCode": "0x79", diff --git a/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json b/tool= s/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json index 19c7f3b41102..99a45c8d8cee 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json @@ -1,64 +1,524 @@ [ { "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED= .THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. 
Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIV= ERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", + "MetricExpr": "ICACHE.IFETCH_STALL / CLKS - tma_itlb_misses", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURAT= ION) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. 
Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. 
In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. 
SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. 
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXE= CUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_U= OPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURC= E_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.ST= ALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. 
Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_= RETIRED.HIT_LFB_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL= _STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES= _WITH_DEMAND_RFO) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "13 * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. 
Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). 
Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY= .STALLS_L2_PENDING) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RET= IRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.STALLS= _L2_PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "(60 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM * (1= + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOA= D_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_= UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) += MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED= .REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L= LC_MISS_RETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MI= SS * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + = MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + ME= M_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_= MISS) + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_= RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD= _UOPS_LLC_MISS_RETIRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. 
Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT * (1 += mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_= UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UO= PS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) + M= EM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.R= EMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC= _MISS_RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "41 * (MEM_LOAD_UOPS_RETIRED.LLC_HIT * (1 + mem_load= _uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETI= RED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HI= T_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_U= OPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRA= M + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RET= IRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). 
The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOP= S_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS))) * CYCLE_ACTIVITY.= STALLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "200 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM * = (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_L= OAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOA= D_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS)= + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIR= ED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS= _LLC_MISS_RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. 
Sample with: MEM_LOAD_UOPS_L3= _MISS_RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "310 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM *= (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_= LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LO= AD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS= ) + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETI= RED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOP= S_LLC_MISS_RETIRED.REMOTE_FWD))) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. #link to NUMA article Sample = with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "(200 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM = * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM= _LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_L= OAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MIS= S) + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RET= IRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UO= PS_LLC_MISS_RETIRED.REMOTE_FWD))) + 180 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.R= EMOTE_FWD * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2= _HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_H= IT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRE= D.XSNP_MISS) + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LL= C_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + M= EM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD)))) / CLKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_UOPS_L3_MI= SS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOC= K_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOAD= S / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_RE= QUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_= HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(7 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES= .WALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. 
Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_G= E_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_= EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB) -= RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LD= M_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=3D1@) / 2 i= f #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECU= TE) - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CL= KS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). 
Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D1@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1= _UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or over oversubs= cribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D2@ - cpu@UOPS_E= XECUTED.CORE\\,cmask\\=3D3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2= _UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). 
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3D3@ / 2) if #SM= T_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5) / (3 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / 
CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DI= SPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. 
Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / U= OPS_EXECUTED.THREAD", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EX= E.SSE_SCALAR_DOUBLE) / UOPS_EXECUTED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EX= E.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE= ) / UOPS_EXECUTED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. 
May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.AN= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
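The tma_cisc expression above carves the assists share out of the microcode-sequencer fraction and clamps the result at zero, so skew between the two underlying counters cannot drive the metric negative. Roughly:

    /* max(0, tma_microcode_sequencer - tma_assists) */
    static double tma_cisc(double microcode_sequencer, double assists)
    {
            double d = microcode_sequencer - assists;

            return d > 0.0 ? d : 0.0;
    }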
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -76,8 +536,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -88,16 +548,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY", @@ -107,37 +561,25 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_O= PS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP= _COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_= 256.PACKED_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CL= K_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ / 2 ) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((cpu@UOPS_EXECUTED.CORE\\,c= 
mask\\=3D1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -179,15 +621,15 @@ }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "1 / ( ((FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE) / UOPS_EXECUTED.THREAD) + ((FP_COMP_OPS_EXE.SSE= _PACKED_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SIN= GLE + SIMD_FP_256.PACKED_DOUBLE) / UOPS_EXECUTED.THREAD) )", + "MetricExpr": "1 / (tma_fp_scalar + tma_fp_vector)", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -204,7 +646,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -216,47 +658,41 @@ }, { "BriefDescription": "Actual Average Latency for L1 data-cache miss= demand load operations (in core cycles)", - "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_= MISS + mem_load_uops_retired.hit_lfb )", + "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_M= ISS + mem_load_uops_retired.hit_lfb)", "MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricName": "Load_Miss_Real_Latency" }, { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. 
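The CORE_CLKS expression above chains two "a if cond else b" operators; with the right-hand side of the first if binding to the second, it evaluates left to right like the C ladder below (the parameter names are illustrative stand-ins for the events and literals in the expression):

    static double core_clks(double clks, double thread_any_clks,
                            double one_thread_active, double ref_xclk,
                            int core_wide, int smt_on)
    {
            if (core_wide < 1)
                    return (clks / 2) * (1 + one_thread_active / ref_xclk);
            if (smt_on)
                    return thread_any_clks / 2;
            return clks;
    }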
Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED= .ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.LLC_MISS / INST_RETIRE= D.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_= DURATION + DTLB_STORE_MISSES.WALK_DURATION) / CORE_CLKS", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, - { - "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", - "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK= _DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) )", - "MetricGroup": "Mem;MemoryTLB_SMT", - "MetricName": "Page_Walks_Utilization_SMT" - }, { "BriefDescription": "Average per-core data fill bandwidth to the L= 1 data cache [GB / sec]", "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time", @@ -277,19 +713,19 @@ }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, @@ -307,26 +743,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + 
FP_CO= MP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 = * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * S= IMD_FP_256.PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_= OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (F= P_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP= _256.PACKED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -344,7 +780,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, @@ -354,12 +790,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "cbox_0@event\\=3D0x0@ / #num_dies / duration_time /= 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -407,5 +837,11 @@ "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", "MetricGroup": "Power", "MetricName": "C7_Pkg_Residency" + }, + { + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" } ] diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-cache.json b/too= ls/perf/pmu-events/arch/x86/ivytown/uncore-cache.json index 93e07385eeec..c118ff54c30e 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-cache.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-cache.json @@ -61,7 +61,7 @@ "EventCode": "0x34", "EventName": "UNC_C_LLC_LOOKUP.WRITE", "PerPkg": "1", - "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set filter mask bit 0 and select a sta= te or states to match. Otherwise, the event will count nothing. 
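The GFLOPs expression above weights each FP event by the number of operations a single such uop performs - 1 for scalar, 2 for 128-bit packed double, 4 for 128-bit packed single and 256-bit packed double, 8 for 256-bit packed single - then normalizes to operations per second. A sketch with illustrative names:

    static double gflops(double scalar_single, double scalar_double,
                         double packed_double, double packed_single,
                         double pd_256, double ps_256, double seconds)
    {
            double flops = 1 * (scalar_single + scalar_double) +
                           2 * packed_double +
                           4 * (packed_single + pd_256) +
                           8 * ps_256;

            return flops / 1e9 / seconds;
    }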
CBoGlCtr= l[22:17] bits correspond to [M'FMESI] state.; Writeback transactions from L= 2 to the LLC This includes all write transactions -- both Cachable and UC.= ", + "PublicDescription": "Counts the number of times the LLC was acces= sed - this includes code, data, prefetches and hints coming from L2. This = has numerous filters available. Note the non-standard filtering equation. = This event will count requests that lookup the cache multiple times with m= ultiple increments. One must ALWAYS set filter mask bit 0 and select a sta= te or states to match. Otherwise, the event will count nothing. CBoGlCtr= l[22:17] bits correspond to [M'FMESI] state.; Writeback transactions from L= 2 to the LLC This includes all write transactions -- both Cacheable and UC= .", "UMask": "0x5", "Unit": "CBO" }, @@ -999,7 +999,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.ALL", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted= into the TOR. This includes requests that reside in the TOR for a short= time, such as LLC Hits that do not need to snoop cores or requests that ge= t rejected and have to be retried through one of the ingress queues. The T= OR is more commonly a bottleneck in skews with smaller core counts, where t= he ratio of RTIDs to TOR entries is larger. Note that there are reserved T= OR entries for various request types, so it is possible that a given reques= t type be blocked with an occupancy that is less than 20. Also note that g= enerally requests will not be able to arbitrate into the TOR pipeline if th= ere are no available TOR slots.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserte= d into the TOR. This includes requests that reside in the TOR for a shor= t time, such as LLC Hits that do not need to snoop cores or requests that g= et rejected and have to be retried through one of the ingress queues. The = TOR is more commonly a bottleneck in skews with smaller core counts, where = the ratio of RTIDs to TOR entries is larger. Note that there are reserved = TOR entries for various request types, so it is possible that a given reque= st type be blocked with an occupancy that is less than 20. Also note that = generally requests will not be able to arbitrate into the TOR pipeline if t= here are no available TOR slots.", "UMask": "0x8", "Unit": "CBO" }, @@ -1009,7 +1009,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.EVICTION", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. 
T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Eviction transactions ins= erted into the TOR. Evictions can be quick, such as when the line is in th= e F, S, or E states and no core valid bits are set. They can also be longe= r if either CV bits are set (so the cores need to be snooped) and/or if the= re is a HitM (in which case it is necessary to write the request out to mem= ory).", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Eviction transactions in= serted into the TOR. Evictions can be quick, such as when the line is in t= he F, S, or E states and no core valid bits are set. They can also be long= er if either CV bits are set (so the cores need to be snooped) and/or if th= ere is a HitM (in which case it is necessary to write the request out to me= mory).", "UMask": "0x4", "Unit": "CBO" }, @@ -1019,7 +1019,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.LOCAL", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted= into the TOR that are satisifed by locally HOMed memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserte= d into the TOR that are satisfied by locally HOMed memory.", "UMask": "0x28", "Unit": "CBO" }, @@ -1029,7 +1029,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.LOCAL_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions, satisif= ed by an opcode, inserted into the TOR that are satisifed by locally HOMed= memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions, satisf= ied by an opcode, inserted into the TOR that are satisfied by locally HOMe= d memory.", "UMask": "0x21", "Unit": "CBO" }, @@ -1039,7 +1039,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserte= d into the TOR that are satisifed by locally HOMed memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions insert= ed into the TOR that are satisfied by locally HOMed memory.", "UMask": "0x2A", "Unit": "CBO" }, @@ -1049,7 +1049,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions, satisi= fed by an opcode, inserted into the TOR that are satisifed by locally HOMed= memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
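The filter recipe repeated in these descriptions maps onto perf's uncore PMU format attributes; the snbep uncore driver exposes the opcode filter as filter_opc. A hypothetical invocation for the DRD-local-miss example (event 0x35, umask 0x3, opcode 0x182) might look like the sketch below; the exact PMU name and attribute set should be checked against the running kernel:

    #include <stdio.h>

    int main(void)
    {
            /* Count DRD local misses on cbox 0, per the description's
             * MISS_OPC_MATCH + filter opc = 0x182 recipe. */
            printf("perf stat -a -e "
                   "'uncore_cbox_0/event=0x35,umask=0x3,filter_opc=0x182/'"
                   " -- sleep 1\n");
            return 0;
    }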
If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions, satis= fied by an opcode, inserted into the TOR that are satisfied by locally HOMe= d memory.", "UMask": "0x23", "Unit": "CBO" }, @@ -1059,7 +1059,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserte= d into the TOR that match an opcode.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions insert= ed into the TOR that match an opcode.", "UMask": "0x3", "Unit": "CBO" }, @@ -1069,7 +1069,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserte= d into the TOR that are satisifed by remote caches or remote memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions insert= ed into the TOR that are satisfied by remote caches or remote memory.", "UMask": "0x8A", "Unit": "CBO" }, @@ -1079,7 +1079,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions, satisi= fed by an opcode, inserted into the TOR that are satisifed by remote cache= s or remote memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions, satis= fied by an opcode, inserted into the TOR that are satisfied by remote cach= es or remote memory.", "UMask": "0x83", "Unit": "CBO" }, @@ -1089,7 +1089,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_ALL", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched (matches = an RTID destination) transactions inserted into the TOR. The NID is progra= mmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE =3D I, it is= possible to monitor misses to specific NIDs in the system.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched (matches= an RTID destination) transactions inserted into the TOR. The NID is progr= ammed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE =3D I, it i= s possible to monitor misses to specific NIDs in the system.", "UMask": "0x48", "Unit": "CBO" }, @@ -1099,7 +1099,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_EVICTION", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched eviction tran= sactions inserted into the TOR.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. 
Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched eviction tra= nsactions inserted into the TOR.", "UMask": "0x44", "Unit": "CBO" }, @@ -1109,7 +1109,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_MISS_ALL", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched miss requ= ests that were inserted into the TOR.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched miss req= uests that were inserted into the TOR.", "UMask": "0x4A", "Unit": "CBO" }, @@ -1119,7 +1119,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_MISS_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserte= d into the TOR that match a NID and an opcode.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions insert= ed into the TOR that match a NID and an opcode.", "UMask": "0x43", "Unit": "CBO" }, @@ -1129,7 +1129,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted int= o the TOR that match a NID and an opcode.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted in= to the TOR that match a NID and an opcode.", "UMask": "0x41", "Unit": "CBO" }, @@ -1139,7 +1139,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.NID_WB", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched write transac= tions inserted into the TOR.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched write transa= ctions inserted into the TOR.", "UMask": "0x50", "Unit": "CBO" }, @@ -1149,7 +1149,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted int= o the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted in= to the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)", "UMask": "0x1", "Unit": "CBO" }, @@ -1159,7 +1159,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.REMOTE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted= into the TOR that are satisifed by remote caches or remote memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserte= d into the TOR that are satisfied by remote caches or remote memory.", "UMask": "0x88", "Unit": "CBO" }, @@ -1169,7 +1169,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.REMOTE_OPCODE", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions, satisif= ed by an opcode, inserted into the TOR that are satisifed by remote caches= or remote memory.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions, satisf= ied by an opcode, inserted into the TOR that are satisfied by remote cache= s or remote memory.", "UMask": "0x81", "Unit": "CBO" }, @@ -1179,7 +1179,7 @@ "EventCode": "0x35", "EventName": "UNC_C_TOR_INSERTS.WB", "PerPkg": "1", - "PublicDescription": "Counts the number of entries successfuly ins= erted into the TOR that match qualifications specified by the subevent. T= here are a number of subevent 'filters' but only a subset of the subevent c= ombinations are valid. Subevents that require an opcode or NID match requi= re the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example,= one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and= set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Write transactions insert= ed into the TOR. This does not include RFO, but actual operations that co= ntain data being sent from the core.", + "PublicDescription": "Counts the number of entries successfully in= serted into the TOR that match qualifications specified by the subevent. = There are a number of subevent 'filters' but only a subset of the subevent = combinations are valid. Subevents that require an opcode or NID match requ= ire the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example= , one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH an= d set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Write transactions inser= ted into the TOR. This does not include RFO, but actual operations that c= ontain data being sent from the core.", "UMask": "0x10", "Unit": "CBO" }, @@ -1215,7 +1215,7 @@ "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.LOCAL_OPCODE", "PerPkg": "1", - "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding transactions, satisifed by an opcode, in the TOR that are satis= ifed by locally HOMed memory.", + "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding transactions, satisfied by an opcode, in the TOR that are satis= fied by locally HOMed memory.", "UMask": "0x21", "Unit": "CBO" }, @@ -1242,7 +1242,7 @@ "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE", "PerPkg": "1", - "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding Miss transactions, satisifed by an opcode, in the TOR that are sa= tisifed by locally HOMed memory.", + "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding Miss transactions, satisfied by an opcode, in the TOR that are sa= tisfied by locally HOMed memory.", "UMask": "0x23", "Unit": "CBO" }, @@ -1269,7 +1269,7 @@ "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE", "PerPkg": "1", - "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding Miss transactions, satisifed by an opcode, in the TOR that are sa= tisifed by remote caches or remote memory.", + "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding Miss transactions, satisfied by an opcode, in the TOR that are sa= tisfied by remote caches or remote memory.", "UMask": "0x83", "Unit": "CBO" }, @@ -1350,7 +1350,7 @@ "EventCode": "0x36", "EventName": "UNC_C_TOR_OCCUPANCY.REMOTE_OPCODE", "PerPkg": "1", - "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding transactions, satisifed by an opcode, in the TOR that are satis= ifed by remote caches or remote memory.", + "PublicDescription": "For each cycle, this event accumulates the n= umber of valid entries in the TOR that match qualifications specified by th= e subevent. There are a number of subevent 'filters' but only a subset of= the subevent combinations are valid. Subevents that require an opcode or = NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. I= f, for example, one wanted to count DRD Local Misses, one should select MIS= S_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); Number of ou= tstanding transactions, satisfied by an opcode, in the TOR that are satis= fied by remote caches or remote memory.", "UMask": "0x81", "Unit": "CBO" }, @@ -1446,7 +1446,7 @@ "EventCode": "0x2", "EventName": "UNC_C_TxR_INSERTS.BL_CORE", "PerPkg": "1", - "PublicDescription": "Number of allocations into the Cbo Egress. = The Egress is used to queue up requests destined for the ring.; Ring transa= ctions from the Corebo destined for the BL ring. 
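A common way to combine the two TOR event families above - not spelled out in the JSON itself, so an assumption here - is Little's law: accumulated occupancy divided by the insert count gives the average number of cycles an entry spends in the TOR. Sketch:

    /* Average TOR residency in cycles; guards against a zero insert
     * count when the chosen filter matched nothing. */
    static double avg_tor_residency(double occupancy_cycles, double inserts)
    {
            return inserts ? occupancy_cycles / inserts : 0.0;
    }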
This is commonly used for= transfering writeback data to the cache.", +        "PublicDescription": "Number of allocations into the Cbo Egress.  = The Egress is used to queue up requests destined for the ring.; Ring transa= ctions from the Corebo destined for the BL ring.  This is commonly used for= transferring writeback data to the cache.",          "UMask": "0x40",          "Unit": "CBO"      }, @@ -1692,7 +1692,7 @@          "EventCode": "0xb",          "EventName": "UNC_H_CONFLICT_CYCLES.LAST",          "PerPkg": "1", -        "PublicDescription": "Count every last conflictor in conflict chai= n. Can be used to compute the average conflict chain length as (#Ackcnflts/= #LastConflictor)+1. This can be used to give a feel for the conflict chain = lenghts while analyzing lock kernels.", +        "PublicDescription": "Count every last conflictor in conflict chai= n. Can be used to compute the average conflict chain length as (#Ackcnflts/= #LastConflictor)+1. This can be used to give a feel for the conflict chain = lengths while analyzing lock kernels.",          "UMask": "0x4",          "Unit": "HA"      }, @@ -1729,7 +1729,7 @@          "EventCode": "0x41",          "EventName": "UNC_H_DIRECTORY_LAT_OPT",          "PerPkg": "1", -        "PublicDescription": "Directory Latency Optimization Data Return P= ath Taken. When directory mode is enabled and the directory retuned for a r= ead is Dir=3DI, then data can be returned using a faster path if certain co= nditions are met (credits, free pipeline, etc).", +        "PublicDescription": "Directory Latency Optimization Data Return P= ath Taken. When directory mode is enabled and the directory returned for a = read is Dir=3DI, then data can be returned using a faster path if certain c= onditions are met (credits, free pipeline, etc).",          "Unit": "HA"      },      { @@ -2686,7 +2686,7 @@          "EventCode": "0x21",          "EventName": "UNC_H_SNOOP_RESP.RSPSFWD",          "PerPkg": "1", -        "PublicDescription": "Counts the total number of RspI snoop respon= ses received.  Whenever a snoops are issued, one or more snoop responses wi= ll be returned depending on the topology of the system.   In systems larger= than 2s, when multiple snoops are returned this will count all the snoops = that are received.  For example, if 3 snoops were issued and returned RspI,= RspS, and RspSFwd; then each of these sub-events would increment by 1.; Fi= lters for a snoop response of RspSFwd.  This is returned when a remote cach= ing agent forwards data but holds on to its currentl copy.  This is common = for data and code reads that hit in a remote socket in E or F state.", +        "PublicDescription": "Counts the total number of RspI snoop respon= ses received.  Whenever snoops are issued, one or more snoop responses wi= ll be returned depending on the topology of the system.   In systems larger= than 2s, when multiple snoops are returned this will count all the snoops = that are received.  For example, if 3 snoops were issued and returned RspI,= RspS, and RspSFwd; then each of these sub-events would increment by 1.; Fi= lters for a snoop response of RspSFwd.  This is returned when a remote cach= ing agent forwards data but holds on to its current copy.  This is common= for data and code reads that hit in a remote socket in E or F state.",          "UMask": "0x8",          "Unit": "HA"      }, @@ -2766,7 +2766,7 @@          "EventCode": "0x60",          "EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPSFWD",          "PerPkg": "1", -        "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of RspSFwd.   This is returned whe= n a remote caching agent forwards data but holds on to its currentl copy. 
= This is common for data and code reads that hit in a remote socket in E or = F state.", +        "PublicDescription": "Number of snoop responses received for a Loc= al request; Filters for a snoop response of RspSFwd.   This is returned whe= n a remote caching agent forwards data but holds on to its current copy. = This is common for data and code reads that hit in a remote socket in E or= F state.",          "UMask": "0x8",          "Unit": "HA"      }, diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.jso= n b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json index b3b1a08d4acf..10ea4afeffc1 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json @@ -24,7 +24,7 @@          "EventCode": "0x13",          "EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS",          "PerPkg": "1", -        "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on.  There are 4 mutually exlusive filters.  Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases.  Note that this does not count packets that are not candi= dates for Direct2Core.  The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because there were not enough Egress = credits.  Had there been enough credits, the spawn would have worked as the= RBT bit was set and the RBT tag matched.", +        "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on.  There are 4 mutually exclusive filters.  Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases.  Note that this does not count packets that are not cand= idates for Direct2Core.  The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because there were not enough Egress= credits.  Had there been enough credits, the spawn would have worked as th= e RBT bit was set and the RBT tag matched.",          "UMask": "0x2",          "Unit": "QPI LL"      }, @@ -34,7 +34,7 @@          "EventCode": "0x13",          "EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_MISS",          "PerPkg": "1", -        "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on.  There are 4 mutually exlusive filters.  Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases.  Note that this does not count packets that are not candi= dates for Direct2Core.  The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because the RBT tag did not match and= there weren't enough Egress credits.  The valid bit was set.", +        "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on.  There are 4 mutually exclusive filters.  Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases.  Note that this does not count packets that are not cand= idates for Direct2Core.  The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because the RBT tag did not match an= d there weren't enough Egress credits.  The valid bit was set.",          "UMask": "0x20",          "Unit": "QPI LL"      }, @@ -44,7 +44,7 @@          "EventCode": "0x13",          "EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT",          "PerPkg": "1", -        "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on.  There are 4 mutually exlusive filters. 
Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because there were not enough Egress = credits AND the RBT bit was not set, but the RBT tag matched.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because there were not enough Egress= credits AND the RBT bit was not set, but the RBT tag matched.", "UMask": "0x8", "Unit": "QPI LL" }, @@ -54,7 +54,7 @@ "EventCode": "0x13", "EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT_MISS", "PerPkg": "1", - "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exlusive filters. Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because the RBT tag did not match, th= e valid bit was not set and there weren't enough Egress credits.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because the RBT tag did not match, t= he valid bit was not set and there weren't enough Egress credits.", "UMask": "0x80", "Unit": "QPI LL" }, @@ -64,7 +64,7 @@ "EventCode": "0x13", "EventName": "UNC_Q_DIRECT2CORE.FAILURE_MISS", "PerPkg": "1", - "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exlusive filters. Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because the RBT tag did not match alt= hough the valid bit was set and there were enough Egress credits.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because the RBT tag did not match al= though the valid bit was set and there were enough Egress credits.", "UMask": "0x10", "Unit": "QPI LL" }, @@ -74,7 +74,7 @@ "EventCode": "0x13", "EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_HIT", "PerPkg": "1", - "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exlusive filters. 
Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because the route-back table (RBT) sp= ecified that the transaction should not trigger a direct2core tranaction. = This is common for IO transactions. There were enough Egress credits and t= he RBT tag matched but the valid bit was not set.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because the route-back table (RBT) s= pecified that the transaction should not trigger a direct2core transaction.= This is common for IO transactions. There were enough Egress credits and= the RBT tag matched but the valid bit was not set.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -84,7 +84,7 @@ "EventCode": "0x13", "EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_MISS", "PerPkg": "1", - "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exlusive filters. Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn failed because the RBT tag did not match and= the valid bit was not set although there were enough Egress credits.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn failed because the RBT tag did not match an= d the valid bit was not set although there were enough Egress credits.", "UMask": "0x40", "Unit": "QPI LL" }, @@ -94,7 +94,7 @@ "EventCode": "0x13", "EventName": "UNC_Q_DIRECT2CORE.SUCCESS_RBT_HIT", "PerPkg": "1", - "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exlusive filters. Filte= r [0] can be used to get successful spawns, while [1:3] provide the differe= nt failure cases. Note that this does not count packets that are not candi= dates for Direct2Core. The only candidates for Direct2Core are DRS packets= destined for Cbos.; The spawn was successful. There were sufficient credi= ts, the RBT valid bit was set and there was an RBT tag match. The message = was marked to spawn direct2core.", + "PublicDescription": "Counts the number of DRS packets that we att= empted to do direct2core on. There are 4 mutually exclusive filters. Filt= er [0] can be used to get successful spawns, while [1:3] provide the differ= ent failure cases. Note that this does not count packets that are not cand= idates for Direct2Core. The only candidates for Direct2Core are DRS packet= s destined for Cbos.; The spawn was successful. 
There were sufficient cred= its, the RBT valid bit was set and there was an RBT tag match. The message= was marked to spawn direct2core.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -131,7 +131,7 @@ "EventCode": "0x9", "EventName": "UNC_Q_RxL_BYPASSED", "PerPkg": "1", - "PublicDescription": "Counts the number of times that an incoming = flit was able to bypass the flit buffer and pass directly across the BGF an= d into the Egress. This is a latency optimization, and should generally be= the common case. If this value is less than the number of flits transfere= d, it implies that there was queueing getting onto the ring, and thus the t= ransactions saw higher latency.", + "PublicDescription": "Counts the number of times that an incoming = flit was able to bypass the flit buffer and pass directly across the BGF an= d into the Egress. This is a latency optimization, and should generally be= the common case. If this value is less than the number of flits transferr= ed, it implies that there was queueing getting onto the ring, and thus the = transactions saw higher latency.", "Unit": "QPI LL" }, { @@ -443,7 +443,7 @@ "EventCode": "0x1", "EventName": "UNC_Q_RxL_FLITS_G0.DATA", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. For example, when we are transfering a 64B cach= eline across QPI, we will break it into 9 flits -- 1 with header informatio= n and 8 with 64 bits of actual data and an additional 16 bits of other info= rmation. To calculate data bandwidth, one should therefore do: data flits = * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flitsrece= ived over QPI. Each flit contains 64b of data. This includes both DRS and= NCB data flits (coherent and non-coherent). This can be used to calculate= the data bandwidth of the QPI link. One can get a good picture of the QPI= -link characteristics by evaluating the protocol flits, data flits, and idl= e/null flits. This does not include the header flits that go in data packe= ts.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. 
For example, when we are transferring a 64B cac= heline across QPI, we will break it into 9 flits -- 1 with header informati= on and 8 with 64 bits of actual data and an additional 16 bits of other inf= ormation. To calculate data bandwidth, one should therefore do: data flits= * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits re= ceived over QPI. Each flit contains 64b of data. This includes both DRS a= nd NCB data flits (coherent and non-coherent). This can be used to calcula= te the data bandwidth of the QPI link. One can get a good picture of the Q= PI-link characteristics by evaluating the protocol flits, data flits, and i= dle/null flits. This does not include the header flits that go in data pac= kets.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -453,7 +453,7 @@ "EventCode": "0x1", "EventName": "UNC_Q_RxL_FLITS_G0.IDLE", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. For example, when we are transfering a 64B cach= eline across QPI, we will break it into 9 flits -- 1 with header informatio= n and 8 with 64 bits of actual data and an additional 16 bits of other info= rmation. To calculate data bandwidth, one should therefore do: data flits = * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of flits received= over QPI that do not hold protocol payload. When QPI is not in a power sa= ving state, it continuously transmits flits across the link. When there ar= e no protocol flits to send, it will send IDLE and NULL flits across. The= se flits sometimes do carry a payload, such as credit returns, but are gene= rall not considered part of the QPI bandwidth.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. For example, when we are transferring a 64B cac= heline across QPI, we will break it into 9 flits -- 1 with header informati= on and 8 with 64 bits of actual data and an additional 16 bits of other inf= ormation. To calculate data bandwidth, one should therefore do: data flits= * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of flits receive= d over QPI that do not hold protocol payload. 
When QPI is not in a power s= aving state, it continuously transmits flits across the link. When there a= re no protocol flits to send, it will send IDLE and NULL flits across. Th= ese flits sometimes do carry a payload, such as credit returns, but are gen= erally not considered part of the QPI bandwidth.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -463,7 +463,7 @@ "EventCode": "0x1", "EventName": "UNC_Q_RxL_FLITS_G0.NON_DATA", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. For example, when we are transfering a 64B cach= eline across QPI, we will break it into 9 flits -- 1 with header informatio= n and 8 with 64 bits of actual data and an additional 16 bits of other info= rmation. To calculate data bandwidth, one should therefore do: data flits = * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-d= ata flits received across QPI. This basically tracks the protocol overhead= on the QPI link. One can get a good picture of the QPI-link characteristi= cs by evaluating the protocol flits, data flits, and idle/null flits. This= includes the header flits for data packets.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. It includes filters for Idle, protocol, and Data Flits. Each f= lit is made up of 80 bits of information (in addition to some ECC data). I= n full-width (L0) mode, flits are made up of four fits, each of which conta= ins 20 bits of data (along with some additional ECC data). In half-width = (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many= fits to transmit a flit. When one talks about QPI speed (for example, 8.0= GT/s), the transfers here refer to fits. Therefore, in L0, the system wil= l transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate th= e bandwidth of the link by taking: flits*80b/time. Note that this is not t= he same as data bandwidth. For example, when we are transferring a 64B cac= heline across QPI, we will break it into 9 flits -- 1 with header informati= on and 8 with 64 bits of actual data and an additional 16 bits of other inf= ormation. To calculate data bandwidth, one should therefore do: data flits= * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-= data flits received across QPI. This basically tracks the protocol overhea= d on the QPI link. One can get a good picture of the QPI-link characterist= ics by evaluating the protocol flits, data flits, and idle/null flits. Thi= s includes the header flits for data packets.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -474,7 +474,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.DRS", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. 
It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the total number of flits received over QPI on the DRS (Data Respon= se) channel. DRS flits are used to transmit data with coherency. This doe= s not count data flits received over the NCB channel which transmits non-co= herent data.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the total number of flits received over QPI on the DRS (Data Respo= nse) channel. DRS flits are used to transmit data with coherency. This do= es not count data flits received over the NCB channel which transmits non-c= oherent data.", "UMask": "0x18", "Unit": "QPI LL" }, @@ -485,7 +485,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.DRS_DATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. 
Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the total number of data flits received over QPI on the DRS (Data R= esponse) channel. DRS flits are used to transmit data with coherency. Thi= s does not count data flits received over the NCB channel which transmits n= on-coherent data. This includes only the data flits (not the header).", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the total number of data flits received over QPI on the DRS (Data = Response) channel. DRS flits are used to transmit data with coherency. Th= is does not count data flits received over the NCB channel which transmits = non-coherent data. This includes only the data flits (not the header).", "UMask": "0x8", "Unit": "QPI LL" }, @@ -496,7 +496,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.DRS_NONDATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the total number of protocol flits received over QPI on the DRS (Da= ta Response) channel. DRS flits are used to transmit data with coherency. 
= This does not count data flits received over the NCB channel which transmi= ts non-coherent data. This includes only the header flits (not the data). = This includes extended headers.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the total number of protocol flits received over QPI on the DRS (D= ata Response) channel. DRS flits are used to transmit data with coherency.= This does not count data flits received over the NCB channel which transm= its non-coherent data. This includes only the header flits (not the data).= This includes extended headers.", "UMask": "0x10", "Unit": "QPI LL" }, @@ -507,7 +507,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.HOM", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the number of flits received over QPI on the home channel.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. 
When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the number of flits received over QPI on the home channel.", "UMask": "0x6", "Unit": "QPI LL" }, @@ -518,7 +518,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.HOM_NONREQ", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the number of non-request flits received over QPI on the home chann= el. These are most commonly snoop responses, and this event can be used as= a proxy for that.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the number of non-request flits received over QPI on the home chan= nel. 
These are most commonly snoop responses, and this event can be used a= s a proxy for that.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -529,7 +529,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.HOM_REQ", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the number of data request received over QPI on the home channel. = This basically counts the number of remote memory requests received over QP= I. In conjunction with the local read count in the Home Agent, one can cal= culate the number of LLC Misses.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the number of data request received over QPI on the home channel. = This basically counts the number of remote memory requests received over Q= PI. In conjunction with the local read count in the Home Agent, one can ca= lculate the number of LLC Misses.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -540,7 +540,7 @@ "EventName": "UNC_Q_RxL_FLITS_G1.SNP", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). 
In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the number of snoop request flits received over QPI. These request= s are contained in the snoop channel. This does not include snoop response= s, which are received on the home channel.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for SNP, HOM, and DRS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the number of snoop request flits received over QPI. These reques= ts are contained in the snoop channel. This does not include snoop respons= es, which are received on the home channel.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -551,7 +551,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NCB", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. 
For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Number of Non-Coherent Bypass flits. These packets are generally used to = transmit non-coherent data across QPI.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Number of Non-Coherent Bypass flits. These packets are generally used to= transmit non-coherent data across QPI.", "UMask": "0xC", "Unit": "QPI LL" }, @@ -562,7 +562,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NCB_DATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Number of Non-Coherent Bypass data flits. These flits are generally used = to transmit non-coherent data across QPI. This does not include a count of= the DRS (coherent) data flits. This only counts the data flits, not the N= CB headers.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). 
In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Number of Non-Coherent Bypass data flits. These flits are generally used= to transmit non-coherent data across QPI. This does not include a count o= f the DRS (coherent) data flits. This only counts the data flits, not the = NCB headers.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -573,7 +573,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NCB_NONDATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Number of Non-Coherent Bypass non-data flits. These packets are generally= used to transmit non-coherent data across QPI, and the flits counted here = are for headers and other non-data flits. This includes extended headers.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. 
For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Number of Non-Coherent Bypass non-data flits. These packets are generall= y used to transmit non-coherent data across QPI, and the flits counted here= are for headers and other non-data flits. This includes extended headers.= ", "UMask": "0x8", "Unit": "QPI LL" }, @@ -584,7 +584,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NCS", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Number of NCS (non-coherent standard) flits received over QPI. This inc= ludes extended headers.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Number of NCS (non-coherent standard) flits received over QPI. This in= cludes extended headers.", "UMask": "0x10", "Unit": "QPI LL" }, @@ -595,7 +595,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NDR_AD", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). 
In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the total number of flits received over the NDR (Non-Data Response)= channel. This channel is used to send a variety of protocol flits includi= ng grants and completions. This is only for NDR packets to the local socke= t which use the AK ring.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the total number of flits received over the NDR (Non-Data Response= ) channel. This channel is used to send a variety of protocol flits includ= ing grants and completions. This is only for NDR packets to the local sock= et which use the AK ring.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -606,7 +606,7 @@ "EventName": "UNC_Q_RxL_FLITS_G2.NDR_AK", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. 
For example, when we are transfering a 64B cacheline across = QPI, we will break it into 9 flits -- 1 with header information and 8 with = 64 bits of actual data and an additional 16 bits of other information. To = calculate data bandwidth, one should therefore do: data flits * 8B / time.;= Counts the total number of flits received over the NDR (Non-Data Response)= channel. This channel is used to send a variety of protocol flits includi= ng grants and completions. This is only for NDR packets destined for Route= -thru to a remote socket.", + "PublicDescription": "Counts the number of flits received from the= QPI Link. This is one of three groups that allow us to track flits. It i= ncludes filters for NDR, NCB, and NCS message classes. Each flit is made u= p of 80 bits of information (in addition to some ECC data). In full-width = (L0) mode, flits are made up of four fits, each of which contains 20 bits o= f data (along with some additional ECC data). In half-width (L0p) mode, t= he fits are only 10 bits, and therefore it takes twice as many fits to tran= smit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the t= ransfers here refer to fits. Therefore, in L0, the system will transfer 1 = flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth o= f the link by taking: flits*80b/time. Note that this is not the same as da= ta bandwidth. For example, when we are transferring a 64B cacheline across= QPI, we will break it into 9 flits -- 1 with header information and 8 with= 64 bits of actual data and an additional 16 bits of other information. To= calculate data bandwidth, one should therefore do: data flits * 8B / time.= ; Counts the total number of flits received over the NDR (Non-Data Response= ) channel. This channel is used to send a variety of protocol flits includ= ing grants and completions. This is only for NDR packets destined for Rout= e-thru to a remote socket.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -1227,7 +1227,7 @@ "Counter": "0,1,2,3", "EventName": "UNC_Q_TxL_FLITS_G0.DATA", "PerPkg": "1", - "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. It includes filters for Idle, protocol, and Data Flits. E= ach flit is made up of 80 bits of information (in addition to some ECC data= ). In full-width (L0) mode, flits are made up of four fits, each of which = contains 20 bits of data (along with some additional ECC data). In half-w= idth (L0p) mode, the fits are only 10 bits, and therefore it takes twice as= many fits to transmit a flit. When one talks about QPI speed (for example= , 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the syste= m will transfer 1 flit at the rate of 1/4th the QPI speed. One can calcula= te the bandwidth of the link by taking: flits*80b/time. Note that this is = not the same as data bandwidth. For example, when we are transfering a 64B= cacheline across QPI, we will break it into 9 flits -- 1 with header infor= mation and 8 with 64 bits of actual data and an additional 16 bits of other= information. To calculate data bandwidth, one should therefore do: data f= lits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flit= s transmitted over QPI. Each flit contains 64b of data. This includes bot= h DRS and NCB data flits (coherent and non-coherent). This can be used to = calculate the data bandwidth of the QPI link. One can get a good picture o= f the QPI-link characteristics by evaluating the protocol flits, data flits= , and idle/null flits. 
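The data-bandwidth side can be made equally concrete: a 64B cacheline moves as 9 flits, 1 header flit plus 8 flits each carrying 64 bits of payload, so 720 bits cross the wire to deliver 512 bits of data. Only the data flits count, at 8B each in L0 or 4B in L0p per the description above. A minimal sketch, again not part of the patch, assuming the count comes from UNC_Q_TxL_FLITS_G0.DATA:

	/* QPI data bandwidth in GB/s: data flits each carry 64 bits (8B)
	 * of payload in L0; the description says to use 4B for L0p. */
	static double qpi_data_gbytes_per_sec(unsigned long long data_flits,
					      double seconds, int l0p_mode)
	{
		double bytes_per_flit = l0p_mode ? 4.0 : 8.0;

		return (double)data_flits * bytes_per_flit / seconds / 1e9;
	}

The 1-in-9 header flit is why link bandwidth (all flits at 80b) always overstates achievable data bandwidth.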
This does not include the header flits that go in d= ata packets.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. It includes filters for Idle, protocol, and Data Flits. E= ach flit is made up of 80 bits of information (in addition to some ECC data= ). In full-width (L0) mode, flits are made up of four fits, each of which = contains 20 bits of data (along with some additional ECC data). In half-w= idth (L0p) mode, the fits are only 10 bits, and therefore it takes twice as= many fits to transmit a flit. When one talks about QPI speed (for example= , 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the syste= m will transfer 1 flit at the rate of 1/4th the QPI speed. One can calcula= te the bandwidth of the link by taking: flits*80b/time. Note that this is = not the same as data bandwidth. For example, when we are transferring a 64= B cacheline across QPI, we will break it into 9 flits -- 1 with header info= rmation and 8 with 64 bits of actual data and an additional 16 bits of othe= r information. To calculate data bandwidth, one should therefore do: data = flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data fli= ts transmitted over QPI. Each flit contains 64b of data. This includes bo= th DRS and NCB data flits (coherent and non-coherent). This can be used to= calculate the data bandwidth of the QPI link. One can get a good picture = of the QPI-link characteristics by evaluating the protocol flits, data flit= s, and idle/null flits. This does not include the header flits that go in = data packets.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -1236,7 +1236,7 @@ "Counter": "0,1,2,3", "EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA", "PerPkg": "1", - "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. It includes filters for Idle, protocol, and Data Flits. E= ach flit is made up of 80 bits of information (in addition to some ECC data= ). In full-width (L0) mode, flits are made up of four fits, each of which = contains 20 bits of data (along with some additional ECC data). In half-w= idth (L0p) mode, the fits are only 10 bits, and therefore it takes twice as= many fits to transmit a flit. When one talks about QPI speed (for example= , 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the syste= m will transfer 1 flit at the rate of 1/4th the QPI speed. One can calcula= te the bandwidth of the link by taking: flits*80b/time. Note that this is = not the same as data bandwidth. For example, when we are transfering a 64B= cacheline across QPI, we will break it into 9 flits -- 1 with header infor= mation and 8 with 64 bits of actual data and an additional 16 bits of other= information. To calculate data bandwidth, one should therefore do: data f= lits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL = non-data flits transmitted across QPI. This basically tracks the protocol = overhead on the QPI link. One can get a good picture of the QPI-link chara= cteristics by evaluating the protocol flits, data flits, and idle/null flit= s. This includes the header flits for data packets.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. It includes filters for Idle, protocol, and Data Flits. E= ach flit is made up of 80 bits of information (in addition to some ECC data= ). In full-width (L0) mode, flits are made up of four fits, each of which = contains 20 bits of data (along with some additional ECC data). 
In half-w= idth (L0p) mode, the fits are only 10 bits, and therefore it takes twice as= many fits to transmit a flit. When one talks about QPI speed (for example= , 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the syste= m will transfer 1 flit at the rate of 1/4th the QPI speed. One can calcula= te the bandwidth of the link by taking: flits*80b/time. Note that this is = not the same as data bandwidth. For example, when we are transferring a 64= B cacheline across QPI, we will break it into 9 flits -- 1 with header info= rmation and 8 with 64 bits of actual data and an additional 16 bits of othe= r information. To calculate data bandwidth, one should therefore do: data = flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL= non-data flits transmitted across QPI. This basically tracks the protocol= overhead on the QPI link. One can get a good picture of the QPI-link char= acteristics by evaluating the protocol flits, data flits, and idle/null fli= ts. This includes the header flits for data packets.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -1246,7 +1246,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.DRS", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the total number of flits transmitted over QPI on the DRS (Data= Response) channel. DRS flits are used to transmit data with coherency.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . 
To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the total number of flits transmitted over QPI on the DRS (Da= ta Response) channel. DRS flits are used to transmit data with coherency.", "UMask": "0x18", "Unit": "QPI LL" }, @@ -1256,7 +1256,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.DRS_DATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the total number of data flits transmitted over QPI on the DRS = (Data Response) channel. DRS flits are used to transmit data with coherenc= y. This does not count data flits transmitted over the NCB channel which t= ransmits non-coherent data. This includes only the data flits (not the hea= der).", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the total number of data flits transmitted over QPI on the DR= S (Data Response) channel. DRS flits are used to transmit data with cohere= ncy. This does not count data flits transmitted over the NCB channel which= transmits non-coherent data. This includes only the data flits (not the h= eader).", "UMask": "0x8", "Unit": "QPI LL" }, @@ -1266,7 +1266,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.DRS_NONDATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. 
= It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the total number of protocol flits transmitted over QPI on the = DRS (Data Response) channel. DRS flits are used to transmit data with cohe= rency. This does not count data flits transmitted over the NCB channel whi= ch transmits non-coherent data. This includes only the header flits (not t= he data). This includes extended headers.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the total number of protocol flits transmitted over QPI on th= e DRS (Data Response) channel. DRS flits are used to transmit data with co= herency. This does not count data flits transmitted over the NCB channel w= hich transmits non-coherent data. This includes only the header flits (not= the data). This includes extended headers.", "UMask": "0x10", "Unit": "QPI LL" }, @@ -1276,7 +1276,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.HOM", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. 
When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the number of flits transmitted over QPI on the home channel.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the number of flits transmitted over QPI on the home channel.= ", "UMask": "0x6", "Unit": "QPI LL" }, @@ -1286,7 +1286,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.HOM_NONREQ", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the number of non-request flits transmitted over QPI on the hom= e channel. These are most commonly snoop responses, and this event can be = used as a proxy for that.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. 
Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the number of non-request flits transmitted over QPI on the h= ome channel. These are most commonly snoop responses, and this event can b= e used as a proxy for that.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -1296,7 +1296,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.HOM_REQ", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the number of data request transmitted over QPI on the home cha= nnel. This basically counts the number of remote memory requests transmitt= ed over QPI. In conjunction with the local read count in the Home Agent, o= ne can calculate the number of LLC Misses.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. 
For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the number of data request transmitted over QPI on the home c= hannel. This basically counts the number of remote memory requests transmi= tted over QPI. In conjunction with the local read count in the Home Agent,= one can calculate the number of LLC Misses.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -1306,7 +1306,7 @@ "EventName": "UNC_Q_TxL_FLITS_G1.SNP", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the number of snoop request flits transmitted over QPI. These = requests are contained in the snoop channel. This does not include snoop r= esponses, which are transmitted on the home channel.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for SNP, HOM, and DRS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the number of snoop request flits transmitted over QPI. Thes= e requests are contained in the snoop channel. 
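On the HOM_REQ remark above about deriving LLC misses: one plausible reading, and it is only my reading rather than anything these files state, is that a miss is serviced either from local memory, visible as Home Agent local reads, or from a remote socket, visible as home-channel requests sent over QPI. A hedged sketch with that assumed pairing:

	/* Hedged sketch: approximate LLC misses as Home Agent local reads
	 * plus remote requests sent over QPI (UNC_Q_TxL_FLITS_G1.HOM_REQ).
	 * The exact Home Agent event to pair with is an assumption here. */
	static unsigned long long approx_llc_misses(unsigned long long ha_local_reads,
						    unsigned long long qpi_hom_reqs)
	{
		return ha_local_reads + qpi_hom_reqs;
	}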
This does not include snoop= responses, which are transmitted on the home channel.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -1317,7 +1317,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NCB", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Number of Non-Coherent Bypass flits. These packets are generally used= to transmit non-coherent data across QPI.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Number of Non-Coherent Bypass flits. These packets are generally us= ed to transmit non-coherent data across QPI.", "UMask": "0xC", "Unit": "QPI LL" }, @@ -1328,7 +1328,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NCB_DATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. 
Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Number of Non-Coherent Bypass data flits. These flits are generally u= sed to transmit non-coherent data across QPI. This does not include a coun= t of the DRS (coherent) data flits. This only counts the data flits, not t= e NCB headers.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Number of Non-Coherent Bypass data flits. These flits are generally= used to transmit non-coherent data across QPI. This does not include a co= unt of the DRS (coherent) data flits. This only counts the data flits, not= the NCB headers.", "UMask": "0x4", "Unit": "QPI LL" }, @@ -1339,7 +1339,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NCB_NONDATA", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Number of Non-Coherent Bypass non-data flits. 
These packets are gener= ally used to transmit non-coherent data across QPI, and the flits counted h= ere are for headers and other non-data flits. This includes extended heade= rs.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Number of Non-Coherent Bypass non-data flits. These packets are gen= erally used to transmit non-coherent data across QPI, and the flits counted= here are for headers and other non-data flits. This includes extended hea= ders.", "UMask": "0x8", "Unit": "QPI LL" }, @@ -1350,7 +1350,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NCS", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Number of NCS (non-coherent standard) flits transmitted over QPI. T= his includes extended headers.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. 
When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Number of NCS (non-coherent standard) flits transmitted over QPI. = This includes extended headers.", "UMask": "0x10", "Unit": "QPI LL" }, @@ -1361,7 +1361,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NDR_AD", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the total number of flits transmitted over the NDR (Non-Data Re= sponse) channel. This channel is used to send a variety of protocol flits = including grants and completions. This is only for NDR packets to the loca= l socket which use the AK ring.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the total number of flits transmitted over the NDR (Non-Data = Response) channel. This channel is used to send a variety of protocol flit= s including grants and completions. 
This is only for NDR packets to the lo= cal socket which use the AK ring.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -1372,7 +1372,7 @@ "EventName": "UNC_Q_TxL_FLITS_G2.NDR_AK", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Counts the number of flits trasmitted across= the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is ma= de up of 80 bits of information (in addition to some ECC data). In full-wi= dth (L0) mode, flits are made up of four fits, each of which contains 20 bi= ts of data (along with some additional ECC data). In half-width (L0p) mod= e, the fits are only 10 bits, and therefore it takes twice as many fits to = transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), t= he transfers here refer to fits. Therefore, in L0, the system will transfe= r 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwid= th of the link by taking: flits*80b/time. Note that this is not the same a= s data bandwidth. For example, when we are transfering a 64B cacheline acr= oss QPI, we will break it into 9 flits -- 1 with header information and 8 w= ith 64 bits of actual data and an additional 16 bits of other information. = To calculate data bandwidth, one should therefore do: data flits * 8B / ti= me.; Counts the total number of flits transmitted over the NDR (Non-Data Re= sponse) channel. This channel is used to send a variety of protocol flits = including grants and completions. This is only for NDR packets destined fo= r Route-thru to a remote socket.", + "PublicDescription": "Counts the number of flits transmitted acros= s the QPI Link. This is one of three groups that allow us to track flits. = It includes filters for NDR, NCB, and NCS message classes. Each flit is m= ade up of 80 bits of information (in addition to some ECC data). In full-w= idth (L0) mode, flits are made up of four fits, each of which contains 20 b= its of data (along with some additional ECC data). In half-width (L0p) mo= de, the fits are only 10 bits, and therefore it takes twice as many fits to= transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), = the transfers here refer to fits. Therefore, in L0, the system will transf= er 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwi= dth of the link by taking: flits*80b/time. Note that this is not the same = as data bandwidth. For example, when we are transferring a 64B cacheline a= cross QPI, we will break it into 9 flits -- 1 with header information and 8= with 64 bits of actual data and an additional 16 bits of other information= . To calculate data bandwidth, one should therefore do: data flits * 8B / = time.; Counts the total number of flits transmitted over the NDR (Non-Data = Response) channel. This channel is used to send a variety of protocol flit= s including grants and completions. This is only for NDR packets destined = for Route-thru to a remote socket.", "UMask": "0x2", "Unit": "QPI LL" }, @@ -1511,7 +1511,7 @@ "EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN0", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Occupancy event that tracks the number of li= nk layer credits into the R3 (for transactions across the BGF) available in= each cycle. Flow Control FIFO fro Snoop messages on AD.", + "PublicDescription": "Occupancy event that tracks the number of li= nk layer credits into the R3 (for transactions across the BGF) available in= each cycle. 
Flow Control FIFO for Snoop messages on AD.", "UMask": "0x1", "Unit": "QPI LL" }, @@ -1522,7 +1522,7 @@ "EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN1", "ExtSel": "1", "PerPkg": "1", - "PublicDescription": "Occupancy event that tracks the number of li= nk layer credits into the R3 (for transactions across the BGF) available in= each cycle. Flow Control FIFO fro Snoop messages on AD.", + "PublicDescription": "Occupancy event that tracks the number of li= nk layer credits into the R3 (for transactions across the BGF) available in= each cycle. Flow Control FIFO for Snoop messages on AD.", "UMask": "0x2", "Unit": "QPI LL" }, diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-memory.json b/to= ols/perf/pmu-events/arch/x86/ivytown/uncore-memory.json index 63b49b712c62..ed60ebca35cb 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-memory.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-memory.json @@ -188,7 +188,7 @@ "EventCode": "0x9", "EventName": "UNC_M_ECC_CORRECTABLE_ERRORS", "PerPkg": "1", - "PublicDescription": "Counts the number of ECC errors detected and= corrected by the iMC on this channel. This counter is only useful with EC= C DRAM devices. This count will increment one time for each correction reg= ardless of the number of bits corrected. The iMC can correct up to 4 bit e= rrors in independent channel mode and 8 bit erros in lockstep mode.", + "PublicDescription": "Counts the number of ECC errors detected and= corrected by the iMC on this channel. This counter is only useful with EC= C DRAM devices. This count will increment one time for each correction reg= ardless of the number of bits corrected. The iMC can correct up to 4 bit e= rrors in independent channel mode and 8 bit errors in lockstep mode.", "Unit": "iMC" }, { diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-other.json b/too= ls/perf/pmu-events/arch/x86/ivytown/uncore-other.json index af289aa6c98e..6c7ddf642fc3 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-other.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-other.json @@ -2097,7 +2097,7 @@ "EventCode": "0x33", "EventName": "UNC_R3_VNA_CREDITS_ACQUIRED", "PerPkg": "1", - "PublicDescription": "Number of QPI VNA Credit acquisitions. This= event can be used in conjunction with the VNA In-Use Accumulator to calcul= ate the average lifetime of a credit holder. VNA credits are used by all m= essage classes in order to communicate across QPI. If a packet is unable t= o acquire credits, it will then attempt to use credts from the VN0 pool. N= ote that a single packet may require multiple flit buffers (i.e. when data = is being transfered). Therefore, this event will increment by the number o= f credits acquired in each cycle. Filtering based on message class is not = provided. One can count the number of packets transfered in a given messag= e class using an qfclk event.", + "PublicDescription": "Number of QPI VNA Credit acquisitions. This= event can be used in conjunction with the VNA In-Use Accumulator to calcul= ate the average lifetime of a credit holder. VNA credits are used by all m= essage classes in order to communicate across QPI. If a packet is unable t= o acquire credits, it will then attempt to use credits from the VN0 pool. = Note that a single packet may require multiple flit buffers (i.e. when data= is being transferred). Therefore, this event will increment by the number= of credits acquired in each cycle. Filtering based on message class is no= t provided. 
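The "average lifetime of a credit holder" calculation these VNA descriptions point at is a plain occupancy-over-arrivals division (Little's law). A minimal sketch, not part of the patch; the In-Use Accumulator is referred to by description only, so its event value is a placeholder parameter here:

	/* Average VNA credit lifetime in cycles: the In-Use Accumulator
	 * sums credits held each cycle, and this event counts
	 * acquisitions, so occupancy / acquisitions gives the mean
	 * number of cycles each credit is held. */
	static double avg_vna_credit_lifetime(unsigned long long in_use_accum,
					      unsigned long long credits_acquired)
	{
		return credits_acquired ?
		       (double)in_use_accum / credits_acquired : 0.0;
	}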
One can count the number of packets transferred in a given mes= sage class using a qfclk event.; Filter for the Home (HOM) message class. = HOM is generally used to send requests, request responses, and snoop respo= nses.", "UMask": "0x1", "Unit": "R3QPI" }, @@ -2116,7 +2116,7 @@ "EventCode": "0x33", "EventName": "UNC_R3_VNA_CREDITS_ACQUIRED.BL", "PerPkg": "1", - "PublicDescription": "Number of QPI VNA Credit acquisitions. This= event can be used in conjunction with the VNA In-Use Accumulator to calcul= ate the average lifetime of a credit holder. VNA credits are used by all m= essage classes in order to communicate across QPI. If a packet is unable t= o acquire credits, it will then attempt to use credts from the VN0 pool. N= ote that a single packet may require multiple flit buffers (i.e. when data = is being transfered). Therefore, this event will increment by the number o= f credits acquired in each cycle. Filtering based on message class is not = provided. One can count the number of packets transfered in a given messag= e class using an qfclk event.; Filter for the Home (HOM) message class. HO= M is generally used to send requests, request responses, and snoop response= s.", + "PublicDescription": "Number of QPI VNA Credit acquisitions. This= event can be used in conjunction with the VNA In-Use Accumulator to calcul= ate the average lifetime of a credit holder. VNA credits are used by all m= essage classes in order to communicate across QPI. If a packet is unable t= o acquire credits, it will then attempt to use credits from the VN0 pool. = Note that a single packet may require multiple flit buffers (i.e. when data= is being transferred). Therefore, this event will increment by the number= of credits acquired in each cycle. Filtering based on message class is no= t provided. 
One can count the number of packets transferred in a given mes= sage class using a qfclk event.; Filter for the Home (HOM) message class. = HOM is generally used to send requests, request responses, and snoop respo= nses.", "UMask": "0x4", "Unit": "R3QPI" }, diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/too= ls/perf/pmu-events/arch/x86/ivytown/uncore-power.json index 0ba63a97ddfa..74c87217d75c 100644 --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json @@ -601,7 +601,7 @@ "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0", "PerPkg": "1", - "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with threshholding to gene= rate histograms, or with other PCU events and occupancy triggering to captu= re other details.", + "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "Unit": "PCU" }, { @@ -610,7 +610,7 @@ "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3", "PerPkg": "1", - "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with threshholding to gene= rate histograms, or with other PCU events and occupancy triggering to captu= re other details.", + "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "Unit": "PCU" }, { @@ -619,7 +619,7 @@ "EventCode": "0x80", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6", "PerPkg": "1", - "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with threshholding to gene= rate histograms, or with other PCU events and occupancy triggering to captu= re other details.", + "PublicDescription": "This is an occupancy event that tracks the n= umber of cores that are in the chosen C-State. It can be used by itself to= get the average number of cores in that C-state with thresholding to gener= ate histograms, or with other PCU events and occupancy triggering to captur= e other details.", "Unit": "PCU" }, { @@ -637,7 +637,7 @@ "EventCode": "0x9", "EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES", "PerPkg": "1", - "PublicDescription": "Counts the number of cycles that we are in I= nteral PROCHOT mode. This mode is triggered when a sensor on the die deter= mines that we are too hot and must throttle to avoid damaging the chip.", + "PublicDescription": "Counts the number of cycles that we are in I= nternal PROCHOT mode. 
This mode is triggered when a sensor on the die dete= rmines that we are too hot and must throttle to avoid damaging the chip.", "Unit": "PCU" }, { diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-ev= ents/arch/x86/mapfile.csv index 84535179d128..81bd6f5d5354 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -13,7 +13,7 @@ GenuineIntel-6-3F,v26,haswellx,core GenuineIntel-6-(7D|7E|A7),v1.15,icelake,core GenuineIntel-6-6[AC],v1.16,icelakex,core GenuineIntel-6-3A,v22,ivybridge,core -GenuineIntel-6-3E,v21,ivytown,core +GenuineIntel-6-3E,v22,ivytown,core GenuineIntel-6-2D,v21,jaketown,core GenuineIntel-6-(57|85),v9,knightslanding,core GenuineIntel-6-AA,v1.00,meteorlake,core --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8B9CC433F5 for ; Sat, 1 Oct 2022 06:09:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229757AbiJAGJW (ORCPT ); Sat, 1 Oct 2022 02:09:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229636AbiJAGHn (ORCPT ); Sat, 1 Oct 2022 02:07:43 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14C0C1A5990 for ; Fri, 30 Sep 2022 23:07:26 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id d8-20020a25bc48000000b00680651cf051so5632358ybk.23 for ; Fri, 30 Sep 2022 23:07:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:from:to:cc:subject:date; bh=avrxn4qMiHdUwycQAh1uwNY8jWd4fHiKamMhhUm6giw=; b=Lh5FCEgfGR1BbzaYlXneE1I7XBzSjEGzpq/qREsbAXBHliz9KT7bW9X/sxrYYa23iY FyIoWKRMj0CO57DwQ2OOY9FkSqNmCUpiH70RrpO0lwGkd9+AP8U8V+BE6yMbVsaIUoWn e/AyZ+mpdKZMYa4+SlBW67koSrLx45IiYg6q92NvklDLY9twCinFNMXtgBRgXbhupTp/ hny8fbvreUccO3ABeO8DbuurngaARv9QSEPKYjLZQbs6oi1ROOnpjQQJwxNQQzc46FMZ 6J6ZnuZBcSLnevuwq0LUZzEQMM4fTJNbk9wkeInKKYM3M3PGvZitIFe+ih6Q2foyc+nq tRCg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:references :mime-version:message-id:in-reply-to:date:x-gm-message-state:from:to :cc:subject:date; bh=avrxn4qMiHdUwycQAh1uwNY8jWd4fHiKamMhhUm6giw=; b=0TPLbFRX+yPjgIbp70EfX+5XVpWXz7RjMrHA+b6+ZEn4XSCFzZA78ih36XK31CQT7F VcsxsI6LdLEPIYDXrRWfJdks1iT7bfPmlt/ccmDH7EYvuzAZ7JgV0KL69T2wiBbIck60 uR+5LKKG29ros4zfGcWjWWwd8T5PvvQahbT4HL+0j81qnlTZCJ2TPEMs/2g/6QtGYT6n igqwVpIpq7LfCD1sJBnqN1c/Scgv83uIA3yK/yTpZTZn9MbEm5gBGhpOWK9snC1POF6f B9pGWnXu0c9js0eHMJUyQZq6dDe1pBJpXEHvpcCmkCLq/t/bR2qgj8dAGyi4rHtcqbs+ d4qA== X-Gm-Message-State: ACrzQf0gtitmvJClV8+VIyiIPrUd7IFD6hyhu+QOWYTi2w+GVW0wp6KV LUXTsZ1YtJr4q4+71IBmQSs4FmrffJF1 X-Google-Smtp-Source: AMsMyM4E0iGs0q36xsCwgPlbOQoFt0dS36dIfTDyN+wEtVopEIVwdZNnNyFeAepVS2z7HpnmFnB0Rqwrtlo3 X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a0d:e607:0:b0:352:714a:2c86 with SMTP id p7-20020a0de607000000b00352714a2c86mr11123818ywe.312.1664604445305; Fri, 30 Sep 2022 23:07:25 -0700 (PDT) Date: Fri, 30 
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:30 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-17-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 16/22] perf vendor events: Update Intel jaketown
From: Ian Rogers

Events remain at v21, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
 - Rename of topdown TMA metrics from Frontend_Bound to
   tma_frontend_bound.
 - _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
   correctly expanded in the single main metric.
 - Addition of all 6 levels of TMA metrics. Child metrics are placed
   in a group named after their parent allowing children of a metric
   to be easily measured using the metric name with a _group suffix.
 - ## and ##? operators are correctly expanded.
 - The locate-with column is added to the long description describing
   a sampling event.
 - Metrics are written in terms of other metrics to reduce the
   expression size and increase readability (see the sketch below).
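A rough illustration of the renamed level-1 accounting, with invented
counter values (SLOTS assumes the 4-wide pipeline used throughout this
file; tma_backend_bound is defined as the remainder, so the four
levels sum to 1 by construction):

    # Invented numbers; the expressions mirror jkt-metrics.json below.
    slots = 4 * 1_000_000               # SLOTS = 4 * CORE_CLKS
    idq_uops_not_delivered = 600_000    # IDQ_UOPS_NOT_DELIVERED.CORE
    uops_issued = 2_800_000             # UOPS_ISSUED.ANY
    uops_retired_slots = 2_600_000      # UOPS_RETIRED.RETIRE_SLOTS
    recovery_cycles = 50_000            # INT_MISC.RECOVERY_CYCLES

    tma_frontend_bound = idq_uops_not_delivered / slots        # 0.15
    tma_bad_speculation = (uops_issued - uops_retired_slots
                           + 4 * recovery_cycles) / slots      # 0.10
    tma_retiring = uops_retired_slots / slots                  # 0.65
    tma_backend_bound = 1 - (tma_frontend_bound
                             + tma_bad_speculation
                             + tma_retiring)                   # 0.10

Each renamed metric can then be requested directly by name, for
example with 'perf stat -M tma_frontend_bound'.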
Tested with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/jaketown/jkt-metrics.json        | 327 +++++++++++++-----
 1 file changed, 246 insertions(+), 81 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json b/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json
index c0fbb4f31241..554f87c03c05 100644
--- a/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json
@@ -1,64 +1,247 @@
 [
     {
         "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Frontend_Bound",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
+        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
+        "MetricGroup": "PGO;TopdownL1;tma_L1_group",
+        "MetricName": "tma_frontend_bound",
+        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound.",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Frontend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+        "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS",
+        "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_latency",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues.  For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+        "MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_itlb_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+        "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_branch_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+        "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS",
+        "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_dsb_switches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+        "MetricExpr": "ILD_STALL.LCP / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_lcp",
+        "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+        "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS",
+        "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_ms_switches",
+        "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+        "MetricExpr": "tma_frontend_bound - tma_fetch_latency",
+        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_bandwidth",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues.  For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend.",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Bad_Speculation",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example."
+        "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_bad_speculation",
+        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Bad_Speculation_SMT",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+        "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_branch_mispredicts",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction.  These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+        "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_machine_clears",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears.  These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
-        "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Backend_Bound",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
+        "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+        "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_L1D_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=1@ - cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3@ if (IPC > 1.8) else cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB)) * tma_backend_bound",
+        "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_memory_bound",
+        "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck.  Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+        "MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_dtlb_load",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.STALLS_L2_PENDING / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core.  Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+        "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS))) * CYCLE_ACTIVITY.STALLS_L2_PENDING / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_dram_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=6@) / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_bandwidth",
+        "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM).  The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_latency",
+        "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM).  This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write",
+        "MetricExpr": "RESOURCE_STALLS.SB / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_UOPS_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck",
+        "MetricExpr": "tma_backend_bound - tma_memory_bound",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck.  Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+        "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS",
+        "MetricGroup": "TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_divider",
+        "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Backend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+        "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=1@ - cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3@ if (IPC > 1.8) else cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_L1D_PENDING)) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_ports_utilization",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related).  Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
-        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Retiring",
-        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. "
+        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_retiring",
+        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.RETIRE_SLOTS",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Retiring_SMT",
-        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+        "MetricExpr": "tma_retiring - tma_heavy_operations",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_light_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+        "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+        "MetricGroup": "HPC;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_fp_arith",
+        "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / UOPS_DISPATCHED.THREAD",
+        "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_x87_use",
+        "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+        "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) / UOPS_DISPATCHED.THREAD",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_scalar",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+        "MetricExpr": "(FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) / UOPS_DISPATCHED.THREAD",
+        "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group",
+        "MetricName": "tma_fp_vector",
+        "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "tma_microcode_sequencer",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+        "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS",
+        "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_microcode_sequencer",
+        "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit.  The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "INST_RETIRED.ANY / CLKS",
         "MetricGroup": "Ret;Summary",
         "MetricName": "IPC"
     },
@@ -70,8 +253,8 @@
     },
     {
         "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
-        "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "Pipeline;Mem",
+        "MetricExpr": "1 / IPC",
+        "MetricGroup": "Mem;Pipeline",
        "MetricName": "CPI"
     },
     {
@@ -82,16 +265,10 @@
     },
     {
         "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "TmaL1",
+        "MetricExpr": "4 * CORE_CLKS",
+        "MetricGroup": "tma_L1_group",
         "MetricName": "SLOTS"
     },
-    {
-        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "TmaL1_SMT",
-        "MetricName": "SLOTS_SMT"
-    },
     {
         "BriefDescription": "The ratio of Executed- by Issued-Uops",
         "MetricExpr": "UOPS_DISPATCHED.THREAD / UOPS_ISSUED.ANY",
@@ -101,44 +278,32 @@
     },
     {
         "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;SMT;TmaL1",
+        "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS",
+        "MetricGroup": "Ret;SMT;tma_L1_group",
         "MetricName": "CoreIPC"
     },
-    {
-        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;SMT;TmaL1_SMT",
-        "MetricName": "CoreIPC_SMT"
-    },
     {
         "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIMD_FP_256.PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;Flops",
+        "MetricExpr": "(1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / CORE_CLKS",
+        "MetricGroup": "Flops;Ret",
         "MetricName": "FLOPc"
     },
-    {
-        "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIMD_FP_256.PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;Flops_SMT",
-        "MetricName": "FLOPc_SMT"
-    },
     {
         "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
-        "MetricExpr": "UOPS_DISPATCHED.THREAD / (( cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2 ) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)",
+        "MetricExpr": "UOPS_DISPATCHED.THREAD / ((cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)",
        "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
        "MetricName": "ILP"
     },
     {
         "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
-        "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+        "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS",
         "MetricGroup": "SMT",
         "MetricName": "CORE_CLKS"
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
     {
@@ -149,7 +314,7 @@
     },
     {
         "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
-        "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS ) )",
+        "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS))",
         "MetricGroup": "DSB;Fed;FetchBW",
         "MetricName": "DSB_Coverage"
     },
@@ -161,26 +326,26 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIMD_FP_256.PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
-        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0",
         "MetricGroup": "SMT",
         "MetricName": "SMT_2T_Utilization"
     },
@@ -198,7 +363,7 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time",
+        "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@) / 1000000000) / duration_time",
         "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
@@ -208,12 +373,6 @@
         "MetricGroup": "SoC",
         "MetricName": "Socket_CLKS"
     },
-    {
-        "BriefDescription": "Uncore frequency per die [GHZ]",
-        "MetricExpr": "cbox_0@event\\=0x0@ / #num_dies / duration_time / 1000000000",
-        "MetricGroup": "SoC",
-        "MetricName": "UNCORE_FREQ"
-    },
     {
         "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
         "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u",
@@ -261,5 +420,11 @@
         "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100",
         "MetricGroup": "Power",
         "MetricName": "C7_Pkg_Residency"
+    },
+    {
+        "BriefDescription": "Uncore frequency per die [GHZ]",
+        "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 1000000000",
+        "MetricGroup": "SoC",
+        "MetricName": "UNCORE_FREQ"
     }
 ]
-- 
2.38.0.rc1.362.ged0d419d3c-goog
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:31 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-18-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 17/22] perf vendor events: Update Intel sandybridge
From: Ian Rogers

Events remain at v17, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
 - Rename of topdown TMA metrics from Frontend_Bound to
   tma_frontend_bound.
 - _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
   correctly expanded in the single main metric (see the sketch below).
 - Addition of all 6 levels of TMA metrics. Child metrics are placed
   in a group named after their parent allowing children of a metric
   to be easily measured using the metric name with a _group suffix.
 - ## and ##? operators are correctly expanded.
 - The locate-with column is added to the long description describing
   a sampling event.
 - Metrics are written in terms of other metrics to reduce the
   expression size and increase readability.
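Merging the _SMT variants into one metric relies on chained if/else
expressions such as the CORE_CLKS definition in the jaketown patch
above, which the expression parser now evaluates like Python's
conditional: the else binds the following if. A minimal sketch with
invented flag and counter values:

    # "A if #core_wide < 1 else B if #SMT_on else C" groups as
    # A if #core_wide < 1 else (B if #SMT_on else C).
    core_wide, smt_on = 1, True      # invented flags
    thread_clks = 3.0e9              # CPU_CLK_UNHALTED.THREAD
    thread_any = 5.2e9               # CPU_CLK_UNHALTED.THREAD_ANY
    one_thread_ratio = 0.3           # ONE_THREAD_ACTIVE / REF_XCLK

    core_clks = ((thread_clks / 2) * (1 + one_thread_ratio)
                 if core_wide < 1 else
                 (thread_any / 2) if smt_on else thread_clks)
    print(core_clks)  # 2.6e9: the SMT branch, as core_wide >= 1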
Tested with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/sandybridge/snb-metrics.json     | 315 +++++++++++++-----
 1 file changed, 240 insertions(+), 75 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
index ae7ed267b2a2..5d5a6d6f3bda 100644
--- a/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
@@ -1,64 +1,247 @@
 [
     {
         "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Frontend_Bound",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
+        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
+        "MetricGroup": "PGO;TopdownL1;tma_L1_group",
+        "MetricName": "tma_frontend_bound",
+        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound.",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Frontend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+        "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS",
+        "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_latency",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues.  For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+        "MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_itlb_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+        "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_branch_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+        "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS",
+        "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_dsb_switches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+        "MetricExpr": "ILD_STALL.LCP / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_lcp",
+        "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+        "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS",
+        "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_ms_switches",
+        "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+        "MetricExpr": "tma_frontend_bound - tma_fetch_latency",
+        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_bandwidth",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues.  For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend.",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Bad_Speculation",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example."
+        "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_bad_speculation",
+        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+        "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_branch_mispredicts",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction.  These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Bad_Speculation_SMT",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+        "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_machine_clears",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears.  These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
-        "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Backend_Bound",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
+        "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound.",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Backend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound.
Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_L1D_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D1@ - = cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_DISP= ATCHED.THREAD\\,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency= > 0.1) else RESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RET= IRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.S= TALLS_L2_PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. 
Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOP= S_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS))) * CYCLE_ACTI= VITY.STALLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. 
Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D1@ - cpu@UOPS_DISPA= TCHED.THREAD\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_DISPATCHED.THREAD\= \,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else R= ESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCL= E_ACTIVITY.STALLS_L1D_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. 
For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.RETIRE_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / U= OPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. 
See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EX= E.SSE_SCALAR_DOUBLE) / UOPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EX= E.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE= ) / UOPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. 
The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -70,8 +253,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -82,16 +265,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_DISPATCHED.THREAD / UOPS_ISSUED.ANY", @@ -101,44 +278,32 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_O= PS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP= _COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_= 256.PACKED_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CL= K_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_DISPATCHED.THREAD / (( 
cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2 ) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)",
+        "MetricExpr": "UOPS_DISPATCHED.THREAD / ((cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)",
         "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
         "MetricName": "ILP"
     },
     {
         "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
-        "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+        "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS",
         "MetricGroup": "SMT",
         "MetricName": "CORE_CLKS"
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
     {
@@ -149,7 +314,7 @@
     },
     {
         "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
-        "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS ) )",
+        "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS))",
         "MetricGroup": "DSB;Fed;FetchBW",
         "MetricName": "DSB_Coverage"
     },
@@ -161,26 +326,26 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIMD_FP_256.PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
-        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0",
         "MetricGroup": "SMT",
         "MetricName": "SMT_2T_Utilization"
     },
@@ -198,7 +363,7 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "64 * ( arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@ ) / 1000000 / duration_time / 1000",
+        "MetricExpr": "64 * (arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@) / 1000000 / duration_time / 1000",
         "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
-- 
2.38.0.rc1.362.ged0d419d3c-goog
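(A quick units check on the DRAM_BW_Use expression above: 64 bytes per
cache line, divided by 1000000 and 1000 to reach GB, and normalized by
wall-clock time. With a hypothetical combined arb event count of 1.5e9
over a 10 second run, that works out to
64 * 1.5e9 / 1000000 / 10 / 1000 = 9.6 GB/sec.)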
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:32 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-19-irogers@google.com>
Subject: [PATCH v2 18/22] perf vendor events: Update Intel sapphirerapids
From: Ian Rogers
Content-Type: text/plain; charset="utf-8"

Events are updated to v1.06; the core metrics are based on TMA 4.4 (full).

Use the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- Addition of all 6 levels of TMA metrics. Previously metrics involving
  topdown events were dropped. Child metrics are placed in a group named
  after their parent, allowing the children of a metric to be easily
  measured using the metric name with a _group suffix (see the example
  below).
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability.
- Latest metrics from: https://github.com/intel/perfmon-metrics
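For example, the Level 2 children of tma_frontend_bound can then be
measured together through the parent's group (an illustrative
invocation; any tma_* metric name with the _group suffix works the
same way):

  $ perf stat -M tma_frontend_bound_group -a -- sleep 1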
Tested with 'perf test':
 10: PMU events                                                      :
 10.1: PMU event table sanity                                        : Ok
 10.2: PMU event map aliases                                         : Ok
 10.3: Parsing of PMU event table metrics                            : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs             : Ok

Signed-off-by: Ian Rogers
---
 tools/perf/pmu-events/arch/x86/mapfile.csv    |    2 +-
 .../arch/x86/sapphirerapids/cache.json        |    4 +-
 .../arch/x86/sapphirerapids/frontend.json     |   11 +
 .../arch/x86/sapphirerapids/pipeline.json     |    4 +-
 .../arch/x86/sapphirerapids/spr-metrics.json  | 1249 ++++++++++++-----
 5 files changed, 917 insertions(+), 353 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index 81bd6f5d5354..c2354e368586 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -20,7 +20,7 @@ GenuineIntel-6-AA,v1.00,meteorlake,core
 GenuineIntel-6-1[AEF],v3,nehalemep,core
 GenuineIntel-6-2E,v3,nehalemex,core
 GenuineIntel-6-2A,v17,sandybridge,core
-GenuineIntel-6-8F,v1.04,sapphirerapids,core
+GenuineIntel-6-8F,v1.06,sapphirerapids,core
 GenuineIntel-6-(37|4C|4D),v14,silvermont,core
 GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v53,skylake,core
 GenuineIntel-6-55-[01234],v1.28,skylakex,core
diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json
index 348476ce8107..c05c741e22db 100644
--- a/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json
+++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json
@@ -35,7 +35,7 @@
         "UMask": "0x2"
     },
     {
-        "BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailablability.",
+        "BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
         "CollectPEBSRecord": "2",
         "Counter": "0,1,2,3",
         "CounterMask": "1",
@@ -43,7 +43,7 @@
         "EventCode": "0x48",
         "EventName": "L1D_PEND_MISS.FB_FULL_PERIODS",
         "PEBScounters": "0,1,2,3",
-        "PublicDescription": "Counts number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailablability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+        "PublicDescription": "Counts number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.
Demand requests include= cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", "SampleAfterValue": "1000003", "Speculative": "1", "UMask": "0x2" diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json b/= tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json index 44ecf38ad970..ff0d47ce8e9a 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json @@ -11,6 +11,17 @@ "Speculative": "1", "UMask": "0x1" }, + { + "BriefDescription": "Cycles the Microcode Sequencer is busy.", + "CollectPEBSRecord": "2", + "Counter": "0,1,2,3", + "EventCode": "0x87", + "EventName": "DECODE.MS_BUSY", + "PEBScounters": "0,1,2,3", + "SampleAfterValue": "500009", + "Speculative": "1", + "UMask": "0x2" + }, { "BriefDescription": "DSB-to-MITE switch true penalty cycles.", "CollectPEBSRecord": "2", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json b/= tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json index df4f3d714e6e..b2f0d9393d3c 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json @@ -80,10 +80,10 @@ "EventCode": "0xc1", "EventName": "ASSISTS.ANY", "PEBScounters": "0,1,2,3,4,5,6,7", - "PublicDescription": "Counts the number of occurrences where a mic= rocode assist is invoked by hardware Examples include AD (page Access Dirty= ), FP and AVX related assists.", + "PublicDescription": "Counts the number of occurrences where a mic= rocode assist is invoked by hardware. Examples include AD (page Access Dirt= y), FP and AVX related assists.", "SampleAfterValue": "100003", "Speculative": "1", - "UMask": "0x1f" + "UMask": "0x1b" }, { "BriefDescription": "All branch instructions retired.", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json= b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json index e194dfc5c25b..9ec42a68c160 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json @@ -1,16 +1,818 @@ [ + { + "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend", + "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UO= P_DROPPING / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. 
Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "(topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + to= pdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.= UOP_DROPPING / SLOTS)", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "ICACHE_DATA.STALLS / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_TAG.STALLS / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(tma_branch_mispredicts / tma_bad_speculation) * IN= T_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. 
Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (tma_branch_mispredicts / tma_bad_speculation)= ) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "DECODE.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. 
The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / CORE_C= LKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / CORE_CLK= S / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", + "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + t= ma_retiring), 0)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. 
For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "topdown\\-br\\-mispredict / (topdown\\-fe\\-bound += topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOT= S", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts= )", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", + "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topd= own\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "topdown\\-mem\\-bound / (topdown\\-fe\\-bound + top= down\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. 
This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.= STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMOR= Y_ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. 
Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - loads that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - loads that cross 64-byte cache line bound= ary. 
Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "L1D_PEND_MISS.FB_FULL / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints at approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.= STALLS_L2_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to load accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.S= TALLS_L3_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to load accesses to L3 cache or contended with a sibling Core.=  Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "((25 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRE= D.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3= _HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + (24 * A= verage_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RET= IRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. 
Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(24 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED= .XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - (OCR.DEMAND_DATA_RD.= L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA= _RD.L3_HIT.SNOOP_HIT_WITH_FWD)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOA= D_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_NO_FWD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(9 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT *= (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "(XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "((MEMORY_ACTIVITY.STALLS_L3_MISS / CLKS) - tma_pmm_= bound)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. 
Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to memory bandwidth Allocation= feature (RDT's memory bandwidth throttling).", + "MetricExpr": "INT_MISC.MBA_STALLS / CLKS", + "MetricGroup": "MemoryBW;Offcore;Server;TopdownL5;tma_mem_bandwidt= h_group", + "MetricName": "tma_mba_stalls", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from local memory", + "MetricExpr": "(54.5 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIR= ED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_local_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from local memory. Caching will = improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS= _RETIRED.LOCAL_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote memory", + "MetricExpr": "(119 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRE= D.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) /= 2) / CLKS", + "MetricGroup": "Server;Snoop;TopdownL5;tma_mem_latency_group", + "MetricName": "tma_remote_dram", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote memory. This is caus= ed often due to non-optimal NUMA allocations. 
#link to NUMA article Sample = with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling loads from remote cache in other socket= s including synchronizations issues", + "MetricExpr": "((108 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIR= ED.REMOTE_HITM + (108 * Average_Frequency) * MEM_LOAD_L3_MISS_RETIRED.REMOT= E_FWD) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / C= LKS", + "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_mem_latency_gro= up", + "MetricName": "tma_remote_cache", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling loads from remote cache in other socke= ts including synchronizations issues. This is caused often due to non-optim= al NUMA allocations. #link to NUMA article Sample with: MEM_LOAD_L3_MISS_RE= TIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a", + "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM= * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM= _LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD= _RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD= _RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.R= EMOTE_HITM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) = / ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS))) + 10 * ((MEM_LOAD_L3_MISS_RETIRED.LOCAL_D= RAM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + (MEM_LO= AD_L3_MISS_RETIRED.REMOTE_FWD * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RE= TIRED.L1_MISS))) + (MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + (MEM_LOAD_R= ETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) + (25 * (MEM_LOAD_RETIRED.LOC= AL_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + 33 *= (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM= _LOAD_RETIRED.L1_MISS))))))) * (MEMORY_ACTIVITY.STALLS_L3_MISS / CLKS)) if = (1000000 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PM= M) > MEM_LOAD_RETIRED.L1_MISS) else 0)", + "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_memory_b= ound_group", + "MetricName": "tma_pmm_bound", + "PublicDescription": "This metric roughly estimates (based on idle= latencies) how often the CPU was stalled on accesses to external 3D-Xpoint= (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Mem= ory Module. ", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. 
This metric will be flagged should RFO stores be a bottleneck. Sample= with: MEM_INST_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((MEM_STORE_RETIRED.L2_HIT * 10 * (1 - (MEM_INST_RE= TIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.= LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, O= FFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually have less impact on ou= t-of-order core performance; however; holding resources for a longer time can= lead to undesired implications (e.g. contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "(28 * Average_Frequency) * OCR.DEMAND_RFO.L3_HIT.SN= OOP_HITM / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.= L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to Streaming store memory accesses; Streaming stores optimize out a = read request required by RFO stores", + "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_streaming_stores", + "PublicDescription": "This metric estimates how often CPU was stal= led due to Streaming store memory accesses; Streaming stores optimize out a= read request required by RFO stores. Even though store accesses do not typ= ically stall out-of-order CPUs; there are a few cases where stores can lead t= o actual stalls. This metric will be flagged should Streaming stores be a b= ottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ = + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. 
Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_INST_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the TLB was missed by store accesses, hitting in the second-l= evel TLB (STLB)", + "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the STLB was missed by store accesses, performing a hardware page wal= k", + "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were a bottleneck", + "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ + = (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\= ,umask\\=3D0xc@)) / CLKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_= TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS)) else (EXE_ACTIVITY.1_PORTS_UTIL + tma= _retiring * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=3D0xc@) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed to this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. 
For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ / C= LKS + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVI= TY.BOUND_ON_LOADS) / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to PAUSE Instructions", + "MetricExpr": "CPU_CLK_UNHALTED.PAUSE / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_slow_pause", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.= PAUSE_INST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to LFENCE Instructions.", + "MetricExpr": "13 * MISC2_RETIRED.LFENCE / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_memory_fence", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "The Mixing_Vectors metric gives the percentag= e of injected blend uops out of all uops issued", + "MetricExpr": "160 * ASSISTS.SSE_AVX_MIX / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percenta= ge of injected blend uops out of all uops issued. Usually a Mixing_Vectors = over 5% is worth investigating. 
Read more in Appendix B1 of the Optimizatio= ns Guide for this topic.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the Advanced Matrix Extensions (AMX) execution engine was busy with tile = (arithmetic) operations", + "MetricExpr": "EXE.AMX_BUSY / CORE_CLKS", + "MetricGroup": "Compute;HPC;Server;TopdownL5;tma_ports_utilized_0_= group", + "MetricName": "tma_amx_busy", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the CPU executed total of 1 uop per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles wh= ere the CPU executed total of 1 uop per cycle on all execution ports (Logic= al Processor cycles since ICL, Physical Core cycles otherwise). This can be= due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Por= t_Utilized and L1_Bound; this metric can point to L1 data-cache latency bot= tleneck that may not necessarily manifest with complete execution starvatio= n (due to the short L1 latency e.g. walking a linked list) - looking at the= assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 2 uops per cycle on all execution ports (Logical Process= or cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 2 uops per cycle on all execution ports (Logical Proces= sor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization (most compilers feature auto-Vectorization options today) reduces pressure = on the execution ports as multiple elements are calculated with the same uop. S= ample with: EXE_ACTIVITY.2_PORTS_UTIL", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "PublicDescription": "This metric represents fraction of cycles CP= U executed total of 3 or more uops per cycle on all execution ports (Logica= l Processor cycles since ICL, Physical Core cycles otherwise). 
Sample with:= UOPS_EXECUTED.CYCLES_GE_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + = UOPS_DISPATCHED.PORT_5_11 + UOPS_DISPATCHED.PORT_6) / (5 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED.PORT_0", + "MetricExpr": "UOPS_DISPATCHED.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D.PORT_1", + "MetricExpr": "UOPS_DISPATCHED.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED.PORT_6", + "MetricExpr": "UOPS_DISPATCHED.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3_10", + "MetricExpr": "UOPS_DISPATCHED.PORT_2_3_10 / (3 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations Sample with: U= OPS_DISPATCHED.PORT_7_8", + "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_= 8) / (4 * CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", + "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdow= n\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessarily mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. 
Sample with: UOPS_RETIRE= D.SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector + tma_f= p_amx", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.TH= READ", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR) / (tma_retiring * = SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_P= ACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RET= IRED2.VECTOR) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. 
May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF= ) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF= ) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 512-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF= ) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_512b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 512-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) matrix uops fraction the CPU has retired (aggregated across all = supported FP datatypes in AMX engine)", + "MetricExpr": "cpu@AMX_OPS_RETIRED.BF16\\,cmask\\=3D1@ / (tma_reti= ring * SLOTS)", + "MetricGroup": "Compute;Flops;HPC;Pipeline;Server;TopdownL4;tma_fp= _arith_group", + "MetricName": "tma_fp_amx", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) matrix uops fraction the CPU has retired (aggregated across all= supported FP datatypes in AMX engine). Refer to AMX_Busy and GFLOPs metric= s for actual AMX utilization and FP performance, resp.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall Integer (Int) = select operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b + tma_shu= ffles", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_int_operations", + "PublicDescription": "This metric represents overall Integer (Int)= select operations fraction the CPU has executed (retired). Vector/Matrix I= nt operations and shuffles are counted. 
Note this metric's value may exceed= its parent due to use of \"Uops\" CountDomain.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents 128-bit vector Integer= ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the= CPU has retired.", + "MetricExpr": "(INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128= ) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_int_opera= tions_group", + "MetricName": "tma_int_vector_128b", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents 256-bit vector Integer= ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the= CPU has retired.", + "MetricExpr": "(INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 = + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_int_opera= tions_group", + "MetricName": "tma_int_vector_256b", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic Integer (= Int) matrix uops fraction the CPU has retired (aggregated across all suppor= ted Int datatypes in AMX engine)", + "MetricExpr": "cpu@AMX_OPS_RETIRED.INT8\\,cmask\\=3D1@ / (tma_reti= ring * SLOTS)", + "MetricGroup": "Compute;HPC;IntVector;Pipeline;Server;TopdownL4;tm= a_int_operations_group", + "MetricName": "tma_int_amx", + "PublicDescription": "This metric approximates arithmetic Integer = (Int) matrix uops fraction the CPU has retired (aggregated across all suppo= rted Int datatypes in AMX engine). Refer to AMX_Busy and TIOPs metrics for = actual AMX utilization and Int performance, resp.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Shuffle (cross \"vecto= r lane\" data transfers) uops fraction the CPU has retired.", + "MetricExpr": "INT_VEC_RETIRED.SHUFFLES / (tma_retiring * SLOTS)", + "MetricGroup": "HPC;Pipeline;TopdownL4;tma_int_operations_group", + "MetricName": "tma_shuffles", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", + "MetricExpr": "tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_r= etiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_memory_operations", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.MACRO_FUSED / (= tma_retiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fused_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring fused instructions -- where one uop can represent m= ultiple contiguous instructions. 
The instruction pairs of CMP+JCC or DEC+JC= C are commonly used examples.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", + "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - INST_RETIRED.MACRO_FUSED) / (tma_retiring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_non_fused_branches", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. Can be used to exa= mine fusible conditional jumps that were not fused.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_reti= ring * SLOTS)", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_nop_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. Sample with: INST_RETIRED.NOP", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", + "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_i= nt_operations + tma_memory_operations + tma_fused_instructions + tma_non_fu= sed_branches + tma_nop_instructions))", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_other_light_ops", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "topdown\\-heavy\\-ops / (topdown\\-fe\\-bound + top= down\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences. Sample with: UOPS_RETIRED.HEAV= Y", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that are decoded into two or up to= ([SNB+] four; [ADL+] five) uops", + "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer", + "MetricGroup": "TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_few_uops_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring instructions that are decoded into two or up t= o ([SNB+] four; [ADL+] five) uops. 
This highly-correlates with the number o= f uops in such instructions.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "UOPS_RETIRED.MS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: UOPS_RETIRED.MS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * cpu@ASSISTS.ANY\\,umask\\=3D0x1B@ / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: ASSISTS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of slo= ts the CPU retired uops as a result of handling Page Faults", + "MetricExpr": "99 * ASSISTS.PAGE_FAULT / SLOTS", + "MetricGroup": "TopdownL5;tma_assists_group", + "MetricName": "tma_page_faults", + "PublicDescription": "This metric roughly estimates fraction of sl= ots the CPU retired uops as a result of handling Page Faults. A Page Fault m= ay apply on first application access to a memory page. Note operating syste= m handling of page faults accounts for the majority of its cost.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of slo= ts the CPU retired uops as a result of handling Floating Point (FP) Assists", + "MetricExpr": "30 * ASSISTS.FP / SLOTS", + "MetricGroup": "HPC;TopdownL5;tma_assists_group", + "MetricName": "tma_fp_assists", + "PublicDescription": "This metric roughly estimates fraction of sl= ots the CPU retired uops as a result of handling Floating Point (FP) Assists= . FP Assist may apply when working with very small floating point values (s= o-called denormals).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops as a result of handling SSE to AVX* or AVX* to SSE transitio= n Assists. 
", + "MetricExpr": "63 * ASSISTS.SSE_AVX_MIX / SLOTS", + "MetricGroup": "HPC;TopdownL5;tma_assists_group", + "MetricName": "tma_avx_assists", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources. Sample with: FRONTEND_RETIRE= D.MS_FLOWS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" + }, + { + "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)= ) + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_= bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_a= ccesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + (tma_l1_= bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_= pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full= + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) ", + "MetricGroup": "Mem;MemoryBW;Offcore", + "MetricName": "Memory_Bandwidth" + }, + { + "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma= _store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) = + (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bo= und + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contes= ted_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + (tma= _l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)))", + "MetricGroup": "Mem;MemoryLat;Offcore", + "MetricName": "Memory_Latency" + }, + { + "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_= dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fw= d_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound = + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * 
(tma_dtlb_store / (tma= _dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tm= a_streaming_stores))) ", + "MetricGroup": "Mem;MemoryTLB;Offcore", + "MetricName": "Memory_Data_TLBs" + }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED= .NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 *= BR_INST_RETIRED.NEAR_CALL) ) / TOPDOWN.SLOTS)", + "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.= NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * = BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead" }, + { + "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", + "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", + "MetricName": "Big_Code" + }, + { + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", + "MetricGroup": "Fed;FetchBW;Frontend", + "MetricName": "Instruction_Fetch_BW" + }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "(tma_retiring * SLOTS) / INST_RETIRED.ANY", + "MetricGroup": "Pipeline;Ret;Retire", + "MetricName": "UPI" + }, + { + "BriefDescription": "Instruction per taken branch", + "MetricExpr": "(tma_retiring * SLOTS) / BR_INST_RETIRED.NEAR_TAKEN= ", + "MetricGroup": "Branches;Fed;FetchBW", + "MetricName": "UpTB" + }, + { + "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", + "MetricName": "CPI" + }, { "BriefDescription": "Per-Logical Processor actual clocks when the = Logical Processor is active.", "MetricExpr": "CPU_CLK_UNHALTED.THREAD", @@ -20,13 +822,13 @@ { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", "MetricExpr": "TOPDOWN.SLOTS", - "MetricGroup": "TmaL1", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, { "BriefDescription": "Fraction of Physical Core issue-slots utilize= d by this Logical Processor", - "MetricExpr": "TOPDOWN.SLOTS / ( TOPDOWN.SLOTS / 2 ) if #SMT_on el= se 1", - "MetricGroup": "SMT;TmaL1", + "MetricExpr": "SLOTS / (TOPDOWN.SLOTS / 2) if #SMT_on else 1", + "MetricGroup": "SMT;tma_L1_group", "MetricName": "Slots_Utilization" }, { @@ -38,29 +840,35 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, { "BriefDescription": "Floating Point 
Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR_HALF ) + 2 *= ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLE= X_SCALAR_HALF ) + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH= _INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED2.128B_PACK= ED_HALF + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.= 512B_PACKED_DOUBLE ) + 16 * ( FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_= ARITH_INST_RETIRED.512B_PACKED_SINGLE ) + 32 * FP_ARITH_INST_RETIRED2.512B_= PACKED_HALF + 4 * AMX_OPS_RETIRED.BF16 ) / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR_HALF) + 2 * (F= P_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SC= ALAR_HALF) + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_= RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF = + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF= + 4 * AMX_OPS_RETIRED.BF16) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.= PORT_1 + FP_ARITH_DISPATCHED.PORT_5 ) / ( 2 * CPU_CLK_UNHALTED.DISTRIBUTED = )", + "MetricExpr": "(FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.P= ORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * CORE_CLKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." 
}, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, + { + "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", + "MetricGroup": "Cor;SMT", + "MetricName": "Core_Bound_Likely" + }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED", @@ -105,13 +913,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.= SCALAR_HALF ) + 2 * ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_I= NST_RETIRED2.COMPLEX_SCALAR_HALF ) + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKE= D_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST= _RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_= ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * ( FP_ARITH_INST_RETIRED2.256= B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) + 32 * FP_ARITH_= INST_RETIRED2.512B_PACKED_HALF + 4 * AMX_OPS_RETIRED.BF16 )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SC= ALAR_HALF) + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_= RETIRED2.COMPLEX_SCALAR_HALF) + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SING= LE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED= 2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_IN= ST_RETIRED.512B_PACKED_DOUBLE) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_H= ALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRE= D2.512B_PACKED_HALF + 4 * AMX_OPS_RETIRED.BF16)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALA= R) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B= _PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_A= RITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.VECTOR) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR= ) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_= PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.VECTOR))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", 
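The Core_Bound_Likely expression above nests one conditional inside another; it reads the same way a Python conditional expression does, so its evaluation order can be sanity-checked directly. The function below is only an illustration of that reading, not perf code:

def core_bound_likely(core_bound, ports_util, smt_2t_util):
    # Direct transcription of the Core_Bound_Likely expression: the
    # inner if/else binds first, the SMT_2T_Utilization test last.
    return ((1 - core_bound / ports_util)
            if core_bound < ports_util else 1) if smt_2t_util > 0.5 else 0

print(core_bound_likely(0.2, 0.4, 0.6))  # 0.5: inner branch taken
print(core_bound_likely(0.2, 0.4, 0.3))  # 0: SMT utilization too low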
"PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -132,21 +940,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED2.128B_PACKED_HALF )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRE= D2.128B_PACKED_HALF)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED2.256B_PACKED_HALF )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRE= D2.256B_PACKED_HALF)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED2.512B_PACKED_HALF )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRE= D2.512B_PACKED_HALF)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." @@ -161,7 +969,7 @@ { "BriefDescription": "Instructions per Integer Arithmetic AMX opera= tion (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / AMX_OPS_RETIRED.INT8", - "MetricGroup": "IntVector;InsType;Server", + "MetricGroup": "InsType;IntVector;Server", "MetricName": "IpArith_AMX_Int8", "PublicDescription": "Instructions per Integer Arithmetic AMX oper= ation (lower number means higher occurrence rate). Operations factored per = matrices' sizes of the AMX instructions." 
}, @@ -172,11 +980,17 @@ "MetricName": "IpSWPF" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, + { + "BriefDescription": "Average number of Uops retired in cycles wher= e at least one uop has retired.", + "MetricExpr": "(tma_retiring * SLOTS) / cpu@UOPS_RETIRED.SLOTS\\,c= mask\\=3D1@", + "MetricGroup": "Pipeline;Ret", + "MetricName": "Retire" + }, { "BriefDescription": "Estimated fraction of retirement-cycles deali= ng with repeat instructions", "MetricExpr": "INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.SLOTS= \\,cmask\\=3D1@", @@ -213,6 +1027,12 @@ "MetricGroup": "DSBmiss", "MetricName": "DSB_Switch_Cost" }, + { + "BriefDescription": "Total penalty related to DSB (uop cache) miss= es - subset of the Instruction_Fetch_BW Bottleneck.", + "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_= branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + = tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tm= a_mite))", + "MetricGroup": "DSBmiss;Fed", + "MetricName": "DSB_Misses" + }, { "BriefDescription": "Number of Instructions per non-speculative DS= B miss (lower number means higher occurrence rate)", "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS", @@ -225,6 +1045,12 @@ "MetricGroup": "Bad;BadSpec;BrMispredicts", "MetricName": "IpMispredict" }, + { + "BriefDescription": "Branch Misprediction Cost: Fraction of TMA sl= ots wasted per non-speculative branch misprediction (retired JEClear)", + "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_= mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache= _misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_R= ETIRED.ALL_BRANCHES", + "MetricGroup": "Bad;BrMispredicts", + "MetricName": "Branch_Misprediction_Cost" + }, { "BriefDescription": "Fraction of branches that are non-taken condi= tionals", "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_B= RANCHES", @@ -239,7 +1065,7 @@ }, { "BriefDescription": "Fraction of branches that are CALL or RET", - "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_= RETURN ) / BR_INST_RETIRED.ALL_BRANCHES", + "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_R= ETURN) / BR_INST_RETIRED.ALL_BRANCHES", "MetricGroup": "Bad;Branches", "MetricName": "CallRet" }, @@ -251,7 +1077,7 @@ }, { "BriefDescription": "Fraction of branches of other types (not indi= vidually covered by other metrics in Info.Branches group)", - "MetricExpr": "1 - ( (BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRE= D.ALL_BRANCHES) + (BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHE= S) + (( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST= _RETIRED.ALL_BRANCHES) + ((BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.CON= D_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES) )", + "MetricExpr": "1 - (Cond_NT + Cond_TK + CallRet + Jump)", "MetricGroup": "Bad;Branches", "MetricName": "Other_Branches" }, @@ -264,67 +1090,67 @@ { "BriefDescription": "Memory-Level-Parallelism (average number of L= 1 miss demand load when there is at least one such miss. 
Per-Logical Proces= sor)", "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLE= S", - "MetricGroup": "Mem;MemoryBound;MemoryBW", + "MetricGroup": "Mem;MemoryBW;MemoryBound", "MetricName": "MLP" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI" }, { "BriefDescription": "L1 cache true misses per kilo instruction for= all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L1MPKI_Load" }, { "BriefDescription": "L2 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;Backend;CacheMisses", + "MetricGroup": "Backend;CacheMisses;Mem", "MetricName": "L2MPKI" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all request types (including speculative)", "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses;Offcore", + "MetricGroup": "CacheMisses;Mem;Offcore", "MetricName": "L2MPKI_All" }, { "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instru= ction for all demand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.= ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2MPKI_Load" }, { "BriefDescription": "L2 cache hits per kilo instruction for all re= quest types (including speculative)", - "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / IN= ST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST= _RETIRED.ANY", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_All" }, { "BriefDescription": "L2 cache hits per kilo instruction for all de= mand loads (including speculative)", "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.A= NY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L2HPKI_Load" }, { "BriefDescription": "L3 cache true misses per kilo instruction for= retired demand loads", "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "L3MPKI" }, { "BriefDescription": "Fill Buffer (FB) hits per kilo instructions f= or retired demand loads (L1D misses that merge into ongoing miss-handling e= ntries)", "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", - "MetricGroup": "Mem;CacheMisses", + "MetricGroup": "CacheMisses;Mem", "MetricName": "FB_HPKI" }, { "BriefDescription": "Utilization of the core's Page Walker(s) serv= ing STLB misses triggered by instruction/Load/Store accesses", "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_= PENDING + DTLB_STORE_MISSES.WALK_PENDING ) / ( 4 * CPU_CLK_UNHALTED.DISTRIB= UTED )", + "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_P= ENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * CORE_CLKS)", "MetricGroup": "Mem;MemoryTLB", "MetricName": "Page_Walks_Utilization" }, @@ -354,37 +1180,37 @@ }, { "BriefDescription": "Rate of silent evictions from the L2 cache pe= r Kilo instruction where the evicted lines 
are dropped (no writeback to L3 = or memory)", - "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_Silent_PKI" }, { "BriefDescription": "Rate of non silent evictions from the L2 cach= e per Kilo instruction", - "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY", + "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / Instructions", "MetricGroup": "L2Evicts;Mem;Server", "MetricName": "L2_Evictions_NonSilent_PKI" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L1 data cache [GB / sec]", - "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)= ", + "MetricExpr": "L1D_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L1D_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L2 cache [GB / sec]", - "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)= ", + "MetricExpr": "L2_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L2_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data fill bandwidth to the= L3 cache [GB / sec]", - "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duratio= n_time)", + "MetricExpr": "L3_Cache_Fill_BW", "MetricGroup": "Mem;MemoryBW", "MetricName": "L3_Cache_Fill_BW_1T" }, { "BriefDescription": "Average per-thread data access bandwidth to t= he L3 cache [GB / sec]", - "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / = duration_time)", + "MetricExpr": "L3_Cache_Access_BW", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "L3_Cache_Access_BW_1T" }, @@ -396,26 +1222,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_= ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR_HALF ) + 2= * ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMP= LEX_SCALAR_HALF ) + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARI= TH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED2.128B_PA= CKED_HALF + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRE= D.512B_PACKED_DOUBLE ) + 16 * ( FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + F= P_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) + 32 * FP_ARITH_INST_RETIRED2.512= B_PACKED_HALF + 4 * AMX_OPS_RETIRED.BF16 ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARI= TH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR_HALF) + 2 * (= FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_S= CALAR_HALF) + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST= _RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF= + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PA= CKED_DOUBLE) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INS= T_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HAL= F + 4 * AMX_OPS_RETIRED.BF16) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", 
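The *PKI metrics and the *_Cache_Fill_BW metrics above follow two recurring shapes: events per kilo retired instructions, and 64-byte cache lines moved per second. A small sketch, again with invented counts:

def per_kilo_instr(event_count, instructions):
    # L1MPKI, L2MPKI, FB_HPKI, L2_Evictions_*_PKI: events per 1000
    # retired instructions.
    return 1000 * event_count / instructions

def fill_bw_gbps(cache_lines, seconds):
    # *_Cache_Fill_BW: 64-byte lines per second, reported in GB/s.
    return 64 * cache_lines / 1e9 / seconds

print(per_kilo_instr(30_000, 1_000_000))  # 30.0 misses per kilo instr
print(fill_bw_gbps(50_000_000, 1.0))      # 3.2 GB/s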
"MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." }, { "BriefDescription": "Tera Integer (matrix) Operations Per Second", - "MetricExpr": "( 8 * AMX_OPS_RETIRED.INT8 / 1000000000000 ) / dur= ation_time", + "MetricExpr": "(8 * AMX_OPS_RETIRED.INT8 / 1e12) / duration_time", "MetricGroup": "Cor;HPC;IntVector;Server", "MetricName": "TIOPS" }, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, @@ -439,13 +1265,13 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@ca= s_count_write@ ) / 1000000000 ) / duration_time", + "MetricExpr": "(64 * (uncore_imc@cas_count_read@ + uncore_imc@cas_= count_write@) / 1000000000) / duration_time", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, { "BriefDescription": "Average latency of data read request to exter= nal memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches= ", - "MetricExpr": "1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / = UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( uncore_cha_0@event\\=3D0x1@ / duratio= n_time )", + "MetricExpr": "1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / U= NC_CHA_TOR_INSERTS.IA_MISS_DRD) / (Socket_CLKS / duration_time)", "MetricGroup": "Mem;MemoryLat;SoC", "MetricName": "MEM_Read_Latency" }, @@ -457,32 +1283,32 @@ }, { "BriefDescription": "Average latency of data read request to exter= nal 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2= data-read prefetches", - "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM ) / uncore_cha_0@event\\=3D0x1@ )= ", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": "(1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PM= M / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / uncore_cha_0@event\\=3D0x1@)", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_PMM_Read_Latency" }, { "BriefDescription": "Average latency of data read request to exter= nal DRAM memory [in nanoseconds]. 
Accounts for demand loads and L1/L2 data-= read prefetches", - "MetricExpr": " 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_D= DR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / uncore_cha_0@event\\=3D0x1@", - "MetricGroup": "Mem;MemoryLat;SoC;Server", + "MetricExpr": " 1000000000 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DD= R / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / uncore_cha_0@event\\=3D0x1@", + "MetricGroup": "Mem;MemoryLat;Server;SoC", "MetricName": "MEM_DRAM_Read_Latency" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [= GB / sec]", - "MetricExpr": "( ( 64 * UNC_M_PMM_RPQ_INSERTS / 1000000000 ) / dur= ation_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * UNC_M_PMM_RPQ_INSERTS / 1000000000) / durati= on_time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Read_BW" }, { "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes = [GB / sec]", - "MetricExpr": "( ( 64 * UNC_M_PMM_WPQ_INSERTS / 1000000000 ) / dur= ation_time )", - "MetricGroup": "Mem;MemoryBW;SoC;Server", + "MetricExpr": "((64 * UNC_M_PMM_WPQ_INSERTS / 1000000000) / durati= on_time)", + "MetricGroup": "Mem;MemoryBW;Server;SoC", "MetricName": "PMM_Write_BW" }, { "BriefDescription": "Average IO (network or disk) Bandwidth Use fo= r Writes [GB / sec]", "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1000000000 /= duration_time", - "MetricGroup": "IoBW;Mem;SoC;Server", + "MetricGroup": "IoBW;Mem;Server;SoC", "MetricName": "IO_Write_BW" }, { @@ -491,12 +1317,6 @@ "MetricGroup": "SoC", "MetricName": "Socket_CLKS" }, - { - "BriefDescription": "Uncore frequency per die [GHZ]", - "MetricExpr": "uncore_cha_0@event\\=3D0x1@ / #num_dies / duration_= time / 1000000000", - "MetricGroup": "SoC", - "MetricName": "UNCORE_FREQ" - }, { "BriefDescription": "Instructions per Far Branch ( Far Branches ap= ply upon transition from application to operating system, handling interrup= ts, exceptions) [lower number means higher occurrence rate]", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u", @@ -528,11 +1348,10 @@ "MetricName": "C6_Pkg_Residency" }, { - "BriefDescription": "Percentage of time spent in the active CPU po= wer state C0", - "MetricExpr": "100 * CPU_CLK_UNHALTED.REF_TSC / TSC", - "MetricGroup": "", - "MetricName": "cpu_utilization_percent", - "ScaleUnit": "1%" + "BriefDescription": "Uncore frequency per die [GHZ]", + "MetricExpr": "Socket_CLKS / #num_dies / duration_time / 100000000= 0", + "MetricGroup": "SoC", + "MetricName": "UNCORE_FREQ" }, { "BriefDescription": "CPU operating frequency (in GHz)", @@ -541,13 +1360,6 @@ "MetricName": "cpu_operating_frequency", "ScaleUnit": "1GHz" }, - { - "BriefDescription": "Cycles per instruction retired; indicating ho= w much time each executed instruction took; in units of cycles.", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY", - "MetricGroup": "", - "MetricName": "cpi", - "ScaleUnit": "1per_instr" - }, { "BriefDescription": "The ratio of number of completed memory load = instructions to the total number completed instructions", "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY", @@ -566,7 +1378,7 @@ "BriefDescription": "Ratio of number of requests missing L1 data c= ache (includes data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l1d_mpi_includes_data_plus_rfo_with_prefetches", + "MetricName": "l1d_mpi", "ScaleUnit": "1per_instr" }, { @@ -594,7 +1406,7 @@ 
"BriefDescription": "Ratio of number of requests missing L2 cache = (includes code+data+rfo w/ prefetches) to the total number of completed ins= tructions", "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY", "MetricGroup": "", - "MetricName": "l2_mpi_includes_code_plus_data_plus_rfo_with_prefet= ches", + "MetricName": "l2_mpi", "ScaleUnit": "1per_instr" }, { @@ -620,42 +1432,42 @@ }, { "BriefDescription": "Ratio of number of code read requests missing= last level core cache (includes demand w/ prefetches) to the total number = of completed instructions", - "MetricExpr": "( UNC_CHA_TOR_INSERTS.IA_MISS_CRD ) / INST_RETIRED.= ANY", + "MetricExpr": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD / INST_RETIRED.ANY", "MetricGroup": "", "MetricName": "llc_code_read_mpi_demand_plus_prefetch", "ScaleUnit": "1per_instr" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) in nano seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D / UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( UNC_CHA_CLOCKTICKS / ( source_cou= nt(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages ) ) ) * duration_time= )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD = / UNC_CHA_TOR_INSERTS.IA_MISS_DRD ) / ( UNC_CHA_CLOCKTICKS / ( source_count= (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages ) ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to local memory in nano= seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL ) / ( UNC_CHA_CLOCKTICKS / = ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages ) )= ) * duration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL ) / ( UNC_CHA_CLOCKTICKS / ( = source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages ) ) )= * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency_for_local_request= s", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to remote memory in nan= o seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE ) / ( UNC_CHA_CLOCKTICKS = / ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages = ) ) ) * duration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE ) / ( UNC_CHA_CLOCKTICKS / = ( source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages ) = ) ) * duration_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_latency_for_remote_reques= ts", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to Intel(R) Optane(TM) = Persistent Memory(PMEM) in nano seconds", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM ) / ( UNC_CHA_CLOCKTICKS / ( so= urce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages ) ) ) * d= uration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM 
) / ( UNC_CHA_CLOCKTICKS / ( sour= ce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages ) ) ) * dur= ation_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_to_pmem_latency", "ScaleUnit": "1ns" }, { "BriefDescription": "Average latency of a last level cache (LLC) d= emand data read miss (read memory access) addressed to DRAM in nano seconds= ", - "MetricExpr": "( ( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DR= D_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / ( UNC_CHA_CLOCKTICKS / ( so= urce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages ) ) ) * d= uration_time )", + "MetricExpr": "( 1000000000 * ( UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_= DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR ) / ( UNC_CHA_CLOCKTICKS / ( sour= ce_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages ) ) ) * dur= ation_time", "MetricGroup": "", "MetricName": "llc_demand_data_read_miss_to_dram_latency", "ScaleUnit": "1ns" @@ -699,14 +1511,14 @@ "BriefDescription": "Memory read that miss the last level cache (L= LC) addressed to local DRAM as a percentage of total memory read accesses, = does not include LLC prefetches.", "MetricExpr": "100 * ( UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC= _CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL ) / ( UNC_CHA_TOR_INSERTS.IA_MISS_D= RD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS= .IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_local_dram", + "MetricName": "numa_reads_addressed_to_local_dram", "ScaleUnit": "1%" }, { "BriefDescription": "Memory reads that miss the last level cache (= LLC) addressed to remote DRAM as a percentage of total memory read accesses= , does not include LLC prefetches.", "MetricExpr": "100 * ( UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UN= C_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE ) / ( UNC_CHA_TOR_INSERTS.IA_MISS= _DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSER= TS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE )", "MetricGroup": "", - "MetricName": "numa_percent_reads_addressed_to_remote_dram", + "MetricName": "numa_reads_addressed_to_remote_dram", "ScaleUnit": "1%" }, { @@ -720,7 +1532,7 @@ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data t= ransmit bandwidth (MB/sec)", "MetricExpr": "( UNC_UPI_TxL_FLITS.ALL_DATA * (64 / 9.0) / 1000000= ) / duration_time", "MetricGroup": "", - "MetricName": "upi_data_transmit_bw_only_data", + "MetricName": "upi_data_transmit_bw", "ScaleUnit": "1MB/s" }, { @@ -769,35 +1581,35 @@ "BriefDescription": "Bandwidth of IO reads that are initiated by e= nd device controllers that are requesting memory from the CPU.", "MetricExpr": "( UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1000000) /= duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_read", + "MetricName": "io_bandwidth_disk_or_network_writes", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Bandwidth of IO writes that are initiated by = end device controllers that are writing memory to the CPU.", "MetricExpr": "(( UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERT= S.IO_ITOMCACHENEAR ) * 64 / 1000000) / duration_time", "MetricGroup": "", - "MetricName": "io_bandwidth_write", + "MetricName": "io_bandwidth_disk_or_network_reads", "ScaleUnit": "1MB/s" }, { "BriefDescription": "Uops delivered from decoded instruction cache= (decoded stream buffer or DSB) as a percent of total uops delivered to Ins= truction Decode Queue", "MetricExpr": "100 * ( IDQ.DSB_UOPS / ( 
IDQ.DSB_UOPS + IDQ.MITE_UO= PS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_decoded_icache_dsb", + "MetricName": "percent_uops_delivered_from_decoded_icache", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from legacy decode pipeline (M= icro-instruction Translation Engine or MITE) as a percent of total uops del= ivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MITE_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_U= OPS + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline_= mite", + "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline", "ScaleUnit": "1%" }, { "BriefDescription": "Uops delivered from microcode sequencer (MS) = as a percent of total uops delivered to Instruction Decode Queue", "MetricExpr": "100 * ( IDQ.MS_UOPS / ( IDQ.DSB_UOPS + IDQ.MITE_UOP= S + IDQ.MS_UOPS + LSD.UOPS ) )", "MetricGroup": "", - "MetricName": "percent_uops_delivered_from_microcode_sequencer_ms", + "MetricName": "percent_uops_delivered_from_microcode_sequencer", "ScaleUnit": "1%" }, { @@ -827,264 +1639,5 @@ "MetricGroup": "", "MetricName": "llc_miss_remote_memory_bandwidth_write", "ScaleUnit": "1MB/s" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. Frontend denotes th= e first part of the processor core responsible to fetch operations that are= executed later on by the Backend part. Within the Frontend; a branch predi= ctor predicts the next address to fetch; cache-lines are fetched from the m= emory subsystem; parsed into instructions; and lastly decoded into micro-op= erations (uops). Ideally the Frontend can issue Machine_Width uops every cy= cle to the Backend. Frontend Bound denotes unutilized issue-slots when ther= e is no Backend stall; i.e. bubbles where Frontend delivered no uops while = Backend could have accepted them. For example; stalls due to instruction-ca= che misses would be categorized under Frontend Bound.", - "MetricExpr": "100 * ( topdown\\-fe\\-bound / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) - I= NT_MISC.UOP_DROPPING / ( slots ) )", - "MetricGroup": "TmaL1;PGO", - "MetricName": "tma_frontend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues. For example; instruction-c= ache misses; iTLB misses or fetch stalls after a branch misprediction are c= ategorized under Frontend Latency. 
In such cases; the Frontend eventually d= elivers no uops for some period.", - "MetricExpr": "100 * ( ( topdown\\-fetch\\-lat / ( topdown\\-fe\\-= bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) = - INT_MISC.UOP_DROPPING / ( slots ) ) )", - "MetricGroup": "Frontend;TmaL2;m_tma_frontend_bound_percent", - "MetricName": "tma_fetch_latency_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses.", - "MetricExpr": "100 * ( ICACHE_DATA.STALLS / ( CPU_CLK_UNHALTED.THR= EAD ) )", - "MetricGroup": "BigFoot;FetchLat;IcMiss;TmaL3;m_tma_fetch_latency_= percent", - "MetricName": "tma_icache_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses.", - "MetricExpr": "100 * ( ICACHE_TAG.STALLS / ( CPU_CLK_UNHALTED.THRE= AD ) )", - "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TmaL3;m_tma_fetch_laten= cy_percent", - "MetricName": "tma_itlb_misses_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fron= tend delay in fetching operations from corrected path; following all sorts = of miss-predicted branches. For example; branchy code with lots of miss-pre= dictions might get categorized under Branch Resteers. Note the value of thi= s node may overlap with its siblings.", - "MetricExpr": "100 * ( INT_MISC.CLEAR_RESTEER_CYCLES / ( CPU_CLK_U= NHALTED.THREAD ) + ( INT_MISC.UNKNOWN_BRANCH_CYCLES / ( CPU_CLK_UNHALTED.TH= READ ) ) )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_branch_resteers_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decod= ed i-cache) is a Uop Cache where the front-end directly delivers Uops (micr= o operations) avoiding heavy x86 decoding. The DSB pipeline has shorter lat= ency and delivered higher bandwidth than the MITE (legacy instruction decod= e pipeline). Switching between the two pipelines can cause penalties hence = this metric measures the exposed penalty.", - "MetricExpr": "100 * ( DSB2MITE_SWITCHES.PENALTY_CYCLES / ( CPU_CL= K_UNHALTED.THREAD ) )", - "MetricGroup": "DSBmiss;FetchLat;TmaL3;m_tma_fetch_latency_percent= ", - "MetricName": "tma_dsb_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs). Using proper compiler = flags or Intel Compiler by default will certainly avoid this. #Link: Optimi= zation Guide about LCP BKMs.", - "MetricExpr": "100 * ( DECODE.LCP / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "FetchLat;TmaL3;m_tma_fetch_latency_percent", - "MetricName": "tma_lcp_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS). Commonly used instructions are optimized for delivery by the= DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certa= in operations cannot be handled natively by the execution pipeline; and mus= t be performed by microcode (small programs injected into the execution str= eam). Switching to the MS too often can negatively impact performance. 
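The fetch-latency children above (tma_icache_misses, tma_itlb_misses, tma_branch_resteers, tma_dsb_switches, tma_lcp) are all stall cycles over CPU_CLK_UNHALTED.THREAD, and tma_ms_switches charges an estimated 3-cycle penalty per MS switch. A sketch of both shapes, with invented counts:

def stall_percent(stall_cycles, clks):
    # e.g. 100 * ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD
    return 100 * stall_cycles / clks

def ms_switches_percent(ms_switches, clks):
    # tma_ms_switches estimates 3 penalty cycles per IDQ.MS_SWITCHES.
    return 100 * 3 * ms_switches / clks

print(stall_percent(120_000, 2_000_000))       # 6.0
print(ms_switches_percent(10_000, 2_000_000))  # 1.5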
The = MS is designated to deliver long uop flows required by CISC instructions li= ke CPUID; or uncommon conditions like Floating Point Assists when dealing w= ith Denormals.", - "MetricExpr": "100 * ( ( 3 ) * IDQ.MS_SWITCHES / ( CPU_CLK_UNHALTE= D.THREAD ) )", - "MetricGroup": "FetchLat;MicroSeq;TmaL3;m_tma_fetch_latency_percen= t", - "MetricName": "tma_ms_switches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues. For example; inefficienc= ies at the instruction decoders; or restrictions for caching in the DSB (de= coded uops cache) are categorized under Fetch Bandwidth. In such cases; the= Frontend typically delivers suboptimal amount of uops to the Backend.", - "MetricExpr": "100 * ( max( 0 , ( topdown\\-fe\\-bound / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) - ( ( topdown\\-fetch\\-lat /= ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdo= wn\\-be\\-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) ) ) )", - "MetricGroup": "FetchBW;Frontend;TmaL2;m_tma_frontend_bound_percen= t", - "MetricName": "tma_fetch_bandwidth_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline). This pipeline is used for code that was not pre-cached in the= DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of= long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.", - "MetricExpr": "100 * ( ( IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK = ) / ( CPU_CLK_UNHALTED.DISTRIBUTED ) / 2 )", - "MetricGroup": "DSBmiss;FetchBW;TmaL3;m_tma_fetch_bandwidth_percen= t", - "MetricName": "tma_mite_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line. For example; inefficient utilization of the DSB cache structure or b= ank conflict when reading from it; are categorized here.", - "MetricExpr": "100 * ( ( IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK ) = / ( CPU_CLK_UNHALTED.DISTRIBUTED ) / 2 )", - "MetricGroup": "DSB;FetchBW;TmaL3;m_tma_fetch_bandwidth_percent", - "MetricName": "tma_dsb_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. This include slots used to issue uops t= hat do not eventually get retired and slots for which the issue-pipeline wa= s blocked due to recovery from earlier incorrect speculation. For example; = wasted work due to miss-predicted branches are categorized under Bad Specul= ation category. 
Incorrect data speculation followed by Memory Ordering Nuke= s is another example.", - "MetricExpr": "100 * ( max( 1 - ( ( topdown\\-fe\\-bound / ( topdo= wn\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\= \-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( topdown\\-be\\-bound / = ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdow= n\\-be\\-bound ) ) + ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdow= n\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) , 0 ) )", - "MetricGroup": "TmaL1", - "MetricName": "tma_bad_speculation_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction. These slots are either wasted = by uops fetched from an incorrectly speculated program path; or stalls when= the out-of-order part of the machine needs to recover its state from a spe= culative path.", - "MetricExpr": "( 100 * ( topdown\\-br\\-mispredict / ( topdown\\-f= e\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-boun= d ) ) ) + ( 0 * slots )", - "MetricGroup": "BadSpec;BrMispredicts;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_branch_mispredicts_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears. These slots are either wasted by uop= s fetched prior to the clear; or stalls the out-of-order portion of the mac= hine needs to recover its state after the clear. For example; this can happ= en due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modify= ing-Code (SMC) nukes.", - "MetricExpr": "100 * ( max( 0 , ( max( 1 - ( ( topdown\\-fe\\-boun= d / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + to= pdown\\-be\\-bound ) - INT_MISC.UOP_DROPPING / ( slots ) ) + ( topdown\\-be= \\-bound / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiri= ng + topdown\\-be\\-bound ) ) + ( topdown\\-retiring / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) )= , 0 ) ) - ( topdown\\-br\\-mispredict / ( topdown\\-fe\\-bound + topdown\\= -bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) )", - "MetricGroup": "BadSpec;MachineClears;TmaL2;m_tma_bad_speculation_= percent", - "MetricName": "tma_machine_clears_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. Backend is the portion of the processor cor= e where the out-of-order scheduler dispatches ready uops into their respect= ive execution units; and once completed these uops get retired according to= program order. For example; stalls due to data-cache misses or stalls due = to the divider unit being overloaded are both categorized under Backend Bou= nd. Backend Bound is further divided into two main categories: Memory Bound= and Core Bound.", - "MetricExpr": "( 100 * ( topdown\\-be\\-bound / ( topdown\\-fe\\-b= ound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) )= ) + ( 0 * slots )", - "MetricGroup": "TmaL1", - "MetricName": "tma_backend_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck. 
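The level-1 tma_*_percent expressions above share one normalization: each topdown-* count is divided by the sum of all four, with Frontend Bound further reduced by dropped uops per slot, and Bad Speculation derived as the clamped remainder. A compact restatement with illustrative names and numbers:

def topdown_level1(fe, bad, ret, be, uop_dropping=0, slots=1):
    # Raw topdown-fe-bound/-bad-spec/-retiring/-be-bound counts are
    # normalized by their sum; Frontend Bound additionally subtracts
    # INT_MISC.UOP_DROPPING / slots, and Bad Speculation is the
    # remainder clamped at zero, as in tma_bad_speculation_percent.
    total = fe + bad + ret + be
    frontend = fe / total - uop_dropping / slots
    backend = be / total
    retiring = ret / total
    bad_spec = max(1 - (frontend + backend + retiring), 0)
    return frontend, bad_spec, backend, retiring

print(topdown_level1(fe=300, bad=50, ret=500, be=150))
# -> (0.3, ~0.05, 0.15, 0.5)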
Memory Bound estimat= es fraction of slots where pipeline is likely stalled due to demand load or= store instructions. This accounts mainly for (1) non-completed in-flight m= emory demand loads which coincides with execution units starvation; in addi= tion to (2) cases where stores could impose backpressure on the pipeline wh= en many of them get buffered at the same time (less common out of the two).= ", - "MetricExpr": "( 100 * ( topdown\\-mem\\-bound / ( topdown\\-fe\\-= bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) = ) ) + ( 0 * slots )", - "MetricGroup": "Backend;TmaL2;m_tma_backend_bound_percent", - "MetricName": "tma_memory_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache. The L1 data cache typicall= y has the shortest latency. However; in certain cases like loads blocked o= n older stores; a load might suffer due to high latency even though it is b= eing satisfied by the L1. Another example is loads who miss in the TLB. The= se cases are characterized by execution unit stalls; while some non-complet= ed demand load lives in the machine without having that demand load missing= the L1 cache.", - "MetricExpr": "100 * ( max( ( EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY= _ACTIVITY.STALLS_L1D_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) , 0 ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l1_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 m= isses/L2 hits) can improve the latency and increase performance.", - "MetricExpr": "100 * ( ( MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_= ACTIVITY.STALLS_L2_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l2_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core. = Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and= increase performance.", - "MetricExpr": "100 * ( ( MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_A= CTIVITY.STALLS_L3_MISS ) / ( CPU_CLK_UNHALTED.THREAD ) )", - "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TmaL3;m_tma_memor= y_bound_percent", - "MetricName": "tma_l3_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads. 
Better caching can i= mprove the latency and increase performance.", - "MetricExpr": "100 * ( min( ( ( ( MEMORY_ACTIVITY.STALLS_L3_MISS /= ( CPU_CLK_UNHALTED.THREAD ) ) - ( min( ( ( ( ( 1 - ( ( ( 19 * ( MEM_LOAD_L= 3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_R= ETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1= + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_L= OAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LO= AD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 += ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) ) / ( ( 1= 9 * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HI= T / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.= LOCAL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS )= ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.F= B_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REM= OTE_HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) )= ) ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LOAD_RETIRED= .FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOAD_L3_MISS_= RETIRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L= 1_MISS ) ) ) ) ) ) ) ) ) ) * ( MEMORY_ACTIVITY.STALLS_L3_MISS / ( CPU_CLK_U= NHALTED.THREAD ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_P= MM + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) ,= ( 1 ) ) ) ) ) , ( 1 ) ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_dram_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric roughly estimates (based on idle = latencies) how often the CPU was stalled on accesses to external 3D-Xpoint = (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Memo= ry Module. 
", - "MetricExpr": "100 * ( min( ( ( ( ( 1 - ( ( ( 19 * ( MEM_LOAD_L3_M= ISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETI= RED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * ( 1 + = ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD= _L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_= RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * ( 1 + ( = MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) ) / ( ( 19 *= ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT /= ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + 10 * ( ( MEM_LOAD_L3_MISS_RETIRED.LOC= AL_DRAM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) = ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * ( 1 + ( MEM_LOAD_RETIRED.FB_H= IT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) + ( MEM_LOAD_L3_MISS_RETIRED.REMOTE= _HITM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) = ) ) ) + ( 25 * ( ( MEM_LOAD_RETIRED.LOCAL_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB= _HIT / ( MEM_LOAD_RETIRED.L1_MISS ) ) ) ) ) + 33 * ( ( MEM_LOAD_L3_MISS_RET= IRED.REMOTE_PMM * ( 1 + ( MEM_LOAD_RETIRED.FB_HIT / ( MEM_LOAD_RETIRED.L1_M= ISS ) ) ) ) ) ) ) ) ) ) * ( MEMORY_ACTIVITY.STALLS_L3_MISS / ( CPU_CLK_UNHA= LTED.THREAD ) ) ) if ( ( 1000000 ) * ( MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM = + MEM_LOAD_RETIRED.LOCAL_PMM ) > MEM_LOAD_RETIRED.L1_MISS ) else 0 ) ) , ( = 1 ) ) )", - "MetricGroup": "MemoryBound;Server;TmaL3mem;TmaL3;m_tma_memory_bou= nd_percent", - "MetricName": "tma_pmm_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write. Even though store accesses do not typically stall= out-of-order CPUs; there are few cases where stores can lead to actual sta= lls. This metric will be flagged should RFO stores be a bottleneck.", - "MetricExpr": "100 * ( EXE_ACTIVITY.BOUND_ON_STORES / ( CPU_CLK_UN= HALTED.THREAD ) )", - "MetricGroup": "MemoryBound;TmaL3mem;TmaL3;m_tma_memory_bound_perc= ent", - "MetricName": "tma_store_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck. Shortage in hardware comput= e resources; or dependencies in software's instructions are both categorize= d under Core Bound. Hence it may indicate the machine ran out of an out-of-= order resource; certain execution units are overloaded or dependencies in p= rogram's data- or instruction-flow are limiting the performance (e.g. FP-ch= ained long-latency arithmetic operations).", - "MetricExpr": "( 100 * ( max( 0 , ( topdown\\-be\\-bound / ( topdo= wn\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\= \-bound ) ) - ( topdown\\-mem\\-bound / ( topdown\\-fe\\-bound + topdown\\-= bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) ) + ( 0 * sl= ots )", - "MetricGroup": "Backend;TmaL2;Compute;m_tma_backend_bound_percent", - "MetricName": "tma_core_bound_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active. 
Divide and square root instructions are per= formed by the Divider unit and can take considerably longer latency than in= teger or Floating Point addition; subtraction; or multiplication.", - "MetricExpr": "100 * ( ARITH.DIVIDER_ACTIVE / ( CPU_CLK_UNHALTED.T= HREAD ) )", - "MetricGroup": "TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_divider_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related). Two distinct categories can be attributed into this met= ric: (1) heavy data-dependency among contiguous instructions would manifest= in this metric - such cases are often referred to as low Instruction Level= Parallelism (ILP). (2) Contention on some hardware execution unit other th= an Divider. For example; when there are too many multiply operations.", - "MetricExpr": "( 100 * ( ( EXE_ACTIVITY.EXE_BOUND_0_PORTS + ( EXE_= ACTIVITY.1_PORTS_UTIL + ( topdown\\-retiring / ( topdown\\-fe\\-bound + top= down\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * cpu@EXE= _ACTIVITY.2_PORTS_UTIL\\,umask\\=3D0xc@ ) ) / ( CPU_CLK_UNHALTED.THREAD ) i= f ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOU= ND_ON_LOADS ) ) else ( EXE_ACTIVITY.1_PORTS_UTIL + ( topdown\\-retiring / (= topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown= \\-be\\-bound ) ) * cpu@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=3D0xc@ ) / ( CP= U_CLK_UNHALTED.THREAD ) ) ) + ( 0 * slots )", - "MetricGroup": "PortsUtil;TmaL3;m_tma_core_bound_percent", - "MetricName": "tma_ports_utilization_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. Ideally= ; all pipeline slots would be attributed to the Retiring category. Retirin= g of 100% would indicate the maximum Pipeline_Width throughput was achieved= . Maximizing Retiring typically increases the Instructions-per-cycle (see = IPC metric). Note that a high Retiring value does not necessary mean there = is no room for more performance. For example; Heavy-operations or Microcod= e Assists are categorized under Retiring. They often indicate suboptimal pe= rformance and can often be optimized or avoided. ", - "MetricExpr": "( 100 * ( topdown\\-retiring / ( topdown\\-fe\\-bou= nd + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) )= + ( 0 * slots )", - "MetricGroup": "TmaL1", - "MetricName": "tma_retiring_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation). This correlates with total number = of instructions used by the program. A uops-per-instruction (see UPI metric= ) ratio of 1 or less should be expected for decently optimized software run= ning on Intel Core/Xeon products. 
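The L2/L3 bound metrics above rely on the MEMORY_ACTIVITY.STALLS_* counters being cumulative (a cycle stalled on an L3 miss is also counted as stalled on an L1D and an L2 miss), which is why the expressions take differences between adjacent levels. A sketch, assuming that reading of the counters:

def cache_level_bound(stalls, clks):
    # tma_l2_bound = (STALLS_L1D_MISS - STALLS_L2_MISS) / CLKS,
    # tma_l3_bound = (STALLS_L2_MISS - STALLS_L3_MISS) / CLKS.
    l2 = (stalls["L1D_MISS"] - stalls["L2_MISS"]) / clks
    l3 = (stalls["L2_MISS"] - stalls["L3_MISS"]) / clks
    return 100 * l2, 100 * l3

print(cache_level_bound(
    {"L1D_MISS": 400_000, "L2_MISS": 250_000, "L3_MISS": 100_000},
    2_000_000))  # (7.5, 7.5)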
While this often indicates efficient X86 = instructions were executed; high value does not necessarily mean better per= formance cannot be achieved.", - "MetricExpr": "( 100 * ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) ) + ( 0 * slot= s )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_light_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired). Note t= his metric's value may exceed its parent due to use of \"Uops\" CountDomain= and FMA double-counting.", - "MetricExpr": "100 * ( ( ( topdown\\-retiring / ( topdown\\-fe\\-b= ound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) )= * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD ) + ( ( FP_ARITH_INST_RETIRED.S= CALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2= .SCALAR ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad= \\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) + (= min( ( ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.= 128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_IN= ST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + = FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.VECTOR ) = / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + = topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) , ( 1 ) ) ) += ( cpu@AMX_OPS_RETIRED.BF16\\,cmask\\=3D0x1@ / ( ( topdown\\-retiring / ( t= opdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\= -be\\-bound ) ) * ( slots ) ) ) )", - "MetricGroup": "HPC;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fp_arith_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents overall Integer (Int) = select operations fraction the CPU has executed (retired). Vector/Matrix In= t operations and shuffles are counted. 
Note this metric's value may exceed = its parent due to use of \"Uops\" CountDomain.", - "MetricExpr": "100 * ( ( ( INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIR= ED.VNNI_128 ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\= -bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) )= + ( ( INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.= VNNI_256 ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) + = ( INT_VEC_RETIRED.SHUFFLES / ( ( topdown\\-retiring / ( topdown\\-fe\\-boun= d + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * = ( slots ) ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_int_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) * MEM_UOP_RETI= RED.ANY / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\= -spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_memory_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC= are commonly used examples.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) * INST_RETIRED= .MACRO_FUSED / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-= bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_fused_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused. Non-conditi= onal branches like direct JMP or CALL would count here. Can be used to exam= ine fusible conditional jumps that were not fused.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) * ( BR_INST_RE= TIRED.ALL_BRANCHES - INST_RETIRED.MACRO_FUSED ) / ( ( topdown\\-retiring / = ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdow= n\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_non_fused_branches_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions. 
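tma_memory_operations, tma_fused_instructions and tma_non_fused_branches above all follow one pattern: scale the Light Operations fraction by the share of retired uops (or instructions) of the given kind out of all retired slots. A sketch of that pattern with invented inputs:

def light_ops_share(light_ops, kind_uops, retiring, slots):
    # light_ops is the Light Operations fraction; kind_uops is e.g.
    # MEM_UOP_RETIRED.ANY or INST_RETIRED.MACRO_FUSED; retiring * slots
    # is the total number of retired slots.
    return 100 * light_ops * kind_uops / (retiring * slots)

# Hypothetical: 40% light ops, 1.2M memory uops of 0.5 * 8M retired slots.
print(light_ops_share(0.4, 1_200_000, 0.5, 8_000_000))  # 12.0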
Compilers often use NOPs f= or certain address alignments - e.g. start address of a function or loop bo= dy.", - "MetricExpr": "100 * ( ( max( 0 , ( topdown\\-retiring / ( topdown= \\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-= bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-ba= d\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) * INST_RETIRED= .NOP / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-sp= ec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_nop_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", - "MetricExpr": "100 * ( max( 0 , ( max( 0 , ( topdown\\-retiring / = ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdow= n\\-be\\-bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + top= down\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) ) ) - ( (= ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + t= opdown\\-retiring + topdown\\-be\\-bound ) ) * UOPS_EXECUTED.X87 / UOPS_EXE= CUTED.THREAD ) + ( ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RE= TIRED.SCALAR_DOUBLE + FP_ARITH_INST_RETIRED2.SCALAR ) / ( ( topdown\\-retir= ing / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + = topdown\\-be\\-bound ) ) * ( slots ) ) ) + ( min( ( ( FP_ARITH_INST_RETIRED= .128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_I= NST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE += FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACK= ED_SINGLE + FP_ARITH_INST_RETIRED2.VECTOR ) / ( ( topdown\\-retiring / ( to= pdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-= be\\-bound ) ) * ( slots ) ) ) , ( 1 ) ) ) + ( cpu@AMX_OPS_RETIRED.BF16\\,c= mask\\=3D0x1@ / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\= -bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) )= ) + ( ( ( INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128 ) / ( ( topdo= wn\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-r= etiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) + ( ( INT_VEC_RETIRED.AD= D_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.VNNI_256 ) / ( ( topdown\= \-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-reti= ring + topdown\\-be\\-bound ) ) * ( slots ) ) ) + ( INT_VEC_RETIRED.SHUFFLE= S / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec = + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) ) + ( ( max= ( 0 , ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec = + topdown\\-retiring + topdown\\-be\\-bound ) ) - ( topdown\\-heavy\\-ops /= ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdo= wn\\-be\\-bound ) ) ) ) * MEM_UOP_RETIRED.ANY / ( ( topdown\\-retiring / ( = topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\= \-be\\-bound ) ) * ( slots ) ) ) + ( ( max( 0 , ( topdown\\-retiring / ( to= pdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-= be\\-bound ) ) - ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown= \\-bad\\-spec + topdown\\-retiring + 
topdown\\-be\\-bound ) ) ) ) * INST_RE= TIRED.MACRO_FUSED / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdo= wn\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( slots )= ) ) + ( ( max( 0 , ( topdown\\-retiring / ( topdown\\-fe\\-bound + topdown= \\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) - ( topdown\\= -heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-re= tiring + topdown\\-be\\-bound ) ) ) ) * ( BR_INST_RETIRED.ALL_BRANCHES - IN= ST_RETIRED.MACRO_FUSED ) / ( ( topdown\\-retiring / ( topdown\\-fe\\-bound = + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) * ( = slots ) ) ) + ( ( max( 0 , ( topdown\\-retiring / ( topdown\\-fe\\-bound + = topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) ) - ( to= pdown\\-heavy\\-ops / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdo= wn\\-retiring + topdown\\-be\\-bound ) ) ) ) * INST_RETIRED.NOP / ( ( topdo= wn\\-retiring / ( topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-r= etiring + topdown\\-be\\-bound ) ) * ( slots ) ) ) ) ) )", - "MetricGroup": "Pipeline;TmaL3;m_tma_light_operations_percent", - "MetricName": "tma_other_light_ops_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences. This highly-correlates with the = uop length of these instructions/sequences.", - "MetricExpr": "( 100 * ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-= bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) = ) ) + ( 0 * slots )", - "MetricGroup": "Retire;TmaL2;m_tma_retiring_percent", - "MetricName": "tma_heavy_operations_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring instructions that that are decoder into two or up to= ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of= uops in such instructions.", - "MetricExpr": "100 * ( ( topdown\\-heavy\\-ops / ( topdown\\-fe\\-= bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound ) = ) - ( UOPS_RETIRED.MS / ( slots ) ) )", - "MetricGroup": "TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_few_uops_instructions_percent", - "ScaleUnit": "1%" - }, - { - "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS= is used for CISC instructions not supported by the default decoders (like = repeat move strings; or CPUID); or by microcode assists used to address som= e operation modes (like in Floating Point assists). 
These cases can often b= e avoided.", - "MetricExpr": "100 * ( UOPS_RETIRED.MS / ( slots ) )", - "MetricGroup": "MicroSeq;TmaL3;m_tma_heavy_operations_percent", - "MetricName": "tma_microcode_sequencer_percent", - "ScaleUnit": "1%" } ] --=20 2.38.0.rc1.362.ged0d419d3c-goog From nobody Mon Apr 29 10:52:12 2024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CAB90C433FE for ; Sat, 1 Oct 2022 06:09:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229660AbiJAGJ3 (ORCPT ); Sat, 1 Oct 2022 02:09:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52800 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229689AbiJAGIr (ORCPT ); Sat, 1 Oct 2022 02:08:47 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B8E811AF91C for ; Fri, 30 Sep 2022 23:07:33 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 189-20020a2516c6000000b006bbbcc3dd9bso5639229ybw.15 for ; Fri, 30 Sep 2022 23:07:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:from:to:cc:subject:date; bh=sYSumwi1J3HcBN/q2DeaufGmFH3/CfQMFmLPW058Qnk=; b=SF/ngzdkbm+4uolPD5z9riTyqTuyEacPZPaZZoZmWLfXTZX8GDTM4f6K4luf+hX11i /+XLx07MkeeEy5FoWbghuD1N513UkF5fyaCR4hW/nCbqYmTb1sU6Rr+KVlFN4wbI6ZMc FK/WuNeCNv5sgdeXBjTZJHSbLEf+LItBBdEYr9Id/NFpLLpldG4EL9nSHL39lQrRCNJi ufB/gc60eaK/wh5Wk0tb9/XB9K5omyRWbUmCJjJFFTypllDdtM2qJTZ5Vu5wrg+n9YJ5 3F5DucKWMMtTRue5K89N3rjpJU7xgZnDQRxu7gI2fmDIxf4sCEvfwhSDuLZwnbClK1TE Cqgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=sYSumwi1J3HcBN/q2DeaufGmFH3/CfQMFmLPW058Qnk=; b=zO8Tgp6Y6YtfqnaRLRIpWoy77PEyNjqq2AhF1R0jmQskJYWWWvSpfIy87hD94crhcZ fOufSH9CbC+TdHfOJaSflq3tdeV7Q6VvMyVQcKpqqWQc2hAyzgyszPs3sY6hZn6sSYFY r4SIH8LJXX8M/9WHvFZJTYMQsUkjiN1gEQ3IDuMNaAMvL7qc9xwUiLkKJ6nx0KrQ3iC9 eK3l0n5Jtuf1+GqSwlmQ1/mTl6o58CL90gy56B7abxosvwA1E7ZeW0tnh+AxCcTNZyYD IKnnaYLBbyhKQty8sz0snPecJpjxdEdDccAeEVbXefaPIAw2VWuYZ8icSWoLGeB+kRr0 uwkw== X-Gm-Message-State: ACrzQf1UfeN+wKo2so4OW0o81nPjsDMWUFAdGz/O6NQzpQ3W4lQfJGoZ dItXGNh4pZgxl7xGDzbV/rM8zAktSxGS X-Google-Smtp-Source: AMsMyM5speHkAvFFyH4EOcnqJEwnYmY8cZgR0lRS1x6kRJa1jeL7Duf043AaF54+gnnC3+CO2s3fgkQfldMr X-Received: from irogers.svl.corp.google.com ([2620:15c:2d4:203:de60:c6ac:8491:ce1e]) (user=irogers job=sendgmr) by 2002:a25:a445:0:b0:6af:c2a7:1eea with SMTP id f63-20020a25a445000000b006afc2a71eeamr11283695ybi.305.1664604453229; Fri, 30 Sep 2022 23:07:33 -0700 (PDT) Date: Fri, 30 Sep 2022 23:06:33 -0700 In-Reply-To: <20221001060636.2661983-1-irogers@google.com> Message-Id: <20221001060636.2661983-20-irogers@google.com> Mime-Version: 1.0 References: <20221001060636.2661983-1-irogers@google.com> X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog Subject: [PATCH v2 19/22] perf vendor events: Update silvermont cpuids From: Ian Rogers To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo 
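What makes the removed "*_percent" expressions above so long is that every use of the retiring fraction textually re-expands the same four-counter sum, topdown-retiring / (topdown-fe-bound + topdown-bad-spec + topdown-retiring + topdown-be-bound). A minimal sketch of the equivalent computation, with purely hypothetical counter values, naming each fraction once (the style the metrics written in terms of other metrics, described later in this series, move toward):

# Illustrative sketch only; the counter values are made up.
counters = {
    "topdown-fe-bound": 250.0,
    "topdown-bad-spec": 50.0,
    "topdown-retiring": 500.0,
    "topdown-be-bound": 200.0,
    "topdown-heavy-ops": 150.0,
}

# The four-counter sum that the removed expressions re-expand everywhere.
slots = (counters["topdown-fe-bound"] + counters["topdown-bad-spec"]
         + counters["topdown-retiring"] + counters["topdown-be-bound"])

retiring = counters["topdown-retiring"] / slots        # tma_retiring
heavy_ops = counters["topdown-heavy-ops"] / slots      # tma_heavy_operations
light_ops = max(0.0, retiring - heavy_ops)             # tma_light_operations

print(f"light operations: {100 * light_ops:.1f}%")     # -> 35.0%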
Date: Fri, 30 Sep 2022 23:06:33 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-20-irogers@google.com>
Subject: [PATCH v2 19/22] perf vendor events: Update silvermont cpuids
From: Ian Rogers <irogers@google.com>

Add the cpuid that was added to
https://download.01.org/perfmon/mapfile.csv

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index c2354e368586..5e609b876790 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -21,7 +21,7 @@ GenuineIntel-6-1[AEF],v3,nehalemep,core
 GenuineIntel-6-2E,v3,nehalemex,core
 GenuineIntel-6-2A,v17,sandybridge,core
 GenuineIntel-6-8F,v1.06,sapphirerapids,core
-GenuineIntel-6-(37|4C|4D),v14,silvermont,core
+GenuineIntel-6-(37|4A|4C|4D|5A),v14,silvermont,core
 GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v53,skylake,core
 GenuineIntel-6-55-[01234],v1.28,skylakex,core
 GenuineIntel-6-86,v1.20,snowridgex,core
--
2.38.0.rc1.362.ged0d419d3c-goog
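Each mapfile.csv row keys an event table by a regular expression over the vendor-family-model cpuid string, so widening the group is a one-row change. A small sketch of what the updated silvermont pattern now accepts; the real matching lives in perf's C code and the anchoring here is only an assumption for illustration:

import re

# Pattern taken from the updated mapfile.csv row above.
silvermont = re.compile(r"GenuineIntel-6-(37|4A|4C|4D|5A)$")

# Hypothetical cpuid strings in the vendor-family-model form the rows use.
for cpuid in ("GenuineIntel-6-4A", "GenuineIntel-6-5A", "GenuineIntel-6-9E"):
    table = "silvermont" if silvermont.match(cpuid) else "some other table"
    print(cpuid, "->", table)
# GenuineIntel-6-4A -> silvermont
# GenuineIntel-6-5A -> silvermont
# GenuineIntel-6-9E -> some other table (9E is covered by the skylake row)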
Date: Fri, 30 Sep 2022 23:06:34 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-21-irogers@google.com>
Subject: [PATCH v2 20/22] perf vendor events: Update Intel skylake
From: Ian Rogers <irogers@google.com>

Events remain at v53, and the metrics are based on TMA 4.4 full.

Use the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing the children of a metric to be
  easily measured using the metric name with a _group suffix (see the
  sketch below).
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability.

Tested with 'perf test':
 10: PMU events                                                      :
 10.1: PMU event table sanity                                        : Ok
 10.2: PMU event map aliases                                         : Ok
 10.3: Parsing of PMU event table metrics                            : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs             : Ok

Signed-off-by: Ian Rogers <irogers@google.com>
---
 .../arch/x86/skylake/skl-metrics.json         | 861 ++++++++++++++----
 1 file changed, 679 insertions(+), 182 deletions(-)
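The diff below leans on metrics defined in terms of other metrics; for example, tma_fetch_bandwidth is literally "tma_frontend_bound - tma_fetch_latency". A toy resolver sketch, with hypothetical event counts; perf evaluates these with its own expression parser, so this only illustrates the substitution idea:

# Simplified expressions lifted from the JSON below; the resolver and
# the event counts are hypothetical.
metrics = {
    "tma_frontend_bound": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
    "tma_fetch_latency":
        "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / SLOTS",
    "tma_fetch_bandwidth": "tma_frontend_bound - tma_fetch_latency",
}

def evaluate(name: str, events: dict) -> float:
    """Evaluate a metric, substituting referenced metrics, then events."""
    expr = metrics[name]
    for other in metrics:
        if other != name and other in expr:
            expr = expr.replace(other, f"({evaluate(other, events)})")
    # Longest names first so one event name cannot clobber another.
    for event in sorted(events, key=len, reverse=True):
        expr = expr.replace(event, f"({events[event]})")
    return eval(expr)  # fine for a toy; perf has a real parser

events = {
    "IDQ_UOPS_NOT_DELIVERED.CORE": 1.2e9,
    "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE": 1.5e8,
    "SLOTS": 8e9,
}
print(f"tma_fetch_bandwidth = {evaluate('tma_fetch_bandwidth', events):.3f}")
# -> tma_fetch_bandwidth = 0.075

Given the group naming described above, an invocation along the lines of 'perf stat -M tma_frontend_bound_group' should then compute the two children, tma_fetch_latency and tma_fetch_bandwidth, together.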
Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound." + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", + "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. Sample with: FRONTEN= D_RETIRED.LATENCY_GE_4_PS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere the processor's Frontend undersupplies its Backend. SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE= / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. 
For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_= 16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to instruction cache misses", + "MetricExpr": "(ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDAT= A_STALL\\,cmask\\=3D1\\,edge@) / CLKS", + "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latenc= y_group", + "MetricName": "tma_icache_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RE= TIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTE= ND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_= branches", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Branch Misprediction= at execution stage", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_mispredicts_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Branch Mispredictio= n at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers as a result of Machine Clears", + "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIR= ED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES /= CLKS", + "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteer= s_group", + "MetricName": "tma_clears_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers as a result of Machine Clears. 
Sa= mple with: INT_MISC.CLEAR_RESTEER_CYCLES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to new branch address clears", + "MetricExpr": "9 * BACLEARS.ANY / CLKS", + "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_gro= up", + "MetricName": "tma_unknown_branches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to new branch address clears. These are fetched branc= hes the Branch Prediction Unit was unable to recognize (First fetch or hitt= ing BPU capacity limit). Sample with: BACLEARS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DS= B_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. 
Sample with: IDQ.MS_SWITCHES", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend. Sam= ple with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LA= TENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to the MITE pipeline (the legacy deco= de pipeline)", + "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES= _4_UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_grou= p", + "MetricName": "tma_mite", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to the MITE pipeline (the legacy dec= ode pipeline). This pipeline is used for code that was not pre-cached in th= e DSB or LSD. For example; inefficiencies due to asymmetric decoders; use o= f long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sa= mple with: FRONTEND_RETIRED.ANY_DSB_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re decoder-0 was the only active decoder", + "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=3D1@ - cpu@INS= T_DECODED.DECODERS\\,cmask\\=3D2@) / CORE_CLKS", + "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group", + "MetricName": "tma_decoder0_alone", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s in which CPU was likely limited due to DSB (decoded uop cache) fetch pipe= line", + "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4= _UOPS) / CORE_CLKS / 2", + "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group", + "MetricName": "tma_dsb", + "PublicDescription": "This metric represents Core fraction of cycl= es in which CPU was likely limited due to DSB (decoded uop cache) fetch pip= eline. For example; inefficient utilization of the DSB cache structure or = bank conflict when reading from it; are categorized here.", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." 
+ "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", + "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
Sample with: MACHINE_CLEARS.COUNT", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNH= ALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * = CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." + "MetricExpr": "1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * ((I= NT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES))= / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", + "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUN= D_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + = tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) = * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. 
The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE= _ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. 
For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 9 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTA= NDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often memory load a= ccesses were aliased by preceding stores (in program order) with a 4K addre= ss offset", + "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_4k_aliasing", + "PublicDescription": "This metric estimates how often memory load = accesses were aliased by preceding stores (in program order) with a 4K addr= ess offset. False match is possible; which incur a few cycles load re-issue= . However; the short re-issue duration is often hidden by the out-of-order = core and HW optimizations; hence a user may safely ignore a high value of t= his metric unless it manages to propagate up into parent nodes of the hiera= rchy (e.g. to L1_Bound).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric does a *rough estimation* of how = often L1D Fill Buffer unavailability limited additional L1D miss memory acc= ess requests to proceed", + "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\= \,cmask\\=3D1@ / CLKS", + "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_fb_full", + "PublicDescription": "This metric does a *rough estimation* of how= often L1D Fill Buffer unavailability limited additional L1D miss memory ac= cess requests to proceed. The higher the metric value; the deeper the memor= y hierarchy level the misses are satisfied from (metric values >1 are valid= ). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or= external memory).", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. 
SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK= _UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK= _UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCL= ES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." + "BriefDescription": "This metric estimates how often the CPU was s= talled due to L2 cache accesses by loads", + "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.= FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (ME= M_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + cpu@L1D_PEND_MISS.FB_= FULL\\,cmask\\=3D1@)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.S= TALLS_L2_MISS) / CLKS)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l2_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 = misses/L2 hits) can improve the latency and increase performance. Sample wi= th: MEM_LOAD_RETIRED.L2_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STA= LLS_L3_MISS) / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "((18.5 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIR= ED.XSNP_HITM + (16.5 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MI= SS) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS= ", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. 
Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(16.5 * Average_Frequency) * MEM_LOAD_L3_HIT_RETIRE= D.XSNP_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2)= / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(6.5 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HIT= * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on e= lse OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_ACT= IVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bou= nd)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. 
Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. Sample= with: MEM_INST_RETIRED.ALL_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU spent handling L1D store misses", + "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_INST_RETIRED.LOC= K_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOAD= S / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_RE= QUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group", + "MetricName": "tma_store_latency", + "PublicDescription": "This metric estimates fraction of cycles the= CPU spent handling L1D store misses. Store accesses usually less impact ou= t-of-order core performance; however; holding resources for longer time can= lead into undesired implications (e.g. 
contention on L1D fill-buffer entri= es - see FB_Full)", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates how often CPU w= as handling synchronizations due to False Sharing", + "MetricExpr": "(22 * Average_Frequency) * OFFCORE_RESPONSE.DEMAND_= RFO.L3_HIT.SNOOP_HITM / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_boun= d_group", + "MetricName": "tma_false_sharing", + "PublicDescription": "This metric roughly estimates how often CPU = was handling synchronizations due to False Sharing. False Sharing is a mult= ithreading hiccup; where multiple Logical Processors contend on different d= ata-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT= _RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents rate of split store ac= cesses", + "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS", + "MetricGroup": "TopdownL4;tma_store_bound_group", + "MetricName": "tma_split_stores", + "PublicDescription": "This metric represents rate of split store a= ccesses. Consider aligning your data to the 64-byte cache line granularity= . Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles spent handling first-level data TLB store misses", + "MetricExpr": "(9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ = + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group", + "MetricName": "tma_dtlb_store", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles spent handling first-level data TLB store misses. As with ordinar= y data caching; focus on improving data locality and reducing working-set s= ize to reduce DTLB overhead. Additionally; consider using profile-guided o= ptimization (PGO) to collocate frequently-used data on the same page. Try = using larger page sizes for large amounts of frequently-used data. Sample w= ith: MEM_INST_RETIRED.STLB_MISS_STORES_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the TLB was missed by store accesses, hitting in the second-l= evel TLB (STLB)", + "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the STLB was missed by store accesses, performing a hardware page wal= k", + "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group", + "MetricName": "tma_store_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. 
Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_P= ORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / CLKS if (ARITH.DIV= IDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY)= ) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTI= L) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_NONE / 2 if #SMT_on else= CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "PARTIAL_RAT_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. 
Sample with: PARTIAL_RAT_STALLS.SCOREBOARD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued", + "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_1 - UOPS_EXECUTED.CORE_CYCLES_GE_2) / 2 if #SMT_on else EXE_ACTIVITY.1_PORTS_UTIL) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_1", + "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)", + "MetricExpr": "((UOPS_EXECUTED.CORE_CYCLES_GE_2 - UOPS_EXECUTED.CORE_CYCLES_GE_3) / 2 if #SMT_on else EXE_ACTIVITY.2_PORTS_UTIL) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_2", + "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). 
Loop Vectorization = -most compilers feature auto-Vectorization options today- reduces pressure = on the execution ports as multiple elements are calculated with same uop.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed total of 3 or more uops per cycle on all execution ports (Logical= Processor cycles since ICL, Physical Core cycles otherwise).", + "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else= UOPS_EXECUTED.CORE_CYCLES_GE_3) / CORE_CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_3m", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution ports for ALU operations.", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT= .PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_alu_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd b= ranch) Sample with: UOPS_DISPATCHED_PORT.PORT_0", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS", + "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_0", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHE= D_PORT.PORT_1", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_1", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] = ALU) Sample with: UOPS_DISPATCHED.PORT_5", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_5", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple = ALU) Sample with: UOPS_DISPATCHED_PORT.PORT_6", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_alu_op_utilization_group", + "MetricName": "tma_port_6", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Load operations Sample with: UO= PS_DISPATCHED.PORT_2_3", + "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT= .PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 *= CORE_CLKS)", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_load_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [= ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_2", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_2", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; 
[ICL+] Loads) Sample with: UOPS_DISPATCHED_PORT.PORT_3", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 4 (Store-data) Sample with: UOPS_DISPATCHED_PORT.PORT_4", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 7 ([HSW+]simple Store-address) Sample with: UOPS_DISPATCHED_PORT.PORT_7", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.RETIRE_SLOTS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. 
SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" }, { - "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALT= ED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * = CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETI= RED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES = / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts", - "MetricName": "Mispredictions" + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). 
Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.TH= READ", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. 
May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring memory operations -- uops for memory load or store a= ccesses.", + "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_= RETIRED.ANY", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_memory_operations", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring fused instructions -- where one uop can represent mu= ltiple contiguous instructions", + "MetricExpr": "tma_light_operations * UOPS_RETIRED.MACRO_FUSED / U= OPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fused_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring fused instructions -- where one uop can represent m= ultiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JC= C are commonly used examples.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring branch instructions that were not fused", + "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHE= S - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_non_fused_branches", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring branch instructions that were not fused. Non-condit= ional branches like direct JMP or CALL would count here. Can be used to exa= mine fusible conditional jumps that were not fused.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring NOP (no op) instructions", + "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / UOPS_RETI= RED.RETIRE_SLOTS", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_nop_instructions", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring NOP (no op) instructions. Compilers often use NOPs = for certain address alignments - e.g. start address of a function or loop b= ody. Sample with: INST_RETIRED.NOP", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents the remaining light uo= ps fraction the CPU has executed - remaining means not covered by other sib= ling nodes. May undercount due to FMA double counting", + "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_m= emory_operations + tma_fused_instructions + tma_non_fused_branches + tma_no= p_instructions))", + "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group", + "MetricName": "tma_other_light_ops", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS + UOPS_RETIRED.MACRO_FUS= ED - INST_RETIRED.ANY) / SLOTS", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. 
This highly-correlates with the uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops", + "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer", + "MetricGroup": "TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_few_uops_instructions", + "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ.MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists", + "MetricExpr": "100 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", - "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_= RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_= RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNH= ALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIR= ED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) = * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS= _NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))) )", - "MetricGroup": "Bad;BadSpec;BrMispredicts_SMT", - "MetricName": "Mispredictions_SMT" + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" }, { "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_= ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (= ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_H= IT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1= @ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS )= / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_AC= TIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_P= ORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * E= XE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS= _NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + = 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,= cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALLS_L3_MISS= / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTI= VITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_= HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (ME= M_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L= 1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVI= TY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THR= 
EAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MI= SS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_= ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1= _PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) *= EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UO= PS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY = + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (O= FFCORE_REQUESTS_BUFFER.SQ_FULL / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIV= ITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THR= EAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_= L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_ME= M_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EX= E_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTE= D.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * = (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS= _ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD= ))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_R= ETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ / CPU_CLK_UNHAL= TED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALL= S_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_boun= d / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_stor= e_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma= _l3_hit_latency + tma_sq_full))) + (tma_l1_bound / (tma_dram_bound + tma_l1= _bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (= tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_spli= t_loads + tma_store_fwd_blk)) ", "MetricGroup": "Mem;MemoryBW;Offcore", "MetricName": "Memory_Bandwidth" }, - { - "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK= _UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALL= S_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 = + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RET= IRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_= L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #= 
((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE= _ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_= SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE= _THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTI= L) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MIS= C.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( = 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) )= * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_D= ATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALL= S_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - C= YCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RE= TIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )= ) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_= RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYC= LE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNH= ALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STA= LLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_A= NY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_A= CTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD= / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XC= LK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) /= (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( OFFCORE_REQUESTS_BUFFER= .SQ_FULL / 2 ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED= .ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(( CYCLE_ACTIVITY.ST= ALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) )= ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MI= SS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY = + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTI= VITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.= THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.= REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)= ) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / = 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK = ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4= * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_AC= TIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( ((L1D_PEND_MISS.PENDING / ( M= EM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB= _FULL\\,cmask\\=3D1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.S= TALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD = , 0 )) ) ", - "MetricGroup": "Mem;MemoryBW;Offcore_SMT", - "MetricName": 
"Memory_Bandwidth_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_= ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALL= S_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (= ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETI= RED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_H= IT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1= @ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS )= / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_AC= TIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_P= ORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * E= XE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS= _NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + = 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min= ( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_R= D ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE= _REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHALTED.THREA= D)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_= ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALT= ED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT /= MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOA= D_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL= \\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.ST= ALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_= L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #(((= CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_AC= TIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLO= TS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTI= VITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 = * CPU_CLK_UNHALTED.THREAD))) ) * ( (( (10 * ((CPU_CLK_UNHALTED.THREAD / CPU= _CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * (= (CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 100000000= 0 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB= _HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCL= E_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHAL= TED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_= HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (ME= M_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB= _FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVI= TY.STALLS_L2_MISS ) / 
CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALL= S_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL += (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNH= ALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)= ) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( = UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.TH= READ))) ) )", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound = / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_= bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing = + tma_l3_hit_latency + tma_sq_full)) + (tma_l2_bound / (tma_dram_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)))", "MetricGroup": "Mem;MemoryLat;Offcore", "MetricName": "Memory_Latency" }, - { - "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK= _UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALL= S_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 = + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RET= IRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) ))= + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_= L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #= ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE= _ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_= SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE= _THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTI= L) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (= 4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_A= CTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MIS= C.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( = 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) )= * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WI= TH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cp= u@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@ ) / CPU_CLK_UNHAL= TED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + = (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_C= LK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED= 
.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 += (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MIS= S.FB_FULL\\,cmask\\=3D1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_AC= TIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVIT= Y.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREA= D) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / = (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.R= ETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALT= ED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_POR= TS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CO= RE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_TH= READ_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( I= NT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )))) ) * ( (( (10 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) *= msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD= / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * = MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.= L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MIS= S - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (ME= M_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L= 1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / = MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=3D1@ ) )= * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CP= U_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY= .BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_U= TIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) *= ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))= * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_= UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CP= U_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS= _ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK= _UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK= _UNHALTED.REF_XCLK ) )))) ) )", - "MetricGroup": "Mem;MemoryLat;Offcore_SMT", - "MetricName": "Memory_Latency_SMT" - }, { "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_= ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NO= T_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (max( (= CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK= _UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.= BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UT= IL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTI= VITY.2_PORTS_UTIL) + 
EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DE= LIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( 9 * c= pu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_LOAD_MISSES.WALK_ACTIVE = , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 )= ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYC= LE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_A= CTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.ST= ALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTA= L + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_= UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STOR= ES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) -= ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED= .THREAD))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTL= B_STORE_MISSES.WALK_ACTIVE ) / CPU_CLK_UNHALTED.THREAD) / #(EXE_ACTIVITY.BO= UND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tm= a_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_= fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_boun= d + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + = tma_false_sharing + tma_split_stores + tma_store_latency))) ", "MetricGroup": "Mem;MemoryTLB;Offcore", "MetricName": "Memory_Data_TLBs" }, - { - "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", - "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIV= ITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORT= S_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (I= DQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 += CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( U= OPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) )))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - = CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYC= LE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVI= TY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS /= (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD= _ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EX= E_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( (= CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE /= CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOV= ERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU= _CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (m= in( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_LOAD_MISSES.WAL= K_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - 
CYCLE_ACTIVITY.CYCLES_L1D_M= ISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_= ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) += ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_AC= TIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.ST= ALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 *= ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTI= VE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACT= IVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_= CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_= CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_C= YCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_= UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( 9 * = cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=3D1@ + DTLB_STORE_MISSES.WALK_ACTI= VE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREA= D_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(EXE_ACTIVITY.BOUND_ON_STORES = / CPU_CLK_UNHALTED.THREAD) ) ) ", - "MetricGroup": "Mem;MemoryTLB;Offcore_SMT", - "MetricName": "Memory_Data_TLBs_SMT" - }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_= RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITI= ONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 = * CPU_CLK_UNHALTED.THREAD))", + "MetricExpr": "100 * ((BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_R= ETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITION= AL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead" }, - { - "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_= RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITI= ONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 = * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACT= IVE / CPU_CLK_UNHALTED.REF_XCLK ) )))", - "MetricGroup": "Ret_SMT", - "MetricName": "Branching_Overhead_SMT" - }, { "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", - "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_= CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDA= TA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS= .ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_U= OPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "Big_Code" }, - { - "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side 
cache; TLB and BTB= misses)", - "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.O= NE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_ST= ALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACH= E_16B.IFDATA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 = * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.= CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + C= PU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))", - "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB_SMT", - "MetricName": "Big_Code_SMT" - }, { "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", - "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK= _UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE /= (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MIS= P_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_C= YCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UO= PS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) - (100 * (4 * IDQ_UOPS_NOT= _DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (I= CACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STA= LL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHA= LTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_U= OPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))= )", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", "MetricGroup": "Fed;FetchBW;Frontend", "MetricName": "Instruction_Fetch_BW" }, - { - "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", - "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU= _CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU= _CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DE= LIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.= ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL= _BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_= MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_D= ELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) = * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))= ) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( = ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE = / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNH= ALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STAL= L\\,cmask\\=3D1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / = CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DEL= IV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.O= NE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))", - "MetricGroup": "Fed;FetchBW;Frontend_SMT", - "MetricName": "Instruction_Fetch_BW_SMT" - }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / 
+        "MetricExpr": "INST_RETIRED.ANY / CLKS",
         "MetricGroup": "Ret;Summary",
         "MetricName": "IPC"
     },
@@ -160,8 +706,8 @@
     },
     {
         "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
-        "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "Pipeline;Mem",
+        "MetricExpr": "1 / IPC",
+        "MetricGroup": "Mem;Pipeline",
         "MetricName": "CPI"
     },
     {
@@ -172,16 +718,10 @@
     },
     {
         "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "TmaL1",
+        "MetricExpr": "4 * CORE_CLKS",
+        "MetricGroup": "tma_L1_group",
         "MetricName": "SLOTS"
     },
-    {
-        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
-        "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "TmaL1_SMT",
-        "MetricName": "SLOTS_SMT"
-    },
     {
         "BriefDescription": "The ratio of Executed- by Issued-Uops",
         "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
@@ -191,63 +731,38 @@
     },
     {
         "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;SMT;TmaL1",
+        "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS",
+        "MetricGroup": "Ret;SMT;tma_L1_group",
         "MetricName": "CoreIPC"
     },
-    {
-        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
-        "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;SMT;TmaL1_SMT",
-        "MetricName": "CoreIPC_SMT"
-    },
     {
         "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
-        "MetricGroup": "Ret;Flops",
+        "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / CORE_CLKS",
+        "MetricGroup": "Flops;Ret",
         "MetricName": "FLOPc"
     },
-    {
-        "BriefDescription": "Floating Point Operations Per Cycle",
-        "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Ret;Flops_SMT",
-        "MetricName": "FLOPc_SMT"
-    },
     {
         "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
+        "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)) / (2 * CORE_CLKS)",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "FP_Arith_Utilization",
         "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
     },
-    {
-        "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
-        "MetricGroup": "Cor;Flops;HPC_SMT",
-        "MetricName": "FP_Arith_Utilization_SMT",
-        "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common). SMT version; use when SMT is enabled and measuring per logical CPU."
-    },
     {
         "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
-        "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+        "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
         "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
         "MetricName": "ILP"
     },
     {
         "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
-        "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)))) / ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1 ) if 0 > 0.5 else 0",
+        "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else 0",
         "MetricGroup": "Cor;SMT",
         "MetricName": "Core_Bound_Likely"
     },
-    {
-        "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
-        "MetricExpr": "( 1 - ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))) / ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if ((1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))) < ((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ( ARITH.DIVIDER_ACTIVE < ( CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY ) ) else (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1 ) if (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 )) > 0.5 else 0",
-        "MetricGroup": "Cor;SMT_SMT",
-        "MetricName": "Core_Bound_Likely_SMT"
-    },
     {
         "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
-        "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS",
         "MetricGroup": "SMT",
         "MetricName": "CORE_CLKS"
     },
@@ -289,13 +804,13 @@
     },
     {
         "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )",
+        "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
         "MetricGroup": "Flops;InsType",
         "MetricName": "IpFLOP"
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) )",
+        "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE))",
         "MetricGroup": "Flops;InsType",
         "MetricName": "IpArith",
         "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
@@ -316,14 +831,14 @@
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)",
         "MetricGroup": "Flops;FpVector;InsType",
         "MetricName": "IpArith_AVX128",
         "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
         "MetricGroup": "Flops;FpVector;InsType",
         "MetricName": "IpArith_AVX256",
         "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
@@ -335,9 +850,9 @@
         "MetricName": "IpSWPF"
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
     {
@@ -372,16 +887,10 @@
     },
     {
         "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
-        "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / CPU_CLK_UNHALTED.THREAD / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) )",
+        "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))",
         "MetricGroup": "DSBmiss;Fed",
         "MetricName": "DSB_Misses"
     },
-    {
-        "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
-        "MetricExpr": "100 * ( (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) )",
-        "MetricGroup": "DSBmiss;Fed_SMT",
-        "MetricName": "DSB_Misses_SMT"
-    },
     {
         "BriefDescription": "Number of Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
         "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
@@ -396,16 +905,10 @@
     },
     {
         "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
-        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.ALL_BRANCHES",
+        "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;BrMispredicts",
         "MetricName": "Branch_Misprediction_Cost"
     },
-    {
-        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
-        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES",
-        "MetricGroup": "Bad;BrMispredicts_SMT",
-        "MetricName": "Branch_Misprediction_Cost_SMT"
-    },
     {
         "BriefDescription": "Fraction of branches that are non-taken conditionals",
         "MetricExpr": "BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRANCHES",
@@ -414,101 +917,95 @@
     },
     {
         "BriefDescription": "Fraction of branches that are taken conditionals",
-        "MetricExpr": "( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) / BR_INST_RETIRED.ALL_BRANCHES",
+        "MetricExpr": "(BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) / BR_INST_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;Branches;CodeGen;PGO",
         "MetricName": "Cond_TK"
     },
     {
         "BriefDescription": "Fraction of branches that are CALL or RET",
-        "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES",
+        "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;Branches",
         "MetricName": "CallRet"
     },
     {
         "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
-        "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
+        "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;Branches",
         "MetricName": "Jump"
     },
     {
         "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
-        "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )",
+        "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
         "MetricGroup": "Mem;MemoryBound;MemoryLat",
         "MetricName": "Load_Miss_Real_Latency"
     },
     {
         "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
         "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
-        "MetricGroup": "Mem;MemoryBound;MemoryBW",
+        "MetricGroup": "Mem;MemoryBW;MemoryBound",
         "MetricName": "MLP"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;Backend;CacheMisses",
+        "MetricGroup": "Backend;CacheMisses;Mem",
         "MetricName": "L2MPKI"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses;Offcore",
+        "MetricGroup": "CacheMisses;Mem;Offcore",
         "MetricName": "L2MPKI_All"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
-        "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_All"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_Load"
     },
     {
         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L3MPKI"
     },
     {
         "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "FB_HPKI"
     },
     {
         "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
         "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
+        "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING) / (2 * CORE_CLKS)",
         "MetricGroup": "Mem;MemoryTLB",
         "MetricName": "Page_Walks_Utilization"
     },
-    {
-        "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
-        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
-        "MetricGroup": "Mem;MemoryTLB_SMT",
-        "MetricName": "Page_Walks_Utilization_SMT"
-    },
     {
         "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
         "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time",
@@ -535,25 +1032,25 @@
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
-        "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)",
+        "MetricExpr": "L1D_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L1D_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
-        "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)",
+        "MetricExpr": "L2_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L2_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Fill_BW",
        "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L3_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Access_BW",
         "MetricGroup": "Mem;MemoryBW;Offcore",
         "MetricName": "L3_Cache_Access_BW_1T"
     },
@@ -565,26 +1062,26 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
-        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0",
         "MetricGroup": "SMT",
         "MetricName": "SMT_2T_Utilization"
     },
@@ -602,7 +1099,7 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "64 * ( arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@ ) / 1000000 / duration_time / 1000",
+        "MetricExpr": "64 * (arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@) / 1000000 / duration_time / 1000",
         "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:35 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-22-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Subject: [PATCH v2 21/22] perf vendor events: Update Intel tigerlake
From: Ian Rogers
To: Zhengjun Xing , Kan Liang , Andi Kleen , perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , John Garry , James Clark , Kajol Jain , Thomas Richter , Miaoqian Lin , Florian Fischer , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian , Ian Rogers

Events remain at v1.07, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py

with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- Addition of all 6 levels of TMA metrics. Previously metrics involving
  topdown events were dropped. Child metrics are placed in a group named
  after their parent, allowing children of a metric to be easily measured
  using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability; see the illustration below.
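As an illustration only (this excerpt is taken from the generated file in
the diff below, not an additional change): a derived metric can now
reference another metric by name rather than repeating the underlying
event expression, e.g. CPI is defined in terms of IPC:

    {
        "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
        "MetricExpr": "1 / IPC",
        "MetricGroup": "Mem;Pipeline",
        "MetricName": "CPI"
    }

And because child metrics share a group named after their parent, a whole
sub-tree can be measured in one go, e.g. with something like
'perf stat -M tma_frontend_bound_group'; the metric group chosen here is
just an example.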
Tested with 'perf test':
 10: PMU events                                                      :
 10.1: PMU event table sanity                                        : Ok
 10.2: PMU event map aliases                                         : Ok
 10.3: Parsing of PMU event table metrics                            : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs             : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/tigerlake/tgl-metrics.json       | 810 ++++++++++++++++--
 1 file changed, 762 insertions(+), 48 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json b/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json
index 03c97bd74ad9..79b8b101b68f 100644
--- a/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json
@@ -1,26 +1,716 @@
 [
+    {
+        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
+        "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / SLOTS",
+        "MetricGroup": "PGO;TopdownL1;tma_L1_group",
+        "MetricName": "tma_frontend_bound",
+        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+        "MetricExpr": "(5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING) / SLOTS",
+        "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_latency",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
+        "MetricExpr": "ICACHE_16B.IFDATA_STALL / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_icache_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+        "MetricExpr": "ICACHE_64B.IFTAG_STALL / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_itlb_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+        "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / CLKS + tma_unknown_branches",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_branch_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+        "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_mispredicts_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears",
+        "MetricExpr": "(1 - (BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT))) * INT_MISC.CLEAR_RESTEER_CYCLES / CLKS",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_clears_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+        "MetricExpr": "10 * BACLEARS.ANY / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_unknown_branches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (First fetch or hitting BPU capacity limit). Sample with: BACLEARS.ANY",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+        "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS",
+        "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_dsb_switches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+        "MetricExpr": "ILD_STALL.LCP / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_lcp",
+        "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+        "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS",
+        "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_ms_switches",
+        "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+        "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
+        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_bandwidth",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+        "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / CORE_CLKS / 2",
+        "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_mite",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder",
+        "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / CORE_CLKS",
+        "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group",
+        "MetricName": "tma_decoder0_alone",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where (only) 4 uops were delivered by the MITE pipeline",
+        "MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=4@ - cpu@IDQ.MITE_UOPS\\,cmask\\=5@) / CLKS",
+        "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_mite_group",
+        "MetricName": "tma_mite_4wide",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+        "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / CORE_CLKS / 2",
+        "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_dsb",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit",
+        "MetricExpr": "(LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / CORE_CLKS / 2",
+        "MetricGroup": "FetchBW;LSD;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_lsd",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
+        "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + tma_retiring), 0)",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_bad_speculation",
+        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+        "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_branch_mispredicts",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+        "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_machine_clears",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
+        "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + (5 * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=1\\,edge@) / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
Sample with: TOPDOWN.BACKEND_BOUND_SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUN= D_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + = tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) = * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= .", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled without loads missing the L1 data cache", + "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY= .STALLS_L1D_MISS) / CLKS, 0)", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l1_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled without loads missing the L1 data cache. The L1 data cache typical= ly has the shortest latency. However; in certain cases like loads blocked = on older stores; a load might suffer due to high latency even though it is = being satisfied by the L1. Another example is loads who miss in the TLB. Th= ese cases are characterized by execution unit stalls; while some non-comple= ted demand load lives in the machine without having that demand load missin= g the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB= _HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=3D1= @ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE= _ACTIVITY.CYCLES_L1D_MISS, 0)) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. 
Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the (first level) DTLB was missed by load accesses, that late= r on hit in second-level TLB (STLB)", + "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_hit", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = where the Second-level TLB (STLB) was missed by load accesses, performing a= hardware page walk", + "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / CLKS", + "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_load_group", + "MetricName": "tma_load_stlb_miss", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric roughly estimates fraction of cyc= les when the memory subsystem had loads blocked since they could not forwar= d data from earlier (in program order) overlapping stores", + "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_store_fwd_blk", + "PublicDescription": "This metric roughly estimates fraction of cy= cles when the memory subsystem had loads blocked since they could not forwa= rd data from earlier (in program order) overlapping stores. To streamline m= emory operations in the pipeline; a load can avoid waiting for memory if a = prior in-flight store is writing the data that the load wants to read (stor= e forwarding process). However; in some cases the load may be blocked for a= significant time pending the store forward. For example; when the prior st= ore is writing a smaller region than the load is reading.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU spent handling cache misses due to lock operations", + "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS= .ALL_RFO) + (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * (= 10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DEMAND_RFO))) / CLKS", + "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_lock_latency", + "PublicDescription": "This metric represents fraction of cycles th= e CPU spent handling cache misses due to lock operations. Due to the microa= rchitecture handling of locks; they are classified as L1_Bound regardless o= f what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOA= DS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles hand= ling memory load split accesses - load that cross 64-byte cache line bounda= ry", + "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS", + "MetricGroup": "TopdownL4;tma_l1_bound_group", + "MetricName": "tma_split_loads", + "PublicDescription": "This metric estimates fraction of cycles han= dling memory load split accesses - load that cross 64-byte cache line bound= ary. 
Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
+        "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_4k_aliasing",
+        "PublicDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset. False matches are possible and incur a few cycles of load re-issue. However; the short re-issue duration is often hidden by the out-of-order core and HW optimizations; hence a user may safely ignore a high value of this metric unless it manages to propagate up into parent nodes of the hierarchy (e.g. to L1_Bound).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+        "MetricExpr": "L1D_PEND_MISS.FB_FULL / CLKS",
+        "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_fb_full",
+        "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints at approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / ((MEM_LOAD_RETIRED.L2_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) + L1D_PEND_MISS.FB_FULL_PERIODS)) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to load accesses to the L3 cache, or contended with a sibling Core",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to load accesses to the L3 cache, or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance.
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to contested acces= ses", + "MetricExpr": "((49 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRE= D.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3= _HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + (48 * A= verage_Frequency) * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + (MEM_LOAD_RET= IRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_g= roup", + "MetricName": "tma_contested_accesses", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to contested acce= sses. Contested accesses occur when data written by one Logical Processor a= re read by another Logical Processor on a different Physical Core. Examples= of contested accesses include synchronizations such as locks; true data sh= aring such as modified locked variables; and false sharing. Sample with: ME= M_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles whil= e the memory subsystem was handling synchronizations due to data-sharing ac= cesses", + "MetricExpr": "(48 * Average_Frequency) * (MEM_LOAD_L3_HIT_RETIRED= .XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - (OCR.DEMAND_DATA_RD.= L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA= _RD.L3_HIT.SNOOP_HIT_WITH_FWD)))) * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOA= D_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_data_sharing", + "PublicDescription": "This metric estimates fraction of cycles whi= le the memory subsystem was handling synchronizations due to data-sharing a= ccesses. Data shared by multiple Logical Processors (even just read shared)= may cause increased access latency due to cache coherency. Excessive data = sharing can drastically harm multithreaded performance. Sample with: MEM_LO= AD_L3_HIT_RETIRED.XSNP_NO_FWD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles wit= h demand load accesses that hit the L3 cache under unloaded scenarios (poss= ibly L3 latency limited)", + "MetricExpr": "(17.5 * Average_Frequency) * MEM_LOAD_RETIRED.L3_HI= T * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CLKS", + "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_l3_hit_latency", + "PublicDescription": "This metric represents fraction of cycles wi= th demand load accesses that hit the L3 cache under unloaded scenarios (pos= sibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L= 3 hits) will improve the latency; reduce contention with sibling physical c= ores and increase performance. Note the value of this node may overlap wit= h its siblings. 
Sample with: MEM_LOAD_RETIRED.L3_HIT_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric measures fraction of cycles where= the Super Queue (SQ) was full taking into account all request-types and bo= th hardware SMT threads (Logical Processors)", + "MetricExpr": "L1D_PEND_MISS.L2_STALL / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group", + "MetricName": "tma_sq_full", + "PublicDescription": "This metric measures fraction of cycles wher= e the Super Queue (SQ) was full taking into account all request-types and b= oth hardware SMT threads (Logical Processors). The Super Queue is used for = requests to access the L2 cache or to go out to the Uncore.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / CLKS + ((CYCLE_ACT= IVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS) - tma_l2_bou= nd)", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED= .L3_MISS_PS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D4@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that).", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). 
This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write",
+        "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((L2_RQSTS.RFO_HIT * 10 * (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES))) + (1 - (MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however; holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often the CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "(54 * Average_Frequency) * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often the CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores",
+        "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_streaming_stores",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores.
Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+        "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_hit",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+        "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / CORE_CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL5;tma_dtlb_store_group",
+        "MetricName": "tma_store_stlb_miss",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck",
+        "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded; or dependencies in the program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+        "MetricExpr": "ARITH.DIVIDER_ACTIVE / CLKS",
+        "MetricGroup": "TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_divider",
+        "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication.
Sample w= ith: ARITH.DIVIDER_ACTIVE", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ + = (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / C= LKS if (ARITH.DIVIDER_ACTIVE < (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVIT= Y.STALLS_MEM_ANY)) else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACT= IVITY.2_PORTS_UTIL) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= executed no uops on any execution port (Logical Processor cycles since ICL= , Physical Core cycles otherwise)", + "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=3D0x80@ / C= LKS + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTI= VITY.STALLS_MEM_ANY) / CLKS", + "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group", + "MetricName": "tma_ports_utilized_0", + "PublicDescription": "This metric represents fraction of cycles CP= U executed no uops on any execution port (Logical Processor cycles since IC= L, Physical Core cycles otherwise). Long-latency instructions like divides = may contribute to this metric.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU issue-pipeline was stalled due to serializing operations", + "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_serializing_operation", + "PublicDescription": "This metric represents fraction of cycles th= e CPU issue-pipeline was stalled due to serializing operations. Instruction= s like CPUID; WRMSR or LFENCE serialize the out-of-order execution which ma= y limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to PAUSE Instructions", + "MetricExpr": "140 * MISC_RETIRED.PAUSE_INST / CLKS", + "MetricGroup": "TopdownL6;tma_serializing_operation_group", + "MetricName": "tma_slow_pause", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUS= E_INST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "The Mixing_Vectors metric gives the percentag= e of injected blend uops out of all uops issued", + "MetricExpr": "CLKS * UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISS= UED.ANY", + "MetricGroup": "TopdownL5;tma_ports_utilized_0_group", + "MetricName": "tma_mixing_vectors", + "PublicDescription": "The Mixing_Vectors metric gives the percenta= ge of injected blend uops out of all uops issued. Usually a Mixing_Vectors = over 5% is worth investigating. 
Read more in Appendix B1 of the Optimizations Guide for this topic.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_1",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_2",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with the same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_3m",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).
Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+        "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_alu_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch) Sample with: UOPS_DISPATCHED.PORT_0",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_0 / CORE_CLKS",
+        "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_0",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHED.PORT_1",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_1 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_1",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU) Sample with: UOPS_DISPATCHED.PORT_5",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_5 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_5",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU) Sample with: UOPS_DISPATCHED.PORT_6",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_6 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_6",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for Load operations Sample with: UOPS_DISPATCHED.PORT_2_3",
+        "MetricExpr": "UOPS_DISPATCHED.PORT_2_3 / (2 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_load_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for Store operations Sample with: UOPS_DISPATCHED.PORT_7_8",
+        "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_store_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
+        "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0*SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_retiring",
+        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance.
For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.SLOTS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.TH= READ", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. 
May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_P= ACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * = SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 512-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * SLOTS)", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_512b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 512-bit wide vectors. 
May overcount due to FMA double counting.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+        "MetricExpr": "tma_light_operations * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_memory_operations",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.",
+        "MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * SLOTS)",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_branch_instructions",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+        "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * SLOTS)",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_nop_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+        "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions + tma_nop_instructions))",
+        "MetricGroup": "Pipeline;TopdownL3;tma_light_operations_group",
+        "MetricName": "tma_other_light_ops",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences",
+        "MetricExpr": "tma_microcode_sequencer + tma_retiring * (UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\\,cmask\\=1@) / IDQ.MITE_UOPS",
+        "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group",
+        "MetricName": "tma_heavy_operations",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or microcoded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+        "MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
+        "MetricGroup": "TopdownL3;tma_heavy_operations_group",
+        "MetricName": "tma_few_uops_instructions",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops.
This highly-correlates with the number o= f uops in such instructions.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "((tma_retiring * SLOTS) / UOPS_ISSUED.ANY) * IDQ.MS= _UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. Sample with: IDQ.MS_UOPS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * ASSISTS.ANY / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: ASSISTS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. 
Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "Total pipeline cost of Branch Misprediction r= elated bottlenecks", + "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency *= tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_i= cache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", + "MetricGroup": "Bad;BadSpec;BrMispredicts", + "MetricName": "Mispredictions" + }, + { + "BriefDescription": "Total pipeline cost of (external) Memory Band= width related bottlenecks", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_boun= d / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_stor= e_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma= _l3_hit_latency + tma_sq_full))) + (tma_l1_bound / (tma_dram_bound + tma_l1= _bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (= tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_spli= t_loads + tma_store_fwd_blk)) ", + "MetricGroup": "Mem;MemoryBW;Offcore", + "MetricName": "Memory_Bandwidth" + }, + { + "BriefDescription": "Total pipeline cost of Memory Latency related= bottlenecks (external memory and off-core caches)", + "MetricExpr": "100 * tma_memory_bound * ((tma_dram_bound / (tma_dr= am_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) *= (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + (tma_l3_bound = / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_= bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing = + tma_l3_hit_latency + tma_sq_full)) + (tma_l2_bound / (tma_dram_bound + tm= a_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)))", + "MetricGroup": "Mem;MemoryLat;Offcore", + "MetricName": "Memory_Latency" + }, + { + "BriefDescription": "Total pipeline cost of Memory Address Transla= tion related bottlenecks (data-side TLBs)", + "MetricExpr": "100 * tma_memory_bound * ((tma_l1_bound / max(tma_m= emory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + = tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tm= a_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_= fwd_blk)) + (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_boun= d + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + = tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_st= ores))) ", + "MetricGroup": "Mem;MemoryTLB;Offcore", + "MetricName": "Memory_Data_TLBs" + }, { "BriefDescription": "Total pipeline cost of branch related instruc= tions (used for program control-flow including function calls)", - "MetricExpr": "100 * (( BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED= .NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 *= BR_INST_RETIRED.NEAR_CALL) ) / TOPDOWN.SLOTS)", + "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.= NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * = BR_INST_RETIRED.NEAR_CALL)) / SLOTS)", "MetricGroup": "Ret", "MetricName": "Branching_Overhead" }, { "BriefDescription": "Total pipeline cost of instruction fetch rela= ted bottlenecks by large code footprint programs (i-side cache; TLB and BTB= misses)", 
- "MetricExpr": "100 * (( 5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_D= ELIV.CORE - INT_MISC.UOP_DROPPING ) / TOPDOWN.SLOTS) * ( (ICACHE_64B.IFTAG_= STALL / CPU_CLK_UNHALTED.THREAD) + (ICACHE_16B.IFDATA_STALL / CPU_CLK_UNHAL= TED.THREAD) + (10 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(( 5 * IDQ= _UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING ) / TO= PDOWN.SLOTS)", + "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_ic= ache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switch= es + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB", "MetricName": "Big_Code" }, + { + "BriefDescription": "Total pipeline cost of instruction fetch band= width related bottlenecks", + "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma= _mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icach= e_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - Big_Code", + "MetricGroup": "Fed;FetchBW;Frontend", + "MetricName": "Instruction_Fetch_BW" + }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "(tma_retiring * SLOTS) / INST_RETIRED.ANY", + "MetricGroup": "Pipeline;Ret;Retire", + "MetricName": "UPI" + }, + { + "BriefDescription": "Instruction per taken branch", + "MetricExpr": "(tma_retiring * SLOTS) / BR_INST_RETIRED.NEAR_TAKEN= ", + "MetricGroup": "Branches;Fed;FetchBW", + "MetricName": "UpTB" + }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -32,13 +722,13 @@ { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", "MetricExpr": "TOPDOWN.SLOTS", - "MetricGroup": "TmaL1", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, { "BriefDescription": "Fraction of Physical Core issue-slots utilize= d by this Logical Processor", - "MetricExpr": "TOPDOWN.SLOTS / ( TOPDOWN.SLOTS / 2 ) if #SMT_on el= se 1", - "MetricGroup": "SMT;TmaL1", + "MetricExpr": "SLOTS / (TOPDOWN.SLOTS / 2) if #SMT_on else 1", + "MetricGroup": "SMT;tma_L1_group", "MetricName": "Slots_Utilization" }, { @@ -50,29 +740,35 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512= B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.DISTRIBUTED", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * 
FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARI= TH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKE= D_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.5= 12B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU= _CLK_UNHALTED.DISTRIBUTED )", + "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_IN= ST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_= ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.51= 2B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)) / (2 * CORE_C= LKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES= _GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((UOPS_EXECUTED.CORE_CYCLES_= GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, + { + "BriefDescription": "Probability of Core Bound bottleneck hidden b= y SMT-profiling artifacts", + "MetricExpr": "(1 - tma_core_bound / tma_ports_utilization if tma_= core_bound < tma_ports_utilization else 1) if SMT_2T_Utilization > 0.5 else= 0", + "MetricGroup": "Cor;SMT", + "MetricName": "Core_Bound_Likely" + }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED", @@ -117,13 +813,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.25= 6B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARI= TH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PAC= KED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST= _RETIRED.512B_PACKED_SINGLE)", 
"MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SI= NGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_AR= ITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SIN= GLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." @@ -144,21 +840,21 @@ }, { "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bi= t instruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX128", "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-b= it instruction (lower number means higher occurrence rate). May undercount = due to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit i= nstruction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX256", "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit = instruction (lower number means higher occurrence rate). May undercount due= to FMA double counting." }, { "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit in= struction (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PAC= KED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACK= ED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)", "MetricGroup": "Flops;FpVector;InsType", "MetricName": "IpArith_AVX512", "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit i= nstruction (lower number means higher occurrence rate). May undercount due = to FMA double counting." 
@@ -170,11 +866,17 @@
         "MetricName": "IpSWPF"
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
+    {
+        "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
+        "MetricExpr": "(tma_retiring * SLOTS) / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+        "MetricGroup": "Pipeline;Ret",
+        "MetricName": "Retire"
+    },
     {
         "BriefDescription": "",
         "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
@@ -205,6 +907,12 @@
         "MetricGroup": "DSBmiss",
         "MetricName": "DSB_Switch_Cost"
     },
+    {
+        "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
+        "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))",
+        "MetricGroup": "DSBmiss;Fed",
+        "MetricName": "DSB_Misses"
+    },
     {
         "BriefDescription": "Number of Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
         "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
@@ -217,6 +925,12 @@
         "MetricGroup": "Bad;BadSpec;BrMispredicts",
         "MetricName": "IpMispredict"
     },
+    {
+        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+        "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_RETIRED.ALL_BRANCHES",
+        "MetricGroup": "Bad;BrMispredicts",
+        "MetricName": "Branch_Misprediction_Cost"
+    },
     {
         "BriefDescription": "Fraction of branches that are non-taken conditionals",
         "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES",
@@ -231,7 +945,7 @@
     },
     {
         "BriefDescription": "Fraction of branches that are CALL or RET",
-        "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES",
+        "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;Branches",
         "MetricName": "CallRet"
     },
@@ -243,80 +957,80 @@
     },
     {
         "BriefDescription": "Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)",
-        "MetricExpr": "1 - ( (BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES) + (BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHES) + (( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES) + ((BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES) )",
+        "MetricExpr": "1 - (Cond_NT + Cond_TK + CallRet + Jump)",
         "MetricGroup": "Bad;Branches",
         "MetricName": "Other_Branches"
     },
     {
         "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
-        "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )",
+        "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
         "MetricGroup": "Mem;MemoryBound;MemoryLat",
         "MetricName": "Load_Miss_Real_Latency"
     },
     {
         "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
         "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
-        "MetricGroup": "Mem;MemoryBound;MemoryBW",
+        "MetricGroup": "Mem;MemoryBW;MemoryBound",
         "MetricName": "MLP"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;Backend;CacheMisses",
+        "MetricGroup": "Backend;CacheMisses;Mem",
         "MetricName": "L2MPKI"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses;Offcore",
+        "MetricGroup": "CacheMisses;Mem;Offcore",
         "MetricName": "L2MPKI_All"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
-        "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_All"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_Load"
     },
     {
         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L3MPKI"
     },
     {
         "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
         "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "FB_HPKI"
     },
     {
         "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
         "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.DISTRIBUTED )",
+        "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (2 * CORE_CLKS)",
         "MetricGroup": "Mem;MemoryTLB",
         "MetricName": "Page_Walks_Utilization"
     },
@@ -346,25 +1060,25 @@
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
-        "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)",
+        "MetricExpr": "L1D_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L1D_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
-        "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)",
+        "MetricExpr": "L2_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L2_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L3_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Access_BW",
         "MetricGroup": "Mem;MemoryBW;Offcore",
         "MetricName": "L3_Cache_Access_BW_1T"
     },
@@ -376,40 +1090,40 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
-        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.DISTRIBUTED",
+        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License0_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes."
     },
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1",
-        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.DISTRIBUTED",
+        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License1_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions."
     },
     {
         "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX)",
-        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.DISTRIBUTED",
+        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CORE_CLKS",
         "MetricGroup": "Power",
         "MetricName": "Power_License2_Utilization",
         "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). This includes high current AVX 512-bit instructions."
@@ -434,7 +1148,7 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "64 * ( arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@ ) / 1000000 / duration_time / 1000",
+        "MetricExpr": "64 * (arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@) / 1000000 / duration_time / 1000",
        "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
-- 
2.38.0.rc1.362.ged0d419d3c-goog
From nobody Mon Apr 29 10:52:12 2024
Date: Fri, 30 Sep 2022 23:06:36 -0700
In-Reply-To: <20221001060636.2661983-1-irogers@google.com>
References: <20221001060636.2661983-1-irogers@google.com>
Message-Id: <20221001060636.2661983-23-irogers@google.com>
Subject: [PATCH v2 22/22] perf vendor events: Update Intel broadwellde
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com, caleb.biggers@intel.com, kshipra.bopardikar@intel.com, samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, John Garry, James Clark, Kajol Jain, Thomas Richter, Miaoqian Lin, Florian Fischer, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers

Events remain at v23, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Switch for core metrics from BDX to BDW.
- Switch for Page_Walks_Utilization to BDX version.
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing the children of a metric to be
  easily measured using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability; see the example below.
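As an illustrative sketch (not part of the generated file), the two
conventions combine like this: a child metric's MetricExpr refers to other
metrics by name, and its MetricGroup carries the parent-named group, e.g.
for the level-2 child tma_fetch_bandwidth:

    {
        "MetricExpr": "tma_frontend_bound - tma_fetch_latency",
        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
        "MetricName": "tma_fetch_bandwidth"
    }

so, assuming a perf binary built with these JSON files, the whole subtree
under tma_frontend_bound should be collectable in one run with something
like:

    $ perf stat -M tma_frontend_bound_group -a -- sleep 1

(The full definitions appear in the diff below.)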
Tested with 'perf test':
10: PMU events :
10.1: PMU event table sanity : Ok
10.2: PMU event map aliases : Ok
10.3: Parsing of PMU event table metrics : Ok
10.4: Parsing of PMU event table metrics with fake PMUs : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/broadwellde/bdwde-metrics.json | 711 ++++++++++++++----
 1 file changed, 577 insertions(+), 134 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
index b6fdf5ba2c9a..5a074cf7c77d 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
@@ -1,64 +1,556 @@
 [
     {
         "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Frontend_Bound",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
+        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS",
+        "MetricGroup": "PGO;TopdownL1;tma_L1_group",
+        "MetricName": "tma_frontend_bound",
+        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Frontend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+        "MetricExpr": "4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / SLOTS",
+        "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_latency",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
+        "MetricExpr": "ICACHE.IFDATA_STALL / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_icache_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+        "MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_itlb_misses",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+        "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_branch_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+        "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_mispredicts_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears",
+        "MetricExpr": "MACHINE_CLEARS.COUNT * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_clears_resteers",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+        "MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers",
+        "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_branch_resteers_group",
+        "MetricName": "tma_unknown_branches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (First fetch or hitting BPU capacity limit). Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+        "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS",
+        "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_dsb_switches",
+        "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+        "MetricExpr": "ILD_STALL.LCP / CLKS",
+        "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_lcp",
+        "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+        "MetricExpr": "2 * IDQ.MS_SWITCHES / CLKS",
+        "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_group",
+        "MetricName": "tma_ms_switches",
+        "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+        "MetricExpr": "tma_frontend_bound - tma_fetch_latency",
+        "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+        "MetricName": "tma_fetch_bandwidth",
+        "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+        "MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / CORE_CLKS / 2",
+        "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_mite",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+        "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / CORE_CLKS / 2",
+        "MetricGroup": "DSB;FetchBW;TopdownL3;tma_fetch_bandwidth_group",
+        "MetricName": "tma_dsb",
+        "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+        "ScaleUnit": "100%"
     },
     {
         "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Bad_Speculation",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example."
+        "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / SLOTS",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_bad_speculation",
+        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Bad_Speculation_SMT",
-        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+        "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation",
+        "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_branch_mispredicts",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+        "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
+        "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_bad_speculation_group",
+        "MetricName": "tma_machine_clears",
+        "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT",
+        "ScaleUnit": "100%"
+    },
     {
         "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
-        "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )",
-        "MetricGroup": "TopdownL1",
-        "MetricName": "Backend_Bound",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
+        "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
+        "MetricGroup": "TopdownL1;tma_L1_group",
+        "MetricName": "tma_backend_bound",
+        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+        "MetricExpr": "((CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURCE_STALLS.SB) / (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB)) * tma_backend_bound",
+        "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_memory_bound",
+        "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+        "MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / CLKS, 0)",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l1_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+        "MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_dtlb_load",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+        "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_store_fwd_blk",
+        "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+        "MetricExpr": "(MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / CLKS",
+        "MetricGroup": "Offcore;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_lock_latency",
+        "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary",
+        "MetricExpr": "Load_Miss_Real_Latency * LD_BLOCKS.NO_SR / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_split_loads",
+        "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
+        "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / CLKS",
+        "MetricGroup": "TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_4k_aliasing",
+        "PublicDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset. False match is possible; which incur a few cycles load re-issue. However; the short re-issue duration is often hidden by the out-of-order core and HW optimizations; hence a user may safely ignore a high value of this metric unless it manages to propagate up into parent nodes of the hierarchy (e.g. to L1_Bound).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+        "MetricExpr": "Load_Miss_Real_Latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CLKS",
+        "MetricGroup": "MemoryBW;TopdownL4;tma_l1_bound_group",
+        "MetricName": "tma_fb_full",
+        "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+        "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l2_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+        "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2_MISS / CLKS",
+        "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_l3_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+        "MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_contested_accesses",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+        "MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / CLKS",
+        "MetricGroup": "Offcore;Snoop;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_data_sharing",
+        "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+        "MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + mem_load_uops_retired.hit_lfb / ((MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS) + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / CLKS",
+        "MetricGroup": "MemoryLat;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_l3_hit_latency",
+        "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+        "MetricExpr": "((OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2) if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / CORE_CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_l3_bound_group",
+        "MetricName": "tma_sq_full",
+        "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). The Super Queue is used for requests to access the L2 cache or to go out to the Uncore.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+        "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS))) * CYCLE_ACTIVITY.STALLS_L2_MISS / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_dram_bound",
+        "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / CLKS",
+        "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_bandwidth",
+        "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+        "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group",
+        "MetricName": "tma_mem_latency",
+        "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write",
+        "MetricExpr": "RESOURCE_STALLS.SB / CLKS",
+        "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_group",
+        "MetricName": "tma_store_bound",
+        "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+        "MetricExpr": "((L2_RQSTS.RFO_HIT * 9 * (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES))) + (1 - (MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES)) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / CLKS",
+        "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_store_latency",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full)",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+        "MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM / CLKS",
+        "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_false_sharing",
+        "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents rate of split store accesses",
+        "MetricExpr": "2 * MEM_UOPS_RETIRED.SPLIT_STORES / CORE_CLKS",
+        "MetricGroup": "TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_split_stores",
+        "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+        "MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / CLKS",
+        "MetricGroup": "MemoryTLB;TopdownL4;tma_store_bound_group",
+        "MetricName": "tma_dtlb_store",
+        "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck",
+        "MetricExpr": "tma_backend_bound - tma_memory_bound",
+        "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend_bound_group",
+        "MetricName": "tma_core_bound",
+        "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+        "ScaleUnit": "100%"
     },
     {
-        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
-        "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
-        "MetricGroup": "TopdownL1_SMT",
-        "MetricName": "Backend_Bound_SMT",
-        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
+        "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+        "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS",
+        "MetricGroup": "TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_divider",
+        "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+        "MetricExpr": "((CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if (IPC > 1.8) else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CLKS",
+        "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group",
+        "MetricName": "tma_ports_utilization",
+        "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@) / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else 0) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_0",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_1",
+        "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or over oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_2",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+        "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / CORE_CLKS",
+        "MetricGroup": "PortsUtil;TopdownL4;tma_ports_utilization_group",
+        "MetricName": "tma_ports_utilized_3m",
+        "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+        "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / (4 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_alu_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch) Sample with: UOPS_DISPATCHED.PORT_0",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_0 / CORE_CLKS",
+        "MetricGroup": "Compute;TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_0",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU) Sample with: UOPS_DISPATCHED.PORT_1",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_1 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_1",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU)",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_5 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_5",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU) Sample with: UOPS_DISPATCHED.PORT_6",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_alu_op_utilization_group",
+        "MetricName": "tma_port_6",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations Sample with: UOPS_DISPATCHED.PORT_2_3_10",
+        "MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * CORE_CLKS)",
+        "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group",
+        "MetricName": "tma_load_op_utilization",
+        "ScaleUnit": "100%"
+    },
+    {
+        "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [ICL+] Loads)",
+        "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_2 / CORE_CLKS",
+        "MetricGroup": "TopdownL6;tma_load_op_utilization_group",
+        "MetricName": "tma_port_2",
+        "ScaleUnit": "100%"
+    },
+    {
execution port 3 ([SNB+]Loads and Store-address; [= ICL+] Loads)", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_3 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_load_op_utilization_group", + "MetricName": "tma_port_3", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port for Store operations Sample with: U= OPS_DISPATCHED.PORT_7_8", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL5;tma_ports_utilized_3m_group", + "MetricName": "tma_store_op_utilization", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 4 (Store-data)", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_4 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_4", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents Core fraction of cycle= s CPU dispatched uops on execution port 7 ([HSW+]simple Store-address)", + "MetricExpr": "UOPS_DISPATCHED_PORT.PORT_7 / CORE_CLKS", + "MetricGroup": "TopdownL6;tma_store_op_utilization_group", + "MetricName": "tma_port_7", + "ScaleUnit": "100%" }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. Sample with: UOPS_RETIRE= D.SLOTS", + "ScaleUnit": "100%" }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. 
issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "INST_RETIRED.X87 * UPI / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INS= T_RETIRED.SCALAR_DOUBLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. 
May overcount due to = FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBL= E + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 128-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_128b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 128-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric approximates arithmetic FP vector= uops fraction the CPU has retired for 256-bit wide vectors", + "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARIT= H_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS", + "MetricGroup": "Compute;Flops;TopdownL5;tma_fp_vector_group", + "MetricName": "tma_fp_vector_256b", + "PublicDescription": "This metric approximates arithmetic FP vecto= r uops fraction the CPU has retired for 256-bit wide vectors. May overcount= due to FMA double counting.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences.", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. 
Sample with: UOPS_RETIRED.MS", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of slots the C= PU retired uops delivered by the Microcode_Sequencer as a result of Assists= ", + "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / SLOTS", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_assists", + "PublicDescription": "This metric estimates fraction of slots the = CPU retired uops delivered by the Microcode_Sequencer as a result of Assist= s. Assists are long sequences of uops that are required in certain corner-c= ases for operations that cannot be handled natively by the execution pipeli= ne. For example; when working with very small floating point values (so-cal= led Denormals); the FP units are not set up to perform these operations nat= ively. Instead; a sequence of instructions to perform the computation on th= e Denormals is injected into the pipeline. Since these microcode sequences = might be dozens of uops long; Assists can be extremely deleterious to perfo= rmance and they can be avoided in many cases. Sample with: ASSISTS.ANY", + "ScaleUnit": "100%" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU retired uops originated from CISC (complex instruction set computer) in= struction", + "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)", + "MetricGroup": "TopdownL4;tma_microcode_sequencer_group", + "MetricName": "tma_cisc", + "PublicDescription": "This metric estimates fraction of cycles the= CPU retired uops originated from CISC (complex instruction set computer) i= nstruction. A CISC instruction has multiple uops that are required to perfo= rm the instruction's functionality as in the case of read-modify-write as a= n example. Since these instructions require multiple uops they may or may n= ot imply sub-optimal use of machine resources.", + "ScaleUnit": "100%" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -76,8 +568,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -88,16 +580,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY", @@ -107,51 +593,32 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle 
across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / = CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARIT= H_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBL= E + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.2= 56B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / CORE_C= LKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_AR= ITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DO= UBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIR= ED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / = ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIV= E / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width)", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALT= ED.THREAD )", + "MetricExpr": "((FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_IN= ST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_= ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_D= OUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)) / (2 * CORE_CLKS)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "FP_Arith_Utilization", "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n)." }, - { - "BriefDescription": "Actual per-core usage of the Floating Point n= on-X87 execution units (regardless of precision or vector-width). SMT versi= on; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_I= NST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP= _ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_= DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UN= HALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UN= HALTED.REF_XCLK ) ) )", - "MetricGroup": "Cor;Flops;HPC_SMT", - "MetricName": "FP_Arith_Utilization_SMT", - "PublicDescription": "Actual per-core usage of the Floating Point = non-X87 execution units (regardless of precision or vector-width). 
Values >= 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; = [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less commo= n). SMT version; use when SMT is enabled and measuring per logical CPU." - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,= cmask\\=3D1@ / 2 ) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ((cpu@UOPS_EXECUTED.CORE\\,c= mask\\=3D1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", "MetricGroup": "Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, @@ -193,13 +660,13 @@ }, { "BriefDescription": "Instructions per Floating Point (FP) Operatio= n (lower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SC= ALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RET= IRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + = FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B= _PACKED_SINGLE )", + "MetricExpr": "INST_RETIRED.ANY / (1 * (FP_ARITH_INST_RETIRED.SCAL= AR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRE= D.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_A= RITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACK= ED_SINGLE)", "MetricGroup": "Flops;InsType", "MetricName": "IpFLOP" }, { "BriefDescription": "Instructions per FP Arithmetic instruction (l= ower number means higher occurrence rate)", - "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_= SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B= _PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_R= ETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) )", + "MetricExpr": "INST_RETIRED.ANY / ((FP_ARITH_INST_RETIRED.SCALAR_S= INGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_= PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RE= TIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE))", "MetricGroup": "Flops;InsType", "MetricName": "IpArith", "PublicDescription": "Instructions per FP Arithmetic instruction (= lower number means higher occurrence rate). May undercount due to FMA doubl= e counting. Approximated prior to BDW." 
@@ -220,22 +687,22 @@
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)",
         "MetricGroup": "Flops;FpVector;InsType",
         "MetricName": "IpArith_AVX128",
         "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
     },
     {
         "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
-        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )",
+        "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
         "MetricGroup": "Flops;FpVector;InsType",
         "MetricName": "IpArith_AVX256",
         "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
     },
     {
-        "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+        "BriefDescription": "Total number of retired Instructions Sample with: INST_RETIRED.PREC_DIST",
         "MetricExpr": "INST_RETIRED.ANY",
-        "MetricGroup": "Summary;TmaL1",
+        "MetricGroup": "Summary;tma_L1_group",
         "MetricName": "Instructions"
     },
@@ -252,7 +719,7 @@
     },
     {
         "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
-        "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS ) )",
+        "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS))",
         "MetricGroup": "DSB;Fed;FetchBW",
         "MetricName": "DSB_Coverage"
     },
@@ -264,83 +731,71 @@
     },
     {
         "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
-        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (BR_MISP_RETIRED.ALL_BRANCHES * (12 * ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY ) / CPU_CLK_UNHALTED.THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.ALL_BRANCHES",
+        "MetricExpr": " (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * SLOTS / BR_MISP_RETIRED.ALL_BRANCHES",
         "MetricGroup": "Bad;BrMispredicts",
         "MetricName": "Branch_Misprediction_Cost"
     },
-    {
-        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
-        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (BR_MISP_RETIRED.ALL_BRANCHES * (12 * ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY ) / CPU_CLK_UNHALTED.THREAD) / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY )) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES",
-        "MetricGroup": "Bad;BrMispredicts_SMT",
-        "MetricName": "Branch_Misprediction_Cost_SMT"
-    },
     {
         "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
-        "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )",
+        "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb)",
         "MetricGroup": "Mem;MemoryBound;MemoryLat",
         "MetricName": "Load_Miss_Real_Latency"
     },
     {
         "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
         "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
-        "MetricGroup": "Mem;MemoryBound;MemoryBW",
+        "MetricGroup": "Mem;MemoryBW;MemoryBound",
         "MetricName": "MLP"
     },
     {
         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L1MPKI"
     },
     {
         "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;Backend;CacheMisses",
+        "MetricGroup": "Backend;CacheMisses;Mem",
         "MetricName": "L2MPKI"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses;Offcore",
+        "MetricGroup": "CacheMisses;Mem;Offcore",
         "MetricName": "L2MPKI_All"
     },
     {
         "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2MPKI_Load"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
-        "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricExpr": "1000 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_All"
     },
     {
         "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
         "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L2HPKI_Load"
     },
     {
         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
         "MetricExpr": "1000 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
-        "MetricGroup": "Mem;CacheMisses",
+        "MetricGroup": "CacheMisses;Mem",
         "MetricName": "L3MPKI"
     },
     {
         "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
         "MetricConstraint": "NO_NMI_WATCHDOG",
-        "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ( DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) ) / CPU_CLK_UNHALTED.THREAD",
+        "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7 * ( DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) ) / ( 2 * (( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) if #core_wide < 1 else ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD) )",
         "MetricGroup": "Mem;MemoryTLB",
         "MetricName": "Page_Walks_Utilization"
     },
-    {
-        "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
-        "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ( DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED ) ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
-        "MetricGroup": "Mem;MemoryTLB_SMT",
-        "MetricName": "Page_Walks_Utilization_SMT"
-    },
     {
         "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
         "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time",
@@ -361,19 +816,19 @@
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
-        "MetricExpr": "(64 * L1D.REPLACEMENT / 1000000000 / duration_time)",
+        "MetricExpr": "L1D_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L1D_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
-        "MetricExpr": "(64 * L2_LINES_IN.ALL / 1000000000 / duration_time)",
+        "MetricExpr": "L2_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L2_Cache_Fill_BW_1T"
     },
     {
         "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
-        "MetricExpr": "(64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time)",
+        "MetricExpr": "L3_Cache_Fill_BW",
         "MetricGroup": "Mem;MemoryBW",
         "MetricName": "L3_Cache_Fill_BW_1T"
     },
@@ -391,26 +846,26 @@
     },
     {
         "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
-        "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time",
-        "MetricGroup": "Summary;Power",
+        "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duration_time",
+        "MetricGroup": "Power;Summary",
         "MetricName": "Average_Frequency"
     },
     {
         "BriefDescription": "Giga Floating Point Operations Per Second",
-        "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+        "MetricExpr": "((1 * (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1000000000) / duration_time",
         "MetricGroup": "Cor;Flops;HPC",
         "MetricName": "GFLOPs",
         "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
     },
     {
         "BriefDescription": "Average Frequency Utilization relative nominal frequency",
-        "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC",
+        "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC",
         "MetricGroup": "Power",
         "MetricName": "Turbo_Utilization"
     },
     {
         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
-        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+        "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0",
         "MetricGroup": "SMT",
         "MetricName": "SMT_2T_Utilization"
     },
@@ -428,33 +883,21 @@
     },
     {
         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
-        "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time",
+        "MetricExpr": "64 * (arb@event\\=0x81\\,umask\\=0x1@ + arb@event\\=0x84\\,umask\\=0x1@) / 1000000 / duration_time / 1000",
         "MetricGroup": "HPC;Mem;MemoryBW;SoC",
         "MetricName": "DRAM_BW_Use"
     },
     {
-        "BriefDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches",
-        "MetricExpr": "1000000000 * ( cbox@event\\=0x36\\,umask\\=0x3\\,filter_opc\\=0x182@ / cbox@event\\=0x35\\,umask\\=0x3\\,filter_opc\\=0x182@ ) / ( cbox_0@event\\=0x0@ / duration_time )",
-        "MetricGroup": "Mem;MemoryLat;SoC",
-        "MetricName": "MEM_Read_Latency"
-    },
-    {
-        "BriefDescription": "Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches",
-        "MetricExpr": "cbox@event\\=0x36\\,umask\\=0x3\\,filter_opc\\=0x182@ / cbox@event\\=0x36\\,umask\\=0x3\\,filter_opc\\=0x182\\,thresh\\=1@",
-        "MetricGroup": "Mem;MemoryBW;SoC",
-        "MetricName": "MEM_Parallel_Reads"
-    },
-    {
-        "BriefDescription": "Socket actual clocks when any core is active on that socket",
-        "MetricExpr": "cbox_0@event\\=0x0@",
-        "MetricGroup": "SoC",
-        "MetricName": "Socket_CLKS"
+        "BriefDescription": "Average latency of all requests to external memory (in Uncore cycles)",
+        "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / arb@event\\=0x81\\,umask\\=0x1@",
+        "MetricGroup": "Mem;SoC",
+        "MetricName": "MEM_Request_Latency"
     },
     {
-        "BriefDescription": "Uncore frequency per die [GHZ]",
-        "MetricExpr": "cbox_0@event\\=0x0@ / #num_dies / duration_time / 1000000000",
-        "MetricGroup": "SoC",
-        "MetricName": "UNCORE_FREQ"
+        "BriefDescription": "Average number of parallel requests to external memory. Accounts for all requests",
+        "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / arb@event\\=0x81\\,umask\\=0x1@",
+        "MetricGroup": "Mem;SoC",
+        "MetricName": "MEM_Parallel_Requests"
    },
     {
         "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
-- 
2.38.0.rc1.362.ged0d419d3c-goog
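
[Editorial aside, not part of the patch: several reworked expressions in this diff, such as the new CORE_CLKS and Page_Walks_Utilization metrics, use the chained form "A if #core_wide < 1 else B if #SMT_on else C", which takes the first clause whose condition holds, scanning left to right. A minimal Python sketch of that selection logic; the function and argument names below are illustrative only:

    # Mirrors "A if #core_wide < 1 else B if #SMT_on else C", left to right.
    def core_clks(clk_thread, one_thread_active, ref_xclk, clk_thread_any,
                  core_wide, smt_on):
        if core_wide < 1:
            # A: scale per-thread clocks up to an estimate for the whole core
            return (clk_thread / 2) * (1 + one_thread_active / ref_xclk)
        if smt_on:
            # B: THREAD_ANY counts both threads, so halve it for core clocks
            return clk_thread_any / 2
        # C: without SMT, thread clocks and core clocks coincide
        return clk_thread
]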