[PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Ian Rogers 3 months, 2 weeks ago
Prior to this series, stat-shadow would produce hard coded metrics if
certain events appeared in the evlist. This series produces equivalent
json metrics and cleans up the consequences in tests and display
output. A before-and-after comparison of the default display output on
a Tigerlake is:

Before:
```
$ perf stat -a sleep 1

 Performance counter stats for 'system wide':

    16,041,816,418      cpu-clock                        #   15.995 CPUs utilized             
             5,749      context-switches                 #  358.376 /sec                      
               121      cpu-migrations                   #    7.543 /sec                      
             1,806      page-faults                      #  112.581 /sec                      
       825,965,204      instructions                     #    0.70  insn per cycle            
     1,180,799,101      cycles                           #    0.074 GHz                       
       168,945,109      branches                         #   10.532 M/sec                     
         4,629,567      branch-misses                    #    2.74% of all branches           
 #     30.2 %  tma_backend_bound      
                                                  #      7.8 %  tma_bad_speculation    
                                                  #     47.1 %  tma_frontend_bound     
 #     14.9 %  tma_retiring           
```

After:
```
$ perf stat -a sleep 1

 Performance counter stats for 'system wide':

             2,890      context-switches                 #    179.9 cs/sec  cs_per_second     
    16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized       
                43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
             5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
         5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
       429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
     1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
     2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
                        TopdownL1                 #     20.1 %  tma_backend_bound      
                                                  #     40.5 %  tma_bad_speculation      (88.90%)
                                                  #     17.2 %  tma_frontend_bound       (78.05%)
                                                  #     22.2 %  tma_retiring             (88.89%)

       1.002994394 seconds time elapsed
```

Having the metrics in json brings greater uniformity, allows events to
be shared by metrics, and allows descriptions like:
```
$ perf list cs_per_second
...
  cs_per_second
       [Context switches per CPU second]
```
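
For reference, the new metrics live in the usual pmu-events json
format (the same MetricName/MetricExpr/BriefDescription/MetricGroup/
ScaleUnit fields used elsewhere). As a rough sketch only - the
expression, metric group and scale unit below are illustrative
guesses, only the name and description come from the output above - a
cs_per_second entry could look like:
```
[
    {
        "MetricName": "cs_per_second",
        "MetricExpr": "context\\-switches / (cpu\\-clock / 1e9)",
        "BriefDescription": "Context switches per CPU second",
        "MetricGroup": "Default",
        "ScaleUnit": "1cs/sec"
    }
]
```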

A thorn in the side of doing this work was that the hard coded metrics
were used by perf script with '-F metric'. This functionality didn't
work for me (I was testing `perf record -e instructions,cycles` and
then `perf script -F metric` but saw nothing but empty lines), but I
decided to fix it to the best of my ability in this series. The
script-side counters were removed and the regular ones associated with
the evsel are used instead. All json metrics are searched for ones
whose events are a subset of the events in the perf script session,
and every matching metric is printed. This is somewhat odd as the
counters are being set from the periods of samples, but I carried the
behavior forward. I suspect there needs to be follow-up work to make
this better, but what is in the series is superior to what is
currently in the tree. Follow-up work could include finding metrics
for the machine in the perf.data rather than using the host, allowing
multiple metrics even if the metric ids of the events differ, fixing
pre-existing `perf stat record/report` issues, etc.
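
As an illustration (a hypothetical session, using the group ':S'
modifier that the perf documentation describes for '-F metric'; the
metrics actually printed depend on which json metrics match the
recorded events):
```
# Hypothetical example; the exact metrics shown depend on which json
# metrics have events that are a subset of those in the perf.data.
$ perf record -e '{instructions,cycles}:S' -- sleep 1
$ perf script -F metric
```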

There are a lot of stat tests that, for example, assume '-e
instructions,cycles' will produce an IPC metric. These needed tidying
as the metric must now be explicitly asked for, and when doing this
metrics using software events were preferred to increase
compatibility. As the test updates were numerous they are kept
distinct from the patches updating the functionality, causing periods
in the series where not all tests are passing. If this is undesirable
the test fixes can be squashed into the functionality updates.
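
For instance (an illustrative invocation, not a quote of the updated
tests), a test wanting IPC now asks for the metric directly, and
event-only tests lean on software events:
```
# insn_per_cycle is one of the common metric names shown above; -M asks
# perf stat for a named metric explicitly.
$ perf stat -M insn_per_cycle -a sleep 1
# Software events are available everywhere, so prefer them where the
# test only needs counts:
$ perf stat -e task-clock,context-switches,page-faults -a sleep 1
```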

Ian Rogers (22):
  perf evsel: Remove unused metric_events variable
  perf metricgroup: Update comment on location of metric_event list
  perf metricgroup: Missed free on error path
  perf metricgroup: When copy metrics copy default information
  perf metricgroup: Add care to picking the evsel for displaying a
    metric
  perf jevents: Make all tables static
  perf expr: Add #target_cpu literal
  perf jevents: Add set of common metrics based on default ones
  perf jevents: Add metric DefaultShowEvents
  perf stat: Add detail -d,-dd,-ddd metrics
  perf script: Change metric format to use json metrics
  perf stat: Remove hard coded shadow metrics
  perf stat: Fix default metricgroup display on hybrid
  perf stat: Sort default events/metrics
  perf stat: Remove "unit" workarounds for metric-only
  perf test stat+json: Improve metric-only testing
  perf test stat: Ignore failures in Default[234] metricgroups
  perf test stat: Update std_output testing metric expectations
  perf test metrics: Update all metrics for possibly failing default
    metrics
  perf test stat: Update shadow test to use metrics
  perf test stat: Update test expectations and events
  perf test stat csv: Update test expectations and events

 tools/perf/builtin-script.c                   | 238 ++++++++++-
 tools/perf/builtin-stat.c                     | 154 ++-----
 .../arch/common/common/metrics.json           | 151 +++++++
 tools/perf/pmu-events/empty-pmu-events.c      | 139 ++++--
 tools/perf/pmu-events/jevents.py              |  34 +-
 tools/perf/pmu-events/pmu-events.h            |   2 +
 .../tests/shell/lib/perf_json_output_lint.py  |   4 +-
 tools/perf/tests/shell/lib/stat_output.sh     |   2 +-
 tools/perf/tests/shell/stat+csv_output.sh     |   2 +-
 tools/perf/tests/shell/stat+json_output.sh    |   2 +-
 tools/perf/tests/shell/stat+shadow_stat.sh    |   4 +-
 tools/perf/tests/shell/stat+std_output.sh     |   4 +-
 tools/perf/tests/shell/stat.sh                |   6 +-
 .../perf/tests/shell/stat_all_metricgroups.sh |   3 +
 tools/perf/tests/shell/stat_all_metrics.sh    |   7 +-
 tools/perf/util/evsel.c                       |   2 -
 tools/perf/util/evsel.h                       |   2 +-
 tools/perf/util/expr.c                        |   3 +
 tools/perf/util/metricgroup.c                 |  95 ++++-
 tools/perf/util/metricgroup.h                 |   2 +-
 tools/perf/util/stat-display.c                |  55 +--
 tools/perf/util/stat-shadow.c                 | 402 +-----------------
 tools/perf/util/stat.h                        |   2 +-
 23 files changed, 672 insertions(+), 643 deletions(-)
 create mode 100644 tools/perf/pmu-events/arch/common/common/metrics.json

-- 
2.51.1.821.gb6fe4d2222-goog
Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Namhyung Kim 3 months ago
Hi Ian,

On Fri, Oct 24, 2025 at 10:58:35AM -0700, Ian Rogers wrote:
> Prior to this series stat-shadow would produce hard coded metrics if
> certain events appeared in the evlist. This series produces equivalent
> json metrics and cleans up the consequences in tests and display
> output. A before and after of the default display output on a
> tigerlake is:
> 
> Before:
> ```
> $ perf stat -a sleep 1
> 
>  Performance counter stats for 'system wide':
> 
>     16,041,816,418      cpu-clock                        #   15.995 CPUs utilized             
>              5,749      context-switches                 #  358.376 /sec                      
>                121      cpu-migrations                   #    7.543 /sec                      
>              1,806      page-faults                      #  112.581 /sec                      
>        825,965,204      instructions                     #    0.70  insn per cycle            
>      1,180,799,101      cycles                           #    0.074 GHz                       
>        168,945,109      branches                         #   10.532 M/sec                     
>          4,629,567      branch-misses                    #    2.74% of all branches           
>  #     30.2 %  tma_backend_bound      
>                                                   #      7.8 %  tma_bad_speculation    
>                                                   #     47.1 %  tma_frontend_bound     
>  #     14.9 %  tma_retiring           
> ```
> 
> After:
> ```
> $ perf stat -a sleep 1
> 
>  Performance counter stats for 'system wide':
> 
>              2,890      context-switches                 #    179.9 cs/sec  cs_per_second     
>     16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized       
>                 43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
>              5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
>          5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
>        429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
>      1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
>      2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
>                         TopdownL1                 #     20.1 %  tma_backend_bound      
>                                                   #     40.5 %  tma_bad_speculation      (88.90%)
>                                                   #     17.2 %  tma_frontend_bound       (78.05%)
>                                                   #     22.2 %  tma_retiring             (88.89%)
> 
>        1.002994394 seconds time elapsed
> ```

While this looks nicer, I worry about the changes in the output.  And I'm
curious why only the "After" output shows the multiplexing percent.

> 
> Having the metrics in json brings greater uniformity, allows events to
> be shared by metrics, and it also allows descriptions like:
> ```
> $ perf list cs_per_second
> ...
>   cs_per_second
>        [Context switches per CPU second]
> ```
> 
> A thorn in the side of doing this work was that the hard coded metrics
> were used by perf script with '-F metric'. This functionality didn't
> work for me (I was testing `perf record -e instructions,cycles` and
> then `perf script -F metric` but saw nothing but empty lines)

The documentation says:

	With the metric option perf script can compute metrics for
	sampling periods, similar to perf stat. This requires
	specifying a group with multiple events defining metrics with the :S option
	for perf record. perf will sample on the first event, and
	print computed metrics for all the events in the group. Please note
	that the metric computed is averaged over the whole sampling
	period (since the last sample), not just for the sample point.

So I guess it should have 'S' modifiers in a group.


> but anyway I decided to fix it to the best of my ability in this
> series. So the script side counters were removed and the regular ones
> associated with the evsel used. The json metrics were all searched
> looking for ones that have a subset of events matching those in the
> perf script session, and all metrics are printed. This is kind of
> weird as the counters are being set by the period of samples, but I
> carried the behavior forward. I suspect there needs to be follow up
> work to make this better, but what is in the series is superior to
> what is currently in the tree. Follow up work could include finding
> metrics for the machine in the perf.data rather than using the host,
> allowing multiple metrics even if the metric ids of the events differ,
> fixing pre-existing `perf stat record/report` issues, etc.
> 
> There is a lot of stat tests that, for example, assume '-e
> instructions,cycles' will produce an IPC metric. These things needed
> tidying as now the metric must be explicitly asked for and when doing
> this ones using software events were preferred to increase
> compatibility. As the test updates were numerous they are distinct to
> the patches updating the functionality causing periods in the series
> where not all tests are passing. If this is undesirable the test fixes
> can be squashed into the functionality updates.

Hmm.. how many of them?  I think it'd be better to have the test changes
at the same time so that we can ensure the test success count after the
change. Can the test changes be squashed into one or two commits?

Thanks,
Namhyung

> 
> Ian Rogers (22):
>   perf evsel: Remove unused metric_events variable
>   perf metricgroup: Update comment on location of metric_event list
>   perf metricgroup: Missed free on error path
>   perf metricgroup: When copy metrics copy default information
>   perf metricgroup: Add care to picking the evsel for displaying a
>     metric
>   perf jevents: Make all tables static
>   perf expr: Add #target_cpu literal
>   perf jevents: Add set of common metrics based on default ones
>   perf jevents: Add metric DefaultShowEvents
>   perf stat: Add detail -d,-dd,-ddd metrics
>   perf script: Change metric format to use json metrics
>   perf stat: Remove hard coded shadow metrics
>   perf stat: Fix default metricgroup display on hybrid
>   perf stat: Sort default events/metrics
>   perf stat: Remove "unit" workarounds for metric-only
>   perf test stat+json: Improve metric-only testing
>   perf test stat: Ignore failures in Default[234] metricgroups
>   perf test stat: Update std_output testing metric expectations
>   perf test metrics: Update all metrics for possibly failing default
>     metrics
>   perf test stat: Update shadow test to use metrics
>   perf test stat: Update test expectations and events
>   perf test stat csv: Update test expectations and events
> 
>  tools/perf/builtin-script.c                   | 238 ++++++++++-
>  tools/perf/builtin-stat.c                     | 154 ++-----
>  .../arch/common/common/metrics.json           | 151 +++++++
>  tools/perf/pmu-events/empty-pmu-events.c      | 139 ++++--
>  tools/perf/pmu-events/jevents.py              |  34 +-
>  tools/perf/pmu-events/pmu-events.h            |   2 +
>  .../tests/shell/lib/perf_json_output_lint.py  |   4 +-
>  tools/perf/tests/shell/lib/stat_output.sh     |   2 +-
>  tools/perf/tests/shell/stat+csv_output.sh     |   2 +-
>  tools/perf/tests/shell/stat+json_output.sh    |   2 +-
>  tools/perf/tests/shell/stat+shadow_stat.sh    |   4 +-
>  tools/perf/tests/shell/stat+std_output.sh     |   4 +-
>  tools/perf/tests/shell/stat.sh                |   6 +-
>  .../perf/tests/shell/stat_all_metricgroups.sh |   3 +
>  tools/perf/tests/shell/stat_all_metrics.sh    |   7 +-
>  tools/perf/util/evsel.c                       |   2 -
>  tools/perf/util/evsel.h                       |   2 +-
>  tools/perf/util/expr.c                        |   3 +
>  tools/perf/util/metricgroup.c                 |  95 ++++-
>  tools/perf/util/metricgroup.h                 |   2 +-
>  tools/perf/util/stat-display.c                |  55 +--
>  tools/perf/util/stat-shadow.c                 | 402 +-----------------
>  tools/perf/util/stat.h                        |   2 +-
>  23 files changed, 672 insertions(+), 643 deletions(-)
>  create mode 100644 tools/perf/pmu-events/arch/common/common/metrics.json
> 
> -- 
> 2.51.1.821.gb6fe4d2222-goog
>
Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Ian Rogers 3 months ago
On Mon, Nov 3, 2025 at 8:47 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hi Ian,
>
> On Fri, Oct 24, 2025 at 10:58:35AM -0700, Ian Rogers wrote:
> > Prior to this series stat-shadow would produce hard coded metrics if
> > certain events appeared in the evlist. This series produces equivalent
> > json metrics and cleans up the consequences in tests and display
> > output. A before and after of the default display output on a
> > tigerlake is:
> >
> > Before:
> > ```
> > $ perf stat -a sleep 1
> >
> >  Performance counter stats for 'system wide':
> >
> >     16,041,816,418      cpu-clock                        #   15.995 CPUs utilized
> >              5,749      context-switches                 #  358.376 /sec
> >                121      cpu-migrations                   #    7.543 /sec
> >              1,806      page-faults                      #  112.581 /sec
> >        825,965,204      instructions                     #    0.70  insn per cycle
> >      1,180,799,101      cycles                           #    0.074 GHz
> >        168,945,109      branches                         #   10.532 M/sec
> >          4,629,567      branch-misses                    #    2.74% of all branches
> >  #     30.2 %  tma_backend_bound
> >                                                   #      7.8 %  tma_bad_speculation
> >                                                   #     47.1 %  tma_frontend_bound
> >  #     14.9 %  tma_retiring
> > ```
> >
> > After:
> > ```
> > $ perf stat -a sleep 1
> >
> >  Performance counter stats for 'system wide':
> >
> >              2,890      context-switches                 #    179.9 cs/sec  cs_per_second
> >     16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized
> >                 43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
> >              5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
> >          5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
> >        429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
> >      1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
> >      2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
> >                         TopdownL1                 #     20.1 %  tma_backend_bound
> >                                                   #     40.5 %  tma_bad_speculation      (88.90%)
> >                                                   #     17.2 %  tma_frontend_bound       (78.05%)
> >                                                   #     22.2 %  tma_retiring             (88.89%)
> >
> >        1.002994394 seconds time elapsed
> > ```
>
> While this looks nicer, I worry about the changes in the output.  And I'm
> curious why only the "After" output shows the multiplexing percent.
>
> >
> > Having the metrics in json brings greater uniformity, allows events to
> > be shared by metrics, and it also allows descriptions like:
> > ```
> > $ perf list cs_per_second
> > ...
> >   cs_per_second
> >        [Context switches per CPU second]
> > ```
> >
> > A thorn in the side of doing this work was that the hard coded metrics
> > were used by perf script with '-F metric'. This functionality didn't
> > work for me (I was testing `perf record -e instructions,cycles` and
> > then `perf script -F metric` but saw nothing but empty lines)
>
> The documentation says:
>
>         With the metric option perf script can compute metrics for
>         sampling periods, similar to perf stat. This requires
>         specifying a group with multiple events defining metrics with the :S option
>         for perf record. perf will sample on the first event, and
>         print computed metrics for all the events in the group. Please note
>         that the metric computed is averaged over the whole sampling
>         period (since the last sample), not just for the sample point.
>
> So I guess it should have 'S' modifiers in a group.

Thanks Namhyung. Yes, this is the silly behavior where leader sample
events are both treated as an event themselves and have their
constituent parts turned into individual events with the period set to
the leader's sample read counts. Most recently this behavior was
disabled via struct perf_tool's dont_split_sample_group in the case of
perf inject, as it causes events to be processed multiple times. The
perf script behavior doesn't rely anywhere on the grouping of the
leader sample events, and even with it the metric format option still
doesn't work - I'll save pasting a screen full of blank lines here.

> > but anyway I decided to fix it to the best of my ability in this
> > series. So the script side counters were removed and the regular ones
> > associated with the evsel used. The json metrics were all searched
> > looking for ones that have a subset of events matching those in the
> > perf script session, and all metrics are printed. This is kind of
> > weird as the counters are being set by the period of samples, but I
> > carried the behavior forward. I suspect there needs to be follow up
> > work to make this better, but what is in the series is superior to
> > what is currently in the tree. Follow up work could include finding
> > metrics for the machine in the perf.data rather than using the host,
> > allowing multiple metrics even if the metric ids of the events differ,
> > fixing pre-existing `perf stat record/report` issues, etc.
> >
> > There is a lot of stat tests that, for example, assume '-e
> > instructions,cycles' will produce an IPC metric. These things needed
> > tidying as now the metric must be explicitly asked for and when doing
> > this ones using software events were preferred to increase
> > compatibility. As the test updates were numerous they are distinct to
> > the patches updating the functionality causing periods in the series
> > where not all tests are passing. If this is undesirable the test fixes
> > can be squashed into the functionality updates.
>
> Hmm.. how many of them?  I think it'd better to have the test changes at
> the same time so that we can assure test success count after the change.
> Can the test changes be squashed into one or two commits?

So the patches are below. The first set is all cleanup:

> > Ian Rogers (22):
> >   perf evsel: Remove unused metric_events variable
> >   perf metricgroup: Update comment on location of metric_event list
> >   perf metricgroup: Missed free on error path
> >   perf metricgroup: When copy metrics copy default information
> >   perf metricgroup: Add care to picking the evsel for displaying a
> >     metric
> >   perf jevents: Make all tables static

Then there is the addition of the legacy metrics as json:

> >   perf expr: Add #target_cpu literal
> >   perf jevents: Add set of common metrics based on default ones
> >   perf jevents: Add metric DefaultShowEvents
> >   perf stat: Add detail -d,-dd,-ddd metrics

Then there is the change to make perf script metric format work:

> >   perf script: Change metric format to use json metrics

Then there is a clean up patch:

> >   perf stat: Remove hard coded shadow metrics

Then there are fixes to perf stat's already broken output:

> >   perf stat: Fix default metricgroup display on hybrid
> >   perf stat: Sort default events/metrics
> >   perf stat: Remove "unit" workarounds for metric-only

Then there are 7 patches updating test expectations. Each patch deals
with a separate test to make the resolution clear.

> >   perf test stat+json: Improve metric-only testing
> >   perf test stat: Ignore failures in Default[234] metricgroups
> >   perf test stat: Update std_output testing metric expectations
> >   perf test metrics: Update all metrics for possibly failing default
> >     metrics
> >   perf test stat: Update shadow test to use metrics
> >   perf test stat: Update test expectations and events
> >   perf test stat csv: Update test expectations and events

The patch "perf jevents: Add set of common metrics based on default
ones" most impacts the output, but we don't want to verify the default
stat output with the hardcoded metrics that are removed in "perf stat:
Remove hard coded shadow metrics". Having a test for both hard coded
and json metrics in an intermediate state makes little sense, and the
default output is impacted by the 3 patches fixing it and removing
workarounds.

It is possible to squash things together but I think something is lost
in doing so, hence presenting it this way.

Thanks,
Ian
Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Namhyung Kim 3 months ago
On Mon, Nov 03, 2025 at 09:09:14PM -0800, Ian Rogers wrote:
> On Mon, Nov 3, 2025 at 8:47 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Hi Ian,
> >
> > On Fri, Oct 24, 2025 at 10:58:35AM -0700, Ian Rogers wrote:
> > > Prior to this series stat-shadow would produce hard coded metrics if
> > > certain events appeared in the evlist. This series produces equivalent
> > > json metrics and cleans up the consequences in tests and display
> > > output. A before and after of the default display output on a
> > > tigerlake is:
> > >
> > > Before:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > >  Performance counter stats for 'system wide':
> > >
> > >     16,041,816,418      cpu-clock                        #   15.995 CPUs utilized
> > >              5,749      context-switches                 #  358.376 /sec
> > >                121      cpu-migrations                   #    7.543 /sec
> > >              1,806      page-faults                      #  112.581 /sec
> > >        825,965,204      instructions                     #    0.70  insn per cycle
> > >      1,180,799,101      cycles                           #    0.074 GHz
> > >        168,945,109      branches                         #   10.532 M/sec
> > >          4,629,567      branch-misses                    #    2.74% of all branches
> > >  #     30.2 %  tma_backend_bound
> > >                                                   #      7.8 %  tma_bad_speculation
> > >                                                   #     47.1 %  tma_frontend_bound
> > >  #     14.9 %  tma_retiring
> > > ```
> > >
> > > After:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > >  Performance counter stats for 'system wide':
> > >
> > >              2,890      context-switches                 #    179.9 cs/sec  cs_per_second
> > >     16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized
> > >                 43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
> > >              5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
> > >          5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
> > >        429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
> > >      1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
> > >      2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
> > >                         TopdownL1                 #     20.1 %  tma_backend_bound
> > >                                                   #     40.5 %  tma_bad_speculation      (88.90%)
> > >                                                   #     17.2 %  tma_frontend_bound       (78.05%)
> > >                                                   #     22.2 %  tma_retiring             (88.89%)
> > >
> > >        1.002994394 seconds time elapsed
> > > ```
> >
> > While this looks nicer, I worry about the changes in the output.  And I'm
> > curious why only the "After" output shows the multiplexing percent.
> >
> > >
> > > Having the metrics in json brings greater uniformity, allows events to
> > > be shared by metrics, and it also allows descriptions like:
> > > ```
> > > $ perf list cs_per_second
> > > ...
> > >   cs_per_second
> > >        [Context switches per CPU second]
> > > ```
> > >
> > > A thorn in the side of doing this work was that the hard coded metrics
> > > were used by perf script with '-F metric'. This functionality didn't
> > > work for me (I was testing `perf record -e instructions,cycles` and
> > > then `perf script -F metric` but saw nothing but empty lines)
> >
> > The documentation says:
> >
> >         With the metric option perf script can compute metrics for
> >         sampling periods, similar to perf stat. This requires
> >         specifying a group with multiple events defining metrics with the :S option
> >         for perf record. perf will sample on the first event, and
> >         print computed metrics for all the events in the group. Please note
> >         that the metric computed is averaged over the whole sampling
> >         period (since the last sample), not just for the sample point.
> >
> > So I guess it should have 'S' modifiers in a group.
> 
> Thanks Namhyung. Yes, this is the silly behavior where leader sample
> events are both treated as an event but then the constituent parts
> turned into individual events with the period set to the leader sample
> read counts. Most recently this behavior was disabled by struct
> perf_tool's dont_split_sample_group in the case of perf inject as it
> causes events to be processed multiple times. The perf script behavior
> doesn't rely anywhere on the grouping of the leader sample events and
> even with it the metric format option doesn't work either - I'll save
> pasting a screen full of blank lines here.

Right, it seems to have broken at some point.

> 
> > > but anyway I decided to fix it to the best of my ability in this
> > > series. So the script side counters were removed and the regular ones
> > > associated with the evsel used. The json metrics were all searched
> > > looking for ones that have a subset of events matching those in the
> > > perf script session, and all metrics are printed. This is kind of
> > > weird as the counters are being set by the period of samples, but I
> > > carried the behavior forward. I suspect there needs to be follow up
> > > work to make this better, but what is in the series is superior to
> > > what is currently in the tree. Follow up work could include finding
> > > metrics for the machine in the perf.data rather than using the host,
> > > allowing multiple metrics even if the metric ids of the events differ,
> > > fixing pre-existing `perf stat record/report` issues, etc.
> > >
> > > There is a lot of stat tests that, for example, assume '-e
> > > instructions,cycles' will produce an IPC metric. These things needed
> > > tidying as now the metric must be explicitly asked for and when doing
> > > this ones using software events were preferred to increase
> > > compatibility. As the test updates were numerous they are distinct to
> > > the patches updating the functionality causing periods in the series
> > > where not all tests are passing. If this is undesirable the test fixes
> > > can be squashed into the functionality updates.
> >
> > Hmm.. how many of them?  I think it'd better to have the test changes at
> > the same time so that we can assure test success count after the change.
> > Can the test changes be squashed into one or two commits?
> 
> So the patches are below. The first set are all clean up:
> 
> > > Ian Rogers (22):
> > >   perf evsel: Remove unused metric_events variable
> > >   perf metricgroup: Update comment on location of metric_event list
> > >   perf metricgroup: Missed free on error path
> > >   perf metricgroup: When copy metrics copy default information
> > >   perf metricgroup: Add care to picking the evsel for displaying a
> > >     metric
> > >   perf jevents: Make all tables static

I've applied most of this part to perf-tools-next, will take a look at
others later.

Thanks,
Namhyung

> 
> Then there is the addition of the legacy metrics as json:
> 
> > >   perf expr: Add #target_cpu literal
> > >   perf jevents: Add set of common metrics based on default ones
> > >   perf jevents: Add metric DefaultShowEvents
> > >   perf stat: Add detail -d,-dd,-ddd metrics
> 
> Then there is the change to make perf script metric format work:
> 
> > >   perf script: Change metric format to use json metrics
> 
> Then there is a clean up patch:
> 
> > >   perf stat: Remove hard coded shadow metrics
> 
> Then there are fixes to perf stat's already broken output:
> 
> > >   perf stat: Fix default metricgroup display on hybrid
> > >   perf stat: Sort default events/metrics
> > >   perf stat: Remove "unit" workarounds for metric-only
> 
> Then there are 7 patches updating test expectations. Each patch deals
> with a separate test to make the resolution clear.
> 
> > >   perf test stat+json: Improve metric-only testing
> > >   perf test stat: Ignore failures in Default[234] metricgroups
> > >   perf test stat: Update std_output testing metric expectations
> > >   perf test metrics: Update all metrics for possibly failing default
> > >     metrics
> > >   perf test stat: Update shadow test to use metrics
> > >   perf test stat: Update test expectations and events
> > >   perf test stat csv: Update test expectations and events
> 
> The patch "perf jevents: Add set of common metrics based on default
> ones" most impacts the output but we don't want to verify the default
> stat output with the hardcoded metrics that are removed in "perf stat:
> Remove hard coded shadow metrics". Having a test for both hard coded
> and json metrics in an intermediate state makes little sense and the
> default output is impacting by the 3 patches fixing it and removing
> workarounds.
> 
> It is possible to squash things together but I think something is lost
> in doing so, hence presenting it this way.
> 
> Thanks,
> Ian
Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Ian Rogers 3 months, 1 week ago
On Fri, Oct 24, 2025 at 10:59 AM Ian Rogers <irogers@google.com> wrote:
>
> Prior to this series stat-shadow would produce hard coded metrics if
> certain events appeared in the evlist. This series produces equivalent
> json metrics and cleans up the consequences in tests and display
> output. A before and after of the default display output on a
> tigerlake is:
>
> Before:
> ```
> $ perf stat -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>     16,041,816,418      cpu-clock                        #   15.995 CPUs utilized
>              5,749      context-switches                 #  358.376 /sec
>                121      cpu-migrations                   #    7.543 /sec
>              1,806      page-faults                      #  112.581 /sec
>        825,965,204      instructions                     #    0.70  insn per cycle
>      1,180,799,101      cycles                           #    0.074 GHz
>        168,945,109      branches                         #   10.532 M/sec
>          4,629,567      branch-misses                    #    2.74% of all branches
>  #     30.2 %  tma_backend_bound
>                                                   #      7.8 %  tma_bad_speculation
>                                                   #     47.1 %  tma_frontend_bound
>  #     14.9 %  tma_retiring
> ```
>
> After:
> ```
> $ perf stat -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>              2,890      context-switches                 #    179.9 cs/sec  cs_per_second
>     16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized
>                 43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
>              5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
>          5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
>        429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
>      1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
>      2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
>                         TopdownL1                 #     20.1 %  tma_backend_bound
>                                                   #     40.5 %  tma_bad_speculation      (88.90%)
>                                                   #     17.2 %  tma_frontend_bound       (78.05%)
>                                                   #     22.2 %  tma_retiring             (88.89%)
>
>        1.002994394 seconds time elapsed
> ```
>
> Having the metrics in json brings greater uniformity, allows events to
> be shared by metrics, and it also allows descriptions like:
> ```
> $ perf list cs_per_second
> ...
>   cs_per_second
>        [Context switches per CPU second]
> ```
>
> A thorn in the side of doing this work was that the hard coded metrics
> were used by perf script with '-F metric'. This functionality didn't
> work for me (I was testing `perf record -e instructions,cycles` and
> then `perf script -F metric` but saw nothing but empty lines) but
> anyway I decided to fix it to the best of my ability in this
> series. So the script side counters were removed and the regular ones
> associated with the evsel used. The json metrics were all searched
> looking for ones that have a subset of events matching those in the
> perf script session, and all metrics are printed. This is kind of
> weird as the counters are being set by the period of samples, but I
> carried the behavior forward. I suspect there needs to be follow up
> work to make this better, but what is in the series is superior to
> what is currently in the tree. Follow up work could include finding
> metrics for the machine in the perf.data rather than using the host,
> allowing multiple metrics even if the metric ids of the events differ,
> fixing pre-existing `perf stat record/report` issues, etc.
>
> There is a lot of stat tests that, for example, assume '-e
> instructions,cycles' will produce an IPC metric. These things needed
> tidying as now the metric must be explicitly asked for and when doing
> this ones using software events were preferred to increase
> compatibility. As the test updates were numerous they are distinct to
> the patches updating the functionality causing periods in the series
> where not all tests are passing. If this is undesirable the test fixes
> can be squashed into the functionality updates.

Hi,

No comments on this series yet, please help! I'd like to land this
work and then rebase the python metric generation work [1] on it. The
metric generation work is largely independent of everything else, but
there are collisions in the json Makefile/Build files.

Thanks,
Ian

[1]
* Foundations: https://lore.kernel.org/lkml/20240228175617.4049201-1-irogers@google.com/
* AMD: https://lore.kernel.org/lkml/20240229001537.4158049-1-irogers@google.com/
* Intel: https://lore.kernel.org/lkml/20240229001806.4158429-1-irogers@google.com/
* ARM: https://lore.kernel.org/lkml/20240229001325.4157655-1-irogers@google.com/



> Ian Rogers (22):
>   perf evsel: Remove unused metric_events variable
>   perf metricgroup: Update comment on location of metric_event list
>   perf metricgroup: Missed free on error path
>   perf metricgroup: When copy metrics copy default information
>   perf metricgroup: Add care to picking the evsel for displaying a
>     metric
>   perf jevents: Make all tables static
>   perf expr: Add #target_cpu literal
>   perf jevents: Add set of common metrics based on default ones
>   perf jevents: Add metric DefaultShowEvents
>   perf stat: Add detail -d,-dd,-ddd metrics
>   perf script: Change metric format to use json metrics
>   perf stat: Remove hard coded shadow metrics
>   perf stat: Fix default metricgroup display on hybrid
>   perf stat: Sort default events/metrics
>   perf stat: Remove "unit" workarounds for metric-only
>   perf test stat+json: Improve metric-only testing
>   perf test stat: Ignore failures in Default[234] metricgroups
>   perf test stat: Update std_output testing metric expectations
>   perf test metrics: Update all metrics for possibly failing default
>     metrics
>   perf test stat: Update shadow test to use metrics
>   perf test stat: Update test expectations and events
>   perf test stat csv: Update test expectations and events
>
>  tools/perf/builtin-script.c                   | 238 ++++++++++-
>  tools/perf/builtin-stat.c                     | 154 ++-----
>  .../arch/common/common/metrics.json           | 151 +++++++
>  tools/perf/pmu-events/empty-pmu-events.c      | 139 ++++--
>  tools/perf/pmu-events/jevents.py              |  34 +-
>  tools/perf/pmu-events/pmu-events.h            |   2 +
>  .../tests/shell/lib/perf_json_output_lint.py  |   4 +-
>  tools/perf/tests/shell/lib/stat_output.sh     |   2 +-
>  tools/perf/tests/shell/stat+csv_output.sh     |   2 +-
>  tools/perf/tests/shell/stat+json_output.sh    |   2 +-
>  tools/perf/tests/shell/stat+shadow_stat.sh    |   4 +-
>  tools/perf/tests/shell/stat+std_output.sh     |   4 +-
>  tools/perf/tests/shell/stat.sh                |   6 +-
>  .../perf/tests/shell/stat_all_metricgroups.sh |   3 +
>  tools/perf/tests/shell/stat_all_metrics.sh    |   7 +-
>  tools/perf/util/evsel.c                       |   2 -
>  tools/perf/util/evsel.h                       |   2 +-
>  tools/perf/util/expr.c                        |   3 +
>  tools/perf/util/metricgroup.c                 |  95 ++++-
>  tools/perf/util/metricgroup.h                 |   2 +-
>  tools/perf/util/stat-display.c                |  55 +--
>  tools/perf/util/stat-shadow.c                 | 402 +-----------------
>  tools/perf/util/stat.h                        |   2 +-
>  23 files changed, 672 insertions(+), 643 deletions(-)
>  create mode 100644 tools/perf/pmu-events/arch/common/common/metrics.json
>
> --
> 2.51.1.821.gb6fe4d2222-goog
>
Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Posted by Ian Rogers 3 months ago
On Thu, Oct 30, 2025 at 1:51 PM Ian Rogers <irogers@google.com> wrote:
>
> On Fri, Oct 24, 2025 at 10:59 AM Ian Rogers <irogers@google.com> wrote:
> >
> > Prior to this series stat-shadow would produce hard coded metrics if
> > certain events appeared in the evlist. This series produces equivalent
> > json metrics and cleans up the consequences in tests and display
> > output. A before and after of the default display output on a
> > tigerlake is:
> >
> > Before:
> > ```
> > $ perf stat -a sleep 1
> >
> >  Performance counter stats for 'system wide':
> >
> >     16,041,816,418      cpu-clock                        #   15.995 CPUs utilized
> >              5,749      context-switches                 #  358.376 /sec
> >                121      cpu-migrations                   #    7.543 /sec
> >              1,806      page-faults                      #  112.581 /sec
> >        825,965,204      instructions                     #    0.70  insn per cycle
> >      1,180,799,101      cycles                           #    0.074 GHz
> >        168,945,109      branches                         #   10.532 M/sec
> >          4,629,567      branch-misses                    #    2.74% of all branches
> >  #     30.2 %  tma_backend_bound
> >                                                   #      7.8 %  tma_bad_speculation
> >                                                   #     47.1 %  tma_frontend_bound
> >  #     14.9 %  tma_retiring
> > ```
> >
> > After:
> > ```
> > $ perf stat -a sleep 1
> >
> >  Performance counter stats for 'system wide':
> >
> >              2,890      context-switches                 #    179.9 cs/sec  cs_per_second
> >     16,061,923,339      cpu-clock                        #     16.0 CPUs  CPUs_utilized
> >                 43      cpu-migrations                   #      2.7 migrations/sec  migrations_per_second
> >              5,645      page-faults                      #    351.5 faults/sec  page_faults_per_second
> >          5,708,413      branch-misses                    #      1.4 %  branch_miss_rate         (88.83%)
> >        429,978,120      branches                         #     26.8 K/sec  branch_frequency     (88.85%)
> >      1,626,915,897      cpu-cycles                       #      0.1 GHz  cycles_frequency       (88.84%)
> >      2,556,805,534      instructions                     #      1.5 instructions  insn_per_cycle  (88.86%)
> >                         TopdownL1                 #     20.1 %  tma_backend_bound
> >                                                   #     40.5 %  tma_bad_speculation      (88.90%)
> >                                                   #     17.2 %  tma_frontend_bound       (78.05%)
> >                                                   #     22.2 %  tma_retiring             (88.89%)
> >
> >        1.002994394 seconds time elapsed
> > ```
> >
> > Having the metrics in json brings greater uniformity, allows events to
> > be shared by metrics, and it also allows descriptions like:
> > ```
> > $ perf list cs_per_second
> > ...
> >   cs_per_second
> >        [Context switches per CPU second]
> > ```
> >
> > A thorn in the side of doing this work was that the hard coded metrics
> > were used by perf script with '-F metric'. This functionality didn't
> > work for me (I was testing `perf record -e instructions,cycles` and
> > then `perf script -F metric` but saw nothing but empty lines) but
> > anyway I decided to fix it to the best of my ability in this
> > series. So the script side counters were removed and the regular ones
> > associated with the evsel used. The json metrics were all searched
> > looking for ones that have a subset of events matching those in the
> > perf script session, and all metrics are printed. This is kind of
> > weird as the counters are being set by the period of samples, but I
> > carried the behavior forward. I suspect there needs to be follow up
> > work to make this better, but what is in the series is superior to
> > what is currently in the tree. Follow up work could include finding
> > metrics for the machine in the perf.data rather than using the host,
> > allowing multiple metrics even if the metric ids of the events differ,
> > fixing pre-existing `perf stat record/report` issues, etc.
> >
> > There is a lot of stat tests that, for example, assume '-e
> > instructions,cycles' will produce an IPC metric. These things needed
> > tidying as now the metric must be explicitly asked for and when doing
> > this ones using software events were preferred to increase
> > compatibility. As the test updates were numerous they are distinct to
> > the patches updating the functionality causing periods in the series
> > where not all tests are passing. If this is undesirable the test fixes
> > can be squashed into the functionality updates.
>
> Hi,
>
> no comments on this series yet, please help! I'd like to land this
> work and then rebase the python generating metric work [1] on it. The
> metric generation work is largely independent of everything else but
> there are collisions in the json Makefile/Build files.

Just to also add that the default perf stat output in perf-tools-next
looks like this on an Alderlake:
```
$ perf stat -a sleep 1

Performance counter stats for 'system wide':

                0      cpu-clock                        #    0.000 CPUs utilized
           19,362      context-switches
              874      cpu-migrations
           10,194      page-faults
      633,489,938      cpu_atom/instructions/           #    0.69 insn per cycle              (87.25%)
    3,738,623,788      cpu_core/instructions/           #    2.05 insn per cycle
      923,779,727      cpu_atom/cycles/                                              (87.28%)
    1,821,165,755      cpu_core/cycles/
      102,969,608      cpu_atom/branches/                                            (87.41%)
      594,784,374      cpu_core/branches/
        4,376,709      cpu_atom/branch-misses/          #    4.25% of all branches             (87.66%)
        7,886,194      cpu_core/branch-misses/          #    1.33% of all branches
#     10.4 %  tma_bad_speculation
                                                 #     21.5 %  tma_frontend_bound
#     34.5 %  tma_backend_bound
                                                 #     33.5 %  tma_retiring
#     17.7 %  tma_bad_speculation
                                                 #     17.8 %  tma_retiring             (87.64%)
#     33.4 %  tma_backend_bound
                                                 #     31.1 %  tma_frontend_bound       (87.67%)

      1.004970242 seconds time elapsed
```
and this with the series:
```
$ perf stat -a sleep 1
 Performance counter stats for 'system wide':

            21,198      context-switches                 #      nan cs/sec  cs_per_second
                 0      cpu-clock                        #      0.0 CPUs  CPUs_utilized
               989      cpu-migrations                   #      nan migrations/sec  migrations_per_second
             6,642      page-faults                      #      nan faults/sec  page_faults_per_second
         6,966,308      cpu_core/branch-misses/          #      1.3 %  branch_miss_rate
       517,064,969      cpu_core/branches/               #      nan K/sec  branch_frequency
     1,602,405,292      cpu_core/cpu-cycles/             #      nan GHz  cycles_frequency
     3,012,408,051      cpu_core/instructions/           #      1.9 instructions  insn_per_cycle
         4,727,342      cpu_atom/branch-misses/          #      4.8 %  branch_miss_rate         (49.79%)
        94,075,578      cpu_atom/branches/               #      nan K/sec  branch_frequency     (50.14%)
       922,932,356      cpu_atom/cpu-cycles/             #      nan GHz  cycles_frequency       (50.36%)
       513,356,622      cpu_atom/instructions/           #      0.6 instructions  insn_per_cycle  (50.36%)
              TopdownL1 (cpu_core)                 #     10.4 %  tma_bad_speculation
                                                   #     24.0 %  tma_frontend_bound
                                                   #     35.2 %  tma_backend_bound
                                                   #     30.4 %  tma_retiring
              TopdownL1 (cpu_atom)                 #     36.1 %  tma_backend_bound        (59.76%)
                                                   #     38.7 %  tma_frontend_bound       (59.57%)
                                                   #      8.8 %  tma_bad_speculation
                                                   #     16.4 %  tma_retiring             (59.57%)

       1.006937573 seconds time elapsed
```
That is, the TopdownL1 default group name is missing in the current
tree, etc. So just fixing the default perf stat output would be a good
reason to land this. The output quoted at the top, which is also
broken, is from a non-hybrid Tigerlake system.

Thanks,
Ian

> [1]
> * Foundations: https://lore.kernel.org/lkml/20240228175617.4049201-1-irogers@google.com/
> * AMD: https://lore.kernel.org/lkml/20240229001537.4158049-1-irogers@google.com/
> * Intel: https://lore.kernel.org/lkml/20240229001806.4158429-1-irogers@google.com/
> * ARM: https://lore.kernel.org/lkml/20240229001325.4157655-1-irogers@google.com/