Parsing the CSV or text output of perf stat can be problematic when
new output is added (new columns in the CSV format). JSON names its
values and simplifies the job of parsing. Add a JSON output option to
perf stat, then add a unit test that parses and validates the output.
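To illustrate why named values simplify parsing: each line of the JSON
output is a self-contained object, so a consumer can read it with the
standard json module and look fields up by name instead of by column
position. A minimal sketch, using one of the sample records shown later
in this thread:

```python
import json

# One line of perf stat JSON output, as shown in the test log below.
line = ('{"counter-value" : "143761.000000", "unit" : "", '
        '"event" : "cycles:u", "event-runtime" : 217290, '
        '"pcnt-running" : 100.00, "metric-value" : 0.676968, '
        '"metric-unit" : "GHz"}')

record = json.loads(line)

# Fields are addressed by name, so adding new fields later does not
# break existing consumers the way a new CSV column would.
event = record['event']
counter = float(record['counter-value'])
```
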
This is a resend of two v2 patches:
https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
with a few formatting changes and improvements to the linter.
The CSV test/linter is also added to ensure that CSV output doesn't regress:
https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/
v4. Makes some minor fixes to the JSON linter.
v3. Tidies up some CSV code, including fixing a potential memory
    overrun in the os.nfields setup caught by sanitizers. To
    facilitate this, an AGGR_MAX value is added. v3 also adds the CSV
    testing.
v2. Fixes the system-wide, no-aggregation test so that it doesn't run
    if the perf_event_paranoid setting disallows it. It also makes the
    counter-value check handle the "<not counted>" and
    "<not supported>" cases.
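A sketch of the kind of counter-value check v2 describes (the helper
name is hypothetical, not the actual linter code): counter values are
numeric strings except for perf's two sentinel strings.

```python
def is_valid_counter_value(value):
  """Accept numeric strings plus perf's sentinel values (sketch only)."""
  if value in ('<not counted>', '<not supported>'):
    return True
  try:
    float(value)
    return True
  except ValueError:
    return False
```
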
Claire Jensen (3):
perf test: Add checking for perf stat CSV output.
perf stat: Add JSON output option
perf test: Json format checking
tools/perf/Documentation/perf-stat.txt | 21 +
tools/perf/builtin-stat.c | 6 +
.../tests/shell/lib/perf_csv_output_lint.py | 48 +++
.../tests/shell/lib/perf_json_output_lint.py | 94 +++++
tools/perf/tests/shell/stat+csv_output.sh | 147 +++++++
tools/perf/tests/shell/stat+json_output.sh | 147 +++++++
tools/perf/util/stat-display.c | 384 +++++++++++++-----
tools/perf/util/stat.c | 1 +
tools/perf/util/stat.h | 2 +
9 files changed, 744 insertions(+), 106 deletions(-)
create mode 100644 tools/perf/tests/shell/lib/perf_csv_output_lint.py
create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
create mode 100755 tools/perf/tests/shell/stat+csv_output.sh
create mode 100755 tools/perf/tests/shell/stat+json_output.sh
--
2.36.1.124.g0e6072fb45-goog
Em Tue, May 24, 2022 at 10:38:11PM -0700, Ian Rogers escreveu:
> Parsing the CSV or text output of perf stat can be problematic when
> new output is added (columns in CSV format). JSON names values and
> simplifies the job of parsing. Add a JSON output option to perf-stat
> then add unit test that parses and validates the output.
>
> This is a resend of two v2 patches:
> https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
> https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
> with a few formatting changes and improvements to the linter.
>
> The CSV test/linter is also added to ensure that CSV output doesn't regress:
> https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/
So, the JSON test is failing:
⬢[acme@toolbox perf]$ perf test -v JSON
Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc
90: perf stat JSON output linter :
--- start ---
test child forked, pid 2626229
Checking json output: no args [Success]
Checking json output: system wide [Skip] parnoia and not root
Checking json output: system wide [Skip] parnoia and not root
Checking json output: interval Test failed for input:
{"interval" : 0.000506453, "counter-value" : "0.212360", "unit" : "msec", "event" : "task-clock:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000212, "metric-unit" : "CPUs utilized"}
{"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "context-switches:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}
{"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "cpu-migrations:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}
{"interval" : 0.000506453, "counter-value" : "45.000000", "unit" : "", "event" : "page-faults:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 211.904313, "metric-unit" : "K/sec"}
{"interval" : 0.000506453, "counter-value" : "143761.000000", "unit" : "", "event" : "cycles:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.676968, "metric-unit" : "GHz"}
{"interval" : 0.000506453, "counter-value" : "456.000000", "unit" : "", "event" : "stalled-cycles-frontend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.317193, "metric-unit" : "frontend cycles idle"}
{"interval" : 0.000506453, "counter-value" : "11639.000000", "unit" : "", "event" : "stalled-cycles-backend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 8.096076, "metric-unit" : "backend cycles idle"}
{"interval" : 0.000506453, "counter-value" : "150684.000000", "unit" : "", "event" : "instructions:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 1.048156, "metric-unit" : "insn per cycle"}
{"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}
{"interval" : 0.000506453, "counter-value" : "29735.000000", "unit" : "", "event" : "branches:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 140.021661, "metric-unit" : "M/sec"}
{"interval" : 0.000506453, "counter-value" : "<not counted>", "unit" : "", "event" : "branch-misses:u", "event-runtime" : 0, "pcnt-running" : 0.00, "metric-value" : 0.000000, "metric-unit" : ""}
Traceback (most recent call last):
File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 91, in <module>
check_json_output(expected_items)
File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 52, in check_json_output
raise RuntimeError(f'wrong number of fields. counted {count} expected {expected_items}'
RuntimeError: wrong number of fields. counted 2 expected 7 in '{"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}
'
test child finished with -1
---- end ----
perf stat JSON output linter: FAILED!
⬢[acme@toolbox perf]$
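The record that trips the linter is a metric-only line ("stalled cycles
per insn"): it carries no counter, so it has only interval,
metric-value, and metric-unit rather than the full set of fields. One
plausible fix, sketched below with a hypothetical helper (this is an
assumption about the shape of a fix, not the actual linter code), is to
accept either the full field count or the metric-only count:

```python
import json

def expected_field_counts(expected_items):
  """Allowed field counts per line (sketch, assumption).

  A full record carries interval, counter-value, unit, event,
  event-runtime, pcnt-running, metric-value and metric-unit; a
  metric-only record carries just interval, metric-value and
  metric-unit.
  """
  return {expected_items, 3}

# The metric-only line from the failing test log above.
line = ('{"interval" : 0.000506453, "metric-value" : 0.077241, '
        '"metric-unit" : "stalled cycles per insn"}')
count = len(json.loads(line))
ok = count in expected_field_counts(7)
```
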
So please check this and resubmit.
My system is a fedora 35 silverblue toolbox.
⬢[acme@toolbox perf]$ rpm -q python3
python3-3.10.4-1.fc35.x86_64
- Arnaldo
Em Wed, May 25, 2022 at 08:18:27AM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Tue, May 24, 2022 at 10:38:11PM -0700, Ian Rogers escreveu:
> > Parsing the CSV or text output of perf stat can be problematic when
> > new output is added (columns in CSV format). JSON names values and
> > simplifies the job of parsing. Add a JSON output option to perf-stat
> > then add unit test that parses and validates the output.
> >
> > This is a resend of two v2 patches:
> > https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
> > https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
> > with a few formatting changes and improvements to the linter.
> >
> > The CSV test/linter is also added to ensure that CSV output doesn't regress:
> > https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/
>
> So, the JSON test is failing:
>
> [...]
>
> So please check this and resubmit.
I kept the first patch in the series, so please just check the JSON
ones.
- Arnaldo
> My system is a fedora 35 silverblue toolbox.
>
> ⬢[acme@toolbox perf]$ rpm -q python3
> python3-3.10.4-1.fc35.x86_64
>
> - Arnaldo
--
- Arnaldo