On some platforms that do not use CPU MIDR mapping, such as the
HiSilicon HIP09 platform, the list command can display the CPA
metrics, but the stat command fails to parse them:
$ perf list metric
Metrics:
cpa_p0_avg_bw
[Average bandwidth of CPA Port 0]
cpa_p1_avg_bw
[Average bandwidth of CPA Port 1]
$ perf stat -M cpa_p0_avg_bw --timeout 1000 --> no error message is printed
$ echo $?
234
Currently, metricgroup__parse_groups() expects to find a CPU metrics
table, but hisilicon/hip09 doesn't use the CPU MIDR to map JSON events
and metrics, so pmu_metrics_table__find() returns NULL and the command
fails with -EINVAL (hence the exit status of 234, i.e. -22 truncated
to an unsigned byte).
However, metricgroup__add_metric() also parses every sys metric and
adds it to metric_list, and the sys metric iterator supplies a valid
sys metrics table. So we can tolerate a NULL result from
pmu_metrics_table__find() and fall back to the sys metrics table:
metricgroup__parse_groups
-> parse_groups
-> metricgroup__add_metric_list
-> metricgroup__add_metric
-> pmu_for_each_sys_metric --> provides the sys metrics table
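
For illustration, the fix relies on the GNU C "a ?: b" shorthand,
which evaluates to "a" when "a" is non-NULL and to "b" otherwise. A
standalone sketch of the fallback (pick_table() and its arguments are
made up for illustration, this is not perf code):

  #include <stdio.h>

  /* GNU "a ?: b": first operand if non-NULL, else the second; "a" is
   * evaluated only once. This mirrors the "d->table ?: table" change
   * in the diff below. */
  static const char *pick_table(const char *cpu_table, const char *sys_table)
  {
  	return cpu_table ?: sys_table;
  }

  int main(void)
  {
  	printf("%s\n", pick_table(NULL, "sys"));  /* falls back: "sys" */
  	printf("%s\n", pick_table("cpu", "sys")); /* keeps CPU table: "cpu" */
  	return 0;
  }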
Testing:
[root@localhost ~]# perf stat -M cpa_p0_avg_bw --no-merge --timeout 1000
Performance counter stats for 'system wide':
CPU0 1,001,121,239 cpa_cycles [hisi_sicl0_cpa0] # 0.00 cpa_p0_avg_bw
CPU0 0 cpa_p0_wr_dat [hisi_sicl0_cpa0]
CPU0 0 cpa_p0_rd_dat_64b [hisi_sicl0_cpa0]
CPU0 0 cpa_p0_rd_dat_32b [hisi_sicl0_cpa0]
CPU0 1,001,094,851 cpa_cycles [hisi_sicl2_cpa0] # 0.00 cpa_p0_avg_bw
CPU0 0 cpa_p0_wr_dat [hisi_sicl2_cpa0]
CPU0 0 cpa_p0_rd_dat_64b [hisi_sicl2_cpa0]
CPU0 0 cpa_p0_rd_dat_32b [hisi_sicl2_cpa0]
1.001306160 seconds time elapsed
Signed-off-by: Junhao He <hejunhao3@huawei.com>
---
tools/perf/util/metricgroup.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index 69f6a46402c3..fa62fb3418b6 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -1123,7 +1123,7 @@ static int metricgroup__add_metric_sys_event_iter(const struct pmu_metric *pm,
ret = add_metric(d->metric_list, pm, d->modifier, d->metric_no_group,
d->metric_no_threshold, d->user_requested_cpu_list,
- d->system_wide, d->root_metric, d->visited, d->table);
+ d->system_wide, d->root_metric, d->visited, d->table ?: table);
if (ret)
goto out;
@@ -1239,7 +1239,7 @@ static int metricgroup__add_metric(const char *pmu, const char *metric_name, con
int ret;
bool has_match = false;
- {
+ if (table) {
struct metricgroup__add_metric_data data = {
.list = &list,
.pmu = pmu,
@@ -1696,8 +1696,6 @@ int metricgroup__parse_groups(struct evlist *perf_evlist,
{
const struct pmu_metrics_table *table = pmu_metrics_table__find();
- if (!table)
- return -EINVAL;
if (hardware_aware_grouping)
pr_debug("Use hardware aware grouping instead of traditional metric grouping method\n");
--
2.33.0
On 2024/8/7 12:00, Junhao He wrote:
> [...]
> Signed-off-by: Junhao He <hejunhao3@huawei.com>
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
The changes look fine to me, but I'd prefer to add some documentation for this. See below.
> ---
> tools/perf/util/metricgroup.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
> index 69f6a46402c3..fa62fb3418b6 100644
> --- a/tools/perf/util/metricgroup.c
> +++ b/tools/perf/util/metricgroup.c
> @@ -1123,7 +1123,7 @@ static int metricgroup__add_metric_sys_event_iter(const struct pmu_metric *pm,
>
> ret = add_metric(d->metric_list, pm, d->modifier, d->metric_no_group,
> d->metric_no_threshold, d->user_requested_cpu_list,
> - d->system_wide, d->root_metric, d->visited, d->table);
> + d->system_wide, d->root_metric, d->visited, d->table ?: table);
> if (ret)
> goto out;
>
> @@ -1239,7 +1239,7 @@ static int metricgroup__add_metric(const char *pmu, const char *metric_name, con
> int ret;
> bool has_match = false;
>
> - {
> + if (table) {
Maybe add a comment here: if !table, the metrics may belong to a system PMU.
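For example (just a sketch, exact wording up to you):

+	/*
+	 * If table is NULL, no CPU metrics table was found; the
+	 * requested metrics may still belong to a system PMU and are
+	 * handled by the pmu_for_each_sys_metric() walk below.
+	 */
 	if (table) {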
> struct metricgroup__add_metric_data data = {
> .list = &list,
> .pmu = pmu,
> @@ -1696,8 +1696,6 @@ int metricgroup__parse_groups(struct evlist *perf_evlist,
> {
> const struct pmu_metrics_table *table = pmu_metrics_table__find();
>
> - if (!table)
> - return -EINVAL;
Maybe add a debug message here to mention that we're parsing metrics of system PMUs.
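For example (sketch only, the message text is just a suggestion):

 	const struct pmu_metrics_table *table = pmu_metrics_table__find();

+	if (!table)
+		pr_debug("no CPU metrics table found, falling back to system PMU metrics\n");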
Thanks.
Hi, Yicong
Thanks for your comments.
On 2024/9/5 11:16, Yicong Yang wrote:
> On 2024/8/7 12:00, Junhao He wrote:
>> [...]
>> Signed-off-by: Junhao He <hejunhao3@huawei.com>
> Tested-by: Yicong Yang <yangyicong@hisilicon.com>
>
> The changes look fine to me, but I'd prefer to add some documentation for this. See below.
>
>> [...]
>> @@ -1239,7 +1239,7 @@ static int metricgroup__add_metric(const char *pmu, const char *metric_name, con
>> int ret;
>> bool has_match = false;
>>
>> - {
>> + if (table) {
> Maybe add a comment here: if !table, the metrics may belong to a system PMU.
Yes, I will do that.
>> struct metricgroup__add_metric_data data = {
>> .list = &list,
>> .pmu = pmu,
>> @@ -1696,8 +1696,6 @@ int metricgroup__parse_groups(struct evlist *perf_evlist,
>> {
>> const struct pmu_metrics_table *table = pmu_metrics_table__find();
>>
>> - if (!table)
>> - return -EINVAL;
> Maybe add a debug message here to mention that we're parsing metrics of system PMUs.
>
> Thanks.
Yes, I will do that.
Best regards,
Junhao.