The mbm_cntr_assign mode offers several hardware counters that can be
assigned to an RMID-event pair and monitor the bandwidth as long as it
is assigned.
Counters are managed at two levels. The global assignment is tracked
using the mbm_cntr_free_map field in the struct resctrl_mon, while
domain-specific assignments are tracked using the mbm_cntr_map field
in the struct rdt_mon_domain. Allocation begins at the global level
and is then applied individually to each domain.
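For illustration, a minimal sketch of how the two levels interact when a
counter is assigned (using only the fields and helpers named above):

	/* Global level: pick a free counter from mbm_cntr_free_map. */
	cntr_id = mbm_cntr_alloc(r);
	if (cntr_id < 0)
		return -ENOSPC;

	/* Domain level: mark the counter in use in each domain's map. */
	list_for_each_entry(d, &r->mon_domains, hdr.list)
		__set_bit(cntr_id, d->mbm_cntr_map);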
Introduce an interface to allocate these counters and update the
corresponding domains accordingly.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
v8: Renamed rdtgroup_assign_cntr() to rdtgroup_assign_cntr_event().
Added the code to return the error if rdtgroup_assign_cntr_event fails.
Moved definition of MBM_EVENT_ARRAY_INDEX to resctrl/internal.h.
Updated typo in the comments.
v7: New patch. Moved all the FS code here.
Merged rdtgroup_assign_cntr and rdtgroup_alloc_cntr.
Added new #define MBM_EVENT_ARRAY_INDEX.
---
arch/x86/kernel/cpu/resctrl/internal.h | 9 +++++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 47 ++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 6d4df0490186..900e18aea2c4 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -67,6 +67,13 @@
#define MON_CNTR_UNSET U32_MAX
+/*
+ * Get the counter index for the assignable counter
+ * 0 for evtid == QOS_L3_MBM_TOTAL_EVENT_ID
+ * 1 for evtid == QOS_L3_MBM_LOCAL_EVENT_ID
+ */
+#define MBM_EVENT_ARRAY_INDEX(_event) ((_event) - 2)
+
/**
* cpumask_any_housekeeping() - Choose any CPU in @mask, preferring those that
* aren't marked nohz_full
@@ -708,6 +715,8 @@ unsigned int mon_event_config_index_get(u32 evtid);
int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
enum resctrl_event_id evtid, u32 rmid, u32 closid,
u32 cntr_id, bool assign);
+int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
+ struct rdt_mon_domain *d, enum resctrl_event_id evtid);
void rdt_staged_configs_clear(void);
bool closid_allocated(unsigned int closid);
int resctrl_find_cleanest_closid(void);
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 4ab1a18010c9..e4f628e6fe65 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1898,6 +1898,53 @@ int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
return 0;
}
+/*
+ * Assign a hardware counter to the group.
+ * Counter will be assigned to all the domains if rdt_mon_domain is NULL
+ * else the counter will be allocated to specific domain.
+ */
+int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
+ struct rdt_mon_domain *d, enum resctrl_event_id evtid)
+{
+ int index = MBM_EVENT_ARRAY_INDEX(evtid);
+ int cntr_id = rdtgrp->mon.cntr_id[index];
+ int ret;
+
+ /*
+ * Allocate a new counter id to the event if the counter is not
+ * assigned already.
+ */
+ if (cntr_id == MON_CNTR_UNSET) {
+ cntr_id = mbm_cntr_alloc(r);
+ if (cntr_id < 0) {
+ rdt_last_cmd_puts("Out of MBM assignable counters\n");
+ return -ENOSPC;
+ }
+ rdtgrp->mon.cntr_id[index] = cntr_id;
+ }
+
+ if (!d) {
+ list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
+ rdtgrp->closid, cntr_id, true);
+ if (ret)
+ goto out_done_assign;
+
+ set_bit(cntr_id, d->mbm_cntr_map);
+ }
+ } else {
+ ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
+ rdtgrp->closid, cntr_id, true);
+ if (ret)
+ goto out_done_assign;
+
+ set_bit(cntr_id, d->mbm_cntr_map);
+ }
+
+out_done_assign:
+ return ret;
+}
+
/* rdtgroup information files for one cache resource. */
static struct rftype res_common_files[] = {
{
--
2.34.1
Hi Babu,
On 10/9/24 10:39 AM, Babu Moger wrote:
> The mbm_cntr_assign mode offers several hardware counters that can be
> assigned to an RMID-event pair and monitor the bandwidth as long as it
repeated nit (to be consistent): RMID, event
> is assigned.
>
> Counters are managed at two levels. The global assignment is tracked
> using the mbm_cntr_free_map field in the struct resctrl_mon, while
> domain-specific assignments are tracked using the mbm_cntr_map field
> in the struct rdt_mon_domain. Allocation begins at the global level
> and is then applied individually to each domain.
>
> Introduce an interface to allocate these counters and update the
> corresponding domains accordingly.
>
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
> v8: Renamed rdtgroup_assign_cntr() to rdtgroup_assign_cntr_event().
> Added the code to return the error if rdtgroup_assign_cntr_event fails.
> Moved definition of MBM_EVENT_ARRAY_INDEX to resctrl/internal.h.
> Updated typo in the comments.
>
> v7: New patch. Moved all the FS code here.
> Merged rdtgroup_assign_cntr and rdtgroup_alloc_cntr.
> Added new #define MBM_EVENT_ARRAY_INDEX.
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 9 +++++
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 47 ++++++++++++++++++++++++++
> 2 files changed, 56 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 6d4df0490186..900e18aea2c4 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -67,6 +67,13 @@
>
> #define MON_CNTR_UNSET U32_MAX
>
> +/*
> + * Get the counter index for the assignable counter
> + * 0 for evtid == QOS_L3_MBM_TOTAL_EVENT_ID
> + * 1 for evtid == QOS_L3_MBM_LOCAL_EVENT_ID
> + */
> +#define MBM_EVENT_ARRAY_INDEX(_event) ((_event) - 2)
> +
This can be moved to the patch that introduces and initializes the array, and used there.
> /**
> * cpumask_any_housekeeping() - Choose any CPU in @mask, preferring those that
> * aren't marked nohz_full
> @@ -708,6 +715,8 @@ unsigned int mon_event_config_index_get(u32 evtid);
> int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
> enum resctrl_event_id evtid, u32 rmid, u32 closid,
> u32 cntr_id, bool assign);
> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
> + struct rdt_mon_domain *d, enum resctrl_event_id evtid);
> void rdt_staged_configs_clear(void);
> bool closid_allocated(unsigned int closid);
> int resctrl_find_cleanest_closid(void);
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 4ab1a18010c9..e4f628e6fe65 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -1898,6 +1898,53 @@ int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
> return 0;
> }
>
> +/*
> + * Assign a hardware counter to the group.
hmmm ... counters are not assigned to groups. How about:
"Assign a hardware counter to event @evtid of group @rdtgrp"?
> + * Counter will be assigned to all the domains if rdt_mon_domain is NULL
> + * else the counter will be allocated to specific domain.
"will be allocated to" -> "will be assigned to"?
> + */
> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
> +{
> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
> + int cntr_id = rdtgrp->mon.cntr_id[index];
> + int ret;
> +
> + /*
> + * Allocate a new counter id to the event if the counter is not
> + * assigned already.
> + */
> + if (cntr_id == MON_CNTR_UNSET) {
> + cntr_id = mbm_cntr_alloc(r);
> + if (cntr_id < 0) {
> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
> + return -ENOSPC;
> + }
> + rdtgrp->mon.cntr_id[index] = cntr_id;
> + }
> +
> + if (!d) {
> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
> + rdtgrp->closid, cntr_id, true);
> + if (ret)
> + goto out_done_assign;
> +
> + set_bit(cntr_id, d->mbm_cntr_map);
The code pattern above is repeated four times in this work, twice in
rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
duplication should be avoided. It can be done in a function that also resets
the architectural state.
> + }
> + } else {
> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
> + rdtgrp->closid, cntr_id, true);
> + if (ret)
> + goto out_done_assign;
> +
> + set_bit(cntr_id, d->mbm_cntr_map);
> + }
> +
> +out_done_assign:
Should a newly allocated counter not be freed if it could not be configured?
> + return ret;
> +}
> +
> /* rdtgroup information files for one cache resource. */
> static struct rftype res_common_files[] = {
> {
Reinette
Hi Reinette,
On 10/15/2024 10:25 PM, Reinette Chatre wrote:
> Hi Babu,
>
> On 10/9/24 10:39 AM, Babu Moger wrote:
>> The mbm_cntr_assign mode offers several hardware counters that can be
>> assigned to an RMID-event pair and monitor the bandwidth as long as it
>
> repeated nit (to be consistent): RMID, event
Sure.
>
>> is assigned.
>>
>> Counters are managed at two levels. The global assignment is tracked
>> using the mbm_cntr_free_map field in the struct resctrl_mon, while
>> domain-specific assignments are tracked using the mbm_cntr_map field
>> in the struct rdt_mon_domain. Allocation begins at the global level
>> and is then applied individually to each domain.
>>
>> Introduce an interface to allocate these counters and update the
>> corresponding domains accordingly.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>> v8: Renamed rdtgroup_assign_cntr() to rdtgroup_assign_cntr_event().
>> Added the code to return the error if rdtgroup_assign_cntr_event fails.
>> Moved definition of MBM_EVENT_ARRAY_INDEX to resctrl/internal.h.
>> Updated typo in the comments.
>>
>> v7: New patch. Moved all the FS code here.
>> Merged rdtgroup_assign_cntr and rdtgroup_alloc_cntr.
>> Added new #define MBM_EVENT_ARRAY_INDEX.
>> ---
>> arch/x86/kernel/cpu/resctrl/internal.h | 9 +++++
>> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 47 ++++++++++++++++++++++++++
>> 2 files changed, 56 insertions(+)
>>
>> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
>> index 6d4df0490186..900e18aea2c4 100644
>> --- a/arch/x86/kernel/cpu/resctrl/internal.h
>> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
>> @@ -67,6 +67,13 @@
>>
>> #define MON_CNTR_UNSET U32_MAX
>>
>> +/*
>> + * Get the counter index for the assignable counter
>> + * 0 for evtid == QOS_L3_MBM_TOTAL_EVENT_ID
>> + * 1 for evtid == QOS_L3_MBM_LOCAL_EVENT_ID
>> + */
>> +#define MBM_EVENT_ARRAY_INDEX(_event) ((_event) - 2)
>> +
>
> This can be moved to the patch that introduces and initializes the array, and used there.
Sure.
>
>> /**
>> * cpumask_any_housekeeping() - Choose any CPU in @mask, preferring those that
>> * aren't marked nohz_full
>> @@ -708,6 +715,8 @@ unsigned int mon_event_config_index_get(u32 evtid);
>> int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
>> enum resctrl_event_id evtid, u32 rmid, u32 closid,
>> u32 cntr_id, bool assign);
>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid);
>> void rdt_staged_configs_clear(void);
>> bool closid_allocated(unsigned int closid);
>> int resctrl_find_cleanest_closid(void);
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index 4ab1a18010c9..e4f628e6fe65 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -1898,6 +1898,53 @@ int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
>> return 0;
>> }
>>
>> +/*
>> + * Assign a hardware counter to the group.
>
> hmmm ... counters are not assigned to groups. How about:
> "Assign a hardware counter to event @evtid of group @rdtgrp"?
Sure.
>
>> + * Counter will be assigned to all the domains if rdt_mon_domain is NULL
>> + * else the counter will be allocated to specific domain.
>
> "will be allocated to" -> "will be assigned to"?
Sure.
>
>> + */
>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
>> +{
>> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
>> + int cntr_id = rdtgrp->mon.cntr_id[index];
>> + int ret;
>> +
>> + /*
>> + * Allocate a new counter id to the event if the counter is not
>> + * assigned already.
>> + */
>> + if (cntr_id == MON_CNTR_UNSET) {
>> + cntr_id = mbm_cntr_alloc(r);
>> + if (cntr_id < 0) {
>> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
>> + return -ENOSPC;
>> + }
>> + rdtgrp->mon.cntr_id[index] = cntr_id;
>> + }
>> +
>> + if (!d) {
>> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>> + rdtgrp->closid, cntr_id, true);
>> + if (ret)
>> + goto out_done_assign;
>> +
>> + set_bit(cntr_id, d->mbm_cntr_map);
>
> The code pattern above is repeated four times in this work, twice in
> rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
> duplication should be avoided. It can be done in a function that also resets
> the architectural state.
Are you suggesting to combine rdtgroup_assign_cntr_event() and
rdtgroup_unassign_cntr_event()?
It can be done. We need a flag to tell if it is an assign or unassign.
>
>> + }
>> + } else {
>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>> + rdtgrp->closid, cntr_id, true);
>> + if (ret)
>> + goto out_done_assign;
>> +
>> + set_bit(cntr_id, d->mbm_cntr_map);
>> + }
>> +
>> +out_done_assign:
>
> Should a newly allocated counter not be freed if it could not be configured?
Yes.
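For illustration, the error path could then look something like this (a
sketch only; mbm_cntr_free() is assumed as the counterpart of
mbm_cntr_alloc(), and the hypothetical new_cntr flag marks a counter that
was newly allocated in this call):

out_done_assign:
	if (ret && new_cntr) {
		/*
		 * Sketch: release the counter allocated above so it is not
		 * leaked when configuration fails. mbm_cntr_free() and the
		 * new_cntr flag are assumed here for illustration.
		 */
		mbm_cntr_free(r, cntr_id);
		rdtgrp->mon.cntr_id[index] = MON_CNTR_UNSET;
	}
	return ret;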
>
>> + return ret;
>> +}
>> +
>> /* rdtgroup information files for one cache resource. */
>> static struct rftype res_common_files[] = {
>> {
>
> Reinette
>
--
- Babu Moger
Hi Babu,
On 10/17/24 3:56 PM, Moger, Babu wrote:
> On 10/15/2024 10:25 PM, Reinette Chatre wrote:
>> On 10/9/24 10:39 AM, Babu Moger wrote:
>>> + */
>>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
>>> +{
>>> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
>>> + int cntr_id = rdtgrp->mon.cntr_id[index];
>>> + int ret;
>>> +
>>> + /*
>>> + * Allocate a new counter id to the event if the counter is not
>>> + * assigned already.
>>> + */
>>> + if (cntr_id == MON_CNTR_UNSET) {
>>> + cntr_id = mbm_cntr_alloc(r);
>>> + if (cntr_id < 0) {
>>> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
>>> + return -ENOSPC;
>>> + }
>>> + rdtgrp->mon.cntr_id[index] = cntr_id;
>>> + }
>>> +
>>> + if (!d) {
>>> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
>>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>>> + rdtgrp->closid, cntr_id, true);
>>> + if (ret)
>>> + goto out_done_assign;
>>> +
>>> + set_bit(cntr_id, d->mbm_cntr_map);
>>
>> The code pattern above is repeated four times in this work, twice in
>> rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
>> duplication should be avoided. It can be done in a function that also resets
>> the architectural state.
>
> Are you suggesting to combine rdtgroup_assign_cntr_event() and rdtgroup_unassign_cntr_event()?
No. My comment was about the following pattern that is repeated four times:
...
ret = resctrl_arch_config_cntr(...)
if (ret)
...
set_bit()/clear_bit()
...
> It can be done. We need a flag to tell if it is an assign or unassign.
There is already a flag that is used by resctrl_arch_config_cntr(); the same parameters
as resctrl_arch_config_cntr() can be used for a wrapper that just calls
resctrl_arch_config_cntr() directly and uses that same flag to
select between set_bit() and clear_bit(). This wrapper can then also include
the reset of architectural state.
Also, I do not think we need atomic bitops here, so these can be __set_bit()
and __clear_bit(), which also matches how bits of mbm_cntr_free_map are managed
earlier in the series.
Reinette
Hi Reinette,
On 10/18/24 10:59, Reinette Chatre wrote:
> Hi Babu,
>
> On 10/17/24 3:56 PM, Moger, Babu wrote:
>> On 10/15/2024 10:25 PM, Reinette Chatre wrote:
>>> On 10/9/24 10:39 AM, Babu Moger wrote:
>
>>>> + */
>>>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>>>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
>>>> +{
>>>> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
>>>> + int cntr_id = rdtgrp->mon.cntr_id[index];
>>>> + int ret;
>>>> +
>>>> + /*
>>>> + * Allocate a new counter id to the event if the counter is not
>>>> + * assigned already.
>>>> + */
>>>> + if (cntr_id == MON_CNTR_UNSET) {
>>>> + cntr_id = mbm_cntr_alloc(r);
>>>> + if (cntr_id < 0) {
>>>> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
>>>> + return -ENOSPC;
>>>> + }
>>>> + rdtgrp->mon.cntr_id[index] = cntr_id;
>>>> + }
>>>> +
>>>> + if (!d) {
>>>> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
>>>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>>>> + rdtgrp->closid, cntr_id, true);
>>>> + if (ret)
>>>> + goto out_done_assign;
>>>> +
>>>> + set_bit(cntr_id, d->mbm_cntr_map);
>>>
>>> The code pattern above is repeated four times in this work, twice in
>>> rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
>>> duplication should be avoided. It can be done in a function that also resets
>>> the architectural state.
>>
>> Are you suggesting to combine rdtgroup_assign_cntr_event() and rdtgroup_unassign_cntr_event()?
>
> No. My comment was about the following pattern that is repeated four times:
> ...
> ret = resctrl_arch_config_cntr(...)
> if (ret)
> ...
> set_bit()/clear_bit()
> ...
>
ok.
>> It can be done. We need a flag to tell if it is an assign or unassign.
>
> There is already a flag that is used by resctrl_arch_config_cntr(); the same parameters
> as resctrl_arch_config_cntr() can be used for a wrapper that just calls
> resctrl_arch_config_cntr() directly and uses that same flag to
> select between set_bit() and clear_bit(). This wrapper can then also include
> the reset of architectural state.
OK, got it. It will look like this.
+/*
+ * Wrapper to configure the counter in a domain.
+ */
+static int rdtgroup_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
+				enum resctrl_event_id evtid, u32 rmid, u32 closid,
+				u32 cntr_id, bool assign)
+{
+ int ret;
+
+	ret = resctrl_arch_config_cntr(r, d, evtid, rmid, closid, cntr_id,
+				       assign);
+ if (ret)
+ return ret;
+
+ if (assign)
+ __set_bit(cntr_id, d->mbm_cntr_map);
+ else
+ __clear_bit(cntr_id, d->mbm_cntr_map);
+
+ /*
+ * Reset the architectural state so that reading of hardware
+ * counter is not considered as an overflow in next update.
+ */
+ resctrl_arch_reset_rmid(r, d, closid, rmid, evtid);
+
+ return ret;
+}
+
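With this wrapper, the repeated pattern in rdtgroup_assign_cntr_event()
collapses to single calls. A sketch of the assign path:

	if (!d) {
		list_for_each_entry(d, &r->mon_domains, hdr.list) {
			ret = rdtgroup_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
						   rdtgrp->closid, cntr_id, true);
			if (ret)
				goto out_done_assign;
		}
	} else {
		ret = rdtgroup_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
					   rdtgrp->closid, cntr_id, true);
	}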
>
> Also, I do not think we need atomic bitops here, so these can be __set_bit()
> and __clear_bit(), which also matches how bits of mbm_cntr_free_map are managed
> earlier in the series.
>
--
Thanks
Babu Moger
Hi Babu,
On 10/21/24 7:40 AM, Moger, Babu wrote:
> On 10/18/24 10:59, Reinette Chatre wrote:
>> On 10/17/24 3:56 PM, Moger, Babu wrote:
>>> On 10/15/2024 10:25 PM, Reinette Chatre wrote:
>>>> On 10/9/24 10:39 AM, Babu Moger wrote:
>>
>>>>> + */
>>>>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>>>>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
>>>>> +{
>>>>> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
>>>>> + int cntr_id = rdtgrp->mon.cntr_id[index];
>>>>> + int ret;
>>>>> +
>>>>> + /*
>>>>> + * Allocate a new counter id to the event if the counter is not
>>>>> + * assigned already.
>>>>> + */
>>>>> + if (cntr_id == MON_CNTR_UNSET) {
>>>>> + cntr_id = mbm_cntr_alloc(r);
>>>>> + if (cntr_id < 0) {
>>>>> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
>>>>> + return -ENOSPC;
>>>>> + }
>>>>> + rdtgrp->mon.cntr_id[index] = cntr_id;
>>>>> + }
>>>>> +
>>>>> + if (!d) {
>>>>> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
>>>>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>>>>> + rdtgrp->closid, cntr_id, true);
>>>>> + if (ret)
>>>>> + goto out_done_assign;
>>>>> +
>>>>> + set_bit(cntr_id, d->mbm_cntr_map);
>>>>
>>>> The code pattern above is repeated four times in this work, twice in
>>>> rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
>>>> duplication should be avoided. It can be done in a function that also resets
>>>> the architectural state.
>>>
>>> Are you suggesting to combine rdtgroup_assign_cntr_event() and rdtgroup_unassign_cntr_event()?
>>
>> No. My comment was about the following pattern that is repeated four times:
>> ...
>> ret = resctrl_arch_config_cntr(...)
>> if (ret)
>> ...
>> set_bit()/clear_bit()
>> ...
>>
>
> ok.
>
>
>>> It can be done. We need a flag to tell if it is an assign or unassign.
>>
>> There is already a flag that is used by resctrl_arch_config_cntr(); the same parameters
>> as resctrl_arch_config_cntr() can be used for a wrapper that just calls
>> resctrl_arch_config_cntr() directly and uses that same flag to
>> select between set_bit() and clear_bit(). This wrapper can then also include
>> the reset of architectural state.
>
> OK, got it. It will look like this.
>
>
> +/*
> + * Wrapper to configure the counter in a domain.
> + */
Please replace comment with a description of what the function does.
> +static int rdtgroup_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
While it keeps being a challenge to get naming right I do think this
can start by replacing "rdtgroup" with "resctrl" (specifically,
"rdtgroup_config_cntr() -> resctrl_config_cntr()") because, as seen
with the parameters passed, this has nothing to do with rdtgroup.
> +				enum resctrl_event_id evtid, u32 rmid, u32 closid,
> +				u32 cntr_id, bool assign)
> +{
> + int ret;
> +
> +	ret = resctrl_arch_config_cntr(r, d, evtid, rmid, closid, cntr_id,
> +				       assign);
> + if (ret)
> + return ret;
> +
> + if (assign)
> + __set_bit(cntr_id, d->mbm_cntr_map);
> + else
> + __clear_bit(cntr_id, d->mbm_cntr_map);
> +
> + /*
> + * Reset the architectural state so that reading of hardware
> + * counter is not considered as an overflow in next update.
> + */
> + resctrl_arch_reset_rmid(r, d, closid, rmid, evtid);
> +
> + return ret;
> +}
> +
Yes, this looks good. Thank you.
Reinette
Hi Reinette,
On 10/21/2024 10:31 AM, Reinette Chatre wrote:
> Hi Babu,
>
> On 10/21/24 7:40 AM, Moger, Babu wrote:
>> On 10/18/24 10:59, Reinette Chatre wrote:
>>> On 10/17/24 3:56 PM, Moger, Babu wrote:
>>>> On 10/15/2024 10:25 PM, Reinette Chatre wrote:
>>>>> On 10/9/24 10:39 AM, Babu Moger wrote:
>>>
>>>>>> + */
>>>>>> +int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
>>>>>> + struct rdt_mon_domain *d, enum resctrl_event_id evtid)
>>>>>> +{
>>>>>> + int index = MBM_EVENT_ARRAY_INDEX(evtid);
>>>>>> + int cntr_id = rdtgrp->mon.cntr_id[index];
>>>>>> + int ret;
>>>>>> +
>>>>>> + /*
>>>>>> + * Allocate a new counter id to the event if the counter is not
>>>>>> + * assigned already.
>>>>>> + */
>>>>>> + if (cntr_id == MON_CNTR_UNSET) {
>>>>>> + cntr_id = mbm_cntr_alloc(r);
>>>>>> + if (cntr_id < 0) {
>>>>>> + rdt_last_cmd_puts("Out of MBM assignable counters\n");
>>>>>> + return -ENOSPC;
>>>>>> + }
>>>>>> + rdtgrp->mon.cntr_id[index] = cntr_id;
>>>>>> + }
>>>>>> +
>>>>>> + if (!d) {
>>>>>> + list_for_each_entry(d, &r->mon_domains, hdr.list) {
>>>>>> + ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
>>>>>> + rdtgrp->closid, cntr_id, true);
>>>>>> + if (ret)
>>>>>> + goto out_done_assign;
>>>>>> +
>>>>>> + set_bit(cntr_id, d->mbm_cntr_map);
>>>>>
>>>>> The code pattern above is repeated four times in this work, twice in
>>>>> rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
>>>>> duplication should be avoided. It can be done in a function that also resets
>>>>> the architectural state.
>>>>
>>>> Are you suggesting to combine rdtgroup_assign_cntr_event() and rdtgroup_unassign_cntr_event()?
>>>
>>> No. My comment was about the following pattern that is repeated four times:
>>> ...
>>> ret = resctrl_arch_config_cntr(...)
>>> if (ret)
>>> ...
>>> set_bit()/clear_bit()
>>> ...
>>>
>>
>> ok.
>>
>>
>>>> It can be done. We need a flag to tell if it is an assign or unassign.
>>>
>>> There is already a flag that is used by resctrl_arch_config_cntr(); the same parameters
>>> as resctrl_arch_config_cntr() can be used for a wrapper that just calls
>>> resctrl_arch_config_cntr() directly and uses that same flag to
>>> select between set_bit() and clear_bit(). This wrapper can then also include
>>> the reset of architectural state.
>>
>> OK, got it. It will look like this.
>>
>>
>> +/*
>> + * Wrapper to configure the counter in a domain.
>> + */
>
> Please replace comment with a description of what the function does.
sure.
>
>> +static int rdtgroup_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
>
> While it keeps being a challenge to get naming right I do think this
> can start by replacing "rdtgroup" with "resctrl" (specifically,
> "rdtgroup_config_cntr() -> resctrl_config_cntr()") because, as seen
> with the parameters passed, this has nothing to do with rdtgroup.
Sure.
>
>> +				enum resctrl_event_id evtid, u32 rmid, u32 closid,
>> +				u32 cntr_id, bool assign)
>> +{
>> + int ret;
>> +
>> +	ret = resctrl_arch_config_cntr(r, d, evtid, rmid, closid, cntr_id,
>> +				       assign);
>> + if (ret)
>> + return ret;
>> +
>> + if (assign)
>> + __set_bit(cntr_id, d->mbm_cntr_map);
>> + else
>> + __clear_bit(cntr_id, d->mbm_cntr_map);
>> +
>> + /*
>> + * Reset the architectural state so that reading of hardware
>> + * counter is not considered as an overflow in next update.
>> + */
>> + resctrl_arch_reset_rmid(r, d, closid, rmid, evtid);
>> +
>> + return ret;
>> +}
>> +
>
> Yes, this looks good. Thank you.
>
Thanks-
- Babu Moger