[PATCH v14 27/32] x86,fs/resctrl: Auto assign/unassign counters on mkdir

Posted by Babu Moger 3 months, 4 weeks ago
Resctrl provides a user-configurable option mbm_assign_on_mkdir that
determines if a counter will automatically be assigned to an RMID, event
pair when its associated monitor group is created via mkdir.

Enable mbm_assign_on_mkdir by default and automatically assign or unassign
counters when a resctrl group is created or deleted.

By default, each group requires two counters: one for the MBM total event
and one for the MBM local event.

If the counters are exhausted, the kernel will log the error message
"Unable to allocate counter in domain" in
/sys/fs/resctrl/info/last_cmd_status when a new group is created and the
counter assignment will fail. However, the creation of a group should not
fail due to assignment failures. Users have the flexibility to modify the
assignments at a later time.
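
For illustration, a minimal sketch of this contract, assuming the per-event
assignment helper introduced earlier in the series surfaces exhaustion through
rdt_last_cmd_puts() and returns an error that the mkdir path deliberately
ignores. The helper and allocator names below are placeholders, not the actual
functions from this series:

/*
 * Placeholder sketch only: log counter exhaustion to
 * /sys/fs/resctrl/info/last_cmd_status and return an error that the
 * mkdir path does not propagate, so group creation still succeeds.
 */
static int example_assign_cntr_event(struct rdt_resource *r,
				     struct rdtgroup *rdtgrp,
				     enum resctrl_event_id evtid)
{
	int cntr_id = example_alloc_cntr(r);	/* hypothetical allocator */

	if (cntr_id < 0) {
		rdt_last_cmd_puts("Unable to allocate counter in domain\n");
		return -ENOSPC;
	}

	/* Program cntr_id for rdtgrp->mon.rmid and evtid here. */
	return 0;
}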

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
v14: Updated the changelog for the renamed mbm_event.
     Updated code comments for the renamed mbm_event.
     Changed the code to reflect Tony's struct mon_evt changes.

v13: Changes due to the new calling convention of resctrl_assign_cntr_event() and resctrl_unassign_cntr_event().
     They now take only the evtid; evt_cfg is not required anymore.
     Resolved conflicts caused by the recent FS/ARCH code restructure.
     The monitor.c/rdtgroup.c files have been split between the FS and ARCH directories.

v12: Removed mbm_cntr_reset() as it is not required while removing the group.
     Updated the commit text.
     Added an r->mon_capable check in rdtgroup_assign_cntrs() and rdtgroup_unassign_cntrs().

v11: Moved mbm_cntr_reset() to monitor.c.
     Added code to reset non-architectural state in mbm_cntr_reset().
     Added missing rdtgroup_unassign_cntrs() calls on failure path.

v10: Assigned the counter before exposing the event files.
     Moved the rdtgroup_assign_cntrs() call inside mkdir_rdt_prepare_rmid_alloc().
     It is called for both CTRL_MON and MON group creation.
     Called mbm_cntr_reset() on unmount to clear all the assignments.
     Took care of a few other feedback comments.

v9: Changed rdtgroup_assign_cntrs() and rdtgroup_unassign_cntrs() to return void.
    Updated a couple of rdtgroup_unassign_cntrs() calls.
    Updated function comments.

v8: Renamed rdtgroup_assign_grp to rdtgroup_assign_cntrs.
    Renamed rdtgroup_unassign_grp to rdtgroup_unassign_cntrs.
    Fixed the problem with unassigning the child MON groups of CTRL_MON group.

v7: Reworded the commit message.
    Removed the reference of ABMC with mbm_cntr_assign.
    Renamed the function rdtgroup_assign_cntrs to rdtgroup_assign_grp.

v6: Removed the redundant comments on all the calls of
    rdtgroup_assign_cntrs. Updated the commit message.
    Dropped printing an error message on every call of rdtgroup_assign_cntrs.

v5: Removed the code to enable/disable ABMC during the mount.
    That will be another patch.
    Added arch callers to get the arch specific data.
    Renamed functions to match the other ABMC function names.
    Added code comments for assignment failures.

v4: Few name changes based on the upstream discussion.
    Commit message update.

v3: This is a new patch. It addresses the upstream comment to enable the
    ABMC feature by default if the feature is available.
---
 arch/x86/kernel/cpu/resctrl/monitor.c |  1 +
 fs/resctrl/rdtgroup.c                 | 71 ++++++++++++++++++++++++++-
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index ee0aa741cf6c..053f516a8e67 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -429,6 +429,7 @@ int __init rdt_get_mon_l3_config(struct rdt_resource *r)
 		r->mon.mbm_cntr_assignable = true;
 		cpuid_count(0x80000020, 5, &eax, &ebx, &ecx, &edx);
 		r->mon.num_mbm_cntrs = (ebx & GENMASK(15, 0)) + 1;
+		r->mon.mbm_assign_on_mkdir = true;
 	}
 
 	r->mon_capable = true;
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index bf5fd46bd455..128a9db339f3 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -2945,6 +2945,55 @@ static void schemata_list_destroy(void)
 	}
 }
 
+/**
+ * rdtgroup_assign_cntrs() - Assign counters to MBM events. Called when
+ *			     a new group is created.
+ * If "mbm_event" mode is enabled, counters are automatically assigned.
+ * Each group can accommodate two counters: one for the total event and
+ * one for the local event. Assignments may fail due to the limited number
+ * of counters. However, it is not necessary to fail the group creation
+ * and thus no failure is returned. Users have the option to modify the
+ * counter assignments after the group has been created.
+ */
+static void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
+{
+	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
+
+	if (!r->mon_capable)
+		return;
+
+	if (resctrl_arch_mbm_cntr_assign_enabled(r) && !r->mon.mbm_assign_on_mkdir)
+		return;
+
+	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
+		resctrl_assign_cntr_event(r, NULL, rdtgrp,
+					  &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
+
+	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
+		resctrl_assign_cntr_event(r, NULL, rdtgrp,
+					  &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
+}
+
+/*
+ * rdtgroup_unassign_cntrs() - Unassign the counters associated with MBM events.
+ *			       Called when a group is deleted.
+ */
+static void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp)
+{
+	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
+
+	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r))
+		return;
+
+	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
+		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
+					    &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
+
+	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
+		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
+					    &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
+}
+
 static int rdt_get_tree(struct fs_context *fc)
 {
 	struct rdt_fs_context *ctx = rdt_fc2context(fc);
@@ -3001,6 +3050,8 @@ static int rdt_get_tree(struct fs_context *fc)
 		if (ret < 0)
 			goto out_info;
 
+		rdtgroup_assign_cntrs(&rdtgroup_default);
+
 		ret = mkdir_mondata_all(rdtgroup_default.kn,
 					&rdtgroup_default, &kn_mondata);
 		if (ret < 0)
@@ -3039,8 +3090,10 @@ static int rdt_get_tree(struct fs_context *fc)
 	if (resctrl_arch_mon_capable())
 		kernfs_remove(kn_mondata);
 out_mongrp:
-	if (resctrl_arch_mon_capable())
+	if (resctrl_arch_mon_capable()) {
+		rdtgroup_unassign_cntrs(&rdtgroup_default);
 		kernfs_remove(kn_mongrp);
+	}
 out_info:
 	kernfs_remove(kn_info);
 out_closid_exit:
@@ -3186,6 +3239,7 @@ static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
 
 	head = &rdtgrp->mon.crdtgrp_list;
 	list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
+		rdtgroup_unassign_cntrs(sentry);
 		free_rmid(sentry->closid, sentry->mon.rmid);
 		list_del(&sentry->mon.crdtgrp_list);
 
@@ -3226,6 +3280,8 @@ static void rmdir_all_sub(void)
 		cpumask_or(&rdtgroup_default.cpu_mask,
 			   &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
 
+		rdtgroup_unassign_cntrs(rdtgrp);
+
 		free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
 
 		kernfs_remove(rdtgrp->kn);
@@ -3310,6 +3366,7 @@ static void resctrl_fs_teardown(void)
 		return;
 
 	rmdir_all_sub();
+	rdtgroup_unassign_cntrs(&rdtgroup_default);
 	mon_put_kn_priv();
 	rdt_pseudo_lock_release();
 	rdtgroup_default.mode = RDT_MODE_SHAREABLE;
@@ -3790,9 +3847,12 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
 	}
 	rdtgrp->mon.rmid = ret;
 
+	rdtgroup_assign_cntrs(rdtgrp);
+
 	ret = mkdir_mondata_all(rdtgrp->kn, rdtgrp, &rdtgrp->mon.mon_data_kn);
 	if (ret) {
 		rdt_last_cmd_puts("kernfs subdir error\n");
+		rdtgroup_unassign_cntrs(rdtgrp);
 		free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
 		return ret;
 	}
@@ -3802,8 +3862,10 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
 
 static void mkdir_rdt_prepare_rmid_free(struct rdtgroup *rgrp)
 {
-	if (resctrl_arch_mon_capable())
+	if (resctrl_arch_mon_capable()) {
+		rdtgroup_unassign_cntrs(rgrp);
 		free_rmid(rgrp->closid, rgrp->mon.rmid);
+	}
 }
 
 /*
@@ -4079,6 +4141,9 @@ static int rdtgroup_rmdir_mon(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
 	update_closid_rmid(tmpmask, NULL);
 
 	rdtgrp->flags = RDT_DELETED;
+
+	rdtgroup_unassign_cntrs(rdtgrp);
+
 	free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
 
 	/*
@@ -4126,6 +4191,8 @@ static int rdtgroup_rmdir_ctrl(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
 	cpumask_or(tmpmask, tmpmask, &rdtgrp->cpu_mask);
 	update_closid_rmid(tmpmask, NULL);
 
+	rdtgroup_unassign_cntrs(rdtgrp);
+
 	free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
 	closid_free(rdtgrp->closid);
 
-- 
2.34.1
Re: [PATCH v14 27/32] x86,fs/resctrl: Auto assign/unassign counters on mkdir
Posted by Reinette Chatre 3 months, 2 weeks ago
Hi Babu,

On 6/13/25 2:05 PM, Babu Moger wrote:
> Resctrl provides a user-configurable option mbm_assign_on_mkdir that
> determines if a counter will automatically be assigned to an RMID, event
> pair when its associated monitor group is created via mkdir.
> 
> Enable mbm_assign_on_mkdir by default and automatically assign or unassign
> counters when a resctrl group is created or deleted.

This is a bit confusing since I do not think mbm_assign_on_mkdir has *anything*
to do with unassign of counters. Counters are always (irrespective of mbm_assign_on_mkdir)
unassigned when a resctrl group is deleted, no?

The subject also does not seem accurate since there is no unassign on
mkdir.

> 
> By default, each group requires two counters: one for the MBM total event
> and one for the MBM local event.
> 
> If the counters are exhausted, the kernel will log the error message
> "Unable to allocate counter in domain" in
> /sys/fs/resctrl/info/last_cmd_status when a new group is created and the
> counter assignment will fail. However, the creation of a group should not
> fail due to assignment failures. Users have the flexibility to modify the
> assignments at a later time.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---

...

> ---
>  arch/x86/kernel/cpu/resctrl/monitor.c |  1 +
>  fs/resctrl/rdtgroup.c                 | 71 ++++++++++++++++++++++++++-
>  2 files changed, 70 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index ee0aa741cf6c..053f516a8e67 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -429,6 +429,7 @@ int __init rdt_get_mon_l3_config(struct rdt_resource *r)
>  		r->mon.mbm_cntr_assignable = true;
>  		cpuid_count(0x80000020, 5, &eax, &ebx, &ecx, &edx);
>  		r->mon.num_mbm_cntrs = (ebx & GENMASK(15, 0)) + 1;
> +		r->mon.mbm_assign_on_mkdir = true;
>  	}
>  
>  	r->mon_capable = true;
> diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
> index bf5fd46bd455..128a9db339f3 100644
> --- a/fs/resctrl/rdtgroup.c
> +++ b/fs/resctrl/rdtgroup.c
> @@ -2945,6 +2945,55 @@ static void schemata_list_destroy(void)
>  	}
>  }
>  
> +/**
> + * rdtgroup_assign_cntrs() - Assign counters to MBM events. Called when
> + *			     a new group is created.
> + * If "mbm_event" mode is enabled, counters are automatically assigned.

"counters are automatically assigned" -> "counters should be automatically assigned
if the "mbm_assign_on_mkdir" is set"?

> + * Each group can accommodate two counters: one for the total event and
> + * one for the local event. Assignments may fail due to the limited number
> + * of counters. However, it is not necessary to fail the group creation
> + * and thus no failure is returned. Users have the option to modify the
> + * counter assignments after the group has been created.
> + */
> +static void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
> +{
> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
> +
> +	if (!r->mon_capable)
> +		return;
> +
> +	if (resctrl_arch_mbm_cntr_assign_enabled(r) && !r->mon.mbm_assign_on_mkdir)
> +		return;

This check is not clear to me. It looks to me as though counter assignment
will be attempted if !resctrl_arch_mbm_cntr_assign_enabled(r)? Perhaps
something like:
	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r) ||
	    !r->mon.mbm_assign_on_mkdir)
		return;

> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> +		resctrl_assign_cntr_event(r, NULL, rdtgrp,
> +					  &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);

Switching the namespace like this is confusing to me. rdtgroup_assign_cntrs()
has prefix rdtgroup_ to indicate it operates on a resource group. It is confusing
when it switches namespace to call resctrl_assign_cntr_event() that actually assigns
a specific event to a resource group. I think this will be easier to follow if:
resctrl_assign_cntr_event() -> rdtgroup_assign_cntr_event()

> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> +		resctrl_assign_cntr_event(r, NULL, rdtgrp,
> +					  &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
> +}
> +
> +/*
> + * rdtgroup_unassign_cntrs() - Unassign the counters associated with MBM events.
> + *			       Called when a group is deleted.
> + */
> +static void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp)
> +{
> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
> +
> +	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r))
> +		return;
> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
> +		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
> +					    &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);

same here, I think this will be easier to follow when namespace is
consistent:
resctrl_unassign_cntr_event() -> rdtgroup_unassign_cntr_event()


Also, the struct rdt_resource parameter should not be needed when
struct mon_evt is provided and resource can be obtained from mon_evt::rid.
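
A possible shape of that suggestion, as a sketch only (the exact signature is
up to the author), is for the helper to look up the resource itself; here mevt
stands for the helper's struct mon_evt parameter:

	/* Sketch: derive the resource from the event instead of passing it in. */
	struct rdt_resource *r = resctrl_arch_get_resource(mevt->rid);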

> +
> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
> +		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
> +					    &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
> +}
> +
>  static int rdt_get_tree(struct fs_context *fc)
>  {
>  	struct rdt_fs_context *ctx = rdt_fc2context(fc);


Reinette
Re: [PATCH v14 27/32] x86,fs/resctrl: Auto assign/unassign counters on mkdir
Posted by Moger, Babu 3 months, 1 week ago
Hi Reinette,

On 6/25/25 18:25, Reinette Chatre wrote:
> Hi Babu,
> 
> On 6/13/25 2:05 PM, Babu Moger wrote:
>> Resctrl provides a user-configurable option mbm_assign_on_mkdir that
>> determines if a counter will automatically be assigned to an RMID, event
>> pair when its associated monitor group is created via mkdir.
>>
>> Enable mbm_assign_on_mkdir by default and automatically assign or unassign
>> counters when a resctrl group is created or deleted.
> 
> This is a bit confusing since I do not think mbm_assign_on_mkdir has *anything*
> to do with unassign of counters. Counters are always (irrespective of mbm_assign_on_mkdir)
> unassigned when a resctrl group is deleted, no?

Yes. That is correct. Changed the text now.

> 
> The subject also does not seem accurate since there is no unassign on
> mkdir.

Changed the subject to:

x86,fs/resctrl: Auto assign counters on mkdir and clean up on group removal

> 
>>
>> By default, each group requires two counters: one for the MBM total event
>> and one for the MBM local event.
>>
>> If the counters are exhausted, the kernel will log the error message
>> "Unable to allocate counter in domain" in
>> /sys/fs/resctrl/info/last_cmd_status when a new group is created and the
>> counter assignment will fail. However, the creation of a group should not
>> fail due to assignment failures. Users have the flexibility to modify the
>> assignments at a later time.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
> 
> ...
> 
>> ---
>>  arch/x86/kernel/cpu/resctrl/monitor.c |  1 +
>>  fs/resctrl/rdtgroup.c                 | 71 ++++++++++++++++++++++++++-
>>  2 files changed, 70 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index ee0aa741cf6c..053f516a8e67 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -429,6 +429,7 @@ int __init rdt_get_mon_l3_config(struct rdt_resource *r)
>>  		r->mon.mbm_cntr_assignable = true;
>>  		cpuid_count(0x80000020, 5, &eax, &ebx, &ecx, &edx);
>>  		r->mon.num_mbm_cntrs = (ebx & GENMASK(15, 0)) + 1;
>> +		r->mon.mbm_assign_on_mkdir = true;
>>  	}
>>  
>>  	r->mon_capable = true;
>> diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
>> index bf5fd46bd455..128a9db339f3 100644
>> --- a/fs/resctrl/rdtgroup.c
>> +++ b/fs/resctrl/rdtgroup.c
>> @@ -2945,6 +2945,55 @@ static void schemata_list_destroy(void)
>>  	}
>>  }
>>  
>> +/**
>> + * rdtgroup_assign_cntrs() - Assign counters to MBM events. Called when
>> + *			     a new group is created.
>> + * If "mbm_event" mode is enabled, counters are automatically assigned.
> 
> "counters are automatically assigned" -> "counters should be automatically assigned
> if the "mbm_assign_on_mkdir" is set"?

Sure.

> 
>> + * Each group can accommodate two counters: one for the total event and
>> + * one for the local event. Assignments may fail due to the limited number
>> + * of counters. However, it is not necessary to fail the group creation
>> + * and thus no failure is returned. Users have the option to modify the
>> + * counter assignments after the group has been created.
>> + */
>> +static void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
>> +{
>> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
>> +
>> +	if (!r->mon_capable)
>> +		return;
>> +
>> +	if (resctrl_arch_mbm_cntr_assign_enabled(r) && !r->mon.mbm_assign_on_mkdir)
>> +		return;
> 
> This check is not clear to me. It looks to me as though counter assignment
> will be attempted if !resctrl_arch_mbm_cntr_assign_enabled(r)? Perhaps
> something like:
> 	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r) ||
> 	    !r->mon.mbm_assign_on_mkdir)
> 		return;
> 

Yes. Good catch.
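
With that folded in, the guard in rdtgroup_assign_cntrs() would presumably
read as below (a sketch of the agreed fix, not the posted next revision):

static void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp)
{
	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);

	/* Bail out unless assign-on-mkdir actually applies. */
	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r) ||
	    !r->mon.mbm_assign_on_mkdir)
		return;

	/* Assign counters for the total and local events as in the patch. */
}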


>> +
>> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
>> +		resctrl_assign_cntr_event(r, NULL, rdtgrp,
>> +					  &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
> 
> Switching the namespace like this is confusing to me. rdtgroup_assign_cntrs()
> has prefix rdtgroup_ to indicate it operates on a resource group. It is confusing
> when it switches namespace to call resctrl_assign_cntr_event() that actually assigns
> a specific event to a resource group. I think this will be easier to follow if:
> resctrl_assign_cntr_event() -> rdtgroup_assign_cntr_event()

Sure.

> 
>> +
>> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
>> +		resctrl_assign_cntr_event(r, NULL, rdtgrp,
>> +					  &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
>> +}
>> +
>> +/*
>> + * rdtgroup_unassign_cntrs() - Unassign the counters associated with MBM events.
>> + *			       Called when a group is deleted.
>> + */
>> +static void rdtgroup_unassign_cntrs(struct rdtgroup *rdtgrp)
>> +{
>> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
>> +
>> +	if (!r->mon_capable || !resctrl_arch_mbm_cntr_assign_enabled(r))
>> +		return;
>> +
>> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))
>> +		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
>> +					    &mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID]);
> 
> same here, I think this will be easier to follow when namespace is
> consistent:
> resctrl_unassign_cntr_event() -> rdtgroup_unassign_cntr_event()
> 

Sure.

> 
> Also, the struct rdt_resource parameter should not be needed when
> struct mon_evt is provided and resource can be obtained from mon_evt::rid.
> 
>> +
>> +	if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))
>> +		resctrl_unassign_cntr_event(r, NULL, rdtgrp,
>> +					    &mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID]);
>> +}
>> +
>>  static int rdt_get_tree(struct fs_context *fc)
>>  {
>>  	struct rdt_fs_context *ctx = rdt_fc2context(fc);
> 
> 
> Reinette
> 

-- 
Thanks
Babu Moger