The io_alloc feature in resctrl enables system software to configure
the portion of the cache allocated for I/O traffic.
Add "io_alloc_cbm" resctrl file to display CBMs (Capacity Bit Mask) of
io_alloc feature.
The CBM interface file io_alloc_cbm resides in the info directory
(e.g., /sys/fs/resctrl/info/L3/). Displaying the resource name is not
necessary. Pass the resource name to show_doms() and print it only if
the name is valid. For io_alloc, pass NULL to suppress printing the
resource name.
When CDP is enabled, io_alloc routes traffic using the highest CLOSID
associated with the L3CODE resource. To ensure consistent cache allocation
behavior, the L3CODE and L3DATA resources must remain synchronized.
rdtgroup_init_cat() function takes both L3CODE and L3DATA into account when
initializing CBMs for new groups. The io_alloc feature adheres to this
same principle, meaning that the Cache Bit Masks (CBMs) accessed through
either L3CODE or L3DATA will reflect identical values.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
v8: Updated the changelog.
Moved resctrl_io_alloc_cbm_show() to fs/resctrl/ctrlmondata.c.
show_doms() remains static with this change.
v7: Updated changelog.
Updated user doc (resctrl.rst).
Removed if (io_alloc_closid < 0) check. Not required anymore.
v6: Added "io_alloc_cbm" details in user doc resctrl.rst.
Resource name is not printed in CBM now. Corrected the texts about it
in resctrl.rst.
v5: Resolved conflicts due to recent resctrl FS/ARCH code restructure.
Updated show_doms() to print the resource name only if it is valid. Pass NULL
when printing the io_alloc CBM.
Changed the code to access the CBMs via either L3CODE or L3DATA resources.
v4: Updated the change log.
Added rdtgroup_mutex before rdt_last_cmd_puts().
Returned -ENODEV when resource type is CDP_DATA.
Kept the resource name while printing the CBM (L3:0=fff) so that
I don't have to change show_doms() just for this feature and it is
consistent across all the schemata displays.
v3: Minor changes due to changes in resctrl_arch_get_io_alloc_enabled()
and resctrl_io_alloc_closid_get().
Added the check to verify CDP resource type.
Updated the commit log.
v2: Fixed to display only on L3 resources.
Added the locks while processing.
Renamed the display to io_alloc_cbm (from sdciae_cmd).
---
Documentation/filesystems/resctrl.rst | 19 +++++++++++++
fs/resctrl/ctrlmondata.c | 39 ++++++++++++++++++++++++---
fs/resctrl/internal.h | 3 +++
fs/resctrl/rdtgroup.c | 11 +++++++-
4 files changed, 68 insertions(+), 4 deletions(-)
diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
index bd0a633afbb9..3002f7fdb2fe 100644
--- a/Documentation/filesystems/resctrl.rst
+++ b/Documentation/filesystems/resctrl.rst
@@ -173,6 +173,25 @@ related to allocation:
available for general (CPU) cache allocation for both the L3CODE and
L3DATA resources.
+"io_alloc_cbm":
+ CBMs (Capacity Bit Masks) that describe the portions of cache instances
+ to which I/O traffic from supported I/O devices is routed when "io_alloc"
+ is enabled.
+
+ CBMs are displayed in the following format:
+
+ <cache_id0>=<cbm>;<cache_id1>=<cbm>;...
+
+ Example::
+
+ # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
+ 0=ffff;1=ffff
+
+ When CDP is enabled "io_alloc_cbm" associated with the DATA and CODE
+ resources may reflect the same values. For example, values read from and
+ written to /sys/fs/resctrl/info/L3DATA/io_alloc_cbm may be reflected by
+ /sys/fs/resctrl/info/L3CODE/io_alloc_cbm and vice versa.
+
Memory bandwidth(MB) subdirectory contains the following files
with respect to allocation:
diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
index bf982eab7b18..edb9dd131eed 100644
--- a/fs/resctrl/ctrlmondata.c
+++ b/fs/resctrl/ctrlmondata.c
@@ -381,7 +381,8 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
return ret ?: nbytes;
}
-static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int closid)
+static void show_doms(struct seq_file *s, struct resctrl_schema *schema,
+ char *resource_name, int closid)
{
struct rdt_resource *r = schema->res;
struct rdt_ctrl_domain *dom;
@@ -391,7 +392,8 @@ static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int clo
/* Walking r->domains, ensure it can't race with cpuhp */
lockdep_assert_cpus_held();
- seq_printf(s, "%*s:", max_name_width, schema->name);
+ if (resource_name)
+ seq_printf(s, "%*s:", max_name_width, resource_name);
list_for_each_entry(dom, &r->ctrl_domains, hdr.list) {
if (sep)
seq_puts(s, ";");
@@ -437,7 +439,7 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
closid = rdtgrp->closid;
list_for_each_entry(schema, &resctrl_schema_all, list) {
if (closid < schema->num_closid)
- show_doms(s, schema, closid);
+ show_doms(s, schema, schema->name, closid);
}
}
} else {
@@ -822,3 +824,34 @@ ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
return ret ?: nbytes;
}
+
+int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq, void *v)
+{
+ struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
+ struct rdt_resource *r = s->res;
+ int ret = 0;
+
+ cpus_read_lock();
+ mutex_lock(&rdtgroup_mutex);
+
+ rdt_last_cmd_clear();
+
+ if (!r->cache.io_alloc_capable) {
+ rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
+ ret = -ENODEV;
+ goto out_unlock;
+ }
+
+ if (!resctrl_arch_get_io_alloc_enabled(r)) {
+ rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ show_doms(seq, s, NULL, resctrl_io_alloc_closid(r));
+
+out_unlock:
+ mutex_unlock(&rdtgroup_mutex);
+ cpus_read_unlock();
+ return ret;
+}
diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
index 335def7af1f6..49934cd3dc40 100644
--- a/fs/resctrl/internal.h
+++ b/fs/resctrl/internal.h
@@ -389,6 +389,9 @@ enum resctrl_conf_type resctrl_peer_type(enum resctrl_conf_type my_type);
ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off);
+int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq,
+ void *v);
+
const char *rdtgroup_name_by_closid(int closid);
#ifdef CONFIG_RESCTRL_FS_PSEUDO_LOCK
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index 380ebc86c748..6af8ff6c8385 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -1917,6 +1917,12 @@ static struct rftype res_common_files[] = {
.seq_show = resctrl_io_alloc_show,
.write = resctrl_io_alloc_write,
},
+ {
+ .name = "io_alloc_cbm",
+ .mode = 0444,
+ .kf_ops = &rdtgroup_kf_single_ops,
+ .seq_show = resctrl_io_alloc_cbm_show,
+ },
{
.name = "max_threshold_occupancy",
.mode = 0644,
@@ -2095,9 +2101,12 @@ static void io_alloc_init(void)
{
struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
- if (r->cache.io_alloc_capable)
+ if (r->cache.io_alloc_capable) {
resctrl_file_fflags_init("io_alloc", RFTYPE_CTRL_INFO |
RFTYPE_RES_CACHE);
+ resctrl_file_fflags_init("io_alloc_cbm",
+ RFTYPE_CTRL_INFO | RFTYPE_RES_CACHE);
+ }
}
void resctrl_file_fflags_init(const char *config, unsigned long fflags)
--
2.34.1
Hi Babu,

On 8/5/25 4:30 PM, Babu Moger wrote:
> The io_alloc feature in resctrl enables system software to configure
> the portion of the cache allocated for I/O traffic.
>
> Add "io_alloc_cbm" resctrl file to display CBMs (Capacity Bit Mask) of
> io_alloc feature.

This is a bit vague. How about:

	Add "io_alloc_cbm" resctrl file to display the Capacity Bit Masks
	(CBMs) that represent the portion of each cache instance allocated
	for I/O traffic.

>
> The CBM interface file io_alloc_cbm resides in the info directory
> (e.g., /sys/fs/resctrl/info/L3/). Displaying the resource name is not
> necessary. Pass the resource name to show_doms() and print it only if

"Displaying the resource name is not necessary." -> "Since the
resource name is part of the path it is not necessary to display the
resource name as done in the schemata file."?

> the name is valid. For io_alloc, pass NULL to suppress printing the
> resource name.
>
> When CDP is enabled, io_alloc routes traffic using the highest CLOSID
> associated with the L3CODE resource. To ensure consistent cache allocation
> behavior, the L3CODE and L3DATA resources must remain synchronized.

"must remain synchronized" -> "are kept in sync"

> rdtgroup_init_cat() function takes both L3CODE and L3DATA into account when

I do not understand this part. rdtgroup_init_cat() is part of the current
implementation and it takes L3CODE and L3DATA of _other_ CLOSIDs into
account when determining what CBM to initialize a new CLOSID with. How is
that relevant here? I wonder if you are not perhaps trying to say:

	"resctrl_io_alloc_init_cbm() initializes L3CODE and L3DATA of
	highest CLOSID with the same CBM."

I do not think this is necessary to include here though since this is
what the previous patch does and just saying that L3CODE and L3DATA are
kept in sync is sufficient here.

> initializing CBMs for new groups. The io_alloc feature adheres to this
> same principle, meaning that the Cache Bit Masks (CBMs) accessed through
> either L3CODE or L3DATA will reflect identical values.

I do not understand what you are trying to say here. What do you mean with
"same principle"? The fact that L3CODE and L3DATA are kept in sync is
part of io_alloc only, no?

>
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---

...

> ---

...

> +int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq, void *v)
> +{
> +	struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
> +	struct rdt_resource *r = s->res;
> +	int ret = 0;
> +
> +	cpus_read_lock();
> +	mutex_lock(&rdtgroup_mutex);
> +
> +	rdt_last_cmd_clear();
> +
> +	if (!r->cache.io_alloc_capable) {
> +		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
> +		ret = -ENODEV;
> +		goto out_unlock;
> +	}
> +
> +	if (!resctrl_arch_get_io_alloc_enabled(r)) {
> +		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
> +		ret = -EINVAL;
> +		goto out_unlock;
> +	}
> +

Could you please add a comment here that explains to the reader that
CBMs of L3CODE and L3DATA are kept in sync elsewhere and the io_alloc
CBMs displayed from either CDP resource are thus identical and
accurately reflect the CBMs used for I/O.

> +	show_doms(seq, s, NULL, resctrl_io_alloc_closid(r));
> +
> +out_unlock:
> +	mutex_unlock(&rdtgroup_mutex);
> +	cpus_read_unlock();
> +	return ret;
> +}

Reinette
Hi Reinette,

On 8/7/25 20:51, Reinette Chatre wrote:
> Hi Babu,
>
> On 8/5/25 4:30 PM, Babu Moger wrote:
>> The io_alloc feature in resctrl enables system software to configure
>> the portion of the cache allocated for I/O traffic.
>>
>> Add "io_alloc_cbm" resctrl file to display CBMs (Capacity Bit Mask) of
>> io_alloc feature.
>
> This is a bit vague. How about:
>	Add "io_alloc_cbm" resctrl file to display the Capacity Bit Masks
>	(CBMs) that represent the portion of each cache instance allocated
>	for I/O traffic.

Sure.

>>
>> The CBM interface file io_alloc_cbm resides in the info directory
>> (e.g., /sys/fs/resctrl/info/L3/). Displaying the resource name is not
>> necessary. Pass the resource name to show_doms() and print it only if
>
> "Displaying the resource name is not necessary." -> "Since the
> resource name is part of the path it is not necessary to display the
> resource name as done in the schemata file."?

Sure.

>
>> the name is valid. For io_alloc, pass NULL to suppress printing the
>> resource name.
>>
>> When CDP is enabled, io_alloc routes traffic using the highest CLOSID
>> associated with the L3CODE resource. To ensure consistent cache allocation
>> behavior, the L3CODE and L3DATA resources must remain synchronized.
>
> "must remain synchronized" -> "are kept in sync"

Sure.

>
>> rdtgroup_init_cat() function takes both L3CODE and L3DATA into account when
>
> I do not understand this part. rdtgroup_init_cat() is part of the current
> implementation and it takes L3CODE and L3DATA of _other_ CLOSIDs into
> account when determining what CBM to initialize a new CLOSID with. How is
> that relevant here? I wonder if you are not perhaps trying to say:
>	"resctrl_io_alloc_init_cbm() initializes L3CODE and L3DATA of
>	highest CLOSID with the same CBM."
> I do not think this is necessary to include here though since this is
> what the previous patch does and just saying that L3CODE and L3DATA are
> kept in sync is sufficient here.

Ok. Sounds good.

>
>> initializing CBMs for new groups. The io_alloc feature adheres to this
>> same principle, meaning that the Cache Bit Masks (CBMs) accessed through
>> either L3CODE or L3DATA will reflect identical values.
>
> I do not understand what you are trying to say here. What do you mean with
> "same principle"? The fact that L3CODE and L3DATA are kept in sync is
> part of io_alloc only, no?

Yes. That is correct. I will remove that text.

>
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>
> ...
>
>> ---
>
> ...
>
>> +int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq, void *v)
>> +{
>> +	struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
>> +	struct rdt_resource *r = s->res;
>> +	int ret = 0;
>> +
>> +	cpus_read_lock();
>> +	mutex_lock(&rdtgroup_mutex);
>> +
>> +	rdt_last_cmd_clear();
>> +
>> +	if (!r->cache.io_alloc_capable) {
>> +		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
>> +		ret = -ENODEV;
>> +		goto out_unlock;
>> +	}
>> +
>> +	if (!resctrl_arch_get_io_alloc_enabled(r)) {
>> +		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
>> +		ret = -EINVAL;
>> +		goto out_unlock;
>> +	}
>> +
>
> Could you please add a comment here that explains to the reader that CBMs of
> L3CODE and L3DATA are kept in sync elsewhere and the io_alloc CBMs displayed from
> either CDP resource are thus identical and accurately reflect the CBMs used
> for I/O.

Sure.

>
>> +	show_doms(seq, s, NULL, resctrl_io_alloc_closid(r));
>> +
>> +out_unlock:
>> +	mutex_unlock(&rdtgroup_mutex);
>> +	cpus_read_unlock();
>> +	return ret;
>> +}
>
> Reinette

--
Thanks
Babu Moger