This series adds support for L3 Smart Data Cache Injection Allocation
Enforcement (SDCIAE) to the resctrl infrastructure. It is referred to as
"io_alloc" in the resctrl subsystem.
Upcoming AMD hardware implements Smart Data Cache Injection (SDCI), a
mechanism that enables direct insertion of data from I/O devices into
the L3 cache. By caching I/O data directly rather than first storing it
in DRAM, SDCI reduces demands on DRAM bandwidth and reduces latency to
the processor consuming the I/O data.
The SDCIAE (SDCI Allocation Enforcement) PQE feature allows system software
to control the portion of the L3 cache used for SDCI devices.
When enabled, SDCIAE forces all SDCI lines to be placed into the L3 cache
partitions identified by the highest-supported L3_MASK_n register, where n
is the maximum supported CLOSID.
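(Illustrative sketch only, not part of the patches: assuming
resctrl_arch_get_num_closids() from include/linux/resctrl.h, the CLOSID used
for SDCI traffic -- what the series' resctrl_io_alloc_closid_get() is described
to return -- is simply the highest one the L3 resource supports.)

	static u32 io_alloc_closid(struct rdt_resource *r)
	{
		/* SDCI lines are constrained by L3_MASK_n for the highest CLOSID n. */
		return resctrl_arch_get_num_closids(r) - 1;
	}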
The feature details are documented in the APM listed below [1].
[1] AMD64 Architecture Programmer's Manual Volume 2: System Programming
Publication # 24593 Revision 3.41 section 19.4.7 L3 Smart Data Cache
Injection Allocation Enforcement (SDCIAE)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
The feature requires Linux support for TPH (TLP Processing Hints), which is
available in the kernel since commit
48d0fd2b903e3 ("PCI/TPH: Add TPH documentation").
The patches are based on top of commit
84c319145cbad6 ("Merge branch into tip/master: 'x86/nmi'")
# Linux Implementation
The feature adds the following interface files when the resctrl "io_alloc"
feature is supported on the L3 resource:
/sys/fs/resctrl/info/L3/io_alloc: Report the feature status. Enable/disable the
feature by writing to the interface.
/sys/fs/resctrl/info/L3/io_alloc_cbm: List the Capacity Bit Masks (CBMs) available
for I/O devices when io_alloc feature is enabled.
Configure the CBM by writing to the interface.
# Examples:
a. Check if the io_alloc feature is available.
# mount -t resctrl resctrl /sys/fs/resctrl/
# cat /sys/fs/resctrl/info/L3/io_alloc
disabled
b. Enable the io_alloc feature.
# echo 1 > /sys/fs/resctrl/info/L3/io_alloc
# cat /sys/fs/resctrl/info/L3/io_alloc
enabled
c. Check the CBM values for the io_alloc feature.
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
L3:0=ffff;1=ffff
d. Change the CBM value for domain 1:
# echo L3:1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
L3:0=ffff;1=00ff
e. Disable the io_alloc feature and exit.
# echo 0 > /sys/fs/resctrl/info/L3/io_alloc
# cat /sys/fs/resctrl/info/L3/io_alloc
disabled
# umount /sys/fs/resctrl/
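For completeness, the same flow can be driven from a program; a minimal
user-space sketch (hypothetical helper, no error handling) that relies only on
the interface files shown above:

	#include <stdio.h>

	/* Write a string to a resctrl interface file. */
	static int write_str(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		/* Enable io_alloc, then limit I/O allocations in domain 1 to CBM 0xff. */
		write_str("/sys/fs/resctrl/info/L3/io_alloc", "1");
		write_str("/sys/fs/resctrl/info/L3/io_alloc_cbm", "L3:1=ff");
		return 0;
	}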
---
v4: The "io_alloc" interface will report "enabled/disabled/not supported"
instead of 0 or 1..
Updated resctrl_io_alloc_closid_get() to verify the max closid availability
using closids_supported().
Updated the documentation for "shareable_bits" and "bit_usage".
NOTE: io_alloc is about a specific CLOS. rdt_bit_usage_show() is not designed
to handle bit_usage for a specific CLOS; it is about the overall system. So,
we cannot really tell the user which CLOS is shared across both hardware and
software. This is something we need to discuss.
Introduced io_alloc_init() to initialize fflags.
Printed the group name when io_alloc enablement fails, to help the user.
Added rdtgroup_mutex before rdt_last_cmd_puts() in resctrl_io_alloc_cbm_show().
Returned -ENODEV when resource type is CDP_DATA.
Kept the resource name while printing the CBM (L3:0=ffff); that way we don't
have to change show_doms() just for this feature and it is consistent across
all the schemata displays.
Added new patch to call parse_cbm() directly to avoid code duplication.
Checked the whole series (v1-v3) again to verify that I did not miss any comments.
v3: Rewrote the commit log for the last 3 patches. Changed the text to be a
bit more generic than the AMD-specific feature, with the AMD feature
specifics added at the end.
Renamed the rdt_get_sdciae_alloc_cfg() to rdt_set_io_alloc_capable().
Renamed the _resctrl_io_alloc_enable() to _resctrl_sdciae_enable()
as it is arch specific.
Changed the return to void in _resctrl_sdciae_enable() instead of int.
The number of CLOSIDs is determined based on the minimum supported
across all resources (in closid_init). It needs to match the maximum
supported on the resource. Added a check to verify MAX CLOSID
availability on the system.
Added CDP check to make sure io_alloc is configured in CDP_CODE.
Highest CLOSID corresponds to CDP_CODE.
Added resctrl_io_alloc_closid_free() to free the io_alloc CLOSID.
Added errors in a few cases when CLOSID allocation fails.
Fixed the splat reported when info/L3/bit_usage is accessed while io_alloc is enabled.
https://lore.kernel.org/lkml/SJ1PR11MB60837B532254E7B23BC27E84FC052@SJ1PR11MB6083.namprd11.prod.outlook.com/
v2: Added dependency on X86_FEATURE_CAT_L3
Removed the "" in CPU feature definition.
Changed sdciae_capable to io_alloc_capable to make it as generic feature.
Moved io_alloc_capable field in struct resctrl_cache.
Changed the names of a few arch functions to be similar to the ABMC series:
resctrl_arch_get_io_alloc_enabled()
resctrl_arch_io_alloc_enable()
Renamed the feature to "io_alloc".
Added generic texts for the feature in commit log and resctrl.rst doc.
Added resctrl_io_alloc_init_cat() to initialize io_alloc to default values
when enabled.
Fixed io_alloc interface to show only on L3 resource.
Added the locks while processing io_alloc CBMs.
Previous versions:
v3: https://lore.kernel.org/lkml/cover.1738272037.git.babu.moger@amd.com/
v2: https://lore.kernel.org/lkml/cover.1734556832.git.babu.moger@amd.com/
v1: https://lore.kernel.org/lkml/cover.1723824984.git.babu.moger@amd.com/
Babu Moger (8):
x86/cpufeatures: Add support for L3 Smart Data Cache Injection
Allocation Enforcement
x86/resctrl: Add SDCIAE feature in the command line options
x86/resctrl: Detect io_alloc feature
x86/resctrl: Implement "io_alloc" enable/disable handlers
x86/resctrl: Add user interface to enable/disable io_alloc feature
x86/resctrl: Introduce interface to display io_alloc CBMs
x86/resctrl: Modify rdt_parse_data to pass mode and CLOSID
x86/resctrl: Introduce interface to modify io_alloc Capacity Bit Masks
.../admin-guide/kernel-parameters.txt | 2 +-
Documentation/arch/x86/resctrl.rst | 55 +++
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kernel/cpu/cpuid-deps.c | 1 +
arch/x86/kernel/cpu/resctrl/core.c | 9 +
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 35 +-
arch/x86/kernel/cpu/resctrl/internal.h | 19 +
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 372 +++++++++++++++++-
arch/x86/kernel/cpu/scattered.c | 1 +
include/linux/resctrl.h | 12 +
11 files changed, 487 insertions(+), 21 deletions(-)
--
2.34.1
Hi Babu,

On 4/21/25 3:43 PM, Babu Moger wrote:
> # Linux Implementation
>
> Feature adds following interface files when the resctrl "io_alloc" feature is
> supported on L3 resource:
>
> /sys/fs/resctrl/info/L3/io_alloc: Report the feature status. Enable/disable the
> feature by writing to the interface.
>
> /sys/fs/resctrl/info/L3/io_alloc_cbm: List the Capacity Bit Masks (CBMs) available
> for I/O devices when io_alloc feature is enabled.
> Configure the CBM by writing to the interface.
>
> # Examples:
>
> a. Check if io_alloc feature is available
> #mount -t resctrl resctrl /sys/fs/resctrl/
>
> # cat /sys/fs/resctrl/info/L3/io_alloc
> disabled
>
> b. Enable the io_alloc feature.
>
> # echo 1 > /sys/fs/resctrl/info/L3/io_alloc
> # cat /sys/fs/resctrl/info/L3/io_alloc
> enabled
>
> c. Check the CBM values for the io_alloc feature.
>
> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
> L3:0=ffff;1=ffff
>
> d. Change the CBM value for the domain 1:
> # echo L3:1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm
>
> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
> L3:0=ffff;1=00ff
>
> d. Disable io_alloc feature and exit.
>
> # echo 0 > /sys/fs/resctrl/info/L3/io_alloc
> # cat /sys/fs/resctrl/info/L3/io_alloc
> disabled
>
> #umount /sys/fs/resctrl/
>

From what I can tell the interface when CDP is enabled will look
as follows:

# mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
# cat /sys/fs/resctrl/info/L3CODE/io_alloc
disabled
# cat /sys/fs/resctrl/info/L3DATA/io_alloc
not supported

"io_alloc" can thus be enabled for L3CODE but not for L3DATA.
This is unexpected considering the feature is called
"L3 Smart *Data* Cache Injection Allocation Enforcement".

I understand that the interface evolved into this because the
"code" allocation of CDP uses the CLOSID required by SDCIAE but I think
leaking implementation details like this to the user interface can
cause confusion.

Since there is no distinction between code and data in these
IO allocations, what do you think of connecting the io_alloc and
io_alloc_cbm files within L3CODE and L3DATA so that the user can
read/write from either with a read showing the same data and
user able to write to either? For example,

# mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
# cat /sys/fs/resctrl/info/L3CODE/io_alloc
disabled
# cat /sys/fs/resctrl/info/L3DATA/io_alloc
disabled
# echo 1 > /sys/fs/resctrl/info/L3CODE/io_alloc
# cat /sys/fs/resctrl/info/L3CODE/io_alloc
enabled
# cat /sys/fs/resctrl/info/L3DATA/io_alloc
enabled
# cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
0=ffff;1=ffff
# cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
0=ffff;1=ffff
# echo 1=FF > /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
# cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
0=ffff;1=00ff
# cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
0=ffff;1=00ff

(Note in above I removed the resource name from io_alloc_cbm to match
what was discussed during previous version:
https://lore.kernel.org/lkml/251c8fe1-603f-4993-a822-afb35b49cdfa@amd.com/ )

What do you think?

> ---
> v4: The "io_alloc" interface will report "enabled/disabled/not supported"
> instead of 0 or 1..
>
> Updated resctrl_io_alloc_closid_get() to verify the max closid availability
> using closids_supported().
>
> Updated the documentation for "shareable_bits" and "bit_usage".
>
> NOTE: io_alloc is about specific CLOS. rdt_bit_usage_show() is not designed
> handle bit_usage for specific CLOS. Its about overall system. So, we cannot
> really tell the user which CLOS is shared across both hardware and software.

"bit_usage" is not about CLOS but how the resource is used. Per the doc:

"bit_usage":
Annotated capacity bitmasks showing how all
instances of the resource are used.

The key here is the CBM, not CLOS. For each bit in the *CBM* "bit_usage" shows
how that portion of the cache is used with the legend documented in
Documentation/arch/x86/resctrl.rst.

Consider a system with the following allocations:
# cat /sys/fs/resctrl/schemata
L3:0=0ff0
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
0=ff00

Then "bit_usage" will look like:

# cat /sys/fs/resctrl/info/L3/bit_usage
0=HHHHXXXXSSSS0000

"bit_usage" shows how the cache is being used. It shows that the portion of cache represented
by first four bits of CBM is unused, portion of cache represented by bits 4 to 7 of CBM is
only used by software, portion of cache represented by bits 8 to 11 of CBM is shared between
software and hardware, portion of cache represented by bits 12 to 15 is only used by hardware.

> This is something we need to discuss.

Looking at implementation in patch #5 the "io_alloc_cbm" bits of CBM are presented
as software bits, since "io_alloc_cbm" represents IO from devices it should be "hardware" bits
(hw_shareable), no?

Reinette
Hi Reinette,

Thanks for quick turnaround.

On 5/2/2025 4:20 PM, Reinette Chatre wrote:
> Hi Babu,
>
> On 4/21/25 3:43 PM, Babu Moger wrote:
>> # Linux Implementation
>>
>> Feature adds following interface files when the resctrl "io_alloc" feature is
>> supported on L3 resource:
>>
>> /sys/fs/resctrl/info/L3/io_alloc: Report the feature status. Enable/disable the
>> feature by writing to the interface.
>>
>> /sys/fs/resctrl/info/L3/io_alloc_cbm: List the Capacity Bit Masks (CBMs) available
>> for I/O devices when io_alloc feature is enabled.
>> Configure the CBM by writing to the interface.
>>
>> # Examples:
>>
>> a. Check if io_alloc feature is available
>> #mount -t resctrl resctrl /sys/fs/resctrl/
>>
>> # cat /sys/fs/resctrl/info/L3/io_alloc
>> disabled
>>
>> b. Enable the io_alloc feature.
>>
>> # echo 1 > /sys/fs/resctrl/info/L3/io_alloc
>> # cat /sys/fs/resctrl/info/L3/io_alloc
>> enabled
>>
>> c. Check the CBM values for the io_alloc feature.
>>
>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>> L3:0=ffff;1=ffff
>>
>> d. Change the CBM value for the domain 1:
>> # echo L3:1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm
>>
>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>> L3:0=ffff;1=00ff
>>
>> d. Disable io_alloc feature and exit.
>>
>> # echo 0 > /sys/fs/resctrl/info/L3/io_alloc
>> # cat /sys/fs/resctrl/info/L3/io_alloc
>> disabled
>>
>> #umount /sys/fs/resctrl/
>>
>
> From what I can tell the interface when CDP is enabled will look
> as follows:
>
> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
> disabled
> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
> not supported
>
> "io_alloc" can thus be enabled for L3CODE but not for L3DATA.
> This is unexpected considering the feature is called
> "L3 Smart *Data* Cache Injection Allocation Enforcement".
>
> I understand that the interface evolved into this because the
> "code" allocation of CDP uses the CLOSID required by SDCIAE but I think
> leaking implementation details like this to the user interface can
> cause confusion.
>
> Since there is no distinction between code and data in these
> IO allocations, what do you think of connecting the io_alloc and
> io_alloc_cbm files within L3CODE and L3DATA so that the user can
> read/write from either with a read showing the same data and
> user able to write to either? For example,
>
> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
> disabled
> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
> disabled
> # echo 1 > /sys/fs/resctrl/info/L3CODE/io_alloc
> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
> enabled
> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
> enabled
> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
> 0=ffff;1=ffff
> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
> 0=ffff;1=ffff
> # echo 1=FF > /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
> 0=ffff;1=00ff
> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
> 0=ffff;1=00ff

I agree. There is no right or wrong here. It can be done this way like you mentioned above. But I am not sure if will clear the confusion.

We have already added the text in user doc (also spec says the same).

"On AMD systems, the io_alloc feature is supported by the L3 Smart
Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for
io_alloc is determined by the highest CLOSID supported by the resource.
When CDP is enabled, io_alloc routes I/O traffic using the highest
CLOSID allocated for the instruction cache (L3CODE).

Dont you think this text might clear the confusion? We can add examples also if that makes it even more clear.

>
> (Note in above I removed the resource name from io_alloc_cbm to match
> what was discussed during previous version:
> https://lore.kernel.org/lkml/251c8fe1-603f-4993-a822-afb35b49cdfa@amd.com/ )
> What do you think?

Yes. I remember. "Kept the resource name while printing the CBM for io_alloc, so we dont have to change show_doms() just for this feature and it is consistant across all the schemata display.

I added the note in here.
https://lore.kernel.org/lkml/784fbc61e02e9a834473c3476ee196ef6a44e338.1745275431.git.babu.moger@amd.com/

I will change it if you feel strongly about it. We will have to change show_doms() to handle this.

>
>
>> ---
>> v4: The "io_alloc" interface will report "enabled/disabled/not supported"
>> instead of 0 or 1..
>>
>> Updated resctrl_io_alloc_closid_get() to verify the max closid availability
>> using closids_supported().
>>
>> Updated the documentation for "shareable_bits" and "bit_usage".
>>
>> NOTE: io_alloc is about specific CLOS. rdt_bit_usage_show() is not designed
>> handle bit_usage for specific CLOS. Its about overall system. So, we cannot
>> really tell the user which CLOS is shared across both hardware and software.
>
> "bit_usage" is not about CLOS but how the resource is used. Per the doc:
>
> "bit_usage":
> Annotated capacity bitmasks showing how all
> instances of the resource are used.
>
> The key here is the CBM, not CLOS. For each bit in the *CBM* "bit_usage" shows
> how that portion of the cache is used with the legend documented in
> Documentation/arch/x86/resctrl.rst.
>
> Consider a system with the following allocations:
> # cat /sys/fs/resctrl/schemata
> L3:0=0ff0

This is CLOS 0.

> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
> 0=ff00

This is CLOS 15.

>
> Then "bit_usage" will look like:
>
> # cat /sys/fs/resctrl/info/L3/bit_usage
> 0=HHHHXXXXSSSS0000

It is confusing here. To make it clear we may have to print all the CLOSes in each domain.

# cat /sys/fs/resctrl/info/L3/bit_usage
DOM0=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000;
DOM1=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000

>
> "bit_usage" shows how the cache is being used. It shows that the portion of cache represented
> by first four bits of CBM is unused, portion of cache represented by bits 4 to 7 of CBM is
> only used by software, portion of cache represented by bits 8 to 11 of CBM is shared between
> software and hardware, portion of cache represented by bits 12 to 15 is only used by hardware.
>
>> This is something we need to discuss.
>
> Looking at implementation in patch #5 the "io_alloc_cbm" bits of CBM are presented
> as software bits, since "io_alloc_cbm" represents IO from devices it should be "hardware" bits
> (hw_shareable), no?
>

Yes. It is. But logic is bit different there.

It loops thru all the CLOSes on the domain. So, it will print again like this below.

#cat bit_usage
0=HHHHXXXXSSSS0000

It tells the user that all the CLOSes in domain 0 has this sharing propery which is not correct.

To make it clear we really need to print every CLOS here. What do you think?

Thanks
Babu
Hi Babu,
On 5/2/25 5:53 PM, Moger, Babu wrote:
> Hi Reinette,
>
> Thanks for quick turnaround.
>
> On 5/2/2025 4:20 PM, Reinette Chatre wrote:
>> Hi Babu,
>>
>> On 4/21/25 3:43 PM, Babu Moger wrote:
>>> # Linux Implementation
>>>
>>> Feature adds following interface files when the resctrl "io_alloc" feature is
>>> supported on L3 resource:
>>>
>>> /sys/fs/resctrl/info/L3/io_alloc: Report the feature status. Enable/disable the
>>> feature by writing to the interface.
>>>
>>> /sys/fs/resctrl/info/L3/io_alloc_cbm: List the Capacity Bit Masks (CBMs) available
>>> for I/O devices when io_alloc feature is enabled.
>>> Configure the CBM by writing to the interface.
>>>
>>> # Examples:
>>>
>>> a. Check if io_alloc feature is available
>>> #mount -t resctrl resctrl /sys/fs/resctrl/
>>>
>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>> disabled
>>>
>>> b. Enable the io_alloc feature.
>>>
>>> # echo 1 > /sys/fs/resctrl/info/L3/io_alloc
>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>> enabled
>>>
>>> c. Check the CBM values for the io_alloc feature.
>>>
>>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>>> L3:0=ffff;1=ffff
>>>
>>> d. Change the CBM value for the domain 1:
>>> # echo L3:1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm
>>>
>>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>>> L3:0=ffff;1=00ff
>>>
>>> d. Disable io_alloc feature and exit.
>>>
>>> # echo 0 > /sys/fs/resctrl/info/L3/io_alloc
>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>> disabled
>>>
>>> #umount /sys/fs/resctrl/
>>>
>>
>>> From what I can tell the interface when CDP is enabled will look
>> as follows:
>>
>> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>> disabled
>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>> not supported
>> "io_alloc" can thus be enabled for L3CODE but not for L3DATA.
>> This is unexpected considering the feature is called
>> "L3 Smart *Data* Cache Injection Allocation Enforcement".
>>
>> I understand that the interface evolved into this because the
>> "code" allocation of CDP uses the CLOSID required by SDCIAE but I think
>> leaking implementation details like this to the user interface can
>> cause confusion.
>>
>> Since there is no distinction between code and data in these
>> IO allocations, what do you think of connecting the io_alloc and
>> io_alloc_cbm files within L3CODE and L3DATA so that the user can
>> read/write from either with a read showing the same data and
>> user able to write to either? For example,
>>
>> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>> disabled
>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>> disabled
>> # echo 1 > /sys/fs/resctrl/info/L3CODE/io_alloc
>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>> enabled
>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>> enabled
>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>> 0=ffff;1=ffff
>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
>> 0=ffff;1=ffff
>> # echo 1=FF > /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>> 0=ffff;1=00ff
>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
>> 0=ffff;1=00ff
>
> I agree. There is no right or wrong here. It can be done this way like you mentioned above. But I am not sure if will clear the confusion.
>
> We have already added the text in user doc (also spec says the same).
>
> "On AMD systems, the io_alloc feature is supported by the L3 Smart
> Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for
> io_alloc is determined by the highest CLOSID supported by the resource.
> When CDP is enabled, io_alloc routes I/O traffic using the highest
> CLOSID allocated for the instruction cache (L3CODE).
>
> Dont you think this text might clear the confusion? We can add examples also if that makes it even more clear.
The user interface is not intended to be a mirror of the hardware interface.
If it was, doing so is becoming increasingly difficult with multiple
architectures with different hardware intefaces needing to use the same
user interface for control. Remember, there are no "CLOSID" in MPAM and
I do not know details of what RISC-V brings.
We should aim to have something as generic as possible that makes sense
for user space. All the hardware interface details should be hidden as much
as possible from user interface. When we expose the hardware interface details
it becomes very difficult to support new use cases.
The only aspect of "closids" that has been exposed to user space thus far
is the "num_closids" and in user documentation a CLOSid has been linked to the
number of control groups. That is the only constraint we need to think about
here. I have repeatedly asked for IO alloc connection with CLOSIDs to not be exposed
to user space (yet user documentation and messages to user space keeps doing so
in this series). Support for IO alloc in this way is unique to AMD. We do not want
resctrl to be constrained like this if another architecture needs to support
some form of IO alloc and does so in a different way.
I understand that IO alloc backed by CLOSID is forming part of resctrl fs in this
implementation and that is ok for now. As long as we do not leak this to user space
it gives use flexibility to change resctrl fs when/if we learn different architecture
needs later.
>> (Note in above I removed the resource name from io_alloc_cbm to match
>> what was discussed during previous version:
>> https://lore.kernel.org/lkml/251c8fe1-603f-4993-a822-afb35b49cdfa@amd.com/ )
>> What do you think?
>
> Yes. I remember. "Kept the resource name while printing the CBM for io_alloc, so we dont have to change show_doms() just for this feature and it is consistant across all the schemata display.
It almost sounds like you do not want to implement something because the
code to support it does not exist?
>
> I added the note in here.
> https://lore.kernel.org/lkml/784fbc61e02e9a834473c3476ee196ef6a44e338.1745275431.git.babu.moger@amd.com/
You mention "I dont have to change show_doms() just for this feature and it is
consistant across all the schemata display."
I am indeed seeing a pattern where one goal is to add changes by changing minimum
amount of code. Please let this not be a goal but instead make it a goal to integrate
changes into resctrl appropriately, not just pasted on top.
When it comes to the schemata display then it makes sense to add the resource name since
the schemata file is within a resource group containing multiple resources and the schemata
file thus needs to identify resources. Compare this to, for example, the "bit_usage" file
that is unique to a resource and thus no need to identify the resource.
>
> I will change it if you feel strongly about it. We will have to change show_doms() to handle this.
What is the problem with changing show_doms()?
>
>>
>>
>>> ---
>>> v4: The "io_alloc" interface will report "enabled/disabled/not supported"
>>> instead of 0 or 1..
>>>
>>> Updated resctrl_io_alloc_closid_get() to verify the max closid availability
>>> using closids_supported().
>>>
>>> Updated the documentation for "shareable_bits" and "bit_usage".
>>>
>>> NOTE: io_alloc is about specific CLOS. rdt_bit_usage_show() is not designed
>>> handle bit_usage for specific CLOS. Its about overall system. So, we cannot
>>> really tell the user which CLOS is shared across both hardware and software.
>>
>> "bit_usage" is not about CLOS but how the resource is used. Per the doc:
>>
>> "bit_usage":
>> Annotated capacity bitmasks showing how all
>> instances of the resource are used.
>>
>> The key here is the CBM, not CLOS. For each bit in the *CBM* "bit_usage" shows
>> how that portion of the cache is used with the legend documented in
>> Documentation/arch/x86/resctrl.rst.
>>
>> Consider a system with the following allocations:
>> # cat /sys/fs/resctrl/schemata
>> L3:0=0ff0
>
> This is CLOS 0.
>
>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>> 0=ff00
>
> This is CLOS 15.
>
>>
>> Then "bit_usage" will look like:
>>
>> # cat /sys/fs/resctrl/info/L3/bit_usage
>> 0=HHHHXXXXSSSS0000
>
> It is confusing here. To make it clear we may have to print all the CLOSes in each domain.
Could you please elaborate how this is confusing?
>
> # cat /sys/fs/resctrl/info/L3/bit_usage
> DOM0=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000;
> DOM1=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000
Please no. Not just does this change existing user interface it also breaks the goal of
"bit_usage".
Please think of it from user perspective. If user wants to know, for example, "how is my
L3 cache allocated" then the "bit_usage" file provides that summary.
>> "bit_usage" shows how the cache is being used. It shows that the portion of cache represented
>> by first four bits of CBM is unused, portion of cache represented by bits 4 to 7 of CBM is
>> only used by software, portion of cache represented by bits 8 to 11 of CBM is shared between
>> software and hardware, portion of cache represented by bits 12 to 15 is only used by hardware.
>>
>>> This is something we need to discuss.
>>
>> Looking at implementation in patch #5 the "io_alloc_cbm" bits of CBM are presented
>> as software bits, since "io_alloc_cbm" represents IO from devices it should be "hardware" bits
>> (hw_shareable), no?
>>
> Yes. It is. But logic is bit different there.
>
> It loops thru all the CLOSes on the domain. So, it will print again like this below.
This is what current code does, but the code can be changed, no? For example, rdt_bit_usage_show()
does not need to treat the IO allocation like all the other resource groups but instead handle it
separately. Below is some pseudo code that presents the idea, untested, not compiled.
hw_shareable = r->cache.shareable_bits;

for (i = 0; i < closids_supported(); i++) {
        if (!closid_allocated(i) ||
            (resctrl_arch_get_io_alloc_enabled(r) && i == resctrl_io_alloc_closid_get(r, s)))
                continue;

        /* Intitialize sw_shareable and exclusive */
}

if (resctrl_arch_get_io_alloc_enabled(r)) {
        /*
         * Sidenote: I do not think schemata parameter is needed for
         * resctrl_io_alloc_closid_get()
         */
        io_alloc_closid = resctrl_io_alloc_closid_get(r, s);
        if (resctrl_arch_get_cdp_enabled(r->rid))
                ctrl_val = resctrl_arch_get_config(r, dom, io_alloc_closid, CDP_CODE);
        else
                ctrl_val = resctrl_arch_get_config(r, dom, io_alloc_closid, CDP_NONE);
        hw_shareable |= ctrl_val;
}

for (i = r->cache.cbm_len - 1; i >= 0; i--) {
        /* Write annotated bitmask to user space */
}
>
> #cat bit_usage
> 0=HHHHXXXXSSSS0000
>
> It tells the user that all the CLOSes in domain 0 has this sharing propery which is not correct.
>
> To make it clear we really need to print every CLOS here. What do you think?
No. We cannot just change user space API like this. The way I see it the implementation can
support existing user space API. I am sure the above can be improved but it presents an idea
that we can use to start with.
Reinette
Hi Reinette,
On 5/5/25 11:22, Reinette Chatre wrote:
> Hi Babu,
>
> On 5/2/25 5:53 PM, Moger, Babu wrote:
>> Hi Reinette,
>>
>> Thanks for quick turnaround.
>>
>> On 5/2/2025 4:20 PM, Reinette Chatre wrote:
>>> Hi Babu,
>>>
>>> On 4/21/25 3:43 PM, Babu Moger wrote:
>>>> # Linux Implementation
>>>>
>>>> Feature adds following interface files when the resctrl "io_alloc" feature is
>>>> supported on L3 resource:
>>>>
>>>> /sys/fs/resctrl/info/L3/io_alloc: Report the feature status. Enable/disable the
>>>> feature by writing to the interface.
>>>>
>>>> /sys/fs/resctrl/info/L3/io_alloc_cbm: List the Capacity Bit Masks (CBMs) available
>>>> for I/O devices when io_alloc feature is enabled.
>>>> Configure the CBM by writing to the interface.
>>>>
>>>> # Examples:
>>>>
>>>> a. Check if io_alloc feature is available
>>>> #mount -t resctrl resctrl /sys/fs/resctrl/
>>>>
>>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>>> disabled
>>>>
>>>> b. Enable the io_alloc feature.
>>>>
>>>> # echo 1 > /sys/fs/resctrl/info/L3/io_alloc
>>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>>> enabled
>>>>
>>>> c. Check the CBM values for the io_alloc feature.
>>>>
>>>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>>>> L3:0=ffff;1=ffff
>>>>
>>>> d. Change the CBM value for the domain 1:
>>>> # echo L3:1=FF > /sys/fs/resctrl/info/L3/io_alloc_cbm
>>>>
>>>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>>>> L3:0=ffff;1=00ff
>>>>
>>>> d. Disable io_alloc feature and exit.
>>>>
>>>> # echo 0 > /sys/fs/resctrl/info/L3/io_alloc
>>>> # cat /sys/fs/resctrl/info/L3/io_alloc
>>>> disabled
>>>>
>>>> #umount /sys/fs/resctrl/
>>>>
>>>
>>>> From what I can tell the interface when CDP is enabled will look
>>> as follows:
>>>
>>> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
>>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>>> disabled
>>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>>> not supported
>>> "io_alloc" can thus be enabled for L3CODE but not for L3DATA.
>>> This is unexpected considering the feature is called
>>> "L3 Smart *Data* Cache Injection Allocation Enforcement".
>>>
>>> I understand that the interface evolved into this because the
>>> "code" allocation of CDP uses the CLOSID required by SDCIAE but I think
>>> leaking implementation details like this to the user interface can
>>> cause confusion.
>>>
>>> Since there is no distinction between code and data in these
>>> IO allocations, what do you think of connecting the io_alloc and
>>> io_alloc_cbm files within L3CODE and L3DATA so that the user can
>>> read/write from either with a read showing the same data and
>>> user able to write to either? For example,
>>>
>>> # mount -o cdp -t resctrl resctrl /sys/fs/resctrl/
>>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>>> disabled
>>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>>> disabled
>>> # echo 1 > /sys/fs/resctrl/info/L3CODE/io_alloc
>>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc
>>> enabled
>>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc
>>> enabled
>>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>>> 0=ffff;1=ffff
>>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
>>> 0=ffff;1=ffff
>>> # echo 1=FF > /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>>> # cat /sys/fs/resctrl/info/L3DATA/io_alloc_cbm
>>> 0=ffff;1=00ff
>>> # cat /sys/fs/resctrl/info/L3CODE/io_alloc_cbm
>>> 0=ffff;1=00ff
>>
>> I agree. There is no right or wrong here. It can be done this way like you mentioned above. But I am not sure if will clear the confusion.
>>
>> We have already added the text in user doc (also spec says the same).
>>
>> "On AMD systems, the io_alloc feature is supported by the L3 Smart
>> Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for
>> io_alloc is determined by the highest CLOSID supported by the resource.
>> When CDP is enabled, io_alloc routes I/O traffic using the highest
>> CLOSID allocated for the instruction cache (L3CODE).
>>
>> Dont you think this text might clear the confusion? We can add examples also if that makes it even more clear.
>
> The user interface is not intended to be a mirror of the hardware interface.
> If it was, doing so is becoming increasingly difficult with multiple
> architectures with different hardware intefaces needing to use the same
> user interface for control. Remember, there are no "CLOSID" in MPAM and
> I do not know details of what RISC-V brings.
>
> We should aim to have something as generic as possible that makes sense
> for user space. All the hardware interface details should be hidden as much
> as possible from user interface. When we expose the hardware interface details
> it becomes very difficult to support new use cases.
>
> The only aspect of "closids" that has been exposed to user space thus far
> is the "num_closids" and in user documentation a CLOSid has been linked to the
> number of control groups. That is the only constraint we need to think about
> here. I have repeatedly asked for IO alloc connection with CLOSIDs to not be exposed
> to user space (yet user documentation and messages to user space keeps doing so
> in this series). Support for IO alloc in this way is unique to AMD. We do not want
> resctrl to be constrained like this if another architecture needs to support
> some form of IO alloc and does so in a different way.
>
> I understand that IO alloc backed by CLOSID is forming part of resctrl fs in this
> implementation and that is ok for now. As long as we do not leak this to user space
> it gives use flexibility to change resctrl fs when/if we learn different architecture
> needs later.
That makes sense. I’ll go ahead and adjust it as suggested.
>
>>> (Note in above I removed the resource name from io_alloc_cbm to match
>>> what was discussed during previous version:
>>> https://lore.kernel.org/lkml/251c8fe1-603f-4993-a822-afb35b49cdfa@amd.com/ )
>>> What do you think?
>>
>> Yes. I remember. "Kept the resource name while printing the CBM for io_alloc, so we dont have to change show_doms() just for this feature and it is consistant across all the schemata display.
>
> It almost sounds like you do not want to implement something because the
> code to support it does not exist?
>
>>
>> I added the note in here.
>> https://lore.kernel.org/lkml/784fbc61e02e9a834473c3476ee196ef6a44e338.1745275431.git.babu.moger@amd.com/
>
> You mention "I dont have to change show_doms() just for this feature and it is
> consistant across all the schemata display."
> I am indeed seeing a pattern where one goal is to add changes by changing minimum
> amount of code. Please let this not be a goal but instead make it a goal to integrate
> changes into resctrl appropriately, not just pasted on top.
>
> When it comes to the schemata display then it makes sense to add the resource name since
> the schemata file is within a resource group containing multiple resources and the schemata
> file thus needs to identify resources. Compare this to, for example, the "bit_usage" file
> that is unique to a resource and thus no need to identify the resource.
>
>>
>> I will change it if you feel strongly about it. We will have to change show_doms() to handle this.
>
> What is the problem with changing show_doms()?
There is no problem changing show_doms(). My intention was to keep the
change as minimal as possible.
Sure. Will make the changes "not" to print the resource name for io_alloc_cbm.
>
>>
>>>
>>>
>>>> ---
>>>> v4: The "io_alloc" interface will report "enabled/disabled/not supported"
>>>> instead of 0 or 1..
>>>>
>>>> Updated resctrl_io_alloc_closid_get() to verify the max closid availability
>>>> using closids_supported().
>>>>
>>>> Updated the documentation for "shareable_bits" and "bit_usage".
>>>>
>>>> NOTE: io_alloc is about specific CLOS. rdt_bit_usage_show() is not designed
>>>> handle bit_usage for specific CLOS. Its about overall system. So, we cannot
>>>> really tell the user which CLOS is shared across both hardware and software.
>>>
>>> "bit_usage" is not about CLOS but how the resource is used. Per the doc:
>>>
>>> "bit_usage":
>>> Annotated capacity bitmasks showing how all
>>> instances of the resource are used.
>>>
>>> The key here is the CBM, not CLOS. For each bit in the *CBM* "bit_usage" shows
>>> how that portion of the cache is used with the legend documented in
>>> Documentation/arch/x86/resctrl.rst.
>>>
>>> Consider a system with the following allocations:
>>> # cat /sys/fs/resctrl/schemata
>>> L3:0=0ff0
>>
>> This is CLOS 0.
>>
>>> # cat /sys/fs/resctrl/info/L3/io_alloc_cbm
>>> 0=ff00
>>
>> This is CLOS 15.
>>
>>>
>>> Then "bit_usage" will look like:
>>>
>>> # cat /sys/fs/resctrl/info/L3/bit_usage
>>> 0=HHHHXXXXSSSS0000
>>
>> It is confusing here. To make it clear we may have to print all the CLOSes in each domain.
>
> Could you please elaborate how this is confusing?
# cat /sys/fs/resctrl/info/L3/bit_usage
0=HHHHXXXXSSSS0000
This may give the impression that the all CLOSes in all domains carries
this property, but in reality, it applies only to one CLOS(15) within each
domain.
Example below....
>
>>
>> # cat /sys/fs/resctrl/info/L3/bit_usage
>> DOM0=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000;
>> DOM1=CLOS0:SSSSSSSSSSSSSSSS;... ;CLOS15=HHHHXXXXSSSS0000
>
> Please no. Not just does this change existing user interface it also breaks the goal of
> "bit_usage".
>
> Please think of it from user perspective. If user wants to know, for example, "how is my
> L3 cache allocated" then the "bit_usage" file provides that summary.
>
>>> "bit_usage" shows how the cache is being used. It shows that the portion of cache represented
>>> by first four bits of CBM is unused, portion of cache represented by bits 4 to 7 of CBM is
>>> only used by software, portion of cache represented by bits 8 to 11 of CBM is shared between
>>> software and hardware, portion of cache represented by bits 12 to 15 is only used by hardware.
>>>
>>>> This is something we need to discuss.
>>>
>>> Looking at implementation in patch #5 the "io_alloc_cbm" bits of CBM are presented
>>> as software bits, since "io_alloc_cbm" represents IO from devices it should be "hardware" bits
>>> (hw_shareable), no?
>>>
>> Yes. It is. But logic is bit different there.
>>
>> It loops thru all the CLOSes on the domain. So, it will print again like this below.
>
> This is what current code does, but the code can be changed, no? For example, rdt_bit_usage_show()
> does not need to treat the IO allocation like all the other resource groups but instead handle it
> separately. Below us some pseudo code that presents the idea, untested, not compiled.
>
> hw_shareable = r->cache.shareable_bits;
>
> for (i = 0; i < closids_supported(); i++) {
>         if (!closid_allocated(i) ||
>             (resctrl_arch_get_io_alloc_enabled(r) && i == resctrl_io_alloc_closid_get(r, s)))
>                 continue;
>
>         /* Intitialize sw_shareable and exclusive */
> }
>
> if (resctrl_arch_get_io_alloc_enabled(r)) {
>         /*
>          * Sidenote: I do not think schemata parameter is needed for
>          * resctrl_io_alloc_closid_get()

Sure. Got it.

>          */
>         io_alloc_closid = resctrl_io_alloc_closid_get(r, s);
>         if (resctrl_arch_get_cdp_enabled(r->rid))
>                 ctrl_val = resctrl_arch_get_config(r, dom, io_alloc_closid, CDP_CODE);
>         else
>                 ctrl_val = resctrl_arch_get_config(r, dom, io_alloc_closid, CDP_NONE);
>         hw_shareable |= ctrl_val;
> }
>
> for (i = r->cache.cbm_len - 1; i >= 0; i--) {
>         /* Write annotated bitmask to user space */
> }
>
Here is the behaviour after these cahnges.
=== Before io_alloc enabled==============================
#cd /sys/fs/resctrl/L3/
# cat io_alloc
disabled
# cat shareable_bits
0 (This is always 0 for AMD)
# cat bit_usage
0=SSSSSSSSSSSSSSSS;1=SSSSSSSSSSSSSSSS;2=SSSSSSSSSSSSSSSS;3=SSSSSSSSSSSSSSSS
==== After io_alloc enabled=================================
# echo 1 > io_alloc
# cat io_alloc
enabled
# cat io_alloc_cbm
L3:0=ffff;1=ffff;2=ffff;3=ffff
#cat bit_usage
0=XXXXXXXXXXXXXXXX;1=XXXXXXXXXXXXXXXX;2=XXXXXXXXXXXXXXXX;3=XXXXXXXXXXXXXXXX
==== After changing io_alloc_cbm ============================
#echo "L3:0=ff00;1=ff00;2=ff00;3=ff00 > io_alloc_cbm
# cat io_alloc_cbm
L3:0=ff00;1=ff00;2=ff00;3=ff00
#cat bit_usage
0=XXXXXXXXSSSSSSSS;1=XXXXXXXXSSSSSSSS;2=XXXXXXXXSSSSSSSS;3=XXXXXXXXSSSSSSSS
=============================================================
My concern here is, this may imply that the property is present across all
CLOSes in all the domains, while in fact, it only applies to a single
CLOS(15) within each domain.
Thanks
Babu Moger
Hi Babu,

On 5/5/25 12:54 PM, Moger, Babu wrote:
> On 5/5/25 11:22, Reinette Chatre wrote:
>> On 5/2/25 5:53 PM, Moger, Babu wrote:
>>> On 5/2/2025 4:20 PM, Reinette Chatre wrote:
>>>> On 4/21/25 3:43 PM, Babu Moger wrote:

...

>>>>
>>>> Then "bit_usage" will look like:
>>>>
>>>> # cat /sys/fs/resctrl/info/L3/bit_usage
>>>> 0=HHHHXXXXSSSS0000
>>>
>>> It is confusing here. To make it clear we may have to print all the CLOSes in each domain.
>>
>> Could you please elaborate how this is confusing?
>
> # cat /sys/fs/resctrl/info/L3/bit_usage
> 0=HHHHXXXXSSSS0000
>
> This may give the impression that the all CLOSes in all domains carries
> this property, but in reality, it applies only to one CLOS(15) within each
> domain.
>
> Example below....
>

...

> Here is the behaviour after these cahnges.
>
> === Before io_alloc enabled==============================
>
> #cd /sys/fs/resctrl/L3/
> # cat io_alloc
> disabled
>
> # cat shareable_bits
> 0 (This is always 0 for AMD)
>
> # cat bit_usage
> 0=SSSSSSSSSSSSSSSS;1=SSSSSSSSSSSSSSSS;2=SSSSSSSSSSSSSSSS;3=SSSSSSSSSSSSSSSS

Please note that the "S" in above does not have anything to do with
"shareable_bits" at this point. The "S" indicates that all L3 instances
are currently used by software and that sharing is allowed.

"bit_usage" gives insight to user space how all L3 instances are used.

If at this point a new resource group is created and it has an "exclusive"
allocation then "bit_usage" will change to reflect that. For example,
you can try this on the system you are testing on:

# echo 'L3:0=fff0;1=fff0;2=fff0;3=fff0' > /sys/fs/resctrl/schemata
# mkdir /sys/fs/resctrl/g1
# echo 'L3:0=f;1=f;2=f;3=f' > /sys/fs/resctrl/g1/schemata
# echo 'exclusive' > /sys/fs/resctrl/g1/mode

The above isolates a portion of all L3 instances for exclusive use by g1.
After above changes:
# cat /sys/fs/resctrl/info/L3/bit_usage
0=SSSSSSSSSSSSEEEE;1=SSSSSSSSSSSSEEEE;2=SSSSSSSSSSSSEEEE;3=SSSSSSSSSSSSEEEE

Note that there is no "closid" or resource group information but instead,
"bit_usage" shows to user space how each cache instance is being used
across all resource groups and hardware (IO) allocations.

>
> ==== After io_alloc enabled=================================
>
> # echo 1 > io_alloc
>
> # cat io_alloc
> enabled
>
> # cat io_alloc_cbm
> L3:0=ffff;1=ffff;2=ffff;3=ffff
>
> #cat bit_usage
> 0=XXXXXXXXXXXXXXXX;1=XXXXXXXXXXXXXXXX;2=XXXXXXXXXXXXXXXX;3=XXXXXXXXXXXXXXXX

Looks accurate to me. It shows that both hardware and software can
allocate into all portions of all caches.

>
> ==== After changing io_alloc_cbm ============================
>
> #echo "L3:0=ff00;1=ff00;2=ff00;3=ff00 > io_alloc_cbm
>
> # cat io_alloc_cbm
> L3:0=ff00;1=ff00;2=ff00;3=ff00
>
> #cat bit_usage
> 0=XXXXXXXXSSSSSSSS;1=XXXXXXXXSSSSSSSS;2=XXXXXXXXSSSSSSSS;3=XXXXXXXXSSSSSSSS

Looks accurate to me.

> =============================================================
>
> My concern here is, this may imply that the property is present across all
> CLOSes in all the domains, while in fact, it only applies to a single
> CLOS(15) within each domain.

If a user wants a resource group specific view then the schemata should be used.
"bit_usage" presents the view from the cache instance perspective and reflects
how each L3 cache instance is being used at that moment in time. It helps
system administrator answer the question "how are the caches used at the moment"?
"bit_usage" does so by presenting a summary of all allocations across all resource
groups and any hardware allocations that may exist. This file helps user space
to understand how the cache is being used without needing to correlate the CBMs
of all resource groups and IO allocations. For example, "bit_usage" is to be used
by system administrator to ensure cache is used optimally (for example, there are
no unused portions). Also, a user may be investigating a performance issue in
a particular resource group and "bit_usage" will help with that to see if
the tasks in that resource group may be competing with IO.

Reinette
Hi Reinette,

On 5/5/2025 4:13 PM, Reinette Chatre wrote:
> Hi Babu,
>
> On 5/5/25 12:54 PM, Moger, Babu wrote:
>> On 5/5/25 11:22, Reinette Chatre wrote:
>>> On 5/2/25 5:53 PM, Moger, Babu wrote:
>>>> On 5/2/2025 4:20 PM, Reinette Chatre wrote:
>>>>> On 4/21/25 3:43 PM, Babu Moger wrote:
>
> ...
>
>>>>>
>>>>> Then "bit_usage" will look like:
>>>>>
>>>>> # cat /sys/fs/resctrl/info/L3/bit_usage
>>>>> 0=HHHHXXXXSSSS0000
>>>>
>>>> It is confusing here. To make it clear we may have to print all the CLOSes in each domain.
>>>
>>> Could you please elaborate how this is confusing?
>>
>> # cat /sys/fs/resctrl/info/L3/bit_usage
>> 0=HHHHXXXXSSSS0000
>>
>> This may give the impression that the all CLOSes in all domains carries
>> this property, but in reality, it applies only to one CLOS(15) within each
>> domain.
>>
>> Example below....
>>
>
> ...
>
>> Here is the behaviour after these cahnges.
>>
>> === Before io_alloc enabled==============================
>>
>> #cd /sys/fs/resctrl/L3/
>> # cat io_alloc
>> disabled
>>
>> # cat shareable_bits
>> 0 (This is always 0 for AMD)
>>
>> # cat bit_usage
>> 0=SSSSSSSSSSSSSSSS;1=SSSSSSSSSSSSSSSS;2=SSSSSSSSSSSSSSSS;3=SSSSSSSSSSSSSSSS
>
> Please note that the "S" in above does not have anything to do with
> "shareable_bits" at this point. The "S" indicates that all L3 instances
> are currently used by software and that sharing is allowed.
>
> "bit_usage" gives insight to user space how all L3 instances are used.
>
> If at this point a new resource group is created and it has an "exclusive"
> allocation then "bit_usage" will change to reflect that. For example,
> you can try this on the system you are testing on:
>
> # echo 'L3:0=fff0;1=fff0;2=fff0;3=fff0' > /sys/fs/resctrl/schemata
> # mkdir /sys/fs/resctrl/g1
> # echo 'L3:0=f;1=f;2=f;3=f' > /sys/fs/resctrl/g1/schemata
> # echo 'exclusive' > /sys/fs/resctrl/g1/mode
>
> The above isolates a portion of all L3 instances for exclusive use by g1.
> After above changes:
> # cat /sys/fs/resctrl/info/L3/bit_usage
> 0=SSSSSSSSSSSSEEEE;1=SSSSSSSSSSSSEEEE;2=SSSSSSSSSSSSEEEE;3=SSSSSSSSSSSSEEEE
>

Yes. I see the same output.

> Note that there is no "closid" or resource group information but instead,
> "bit_usage" shows to user space how each cache instance is being used
> across all resource groups and hardware (IO) allocations.

Ok. Got it.

>
>>
>> ==== After io_alloc enabled=================================
>>
>> # echo 1 > io_alloc
>>
>> # cat io_alloc
>> enabled
>>
>> # cat io_alloc_cbm
>> L3:0=ffff;1=ffff;2=ffff;3=ffff
>>
>> #cat bit_usage
>> 0=XXXXXXXXXXXXXXXX;1=XXXXXXXXXXXXXXXX;2=XXXXXXXXXXXXXXXX;3=XXXXXXXXXXXXXXXX
>
> Looks accurate to me. It shows that both hardware and software can
> allocate into all portions of all caches.
>
>>
>> ==== After changing io_alloc_cbm ============================
>>
>> #echo "L3:0=ff00;1=ff00;2=ff00;3=ff00 > io_alloc_cbm
>>
>> # cat io_alloc_cbm
>> L3:0=ff00;1=ff00;2=ff00;3=ff00
>>
>> #cat bit_usage
>> 0=XXXXXXXXSSSSSSSS;1=XXXXXXXXSSSSSSSS;2=XXXXXXXXSSSSSSSS;3=XXXXXXXXSSSSSSSS
>
> Looks accurate to me.
>
>> =============================================================
>>
>> My concern here is, this may imply that the property is present across all
>> CLOSes in all the domains, while in fact, it only applies to a single
>> CLOS(15) within each domain.
>
> If a user wants a resource group specific view then the schemata should be used.
> "bit_usage" presents the view from the cache instance perspective and reflects
> how each L3 cache instance is being used at that moment in time. It helps
> system administrator answer the question "how are the caches used at the moment"?
> "bit_usage" does so by presenting a summary of all allocations across all resource
> groups and any hardware allocations that may exist. This file helps user space
> to understand how the cache is being used without needing to correlate the CBMs
> of all resource groups and IO allocations. For example, "bit_usage" is to be used
> by system administrator to ensure cache is used optimally (for example, there are
> no unused portions). Also, a user may be investigating a performance issue in
> a particular resource group and "bit_usage" will help with that to see if
> the tasks in that resource group may be competing with IO.
>

Ok, "bit_usage" is a summary across all the groups. That is a good point.

Thanks for the detailed explanation. Will make those changes in next revision.

Thank you.
Babu
> The only aspect of "closids" that has been exposed to user space thus far
> is the "num_closids" and in user documentation a CLOSid has been linked to the
> number of control groups. That is the only constraint we need to think about
> here. I have repeatedly asked for IO alloc connection with CLOSIDs to not be exposed
> to user space (yet user documentation and messages to user space keeps doing so
> in this series). Support for IO alloc in this way is unique to AMD. We do not want
> resctrl to be constrained like this if another architecture needs to support
> some form of IO alloc and does so in a different way.

This isn't unique to AMD. Intel also ties CLOSid to control features associated with
I/O (likewise with RMIDs for monitoring).

See the Intel RDT architecture specification[1] chapter 4.4:

" Non-CPU agent RDT uses the RMID and CLOS tags in the same way that they are used for CPU agents."

-Tony

[1] https://cdrdv2.intel.com/v1/dl/getContent/789566
Hi Tony,

On 5/5/25 10:01 AM, Luck, Tony wrote:
>> The only aspect of "closids" that has been exposed to user space thus far
>> is the "num_closids" and in user documentation a CLOSid has been linked to the
>> number of control groups. That is the only constraint we need to think about
>> here. I have repeatedly asked for IO alloc connection with CLOSIDs to not be exposed
>> to user space (yet user documentation and messages to user space keeps doing so
>> in this series). Support for IO alloc in this way is unique to AMD. We do not want
>> resctrl to be constrained like this if another architecture needs to support
>> some form of IO alloc and does so in a different way.
>
> This isn't unique to AMD. Intel also ties CLOSid to control features associated with
> I/O (likewise with RMIDs for monitoring).
>
> See the Intel RDT architecture specification[1] chapter 4.4:
>
> " Non-CPU agent RDT uses the RMID and CLOS tags in the same way that they are used for CPU agents."

As I understand AMD uses a single specific (the highest CLOSid supported by L3)
CLOS that is then reserved for IO allocation. While both Intel and AMD technically
"uses CLOSid", it is done differently, no?

Specifically, is this documentation introduced in patch #5 accurate for Intel?
+ The feature routes the I/O traffic via specific CLOSID reserved
+ for io_alloc feature. By configuring the CBM (Capacity Bit Mask)
+ for the CLOSID, users can control the L3 portions available for
+ I/0 traffic. The reserved CLOSID will be excluded for group creation.

Reinette
> > " Non-CPU agent RDT uses the RMID and CLOS tags in the same way that they are used for CPU agents." > > As I understand AMD uses a single specific (the highest CLOSid supported by L3) > CLOS that is then reserved for IO allocation. While both Intel and AMD technically > "uses CLOSid", it is done differently, no? > > Specifically, is this documentation introduced in patch #5 accurate for Intel? > + The feature routes the I/O traffic via specific CLOSID reserved > + for io_alloc feature. By configuring the CBM (Capacity Bit Mask) > + for the CLOSID, users can control the L3 portions available for > + I/0 traffic. The reserved CLOSID will be excluded for group creation. No. Intel doesn't reserve a single CLOS. It allows to assign RMIDs and CLOSids for I/O monitoring and control. Different IDs can be assigned to different groups of devices (the "grouping" is dependent on h/w routing to devices, not assignable by the OS). I had some patches for this in my abandoned "resctrl2" implementation. No immediate plans to resurrect them since it became clear that the h/w implementation was model specific for just one generation. -Tony
Hi Tony,

On 5/5/25 10:27 AM, Luck, Tony wrote:
>>> " Non-CPU agent RDT uses the RMID and CLOS tags in the same way that they are used for CPU agents."
>>
>> As I understand AMD uses a single specific (the highest CLOSid supported by L3)
>> CLOS that is then reserved for IO allocation. While both Intel and AMD technically
>> "uses CLOSid", it is done differently, no?
>>
>> Specifically, is this documentation introduced in patch #5 accurate for Intel?
>> + The feature routes the I/O traffic via specific CLOSID reserved
>> + for io_alloc feature. By configuring the CBM (Capacity Bit Mask)
>> + for the CLOSID, users can control the L3 portions available for
>> + I/0 traffic. The reserved CLOSID will be excluded for group creation.
>
> No. Intel doesn't reserve a single CLOS. It allows to assign RMIDs and CLOSids
> for I/O monitoring and control. Different IDs can be assigned to different groups
> of devices (the "grouping" is dependent on h/w routing to devices, not
> assignable by the OS).

How does this work with CDP on Intel? Can CDP be enabled for CPU agents while the
"code" and "data" CLOSids be used for I/O control?

Reinette
>> No. Intel doesn't reserve a single CLOS. It allows to assign RMIDs and CLOSids
>> for I/O monitoring and control. Different IDs can be assigned to different groups
>> of devices (the "grouping" is dependent on h/w routing to devices, not
>> assignable by the OS).
>
> How does this work with CDP on Intel? Can CDP be enabled for CPU agents while the
> "code" and "data" CLOSids be used for I/O control?

Reinette,

Good question. I'll have to check with h/w folks.

-Tony