[PATCH] numa_conf: Properly check for caches in virDomainNumaDefValidate()

Michal Privoznik posted 1 patch 3 years, 8 months ago
Test syntax-check failed
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/libvirt tags/patchew/2207bf5e25921ad8120cbf32543164000977aabc.1597748139.git.mprivozn@redhat.com
src/conf/numa_conf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
[PATCH] numa_conf: Properly check for caches in virDomainNumaDefValidate()
Posted by Michal Privoznik 3 years, 8 months ago
When adding support for HMAT in f0611fe8830 I introduced a check
that validates /domain/cpu/numa/interconnects. As part of that,
there is a loop which checks whether every <latency/> with a
@cache attribute refers to an existing cache level. For
instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing L1 cache of node #0 from node #0
has latency of 5ns.

However, the check inside the loop was not written properly: it
always compared against the first cache of the target node and
never the rest. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration. The L1
cache under node #0 is still present.
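
To see that the fixed loop really finds L1 in the example above,
here is a tiny standalone reproduction (toy structs and names, not
libvirt's real types; only the ncaches/caches/level fields mirror
the actual code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef struct { unsigned int level; } Cache;
  typedef struct { size_t ncaches; Cache caches[2]; } MemNode;

  static bool
  cacheLevelExists(const MemNode *node, unsigned int wanted)
  {
      size_t j;

      for (j = 0; j < node->ncaches; j++) {
          /* The fix: inspect caches[j], not always caches[0]. */
          if (node->caches[j].level == wanted)
              return true;
      }
      return false;
  }

  int main(void)
  {
      /* Node #0 from the example above: L3 declared first, then L1. */
      MemNode node0 = { 2, { { 3 }, { 1 } } };

      printf("L1 present: %s\n", cacheLevelExists(&node0, 1) ? "yes" : "no");
      return 0;
  }

With the old code the equivalent of cacheLevelExists() only ever
looked at caches[0] (level 3 here), so the perfectly valid L1
reference was rejected.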

Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---

Is this trivial enough to be pushed as such? ;-)

 src/conf/numa_conf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 50d57ba8f6..9305e125b7 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1421,7 +1421,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
-- 
2.26.2

Re: [PATCH] numa_conf: Properly check for caches in virDomainNumaDefValidate()
Posted by Laine Stump 3 years, 8 months ago
On 8/18/20 6:55 AM, Michal Privoznik wrote:
> When adding support for HMAT in f0611fe8830 I introduced a check
> that validates /domain/cpu/numa/interconnects. As part of that,
> there is a loop which checks whether every <latency/> with a
> @cache attribute refers to an existing cache level. For
> instance:
>
>    <cpu mode='host-model' check='partial'>
>      <numa>
>        <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
>          <cache level='1' associativity='direct' policy='writeback'>
>            <size value='8' unit='KiB'/>
>            <line value='5' unit='B'/>
>          </cache>
>        </cell>
>        <interconnects>
>          <latency initiator='0' target='0' cache='1' type='access' value='5'/>
>          <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
>        </interconnects>
>      </numa>
>    </cpu>
>
> This XML defines that accessing L1 cache of node #0 from node #0
> has latency of 5ns.
>
> However, the check inside the loop was not written properly: it
> always compared against the first cache of the target node and
> never the rest. Therefore, the following example errors out:
>
>    <cpu mode='host-model' check='partial'>
>      <numa>
>        <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
>          <cache level='3' associativity='direct' policy='writeback'>
>            <size value='10' unit='KiB'/>
>            <line value='8' unit='B'/>
>          </cache>
>          <cache level='1' associativity='direct' policy='writeback'>
>            <size value='8' unit='KiB'/>
>            <line value='5' unit='B'/>
>          </cache>
>        </cell>
>        <interconnects>
>          <latency initiator='0' target='0' cache='1' type='access' value='5'/>
>          <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
>        </interconnects>
>      </numa>
>    </cpu>
>
> This errors out even though it is a valid configuration. The L1
> cache under node #0 is still present.
>
> Fixes: f0611fe8830
> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
> ---
>
> Is this trivial enough to be pushed as such? ;-)


It looks pretty trivial to me, but just in case you want it:


Reviewed-by: Laine Stump <laine@redhat.com>


>
>   src/conf/numa_conf.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
> index 50d57ba8f6..9305e125b7 100644
> --- a/src/conf/numa_conf.c
> +++ b/src/conf/numa_conf.c
> @@ -1421,7 +1421,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
>   
>           if (l->cache > 0) {
>               for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
> -                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
> +                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
>   
>                   if (l->cache == cache->level)
>                       break;