[libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)

Wang Huaqiang posted 10 patches 5 years, 9 months ago
git fetch https://github.com/patchew-project/libvirt tags/patchew/1531119658-18549-1-git-send-email-huaqiang.wang@intel.com
[libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Wang Huaqiang 5 years, 9 months ago
This is the V2 of the RFC and the POC source code for introducing the x86
RDT CMT feature. Thanks to Martin Kletzander for his review and
constructive suggestions on V1.

This series tries to provide functions similar to the perf-event-based
CMT, MBMT and MBML features, reporting cache occupancy, total memory
bandwidth utilization and local memory bandwidth utilization
information in libvirt. Firstly we focus on CMT.

x86 RDT Cache Monitoring Technology (CMT) provides a method to track
cache occupancy information per CPU thread. We are leveraging the
kernel resctrl filesystem implementation and creating our patches on
top of that.

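For reference, here is a minimal sketch of the kernel resctrl interface
these patches build on (paths follow the kernel resctrl documentation;
the mount step, group name and PID below are illustrative):
<pre>
# mount the resctrl filesystem (if not already mounted)
mount -t resctrl resctrl /sys/fs/resctrl
# create a monitor-only group and move a vcpu thread PID into it
mkdir /sys/fs/resctrl/mon_groups/vcpus_0-2
echo 12345 > /sys/fs/resctrl/mon_groups/vcpus_0-2/tasks
# read the L3 cache occupancy (in bytes) for cache id 0
cat /sys/fs/resctrl/mon_groups/vcpus_0-2/mon_data/mon_L3_00/llc_occupancy
</pre>
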
Describing the functionality at a high level:

1. Extend the output of 'domstats' and report CMT information.

Compared with the perf-event-based CMT implementation in libvirt, this
series extends the output of the 'domstats' command and reports cache
occupancy information like this:
<pre>
[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
Domain: 'vm3'
  cpu.cacheoccupancy.vcpus_2.value=4415488
  cpu.cacheoccupancy.vcpus_2.vcpus=2
  cpu.cacheoccupancy.vcpus_1.value=7839744
  cpu.cacheoccupancy.vcpus_1.vcpus=1
  cpu.cacheoccupancy.vcpus_0,3.value=53796864
  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
</pre>
The vcpus have been arranged into three monitoring groups, covering
vcpu 1, vcpu 2 and vcpus 0,3 respectively. For example,
'cpu.cacheoccupancy.vcpus_0,3.value' reports the cache occupancy
information for vcpu 0 and vcpu 3, while 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
represents the vcpu group information.

To address Martin's suggestion "beware as 1-4 is something else than 1,4 so
you need to differentiate that.", the content of 'vcpus'
(cpu.cacheoccupancy.<groupname>.vcpus=xxx) is specially processed: if
vcpus is a continuous range, e.g. 0-2, then the output of
cpu.cacheoccupancy.vcpus_0-2.vcpus will be
'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
instead of
'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
Please note that 'vcpus_0-2' is the name of this monitoring group; it could
be set to any other word in the XML configuration file, or changed at run
time with the command introduced in the following part.

2. A new command 'cpu-resource' for changing CMT groups at run time.

A virsh command has been introduced in this series to dynamically create
and destroy monitoring groups, as well as to show the existing grouping
status. The general command interface looks like this:
<pre>
[root@dl-c200 libvirt]# virsh help cpu-resource
  NAME
      cpu-resource - get or set hardware CPU RDT monitoring group

      SYNOPSIS
          cpu-resource <domain> [--group-name <string>] [--vcpulist
          <string>] [--create] [--destroy] [--live] [--config]
          [--current]

      DESCRIPTION
          Create or destroy CPU resource monitoring group.
          To get current CPU resource monitoring group status:
          virsh # cpu-resource [domain]

      OPTIONS
          [--domain] <string>  domain name, id or uuid
          --group-name <string>  group name to manipulate
          --vcpulist <string>  ids of vcpus to manipulate
          --create         Create CPU resctrl monitoring group for
          functions such as monitoring cache occupancy
          --destroy        Destroy CPU resctrl monitoring group
          --live           modify/get running state
          --config         modify/get persistent configuration
          --current        affect current domain
</pre>
This command provides a live interface for changing resource monitoring
groups and keeps the result in the persistent domain XML configuration file.

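For example, a session using this command might look like the following
(the domain and group names are illustrative):
<pre>
# create a monitoring group covering vcpus 0 and 3, live and persistent
virsh cpu-resource vm3 --group-name vcpus_0,3 --vcpulist 0,3 --create --live --config
# show the current grouping status
virsh cpu-resource vm3
# destroy the group in the running domain
virsh cpu-resource vm3 --group-name vcpus_0,3 --destroy --live
</pre>
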
3. XML configuration changes for keeping CMT groups.

To keep the monitoring group information and start monitoring CPU cache
utilization at launch time, the XML configuration has been extended with
a new element, <resmongroup>:
<pre>
# Add a new element
        <cputune>
          <resmongroup vcpus='0-2'/>
          <resmongroup vcpus='3'/>
        </cputune>
</pre>

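For context, here is a sketch of where the new element sits within a
complete domain definition (only the <resmongroup> lines are new; the
surrounding elements are a standard domain configuration):
<pre>
<domain type='kvm'>
  <name>vm3</name>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <resmongroup vcpus='0-2'/>
    <resmongroup vcpus='3'/>
  </cputune>
  ...
</domain>
</pre>
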
4. About the naming used in this series for RDT CMT technology.

Regarding the wording and naming used in this series for Intel RDT CMT
technology: 'RDT', 'CMT' and 'resctrl' are the names currently used in
Intel documents and in the kernel namespace in the context of CPU
resources, but they are pretty confusing for a system administrator.
'Resource Control' or 'Monitoring' is not a good choice either; the scope
of these two phrases is too big, normally covering lots of aspects other
than CPU cache and memory bandwidth. Intel RDT is a technology emphasizing
resource allocation and monitoring within the scope of the CPU, so I would
like to use the term 'cpu-resource' here to describe the technology these
patches are trying to address.
This series focuses on CPU cache occupancy monitoring (CMT), and this
naming has a wider scope than CMT; we could add similar resource
monitoring for the MBML and MBMT technologies under the framework
introduced in these patches. The naming is also applicable to CPU
resource allocation technology; it would be possible to add commands, or
arguments to existing ones, to allocate cache or memory bandwidth at run
time.

5. About CMT for emulator and I/O threads

Currently it is not possible to allocate a dedicated amount of cache or
memory bandwidth to emulator or I/O threads, so resource monitoring for
emulator and I/O threads is not considered in this series. It could be
planned for the next stage.

Changes since v1:
A lot of things changed, mainly:
* report cache occupancy information per vcpu group instead of for the
whole domain
* make it possible to destroy vcpu groups at run time
* XML configuration file changed
* naming for describing 'RDT CMT' changed to 'cpu-resource'


Wang Huaqiang (10):
  util: add Intel x86 RDT/CMT support
  conf: introduce <resmongroup> element
  tests: add tests for validating <resmongroup>
  libvirt: add public APIs for resource monitoring group
  qemu: enable resctrl monitoring at booting stage
  remote: add remote protocol for resctrl monitoring
  qemu: add interfaces for dynamically manipulating resctrl mon groups
  tool: add command cpuresource to interact with cpu resources
  tools: show cpu cache occupancy information in domstats
  news: add Intel x86 RDT CMT feature

 docs/formatdomain.html.in                          |  17 +
 docs/news.xml                                      |  10 +
 docs/schemas/domaincommon.rng                      |  14 +
 include/libvirt/libvirt-domain.h                   |  14 +
 src/conf/domain_conf.c                             | 320 ++++++++++++++++++
 src/conf/domain_conf.h                             |  25 ++
 src/driver-hypervisor.h                            |  13 +
 src/libvirt-domain.c                               |  96 ++++++
 src/libvirt_private.syms                           |  13 +
 src/libvirt_public.syms                            |   6 +
 src/qemu/qemu_driver.c                             | 357 +++++++++++++++++++++
 src/qemu/qemu_process.c                            |  45 ++-
 src/remote/remote_daemon_dispatch.c                |  45 +++
 src/remote/remote_driver.c                         |   4 +-
 src/remote/remote_protocol.x                       |  31 +-
 src/remote_protocol-structs                        |  16 +
 src/util/virresctrl.c                              | 338 +++++++++++++++++++
 src/util/virresctrl.h                              |  40 +++
 tests/genericxml2xmlindata/cachetune-cdp.xml       |   3 +
 tests/genericxml2xmlindata/cachetune-small.xml     |   2 +
 tests/genericxml2xmlindata/cachetune.xml           |   2 +
 .../resmongroup-colliding-cachetune.xml            |  34 ++
 tests/genericxml2xmltest.c                         |   3 +
 tools/virsh-domain-monitor.c                       |   7 +
 tools/virsh-domain.c                               | 139 ++++++++
 25 files changed, 1588 insertions(+), 6 deletions(-)
 create mode 100644 tests/genericxml2xmlindata/resmongroup-colliding-cachetune.xml

-- 
2.7.4

Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Huaqiang,Wang 5 years, 9 months ago
Hi,

Regarding the output of the CMT monitoring result, which will be
listed in the result of the 'domstats' command, I'd like to change it
by adding a 'cache block id' field indicating the cache occupancy of
each cache block. The CMT-related fields for every cache
monitoring group would then be (see the inlined update below for a real
example):
"
cpu.cacheoccupancy.<mon_group_name>.vcpus=<vcpu list>
cpu.cacheoccupancy.<mon_group_name>.<cache_block_id>.value=<cache occupancy in bytes>
"
I'd like to hear your opinions on this RFC.

On 2018-07-09 15:00, Wang Huaqiang wrote:
> This is the V2 of RFC and the POC source code for introducing x86 RDT CMT
> feature, thanks Martin Kletzander for his review and constructive
> suggestion for V1.
>
> This series is trying to provide the similar functions of the perf event
> based CMT, MBMT and MBML features in reporting cache occupancy, total
> memory bandwidth utilization and local memory bandwidth utilization
> information in libvirt. Firstly we focus on CMT.
>
> x86 RDT Cache Monitoring Technology (CMT) provides a method to track the
> cache occupancy information per CPU thread. We are leveraging the
> implementation of kernel resctrl filesystem and create our patches on top
> of that.
>
> Describing the functionality from a high level:
>
> 1. Extend the output of 'domstats' and report CMT information.
>
> Comparing with perf event based CMT implementation in libvirt, this series
> extends the output of command 'domstats' and reports cache
> information like these:
> <pre>
> [root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
> Domain: 'vm3'
>    cpu.cacheoccupancy.vcpus_2.value=4415488
>    cpu.cacheoccupancy.vcpus_2.vcpus=2
>    cpu.cacheoccupancy.vcpus_1.value=7839744
>    cpu.cacheoccupancy.vcpus_1.vcpus=1
>    cpu.cacheoccupancy.vcpus_0,3.value=53796864
>    cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
> </pre>
Since the kernel resctrl filesystem outputs cache occupancy information
for each cache block id, I'd like to adhere to its arrangement and dump
cache occupancy information for each cache block in the result of
'domstats'. The output of 'domstats' would then look like this:

<pre>
[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
Domain: 'vm3'
   cpu.cacheoccupancy.vcpus_2.vcpus=2
   cpu.cacheoccupancy.vcpus_2.1.value=27832
   cpu.cacheoccupancy.vcpus_2.0.value=372186
   cpu.cacheoccupancy.vcpus_1.vcpus=1
   cpu.cacheoccupancy.vcpus_1.1.value=0
   cpu.cacheoccupancy.vcpus_1.0.value=90112
   cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
   cpu.cacheoccupancy.vcpus_0,3.1.value=90112
   cpu.cacheoccupancy.vcpus_0,3.0.value=540672
</pre>

From the above message, it is known that there is a CPU CMT
resource monitoring group in the domain with the
group name 'vcpus_2'.
'cpu.cacheoccupancy.vcpus_2.vcpus=2' tells us this CPU
resource monitoring group contains one vcpu, vcpu 2.
'cpu.cacheoccupancy.vcpus_2.1.value=27832'
and 'cpu.cacheoccupancy.vcpus_2.0.value=372186' indicate
the cache occupancy information for cache block 1 and
cache block 0 respectively.
You can get similar information for the CPU monitoring
groups 'vcpus_1' and 'vcpus_0,3'.

> The vcpus have been arranged into three monitoring groups, these three
> groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take an example,
> the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the cache occupancy
> information for vcpu 0 and vcpu 3, the 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
> represents the vcpu group information.
>
> To address Martin's suggestion "beware as 1-4 is something else than 1,4 so
> you need to differentiate that.", the content of 'vcpus'
> (cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially processed, if
> vcpus is a continuous range, e.g. 0-2, then the output of
> cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
> 'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
> instead of
> 'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
> Please note that 'vcpus_0-2' is a name of this monitoring group, could be
> specified any other word from the XML configuration file or lively changed
> with the command introduced in following part.
>
> 2. A new command 'cpu-resource' for live changing CMT groups.
>
> A virsh tool has been introduced in this series to dynamically create,
> destroy monitoring groups as well as showing the existing grouping status.
> The general command interface is like this:
> <pre>
> [root@dl-c200 libvirt]# virsh help cpu-resource
>    NAME
>        cpu-resource - get or set hardware CPU RDT monitoring group
>
>        SYNOPSIS
>            cpu-resource <domain> [--group-name <string>] [--vcpulist
>            <string>] [--create] [--destroy] [--live] [--config]
>            [--current]
>
>        DESCRIPTION
>            Create or destroy CPU resource monitoring group.
>            To get current CPU resource monitoring group status:
>            virsh # cpu-resource [domain]
>
>        OPTIONS
>            [--domain] <string>  domain name, id or uuid
>            --group-name <string>  group name to manipulate
>            --vcpulist <string>  ids of vcpus to manipulate
>            --create         Create CPU resctrl monitoring group for
>            functions such as monitoring cache occupancy
>            --destroy        Destroy CPU resctrl monitoring group
>            --live           modify/get running state
>            --config         modify/get persistent configuration
>            --current        affect current domain
> </pre>
> This command provides live interface of changing resource monitoring group
> and keeping the result in persistent domain XML configuration file.
>
> 3. XML configuration changes for keeping CMT groups.
>
> To keep the monitoring group information and monitoring CPU cache resource
> utilization information at launch time, XML configuration file has been
> changed by adding a new element
> <resmongroup>:
> <pre>
> # Add a new element
>          <cputune>
>            <resmongroup vcpus='0-2'/>
>            <resmongroup vcpus='3'/>
>          </cputune>
> </pre>
>
> 4. About the naming used in this series for RDT CMT technology.
>
> About the wording and naming used in this series for Intel RDT CMT
> technology, 'RDT', 'CMT' and 'resctrl' are currently used names in Intel
> documents and kernel namespace in the context of CPU resource, but they
> are pretty confusing for system administrator. But 'Resource Control' or
> 'Monitoring' is a not good choice either, the scope of these two phrases
> are too big which normally cover lots of aspects other than CPU cache and
> memory bandwidth. Intel 'RDT' is technology emphasizing on the resource
> allocation and monitoring within the scope CPU, I would like to use the
> term 'cpu-resource' here to describe the technology that these patches' are
> trying to address.
> This series is focusing on CPU cache occupancy monitoring(CMT), and this
> naming seems has a wider scope than CMT, we could add the similar resource
> monitoring part for technologies of MBML and MBMT under the framework that
> introduced in these patches. This naming is also applicable to technology
> of CPU resource allocation, it is possible to add some command by adding
> some arguments to allocate cache or memory bandwidth at run time.
>
> 5. About emulator and io threads CMT
>
> Currently, it is not possible to allocate an dedicated amount of cache or
> memory bandwidth for emulator or io threads. so the resource monitoring for
> emulator or io threads is not considered in this series.
> Could be planned in next stage.
>
> Changes since v1:
> A lot of things changed, mainly
> * report cache occupancy information based on vcpu group instead of whole
> domain.
> * be possible to destroy vcpu group at run time
> * XML configuration file changed
> * change naming for describing 'RDT CMT' to 'cpu-resource'
>
>
> Wang Huaqiang (10):
>    util: add Intel x86 RDT/CMT support
>    conf: introduce <resmongroup> element
>    tests: add tests for validating <resmongroup>
>    libvirt: add public APIs for resource monitoring group
>    qemu: enable resctrl monitoring at booting stage
>    remote: add remote protocol for resctrl monitoring
>   qemu: add interfaces for dynamically manipulating resctrl mon groups
>    tool: add command cpuresource to interact with cpu resources
>    tools:  show cpu cache occupancy information in domstats
>    news: add Intel x86 RDT CMT feature
>
>   docs/formatdomain.html.in                          |  17 +
>   docs/news.xml                                      |  10 +
>   docs/schemas/domaincommon.rng                      |  14 +
>   include/libvirt/libvirt-domain.h                   |  14 +
>   src/conf/domain_conf.c                             | 320 ++++++++++++++++++
>   src/conf/domain_conf.h                             |  25 ++
>   src/driver-hypervisor.h                            |  13 +
>   src/libvirt-domain.c                               |  96 ++++++
>   src/libvirt_private.syms                           |  13 +
>   src/libvirt_public.syms                            |   6 +
>   src/qemu/qemu_driver.c                             | 357 +++++++++++++++++++++
>   src/qemu/qemu_process.c                            |  45 ++-
>   src/remote/remote_daemon_dispatch.c                |  45 +++
>   src/remote/remote_driver.c                         |   4 +-
>   src/remote/remote_protocol.x                       |  31 +-
>   src/remote_protocol-structs                        |  16 +
>   src/util/virresctrl.c                              | 338 +++++++++++++++++++
>   src/util/virresctrl.h                              |  40 +++
>   tests/genericxml2xmlindata/cachetune-cdp.xml       |   3 +
>   tests/genericxml2xmlindata/cachetune-small.xml     |   2 +
>   tests/genericxml2xmlindata/cachetune.xml           |   2 +
>   .../resmongroup-colliding-cachetune.xml            |  34 ++
>   tests/genericxml2xmltest.c                         |   3 +
>   tools/virsh-domain-monitor.c                       |   7 +
>   tools/virsh-domain.c                               | 139 ++++++++
>   25 files changed, 1588 insertions(+), 6 deletions(-)
>   create mode 100644 tests/genericxml2xmlindata/resmongroup-colliding-cachetune.xml
>

Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Martin Kletzander 5 years, 9 months ago
On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
>
>This is the V2 of RFC and the POC source code for introducing x86 RDT CMT
>feature, thanks Martin Kletzander for his review and constructive
>suggestion for V1.
>
>This series is trying to provide the similar functions of the perf event
>based CMT, MBMT and MBML features in reporting cache occupancy, total
>memory bandwidth utilization and local memory bandwidth utilization
>information in libvirt. Firstly we focus on CMT.
>
>x86 RDT Cache Monitoring Technology (CMT) provides a method to track the
>cache occupancy information per CPU thread. We are leveraging the
>implementation of kernel resctrl filesystem and create our patches on top
>of that.
>
>Describing the functionality from a high level:
>
>1. Extend the output of 'domstats' and report CMT information.
>
>Comparing with perf event based CMT implementation in libvirt, this series
>extends the output of command 'domstats' and reports cache
>information like these:
><pre>
>[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
>Domain: 'vm3'
>  cpu.cacheoccupancy.vcpus_2.value=4415488
>  cpu.cacheoccupancy.vcpus_2.vcpus=2
>  cpu.cacheoccupancy.vcpus_1.value=7839744
>  cpu.cacheoccupancy.vcpus_1.vcpus=1
>  cpu.cacheoccupancy.vcpus_0,3.value=53796864
>  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
></pre>
>The vcpus have been arranged into three monitoring groups, these three
>groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take an example,
>the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the cache occupancy
>information for vcpu 0 and vcpu 3, the 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
>represents the vcpu group information.
>
>To address Martin's suggestion "beware as 1-4 is something else than 1,4 so
>you need to differentiate that.", the content of 'vcpus'
>(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially processed, if
>vcpus is a continuous range, e.g. 0-2, then the output of
>cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
>'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
>instead of
>'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
>Please note that 'vcpus_0-2' is a name of this monitoring group, could be
>specified any other word from the XML configuration file or lively changed
>with the command introduced in following part.
>

One small nit regarding the naming (but it shouldn't block any reviewers from
reviewing, just keep this in mind for the next version, for example): this is
still inconsistent.  The way domstats are structured when there is something
like an array could shed some light on this.  What you suggested is really
kind of hard to parse (although it looks better).  What would you say to
something like this:

  cpu.cacheoccupancy.count = 3
  cpu.cacheoccupancy.0.value=4415488
  cpu.cacheoccupancy.0.vcpus=2
  cpu.cacheoccupancy.0.name=vcpus_2
  cpu.cacheoccupancy.1.value=7839744
  cpu.cacheoccupancy.1.vcpus=1
  cpu.cacheoccupancy.1.name=vcpus_1
  cpu.cacheoccupancy.2.value=53796864
  cpu.cacheoccupancy.2.vcpus=0,3
  cpu.cacheoccupancy.2.name=0,3

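With a count field like that, a client can walk the array without knowing
the group names in advance, e.g. (a rough shell sketch against the proposed
field names; 'vm3' is just an example):

  stats=$(virsh domstats vm3 --cpu-resource)
  count=$(echo "$stats" | sed -n 's/^ *cpu\.cacheoccupancy\.count *= *//p')
  for i in $(seq 0 $((count - 1))); do
      # pick out the name, vcpus and value fields of group $i
      echo "$stats" | grep "cpu\.cacheoccupancy\.$i\."
  done
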
Other than that I didn't go through all the patches now, sorry.
Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Wang, Huaqiang 5 years, 9 months ago
Hi Martin,

Thanks for your comments. Please see my reply inline.

> -----Original Message-----
> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> Sent: Tuesday, July 17, 2018 2:27 PM
> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, 
> Bing <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; 
> Zang, Rui <rui.zang@intel.com>
> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring 
> Technology (CMT)
> 
> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
> >
> >This is the V2 of RFC and the POC source code for introducing x86 RDT 
> >CMT feature, thanks Martin Kletzander for his review and constructive 
> >suggestion for V1.
> >
> >This series is trying to provide the similar functions of the perf 
> >event based CMT, MBMT and MBML features in reporting cache occupancy, 
> >total memory bandwidth utilization and local memory bandwidth 
> >utilization information in libvirt. Firstly we focus on CMT.
> >
> >x86 RDT Cache Monitoring Technology (CMT) provides a method to track
> >the cache occupancy information per CPU thread. We are leveraging the 
> >implementation of kernel resctrl filesystem and create our patches on 
> >top of that.
> >
> >Describing the functionality from a high level:
> >
> >1. Extend the output of 'domstats' and report CMT information.
> >
> >Comparing with perf event based CMT implementation in libvirt, this 
> >series extends the output of command 'domstats' and reports cache
> >occupancy information like these:
> ><pre>
> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
> >Domain: 'vm3'
> >  cpu.cacheoccupancy.vcpus_2.value=4415488
> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
> >  cpu.cacheoccupancy.vcpus_1.value=7839744
> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
> ></pre>
> >The vcpus have been arranged into three monitoring groups, these
> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take an 
> >example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the cache 
> >occupancy information for vcpu 0 and vcpu 3, the
> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
> >represents the vcpu group information.
> >
> >To address Martin's suggestion "beware as 1-4 is something else than
> >1,4 so you need to differentiate that.", the content of 'vcpus'
> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially 
> >processed, if vcpus is a continuous range, e.g. 0-2, then the output
> >of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like 
> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
> >instead of
> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
> >Please note that 'vcpus_0-2' is a name of this monitoring group, 
> >could be specified any other word from the XML configuration file or 
> >lively changed with the command introduced in following part.
> >
> 
> One small nit according to the naming (but it shouldn't block any 
> reviewers from reviewing, just keep this in mind for next version for 
> example) is that this is still inconsistent.

OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid using
'RDT', 'CMT'.

> The way domstats are structured when there is something like an
> array could shed some light into this.  What you suggested is really 
> kind of hard to parse (although looks better).  What would you say to something like this:
> 
>   cpu.cacheoccupancy.count = 3
>   cpu.cacheoccupancy.0.value=4415488
>   cpu.cacheoccupancy.0.vcpus=2
>   cpu.cacheoccupancy.0.name=vcpus_2
>   cpu.cacheoccupancy.1.value=7839744
>   cpu.cacheoccupancy.1.vcpus=1
>   cpu.cacheoccupancy.1.name=vcpus_1
>   cpu.cacheoccupancy.2.value=53796864
>   cpu.cacheoccupancy.2.vcpus=0,3
>   cpu.cacheoccupancy.2.name=0,3
> 

Your arrangement looks more reasonable, thanks for your advice. 
However, as I mentioned in another email that I sent to libvirt-list 
hours ago, the kernel resctrl interface provides cache occupancy
information for each cache block for every resource group.
Maybe we need to expose the cache occupancy for each cache block.
If you agree, we need to refine the 'domstats' output message,
how about this:

  cpu.cacheoccupancy.count=3
  cpu.cacheoccupancy.0.name=vcpus_2
  cpu.cacheoccupancy.0.vcpus=2
  cpu.cacheoccupancy.0.block.count=2
  cpu.cacheoccupancy.0.block.0.bytes=5488
  cpu.cacheoccupancy.0.block.1.bytes=4410000
  cpu.cacheoccupancy.1.name=vcpus_1
  cpu.cacheoccupancy.1.vcpus=1
  cpu.cacheoccupancy.1.block.count=2
  cpu.cacheoccupancy.1.block.0.bytes=7839744
  cpu.cacheoccupancy.1.block.1.bytes=0
  cpu.cacheoccupancy.2.name=0,3
  cpu.cacheoccupancy.2.vcpus=0,3
  cpu.cacheoccupancy.2.block.count=2
  cpu.cacheoccupancy.2.block.0.bytes=53796864
  cpu.cacheoccupancy.2.block.1.bytes=0

> Other than that I didn't go through all the patches now, sorry.

Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Martin Kletzander 5 years, 9 months ago
On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
>Hi Martin,
>
>Thanks for your comments. Please see my reply inline.
>
>> -----Original Message-----
>> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> Sent: Tuesday, July 17, 2018 2:27 PM
>> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu,
>> Bing <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>;
>> Zang, Rui <rui.zang@intel.com>
>> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
>> Technology (CMT)
>>
>> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
>> >
>> >This is the V2 of RFC and the POC source code for introducing x86 RDT
>> >CMT feature, thanks Martin Kletzander for his review and constructive
>> >suggestion for V1.
>> >
>> >This series is trying to provide the similar functions of the perf
>> >event based CMT, MBMT and MBML features in reporting cache occupancy,
>> >total memory bandwidth utilization and local memory bandwidth
>> >utilization information in libvirt. Firstly we focus on CMT.
>> >
>> >x86 RDT Cache Monitoring Technology (CMT) provides a method to track
>> >the cache occupancy information per CPU thread. We are leveraging the
>> >implementation of kernel resctrl filesystem and create our patches on
>> >top of that.
>> >
>> >Describing the functionality from a high level:
>> >
>> >1. Extend the output of 'domstats' and report CMT information.
>> >
>> >Comparing with perf event based CMT implementation in libvirt, this
>> >series extends the output of command 'domstats' and reports cache
>> >occupancy information like these:
>> ><pre>
>> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
>> >Domain: 'vm3'
>> >  cpu.cacheoccupancy.vcpus_2.value=4415488
>> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
>> >  cpu.cacheoccupancy.vcpus_1.value=7839744
>> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
>> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
>> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
>> ></pre>
>> >The vcpus have been arranged into three monitoring groups, these
>> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take an
>> >example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the cache
>> >occupancy information for vcpu 0 and vcpu 3, the
>> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
>> >represents the vcpu group information.
>> >
>> >To address Martin's suggestion "beware as 1-4 is something else than
>> >1,4 so you need to differentiate that.", the content of 'vcpus'
>> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
>> >processed, if vcpus is a continuous range, e.g. 0-2, then the output
>> >of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
>> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
>> >instead of
>> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
>> >Please note that 'vcpus_0-2' is a name of this monitoring group,
>> >could be specified any other word from the XML configuration file or
>> >lively changed with the command introduced in following part.
>> >
>>
>> One small nit according to the naming (but it shouldn't block any
>> reviewers from reviewing, just keep this in mind for next version for
>> example) is that this is still inconsistent.
>
>OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid using
>'RDT', 'CMT'.
>

Oh, you misunderstood, I meant the naming in the domstats output =)

>> The way domstats are structured when there is something like an
>> array could shed some light into this.  What you suggested is really
>> kind of hard to parse (although looks better).  What would you say to something like this:
>>
>>   cpu.cacheoccupancy.count = 3
>>   cpu.cacheoccupancy.0.value=4415488
>>   cpu.cacheoccupancy.0.vcpus=2
>>   cpu.cacheoccupancy.0.name=vcpus_2
>>   cpu.cacheoccupancy.1.value=7839744
>>   cpu.cacheoccupancy.1.vcpus=1
>>   cpu.cacheoccupancy.1.name=vcpus_1
>>   cpu.cacheoccupancy.2.value=53796864
>>   cpu.cacheoccupancy.2.vcpus=0,3
>>   cpu.cacheoccupancy.2.name=0,3
>>
>
>Your arrangement looks more reasonable, thanks for your advice.
>However, as I mentioned in another email that I sent to libvirt-list
>hours ago, the kernel resctrl interface provides cache occupancy
>information for each cache block for every resource group.
>Maybe we need to expose the cache occupancy for each cache block.
>If you agree, we need to refine the 'domstats' output message,
>how about this:
>
>  cpu.cacheoccupancy.count=3
>  cpu.cacheoccupancy.0.name=vcpus_2
>  cpu.cacheoccupancy.0.vcpus=2
>  cpu.cacheoccupancy.0.block.count=2
>  cpu.cacheoccupancy.0.block.0.bytes=5488
>  cpu.cacheoccupancy.0.block.1.bytes=4410000
>  cpu.cacheoccupancy.1.name=vcpus_1
>  cpu.cacheoccupancy.1.vcpus=1
>  cpu.cacheoccupancy.1.block.count=2
>  cpu.cacheoccupancy.1.block.0.bytes=7839744
>  cpu.cacheoccupancy.1.block.1.bytes=0
>  cpu.cacheoccupancy.2.name=0,3
>  cpu.cacheoccupancy.2.vcpus=0,3
>  cpu.cacheoccupancy.2.block.count=2
>  cpu.cacheoccupancy.2.block.0.bytes=53796864
>  cpu.cacheoccupancy.2.block.1.bytes=0
>

What do you mean by cache block?  Is that (cache_size / granularity)?  In that
case it looks fine, I guess (without putting too much thought into it).

Martin
Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Wang, Huaqiang 5 years, 9 months ago

> -----Original Message-----
> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> Sent: Tuesday, July 17, 2018 5:11 PM
> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, Bing
> <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
> <rui.zang@intel.com>
> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> Technology (CMT)
> 
> On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
> >Hi Martin,
> >
> >Thanks for your comments. Please see my reply inline.
> >
> >> -----Original Message-----
> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> Sent: Tuesday, July 17, 2018 2:27 PM
> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> >> Technology (CMT)
> >>
> >> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
> >> >
> >> >This is the V2 of RFC and the POC source code for introducing x86
> >> >RDT CMT feature, thanks Martin Kletzander for his review and
> >> >constructive suggestion for V1.
> >> >
> >> >This series is trying to provide the similar functions of the perf
> >> >event based CMT, MBMT and MBML features in reporting cache
> >> >occupancy, total memory bandwidth utilization and local memory
> >> >bandwidth utilization information in libvirt. Firstly we focus on CMT.
> >> >
> >> >x86 RDT Cache Monitoring Technology (CMT) provides a method to
> >> >track the cache occupancy information per CPU thread. We are
> >> >leveraging the implementation of kernel resctrl filesystem and
> >> >create our patches on top of that.
> >> >
> >> >Describing the functionality from a high level:
> >> >
> >> >1. Extend the output of 'domstats' and report CMT information.
> >> >
> >> >Comparing with perf event based CMT implementation in libvirt, this
> >> >series extends the output of command 'domstats' and reports cache
> >> >occupancy information like these:
> >> ><pre>
> >> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
> >> >Domain: 'vm3'
> >> >  cpu.cacheoccupancy.vcpus_2.value=4415488
> >> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
> >> >  cpu.cacheoccupancy.vcpus_1.value=7839744
> >> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
> >> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
> >> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
> >> ></pre>
> >> >The vcpus have been arranged into three monitoring groups, these
> >> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take
> >> >an example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the
> >> >cache occupancy information for vcpu 0 and vcpu 3, the
> >> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
> >> >represents the vcpu group information.
> >> >
> >> >To address Martin's suggestion "beware as 1-4 is something else than
> >> >1,4 so you need to differentiate that.", the content of 'vcpus'
> >> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
> >> >processed, if vcpus is a continuous range, e.g. 0-2, then the output
> >> >of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
> >> >instead of
> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
> >> >Please note that 'vcpus_0-2' is a name of this monitoring group,
> >> >could be specified any other word from the XML configuration file or
> >> >lively changed with the command introduced in following part.
> >> >
> >>
> >> One small nit according to the naming (but it shouldn't block any
> >> reviewers from reviewing, just keep this in mind for next version for
> >> example) is that this is still inconsistent.
> >
> >OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid
> >using 'RDT', 'CMT'.
> >
> 
> Oh, you misunderstood, I meant the naming in the domstats output =)
> 
> >> The way domstats are structured when there is something like an
> >> array could shed some light into this.  What you suggested is really
> >> kind of hard to parse (although looks better).  What would you say to
> something like this:
> >>
> >>   cpu.cacheoccupancy.count = 3
> >>   cpu.cacheoccupancy.0.value=4415488
> >>   cpu.cacheoccupancy.0.vcpus=2
> >>   cpu.cacheoccupancy.0.name=vcpus_2
> >>   cpu.cacheoccupancy.1.value=7839744
> >>   cpu.cacheoccupancy.1.vcpus=1
> >>   cpu.cacheoccupancy.1.name=vcpus_1
> >>   cpu.cacheoccupancy.2.value=53796864
> >>   cpu.cacheoccupancy.2.vcpus=0,3
> >>   cpu.cacheoccupancy.2.name=0,3
> >>
> >
> >Your arrangement looks more reasonable, thanks for your advice.
> >However, as I mentioned in another email that I sent to libvirt-list
> >hours ago, the kernel resctrl interface provides cache occupancy
> >information for each cache block for every resource group.
> >Maybe we need to expose the cache occupancy for each cache block.
> >If you agree, we need to refine the 'domstats' output message, how
> >about this:
> >
> >  cpu.cacheoccupancy.count=3
> >  cpu.cacheoccupancy.0.name=vcpus_2
> >  cpu.cacheoccupancy.0.vcpus=2
> >  cpu.cacheoccupancy.0.block.count=2
> >  cpu.cacheoccupancy.0.block.0.bytes=5488
> >  cpu.cacheoccupancy.0.block.1.bytes=4410000
> >  cpu.cacheoccupancy.1.name=vcpus_1
> >  cpu.cacheoccupancy.1.vcpus=1
> >  cpu.cacheoccupancy.1.block.count=2
> >  cpu.cacheoccupancy.1.block.0.bytes=7839744
> >  cpu.cacheoccupancy.1.block.1.bytes=0
> >  cpu.cacheoccupancy.2.name=0,3
> >  cpu.cacheoccupancy.2.vcpus=0,3
> >  cpu.cacheoccupancy.2.block.count=2
> >  cpu.cacheoccupancy.2.block.0.bytes=53796864
> >  cpu.cacheoccupancy.2.block.1.bytes=0
> >
> 
> What do you mean by cache block?  Is that (cache_size / granularity)?  In that
> case it looks fine, I guess (without putting too much thought into it).

No. The 'cache block' I mean is indexed by 'cache id', with the id number
kept in '/sys/devices/system/cpu/cpu*/cache/index*/id'.

Generally, for a two-socket server node there are two sockets (with CPU
E5-2680 v4, for example) in the system, and each socket has an L3 cache.
If a resctrl monitoring group is created (/sys/fs/resctrl/p0, for example),
you can find the cache occupancy information for these two L3 cache
areas separately in the file
/sys/fs/resctrl/p0/mon_data/mon_L3_00/llc_occupancy
and the file
/sys/fs/resctrl/p0/mon_data/mon_L3_01/llc_occupancy
Cache information for an individual socket is meaningful for detecting
performance issues such as workload imbalance, etc. We'd better expose
these details to libvirt users.
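
For instance, the per-socket figures for the example group 'p0' above can
be read with a small loop (one llc_occupancy file per cache id):

  for f in /sys/fs/resctrl/p0/mon_data/mon_L3_*/llc_occupancy; do
      echo "$f: $(cat $f) bytes"   # occupancy in bytes
  done
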
To recap, I am using 'cache block' to describe the CPU cache indexed by
the number found in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
I welcome suggestions on other names for it.

> 
> Martin

Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Martin Kletzander 5 years, 9 months ago
On Wed, Jul 18, 2018 at 02:29:32AM +0000, Wang, Huaqiang wrote:
>
>
>> -----Original Message-----
>> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> Sent: Tuesday, July 17, 2018 5:11 PM
>> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, Bing
>> <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
>> <rui.zang@intel.com>
>> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
>> Technology (CMT)
>>
>> On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
>> >Hi Martin,
>> >
>> >Thanks for your comments. Please see my reply inline.
>> >
>> >> -----Original Message-----
>> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> >> Sent: Tuesday, July 17, 2018 2:27 PM
>> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
>> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
>> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
>> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
>> >> Technology (CMT)
>> >>
>> >> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
>> >> >
>> >> >This is the V2 of RFC and the POC source code for introducing x86
>> >> >RDT CMT feature, thanks Martin Kletzander for his review and
>> >> >constructive suggestion for V1.
>> >> >
>> >> >This series is trying to provide the similar functions of the perf
>> >> >event based CMT, MBMT and MBML features in reporting cache
>> >> >occupancy, total memory bandwidth utilization and local memory
>> >> >bandwidth utilization information in libvirt. Firstly we focus on CMT.
>> >> >
>> >> >x86 RDT Cache Monitoring Technology (CMT) provides a method to
>> >> >track the cache occupancy information per CPU thread. We are
>> >> >leveraging the implementation of kernel resctrl filesystem and
>> >> >create our patches on top of that.
>> >> >
>> >> >Describing the functionality from a high level:
>> >> >
>> >> >1. Extend the output of 'domstats' and report CMT information.
>> >> >
>> >> >Comparing with perf event based CMT implementation in libvirt, this
>> >> >series extends the output of command 'domstats' and reports cache
>> >> >occupancy information like these:
>> >> ><pre>
>> >> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
>> >> >Domain: 'vm3'
>> >> >  cpu.cacheoccupancy.vcpus_2.value=4415488
>> >> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
>> >> >  cpu.cacheoccupancy.vcpus_1.value=7839744
>> >> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
>> >> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
>> >> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
>> >> ></pre>
>> >> >The vcpus have been arranged into three monitoring groups, these
>> >> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively. Take
>> >> >an example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports the
>> >> >cache occupancy information for vcpu 0 and vcpu 3, the
>> >> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
>> >> >represents the vcpu group information.
>> >> >
>> >> >To address Martin's suggestion "beware as 1-4 is something else than
>> >> >1,4 so you need to differentiate that.", the content of 'vcpus'
>> >> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
>> >> >processed, if vcpus is a continuous range, e.g. 0-2, then the output
>> >> >of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
>> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
>> >> >instead of
>> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
>> >> >Please note that 'vcpus_0-2' is a name of this monitoring group,
>> >> >could be specified any other word from the XML configuration file or
>> >> >lively changed with the command introduced in following part.
>> >> >
>> >>
>> >> One small nit according to the naming (but it shouldn't block any
>> >> reviewers from reviewing, just keep this in mind for next version for
>> >> example) is that this is still inconsistent.
>> >
>> >OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid
>> >using 'RDT', 'CMT'.
>> >
>>
>> Oh, you misunderstood, I meant the naming in the domstats output =)
>>
>> >> The way domstats are structured when there is something like an
>> >> array could shed some light into this.  What you suggested is really
>> >> kind of hard to parse (although looks better).  What would you say to
>> something like this:
>> >>
>> >>   cpu.cacheoccupancy.count = 3
>> >>   cpu.cacheoccupancy.0.value=4415488
>> >>   cpu.cacheoccupancy.0.vcpus=2
>> >>   cpu.cacheoccupancy.0.name=vcpus_2
>> >>   cpu.cacheoccupancy.1.value=7839744
>> >>   cpu.cacheoccupancy.1.vcpus=1
>> >>   cpu.cacheoccupancy.1.name=vcpus_1
>> >>   cpu.cacheoccupancy.2.value=53796864
>> >>   cpu.cacheoccupancy.2.vcpus=0,3
>> >>   cpu.cacheoccupancy.2.name=0,3
>> >>
>> >
>> >Your arrangement looks more reasonable, thanks for your advice.
>> >However, as I mentioned in another email that I sent to libvirt-list
>> >hours ago, the kernel resctrl interface provides cache occupancy
>> >information for each cache block for every resource group.
>> >Maybe we need to expose the cache occupancy for each cache block.
>> >If you agree, we need to refine the 'domstats' output message, how
>> >about this:
>> >
>> >  cpu.cacheoccupancy.count=3
>> >  cpu.cacheoccupancy.0.name=vcpus_2
>> >  cpu.cacheoccupancy.0.vcpus=2
>> >  cpu.cacheoccupancy.0.block.count=2
>> >  cpu.cacheoccupancy.0.block.0.bytes=5488
>> >  cpu.cacheoccupancy.0.block.1.bytes=4410000
>> >  cpu.cacheoccupancy.1.name=vcpus_1
>> >  cpu.cacheoccupancy.1.vcpus=1
>> >  cpu.cacheoccupancy.1.block.count=2
>> >  cpu.cacheoccupancy.1.block.0.bytes=7839744
>> >  cpu.cacheoccupancy.1.block.1.bytes=0
>> >  cpu.cacheoccupancy.2.name=0,3
>> >  cpu.cacheoccupancy.2.vcpus=0,3
>> >  cpu.cacheoccupancy.2.block.count=2
>> >  cpu.cacheoccupancy.2.block.0.bytes=53796864
>> >  cpu.cacheoccupancy.2.block.1.bytes=0
>> >
>>
>> What do you mean by cache block?  Is that (cache_size / granularity)?  In that
>> case it looks fine, I guess (without putting too much thought into it).
>
>No. 'cache block' that I mean is indexed with 'cache id', with the id number
>kept in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
>
>Generally for a two socket server  node, there are two sockets (with CPU
>E5-2680 v4, for example) in system, and each socket has a L3 cache,
>if resctrl monitoring group is created (/sys/fs/resctrl/p0, for example),
>you can find the cache occupancy information for these two L3 cache
>areas separately from file
>/sys/fs/resctrl/p0/mon_data/mon_L3_00/llc_occupancy
>and file
>/sys/fs/resctrl/p0/mon_data/mon_L3_01/llc_occupancy
>Cache information for individual socket is meaningful to detect performance
>issues such as workload balancing...etc. We'd better expose these details to
>libvirt users.
>To my knowledge, I am using 'cache block' to describe the CPU cache
>indexed with number found in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
>I welcome suggestion on other kind of naming for it.
>

To be consistent I'd prefer "cache", "cache bank" and "index" or "id".  I don't
have specific requirements, I just don't want to invent new words.  Look at how
it is described in capabilities, for example.

>>
>> Martin
Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Wang, Huaqiang 5 years, 9 months ago

> -----Original Message-----
> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> Sent: Wednesday, July 18, 2018 8:07 PM
> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, Bing
> <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
> <rui.zang@intel.com>
> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> Technology (CMT)
> 
> On Wed, Jul 18, 2018 at 02:29:32AM +0000, Wang, Huaqiang wrote:
> >
> >
> >> -----Original Message-----
> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> Sent: Tuesday, July 17, 2018 5:11 PM
> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> >> Technology (CMT)
> >>
> >> On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
> >> >Hi Martin,
> >> >
> >> >Thanks for your comments. Please see my reply inline.
> >> >
> >> >> -----Original Message-----
> >> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> >> Sent: Tuesday, July 17, 2018 2:27 PM
> >> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
> >> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
> >> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
> >> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache
> >> >> Monitoring Technology (CMT)
> >> >>
> >> >> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
> >> >> >
> >> >> >This is the V2 of RFC and the POC source code for introducing x86
> >> >> >RDT CMT feature, thanks Martin Kletzander for his review and
> >> >> >constructive suggestion for V1.
> >> >> >
> >> >> >This series is trying to provide the similar functions of the
> >> >> >perf event based CMT, MBMT and MBML features in reporting cache
> >> >> >occupancy, total memory bandwidth utilization and local memory
> >> >> >bandwidth utilization information in libvirt. Firstly we focus on CMT.
> >> >> >
> >> >> >x86 RDT Cache Monitoring Technology (CMT) provides a method to
> >> >> >track the cache occupancy information per CPU thread. We are
> >> >> >leveraging the implementation of kernel resctrl filesystem and
> >> >> >create our patches on top of that.
> >> >> >
> >> >> >Describing the functionality from a high level:
> >> >> >
> >> >> >1. Extend the output of 'domstats' and report CMT information.
> >> >> >
> >> >> >Comparing with perf event based CMT implementation in libvirt,
> >> >> >this series extends the output of command 'domstats' and reports
> >> >> >cache occupancy information like these:
> >> >> ><pre>
> >> >> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
> >> >> >Domain: 'vm3'
> >> >> >  cpu.cacheoccupancy.vcpus_2.value=4415488
> >> >> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
> >> >> >  cpu.cacheoccupancy.vcpus_1.value=7839744
> >> >> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
> >> >> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
> >> >> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
> >> >> ></pre>
> >> >> >The vcpus have been arranged into three monitoring groups, these
> >> >> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively.
> >> >> >Take an example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports
> >> >> >the cache occupancy information for vcpu 0 and vcpu 3, the
> >> >> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
> >> >> >represents the vcpu group information.
> >> >> >
> >> >> >To address Martin's suggestion "beware as 1-4 is something else
> >> >> >than
> >> >> >1,4 so you need to differentiate that.", the content of 'vcpus'
> >> >> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
> >> >> >processed, if vcpus is a continuous range, e.g. 0-2, then the
> >> >> >output of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
> >> >> >instead of
> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
> >> >> >Please note that 'vcpus_0-2' is a name of this monitoring group,
> >> >> >could be specified any other word from the XML configuration file
> >> >> >or lively changed with the command introduced in following part.
> >> >> >
> >> >>
> >> >> One small nit according to the naming (but it shouldn't block any
> >> >> reviewers from reviewing, just keep this in mind for next version
> >> >> for
> >> >> example) is that this is still inconsistent.
> >> >
> >> >OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid
> >> >using 'RDT', 'CMT'.
> >> >
> >>
> >> Oh, you misunderstood, I meant the naming in the domstats output =)
> >>
> >> >> The way domstats are structured when there is something like an
> >> >> array could shed some light into this.  What you suggested is
> >> >> really kind of hard to parse (although looks better).  What would
> >> >> you say to
> >> something like this:
> >> >>
> >> >>   cpu.cacheoccupancy.count = 3
> >> >>   cpu.cacheoccupancy.0.value=4415488
> >> >>   cpu.cacheoccupancy.0.vcpus=2
> >> >>   cpu.cacheoccupancy.0.name=vcpus_2
> >> >>   cpu.cacheoccupancy.1.value=7839744
> >> >>   cpu.cacheoccupancy.1.vcpus=1
> >> >>   cpu.cacheoccupancy.1.name=vcpus_1
> >> >>   cpu.cacheoccupancy.2.value=53796864
> >> >>   cpu.cacheoccupancy.2.vcpus=0,3
> >> >>   cpu.cacheoccupancy.2.name=0,3
> >> >>
> >> >
> >> >Your arrangement looks more reasonable, thanks for your advice.
> >> >However, as I mentioned in another email that I sent to libvirt-list
> >> >hours ago, the kernel resctrl interface provides cache occupancy
> >> >information for each cache block for every resource group.
> >> >Maybe we need to expose the cache occupancy for each cache block.
> >> >If you agree, we need to refine the 'domstats' output message, how
> >> >about this:
> >> >
> >> >  cpu.cacheoccupancy.count=3
> >> >  cpu.cacheoccupancy.0.name=vcpus_2
> >> >  cpu.cacheoccupancy.0.vcpus=2
> >> >  cpu.cacheoccupancy.0.block.count=2
> >> >  cpu.cacheoccupancy.0.block.0.bytes=5488
> >> >  cpu.cacheoccupancy.0.block.1.bytes=4410000
> >> >  cpu.cacheoccupancy.1.name=vcpus_1
> >> >  cpu.cacheoccupancy.1.vcpus=1
> >> >  cpu.cacheoccupancy.1.block.count=2
> >> >  cpu.cacheoccupancy.1.block.0.bytes=7839744
> >> >  cpu.cacheoccupancy.1.block.1.bytes=0
> >> >  cpu.cacheoccupancy.2.name=0,3
> >> >  cpu.cacheoccupancy.2.vcpus=0,3
> >> >  cpu.cacheoccupancy.2.block.count=2
> >> >  cpu.cacheoccupancy.2.block.0.bytes=53796864
> >> >  cpu.cacheoccupancy.2.block.1.bytes=0
> >> >
> >>
> >> What do you mean by cache block?  Is that (cache_size / granularity)?
> >> In that case it looks fine, I guess (without putting too much thought into it).
> >
> >No. The 'cache block' I mean is indexed by 'cache id', with the id
> >number kept in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
> >
> >Generally, on a two-socket server node (with CPU E5-2680 v4, for
> >example), each socket has an L3 cache. If a resctrl monitoring group
> >is created (/sys/fs/resctrl/p0, for example), you can find the cache
> >occupancy information for these two L3 cache areas separately in
> >/sys/fs/resctrl/p0/mon_data/mon_L3_00/llc_occupancy
> >and
> >/sys/fs/resctrl/p0/mon_data/mon_L3_01/llc_occupancy
> >Per-socket cache information is useful for detecting performance
> >issues such as workload imbalance, so we'd better expose these
> >details to libvirt users.
> >In short, I am using 'cache block' to describe the CPU cache indexed
> >by the number found in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
> >I welcome suggestions for other names for it.
> >
> 
> To be consistent I'd prefer "cache", "cache bank", and "index" or "id".  I don't
> have specific requirements, I just don't want to invent new words.  Look at how
> it is described in the capabilities, for example.
> 
Makes sense. Then let's use 'id' for the purpose, and the output would be:

cpu.cacheoccupancy.count=3
cpu.cacheoccupancy.0.name=vcpus_2
cpu.cacheoccupancy.0.vcpus=2
cpu.cacheoccupancy.0.id.count=2
cpu.cacheoccupancy.0.id.0.bytes=5488
cpu.cacheoccupancy.0.id.1.bytes=4410000
cpu.cacheoccupancy.1.name=vcpus_1
cpu.cacheoccupancy.1.vcpus=1
cpu.cacheoccupancy.1.id.count=2
cpu.cacheoccupancy.1.id.0.bytes=7839744
cpu.cacheoccupancy.1.id.1.bytes=0
cpu.cacheoccupancy.2.name=0,3
cpu.cacheoccupancy.2.vcpus=0,3
cpu.cacheoccupancy.2.id.count=2
cpu.cacheoccupancy.2.id.0.bytes=53796864
cpu.cacheoccupancy.2.id.1.bytes=0

How about it? 
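
The per-id 'bytes' values above come straight from the resctrl files
mentioned earlier in the thread. As an illustration only (not the code in
this series), a minimal sketch that collects them, assuming a mounted
resctrl filesystem and the hypothetical monitoring group directory
/sys/fs/resctrl/p0 used as an example above:
<pre>
# Minimal sketch, not libvirt code: read per-cache-id occupancy for one
# resctrl monitoring group. The group name "p0" is hypothetical.
import glob
import os

def read_llc_occupancy(group="p0"):
    """Return {cache id: occupied bytes} for one monitoring group."""
    occupancy = {}
    pattern = "/sys/fs/resctrl/%s/mon_data/mon_L3_*/llc_occupancy" % group
    for path in glob.glob(pattern):
        # The directory is named e.g. "mon_L3_00"; its numeric suffix is
        # the cache id from /sys/devices/system/cpu/cpu*/cache/index*/id.
        cache_id = int(os.path.basename(os.path.dirname(path)).split("_")[-1])
        with open(path) as f:
            occupancy[cache_id] = int(f.read().strip())
    return occupancy

print(read_llc_occupancy())   # e.g. {0: 53796864, 1: 0}
</pre>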


> >> Martin

Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Martin Kletzander 5 years, 9 months ago
On Wed, Jul 18, 2018 at 12:19:18PM +0000, Wang, Huaqiang wrote:
>
>
>> -----Original Message-----
>> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> Sent: Wednesday, July 18, 2018 8:07 PM
>> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, Bing
>> <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
>> <rui.zang@intel.com>
>> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
>> Technology (CMT)
>>
>> On Wed, Jul 18, 2018 at 02:29:32AM +0000, Wang, Huaqiang wrote:
>> >
>> >
>> >> -----Original Message-----
>> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> >> Sent: Tuesday, July 17, 2018 5:11 PM
>> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
>> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
>> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
>> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
>> >> Technology (CMT)
>> >>
>> >> On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
>> >> >Hi Martin,
>> >> >
>> >> >Thanks for your comments. Please see my reply inline.
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
>> >> >> Sent: Tuesday, July 17, 2018 2:27 PM
>> >> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
>> >> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
>> >> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
>> >> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
>> >> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache
>> >> >> Monitoring Technology (CMT)
>> >> >>
>> >> >> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
>> >> >> >
>> >> >> >This is the V2 of RFC and the POC source code for introducing x86
>> >> >> >RDT CMT feature, thanks Martin Kletzander for his review and
>> >> >> >constructive suggestion for V1.
>> >> >> >
>> >> >> >This series is trying to provide functions similar to the
>> >> >> >perf event based CMT, MBMT and MBML features in reporting cache
>> >> >> >occupancy, total memory bandwidth utilization and local memory
>> >> >> >bandwidth utilization information in libvirt. Firstly we focus on CMT.
>> >> >> >
>> >> >> >x86 RDT Cache Monitoring Technology (CMT) provides a method to
>> >> >> >track the cache occupancy information per CPU thread. We are
>> >> >> >leveraging the implementation of kernel resctrl filesystem and
>> >> >> >create our patches on top of that.
>> >> >> >
>> >> >> >Describing the functionality from a high level:
>> >> >> >
>> >> >> >1. Extend the output of 'domstats' and report CMT information.
>> >> >> >
>> >> >> >Compared with the perf event based CMT implementation in libvirt,
>> >> >> >this series extends the output of command 'domstats' and reports
>> >> >> >cache occupancy information like this:
>> >> >> ><pre>
>> >> >> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
>> >> >> >Domain: 'vm3'
>> >> >> >  cpu.cacheoccupancy.vcpus_2.value=4415488
>> >> >> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
>> >> >> >  cpu.cacheoccupancy.vcpus_1.value=7839744
>> >> >> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
>> >> >> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
>> >> >> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
>> >> >> ></pre>
>> >> >> >The vcpus have been arranged into three monitoring groups, these
>> >> >> >three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively.
>> >> >> >For example, the 'cpu.cacheoccupancy.vcpus_0,3.value' reports
>> >> >> >the cache occupancy information for vcpu 0 and vcpu 3, the
>> >> >> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
>> >> >> >represents the vcpu group information.
>> >> >> >
>> >> >> >To address Martin's suggestion "beware as 1-4 is something else
>> >> >> >than
>> >> >> >1,4 so you need to differentiate that.", the content of 'vcpus'
>> >> >> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
>> >> >> >processed: if vcpus is a continuous range, e.g. 0-2, then the
>> >> >> >output of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
>> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
>> >> >> >instead of
>> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
>> >> >> >Please note that 'vcpus_0-2' is the name of this monitoring group;
>> >> >> >it could be any other word specified in the XML configuration file,
>> >> >> >or changed on the fly with the command introduced in the following part.
>> >> >> >
>> >> >>
>> >> >> One small nit regarding the naming (but it shouldn't block any
>> >> >> reviewers from reviewing, just keep this in mind for the next
>> >> >> version, for example) is that this is still inconsistent.
>> >> >
>> >> >OK.  I'll try to use words such as 'cache', 'cpu resource' and avoid
>> >> >using 'RDT', 'CMT'.
>> >> >
>> >>
>> >> Oh, you misunderstood, I meant the naming in the domstats output =)
>> >>
>> >> >> The way domstats are structured when there is something like an
>> >> >> array could shed some light on this.  What you suggested is
>> >> >> really kind of hard to parse (although it looks better).  What
>> >> >> would you say to something like this:
>> >> >>
>> >> >>   cpu.cacheoccupancy.count=3
>> >> >>   cpu.cacheoccupancy.0.value=4415488
>> >> >>   cpu.cacheoccupancy.0.vcpus=2
>> >> >>   cpu.cacheoccupancy.0.name=vcpus_2
>> >> >>   cpu.cacheoccupancy.1.value=7839744
>> >> >>   cpu.cacheoccupancy.1.vcpus=1
>> >> >>   cpu.cacheoccupancy.1.name=vcpus_1
>> >> >>   cpu.cacheoccupancy.2.value=53796864
>> >> >>   cpu.cacheoccupancy.2.vcpus=0,3
>> >> >>   cpu.cacheoccupancy.2.name=0,3
>> >> >>
>> >> >
>> >> >Your arrangement looks more reasonable, thanks for your advice.
>> >> >However, as I mentioned in another email that I sent to libvirt-list
>> >> >hours ago, the kernel resctrl interface provides cache occupancy
>> >> >information for each cache block for every resource group.
>> >> >Maybe we need to expose the cache occupancy for each cache block.
>> >> >If you agree, we need to refine the 'domstats' output message; how
>> >> >about this:
>> >> >
>> >> >  cpu.cacheoccupancy.count=3
>> >> >  cpu.cacheoccupancy.0.name=vcpus_2
>> >> >  cpu.cacheoccupancy.0.vcpus=2
>> >> >  cpu.cacheoccupancy.0.block.count=2
>> >> >  cpu.cacheoccupancy.0.block.0.bytes=5488
>> >> >  cpu.cacheoccupancy.0.block.1.bytes=4410000
>> >> >  cpu.cacheoccupancy.1.name=vcpus_1
>> >> >  cpu.cacheoccupancy.1.vcpus=1
>> >> >  cpu.cacheoccupancy.1.block.count=2
>> >> >  cpu.cacheoccupancy.1.block.0.bytes=7839744
>> >> >  cpu.cacheoccupancy.1.block.1.bytes=0
>> >> >  cpu.cacheoccupancy.2.name=0,3
>> >> >  cpu.cacheoccupancy.2.vcpus=0,3
>> >> >  cpu.cacheoccupancy.2.block.count=2
>> >> >  cpu.cacheoccupancy.2.block.0.bytes=53796864
>> >> >  cpu.cacheoccupancy.2.block.1.bytes=0
>> >> >
>> >>
>> >> What do you mean by cache block?  Is that (cache_size / granularity)?
>> >> In that case it looks fine, I guess (without putting too much thought into it).
>> >
>> >No. The 'cache block' I mean is indexed by 'cache id', with the id
>> >number kept in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
>> >
>> >Generally, on a two-socket server node (with CPU E5-2680 v4, for
>> >example), each socket has an L3 cache. If a resctrl monitoring group
>> >is created (/sys/fs/resctrl/p0, for example), you can find the cache
>> >occupancy information for these two L3 cache areas separately in
>> >/sys/fs/resctrl/p0/mon_data/mon_L3_00/llc_occupancy
>> >and
>> >/sys/fs/resctrl/p0/mon_data/mon_L3_01/llc_occupancy
>> >Per-socket cache information is useful for detecting performance
>> >issues such as workload imbalance, so we'd better expose these
>> >details to libvirt users.
>> >In short, I am using 'cache block' to describe the CPU cache indexed
>> >by the number found in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
>> >I welcome suggestions for other names for it.
>> >
>>
>> To be consistent I'd prefer "cache", "cache bank", and "index" or "id".
>> I don't have specific requirements, I just don't want to invent new
>> words.  Look at how it is described in the capabilities, for example.
>>
>Makes sense. Then let's use 'id' for the purpose, and the output would be:
>
>cpu.cacheoccupancy.count=3
>cpu.cacheoccupancy.0.name=vcpus_2
>cpu.cacheoccupancy.0.vcpus=2
>cpu.cacheoccupancy.0.id.count=2
>cpu.cacheoccupancy.0.id.0.bytes=5488
>cpu.cacheoccupancy.0.id.1.bytes=4410000
>cpu.cacheoccupancy.1.name=vcpus_1
>cpu.cacheoccupancy.1.vcpus=1
>cpu.cacheoccupancy.1.id.count=2
>cpu.cacheoccupancy.1.id.0.bytes=7839744
>cpu.cacheoccupancy.1.id.1.bytes=0
>cpu.cacheoccupancy.2.name=0,3
>cpu.cacheoccupancy.2.vcpus=0,3
>cpu.cacheoccupancy.2.id.count=2
>cpu.cacheoccupancy.2.id.0.bytes=53796864
>cpu.cacheoccupancy.2.id.1.bytes=0
>
>How about it?
>

I'm switching contexts too much and hence I didn't make myself clear.  Since IDs
are not guaranteed to be consecutive, this might be more future-proof:

cpu.cacheoccupancy.count=3
cpu.cacheoccupancy.0.name=vcpus_2
cpu.cacheoccupancy.0.vcpus=2
cpu.cacheoccupancy.0.bank.count=2
cpu.cacheoccupancy.0.bank.0.id=0
cpu.cacheoccupancy.0.bank.0.bytes=5488
cpu.cacheoccupancy.0.bank.1.id=1
cpu.cacheoccupancy.0.bank.1.bytes=4410000
cpu.cacheoccupancy.1.name=vcpus_1
cpu.cacheoccupancy.1.vcpus=1
cpu.cacheoccupancy.1.bank.count=2
cpu.cacheoccupancy.1.bank.0.id=0
cpu.cacheoccupancy.1.bank.0.bytes=7839744
cpu.cacheoccupancy.1.bank.1.id=1
cpu.cacheoccupancy.1.bank.1.bytes=0
cpu.cacheoccupancy.2.name=0,3
cpu.cacheoccupancy.2.vcpus=0,3
cpu.cacheoccupancy.2.bank.count=2
cpu.cacheoccupancy.2.bank.0.id=0
cpu.cacheoccupancy.2.bank.0.bytes=53796864
cpu.cacheoccupancy.2.bank.1.id=1
cpu.cacheoccupancy.2.bank.1.bytes=0
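
To make the benefit concrete, a minimal sketch (a hypothetical helper,
not part of this series) that parses the layout above from virsh-style
'key=value' lines; because each bank carries an explicit 'id' field, the
parser can key results by the reported cache id rather than by the array
position:
<pre>
# Minimal sketch, not libvirt code: parse the bank-style domstats keys.
# The sample below is trimmed to one group for brevity.
sample = """\
cpu.cacheoccupancy.count=1
cpu.cacheoccupancy.0.name=vcpus_2
cpu.cacheoccupancy.0.vcpus=2
cpu.cacheoccupancy.0.bank.count=2
cpu.cacheoccupancy.0.bank.0.id=0
cpu.cacheoccupancy.0.bank.0.bytes=5488
cpu.cacheoccupancy.0.bank.1.id=1
cpu.cacheoccupancy.0.bank.1.bytes=4410000
"""
stats = dict(line.split("=", 1) for line in sample.splitlines())

for g in range(int(stats["cpu.cacheoccupancy.count"])):
    per_bank = {}
    for n in range(int(stats["cpu.cacheoccupancy.%d.bank.count" % g])):
        # Key by the reported cache id, not by the array index n, so
        # non-consecutive ids keep working.
        key = "cpu.cacheoccupancy.%d.bank.%d" % (g, n)
        per_bank[int(stats[key + ".id"])] = int(stats[key + ".bytes"])
    print(stats["cpu.cacheoccupancy.%d.name" % g], per_bank)
# prints: vcpus_2 {0: 5488, 1: 4410000}
</pre>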
Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring Technology (CMT)
Posted by Wang, Huaqiang 5 years, 9 months ago

> -----Original Message-----
> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> Sent: Wednesday, July 18, 2018 10:03 PM
> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>; Niu, Bing
> <bing.niu@intel.com>; Ding, Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
> <rui.zang@intel.com>
> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> Technology (CMT)
> 
> On Wed, Jul 18, 2018 at 12:19:18PM +0000, Wang, Huaqiang wrote:
> >
> >
> >> -----Original Message-----
> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> Sent: Wednesday, July 18, 2018 8:07 PM
> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache Monitoring
> >> Technology (CMT)
> >>
> >> On Wed, Jul 18, 2018 at 02:29:32AM +0000, Wang, Huaqiang wrote:
> >> >
> >> >
> >> >> -----Original Message-----
> >> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> >> Sent: Tuesday, July 17, 2018 5:11 PM
> >> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> >> Cc: libvir-list@redhat.com; Feng, Shaohe <shaohe.feng@intel.com>;
> >> >> Niu, Bing <bing.niu@intel.com>; Ding, Jian-feng
> >> >> <jian-feng.ding@intel.com>; Zang, Rui <rui.zang@intel.com>
> >> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache
> >> >> Monitoring Technology (CMT)
> >> >>
> >> >> On Tue, Jul 17, 2018 at 07:19:41AM +0000, Wang, Huaqiang wrote:
> >> >> >Hi Martin,
> >> >> >
> >> >> >Thanks for your comments. Please see my reply inline.
> >> >> >
> >> >> >> -----Original Message-----
> >> >> >> From: Martin Kletzander [mailto:mkletzan@redhat.com]
> >> >> >> Sent: Tuesday, July 17, 2018 2:27 PM
> >> >> >> To: Wang, Huaqiang <huaqiang.wang@intel.com>
> >> >> >> Cc: libvir-list@redhat.com; Feng, Shaohe
> >> >> >> <shaohe.feng@intel.com>; Niu, Bing <bing.niu@intel.com>; Ding,
> >> >> >> Jian-feng <jian-feng.ding@intel.com>; Zang, Rui
> >> >> >> <rui.zang@intel.com>
> >> >> >> Subject: Re: [libvirt] [RFC PATCHv2 00/10] x86 RDT Cache
> >> >> >> Monitoring Technology (CMT)
> >> >> >>
> >> >> >> On Mon, Jul 09, 2018 at 03:00:48PM +0800, Wang Huaqiang wrote:
> >> >> >> >
> >> >> >> >This is the V2 of RFC and the POC source code for introducing
> >> >> >> >x86 RDT CMT feature, thanks Martin Kletzander for his review
> >> >> >> >and constructive suggestion for V1.
> >> >> >> >
> >> >> >> >This series is trying to provide functions similar to the
> >> >> >> >perf event based CMT, MBMT and MBML features in reporting
> >> >> >> >cache occupancy, total memory bandwidth utilization and local
> >> >> >> >memory bandwidth utilization information in libvirt. Firstly we
> >> >> >> >focus on CMT.
> >> >> >> >
> >> >> >> >x86 RDT Cache Monitoring Technology (CMT) provides a method
> >> >> >> >to track the cache occupancy information per CPU thread. We
> >> >> >> >are leveraging the implementation of kernel resctrl filesystem
> >> >> >> >and create our patches on top of that.
> >> >> >> >
> >> >> >> >Describing the functionality from a high level:
> >> >> >> >
> >> >> >> >1. Extend the output of 'domstats' and report CMT information.
> >> >> >> >
> >> >> >> >Compared with the perf event based CMT implementation in libvirt,
> >> >> >> >this series extends the output of command 'domstats' and
> >> >> >> >reports cache occupancy information like this:
> >> >> >> ><pre>
> >> >> >> >[root@dl-c200 libvirt]# virsh domstats vm3 --cpu-resource
> >> >> >> >Domain: 'vm3'
> >> >> >> >  cpu.cacheoccupancy.vcpus_2.value=4415488
> >> >> >> >  cpu.cacheoccupancy.vcpus_2.vcpus=2
> >> >> >> >  cpu.cacheoccupancy.vcpus_1.value=7839744
> >> >> >> >  cpu.cacheoccupancy.vcpus_1.vcpus=1
> >> >> >> >  cpu.cacheoccupancy.vcpus_0,3.value=53796864
> >> >> >> >  cpu.cacheoccupancy.vcpus_0,3.vcpus=0,3
> >> >> >> ></pre>
> >> >> >> >The vcpus have been arranged into three monitoring groups,
> >> >> >> >these three groups cover vcpu 1, vcpu 2 and vcpus 0,3 respectively.
> >> >> >> >For example, the 'cpu.cacheoccupancy.vcpus_0,3.value'
> >> >> >> >reports the cache occupancy information for vcpu 0 and vcpu 3,
> >> >> >> >the
> >> >> >> 'cpu.cacheoccupancy.vcpus_0,3.vcpus'
> >> >> >> >represents the vcpu group information.
> >> >> >> >
> >> >> >> >To address Martin's suggestion "beware as 1-4 is something
> >> >> >> >else than
> >> >> >> >1,4 so you need to differentiate that.", the content of 'vcpus'
> >> >> >> >(cpu.cacheoccupancy.<groupname>.vcpus=xxx) has been specially
> >> >> >> >processed: if vcpus is a continuous range, e.g. 0-2, then the
> >> >> >> >output of cpu.cacheoccupancy.vcpus_0-2.vcpus will be like
> >> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0,1,2'
> >> >> >> >instead of
> >> >> >> >'cpu.cacheoccupancy.vcpus_0-2.vcpus=0-2'.
> >> >> >> >Please note that 'vcpus_0-2' is the name of this monitoring
> >> >> >> >group; it could be any other word specified in the XML
> >> >> >> >configuration file, or changed on the fly with the command
> >> >> >> >introduced in the following part.
> >> >> >> >
> >> >> >>
> >> >> >> One small nit regarding the naming (but it shouldn't block
> >> >> >> any reviewers from reviewing, just keep this in mind for the
> >> >> >> next version, for example) is that this is still inconsistent.
> >> >> >
> >> >> >OK.  I'll try to use words such as 'cache', 'cpu resource' and
> >> >> >avoid using 'RDT', 'CMT'.
> >> >> >
> >> >>
> >> >> Oh, you misunderstood, I meant the naming in the domstats output
> >> >> =)
> >> >>
> >> >> >> The way domstats are structured when there is something like an
> >> >> >> array could shed some light on this.  What you suggested is
> >> >> >> really kind of hard to parse (although it looks better).  What
> >> >> >> would you say to something like this:
> >> >> >>
> >> >> >>   cpu.cacheoccupancy.count=3
> >> >> >>   cpu.cacheoccupancy.0.value=4415488
> >> >> >>   cpu.cacheoccupancy.0.vcpus=2
> >> >> >>   cpu.cacheoccupancy.0.name=vcpus_2
> >> >> >>   cpu.cacheoccupancy.1.value=7839744
> >> >> >>   cpu.cacheoccupancy.1.vcpus=1
> >> >> >>   cpu.cacheoccupancy.1.name=vcpus_1
> >> >> >>   cpu.cacheoccupancy.2.value=53796864
> >> >> >>   cpu.cacheoccupancy.2.vcpus=0,3
> >> >> >>   cpu.cacheoccupancy.2.name=0,3
> >> >> >>
> >> >> >
> >> >> >Your arrangement looks more reasonable, thanks for your advice.
> >> >> >However, as I mentioned in another email that I sent to
> >> >> >libvirt-list hours ago, the kernel resctrl interface provides
> >> >> >cache occupancy information for each cache block for every resource
> group.
> >> >> >Maybe we need to expose the cache occupancy for each cache block.
> >> >> >If you agree, we need to refine the 'domstats' output message;
> >> >> >how about this:
> >> >> >
> >> >> >  cpu.cacheoccupancy.count=3
> >> >> >  cpu.cacheoccupancy.0.name=vcpus_2
> >> >> >  cpu.cacheoccupancy.0.vcpus=2
> >> >> >  cpu.cacheoccupancy.0.block.count=2
> >> >> >  cpu.cacheoccupancy.0.block.0.bytes=5488
> >> >> >  cpu.cacheoccupancy.0.block.1.bytes=4410000
> >> >> >  cpu.cacheoccupancy.1.name=vcpus_1
> >> >> >  cpu.cacheoccupancy.1.vcpus=1
> >> >> >  cpu.cacheoccupancy.1.block.count=2
> >> >> >  cpu.cacheoccupancy.1.block.0.bytes=7839744
> >> >> >  cpu.cacheoccupancy.1.block.1.bytes=0
> >> >> >  cpu.cacheoccupancy.2.name=0,3
> >> >> >  cpu.cacheoccupancy.2.vcpus=0,3
> >> >> >  cpu.cacheoccupancy.2.block.count=2
> >> >> >  cpu.cacheoccupancy.2.block.0.bytes=53796864
> >> >> >  cpu.cacheoccupancy.2.block.1.bytes=0
> >> >> >
> >> >>
> >> >> What do you mean by cache block?  Is that (cache_size / granularity)?
> >> >> In that case it looks fine, I guess (without putting too much thought into it).
> >> >
> >> >No. The 'cache block' I mean is indexed by 'cache id', with the
> >> >id number kept in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
> >> >
> >> >Generally, on a two-socket server node (with CPU E5-2680 v4, for
> >> >example), each socket has an L3 cache. If a resctrl monitoring group
> >> >is created (/sys/fs/resctrl/p0, for example), you can find the cache
> >> >occupancy information for these two L3 cache areas separately in
> >> >/sys/fs/resctrl/p0/mon_data/mon_L3_00/llc_occupancy
> >> >and
> >> >/sys/fs/resctrl/p0/mon_data/mon_L3_01/llc_occupancy
> >> >Per-socket cache information is useful for detecting performance
> >> >issues such as workload imbalance, so we'd better expose these
> >> >details to libvirt users.
> >> >In short, I am using 'cache block' to describe the CPU cache indexed
> >> >by the number found in '/sys/devices/system/cpu/cpu*/cache/index*/id'.
> >> >I welcome suggestions for other names for it.
> >> >
> >>
> >> To be consistent I'd prefer "cache", "cache bank", and "index" or "id".
> >> I don't have specific requirements, I just don't want to invent new
> >> words.  Look at how it is described in the capabilities, for example.
> >>
> >Makes sense. Then let's use 'id' for the purpose, and the output would be:
> >
> >cpu.cacheoccupancy.count=3
> >cpu.cacheoccupancy.0.name=vcpus_2
> >cpu.cacheoccupancy.0.vcpus=2
> >cpu.cacheoccupancy.0.id.count=2
> >cpu.cacheoccupancy.0.id.0.bytes=5488
> >cpu.cacheoccupancy.0.id.1.bytes=4410000
> >cpu.cacheoccupancy.1.name=vcpus_1
> >cpu.cacheoccupancy.1.vcpus=1
> >cpu.cacheoccupancy.1.id.count=2
> >cpu.cacheoccupancy.1.id.0.bytes=7839744
> >cpu.cacheoccupancy.1.id.1.bytes=0
> >cpu.cacheoccupancy.2.name=0,3
> >cpu.cacheoccupancy.2.vcpus=0,3
> >cpu.cacheoccupancy.2.id.count=2
> >cpu.cacheoccupancy.2.id.0.bytes=53796864
> >cpu.cacheoccupancy.2.id.1.bytes=0
> >
> >How about it?
> >
> 
> I'm switching contexts too much and hence I didn't make myself clear.  Since IDs
> are not guaranteed to be consecutive, this might be more future-proof:
> 
> cpu.cacheoccupancy.count=3
> cpu.cacheoccupancy.0.name=vcpus_2
> cpu.cacheoccupancy.0.vcpus=2
> cpu.cacheoccupancy.0.bank.count=2
> cpu.cacheoccupancy.0.bank.0.id=0
> cpu.cacheoccupancy.0.bank.0.bytes=5488
> cpu.cacheoccupancy.0.bank.1.id=1
> cpu.cacheoccupancy.0.bank.1.bytes=4410000
> cpu.cacheoccupancy.1.name=vcpus_1
> cpu.cacheoccupancy.1.vcpus=1
> cpu.cacheoccupancy.1.bank.count=2
> cpu.cacheoccupancy.1.bank.0.id=0
> cpu.cacheoccupancy.1.bank.0.bytes=7839744
> cpu.cacheoccupancy.1.bank.1.id=1
> cpu.cacheoccupancy.1.bank.1.bytes=0
> cpu.cacheoccupancy.2.name=0,3
> cpu.cacheoccupancy.2.vcpus=0,3
> cpu.cacheoccupancy.2.bank.count=2
> cpu.cacheoccupancy.2.bank.0.id=0
> cpu.cacheoccupancy.2.bank.0.bytes=53796864
> cpu.cacheoccupancy.2.bank.1.id=1
> cpu.cacheoccupancy.2.bank.1.bytes=0

It is better now. Agreed.
