RDT is short for Intel Resource Director Technology, which currently
consists of four sub-technologies:

-. CAT for cache allocation
-. CMT for cache usage monitoring
-. MBA for memory bandwidth allocation
-. MBM for memory bandwidth usage monitoring

The Linux kernel interface is the 'resctrl' file system, and we have
already implemented support for CAT, CMT and MBA, covering tasks such
as allocating part of the shared CPU last-level cache to a particular
domain vcpu or list of vcpus, monitoring cache usage, and allocating
an amount of memory bandwidth to specific domain vcpu(s).

This series introduces support for MBM.

Basically the interfaces are:

** Identify host capability **

Similar to identifying the host capability of CMT, MBM support can be
determined from the output of 'virsh capabilities'; if the following
elements are found, MBM is supported:

   <memory_bandwidth>
     <monitor maxMonitors='176'>
       <feature name='mbm_total_bytes'/>
       <feature name='mbm_local_bytes'/>
     </monitor>
   </memory_bandwidth>

'mbm_total_bytes' indicates support for reporting the memory bandwidth
used by the vcpu(s) of a specific monitor across all CPU sockets.

'mbm_local_bytes' indicates support for reporting the memory bandwidth
used by vcpu(s) that passes through the local CPU socket.

** Create monitor group **

The monitor group for specific domain vcpus, for example vcpu 0-4,
is defined in the domain configuration file like this:

   <cputune>
     <memorytune vcpus='0-4'>
       <monitor vcpus='0-4'/>
     </memorytune>
   </cputune>

** Report memory usage **

A new option '--memory' is introduced for the 'virsh domstats' command
to show the memory bandwidth usage as follows (the format is also very
similar to that of the CMT result):
# virsh domstats --memory

Domain: 'libvirt-vm'
  memory.bandwidth.monitor.count=4
  memory.bandwidth.monitor.0.name=vcpus_0-4
  memory.bandwidth.monitor.0.vcpus=0-4
  memory.bandwidth.monitor.0.node.count=2
  memory.bandwidth.monitor.0.node.0.id=0
  memory.bandwidth.monitor.0.node.0.bytes.total=14201651200
  memory.bandwidth.monitor.0.node.0.bytes.local=7369809920
  memory.bandwidth.monitor.0.node.1.id=1
  memory.bandwidth.monitor.0.node.1.bytes.total=188897640448
  memory.bandwidth.monitor.0.node.1.bytes.local=170044047360


Huaqiang (5):
  util, resctrl: using 64bit interface instead of 32bit for counters
  conf: showing cache/memoryBW monitor features in capabilities
  cachetune schema: a looser check for the order of <cache> and
    <monitor> element
  conf: Parse dommon configure file for memorytune monitors
  virsh: show memoryBW info in 'virsh domstats' command

 docs/schemas/domaincommon.rng              |  91 +++++++++---------
 include/libvirt/libvirt-domain.h           |   1 +
 src/conf/capabilities.c                    |   4 +-
 src/conf/domain_conf.c                     |  44 +++++++--
 src/libvirt-domain.c                       |  21 +++++
 src/qemu/qemu_driver.c                     | 103 ++++++++++++++++++++-
 src/util/virfile.c                         |  40 ++++++++
 src/util/virfile.h                         |   2 +
 src/util/virresctrl.c                      |   6 +-
 src/util/virresctrl.h                      |   2 +-
 tests/genericxml2xmlindata/cachetune.xml   |   1 +
 tests/genericxml2xmlindata/memorytune.xml  |   5 +
 tests/genericxml2xmloutdata/cachetune.xml  |  34 +++++++
 tests/genericxml2xmloutdata/memorytune.xml |  42 +++++++++
 tests/genericxml2xmltest.c                 |   4 +-
 tools/virsh-domain-monitor.c               |   7 ++
 tools/virsh.pod                            |  23 ++++-
 17 files changed, 367 insertions(+), 63 deletions(-)
 create mode 100644 tests/genericxml2xmloutdata/cachetune.xml
 create mode 100644 tests/genericxml2xmloutdata/memorytune.xml

--
2.23.0

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
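[Editorial aside, not part of the patch series: the capability check the
cover letter describes — look for the <memory_bandwidth> monitor features
in 'virsh capabilities' output — can be scripted. A minimal sketch in plain
Python, using only the standard library; the `sample` XML below is a
hypothetical minimal capabilities fragment built from the snippet above,
not real host output.]

```python
# Sketch: detect MBM support by checking capabilities XML for the
# <memory_bandwidth> monitor features described in the cover letter.
import xml.etree.ElementTree as ET

def mbm_features(capabilities_xml):
    """Return the set of memory-bandwidth monitor feature names, if any."""
    root = ET.fromstring(capabilities_xml)
    return {
        feat.get("name")
        for feat in root.findall(".//memory_bandwidth/monitor/feature")
    }

# Hypothetical minimal fragment of 'virsh capabilities' output.
sample = """
<capabilities>
  <host>
    <memory_bandwidth>
      <monitor maxMonitors='176'>
        <feature name='mbm_total_bytes'/>
        <feature name='mbm_local_bytes'/>
      </monitor>
    </memory_bandwidth>
  </host>
</capabilities>
"""

print(mbm_features(sample))
```

If both 'mbm_total_bytes' and 'mbm_local_bytes' appear in the returned
set, the host supports MBM as described above; an empty set means the
elements were absent.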
Just a reminder that the libvirt bindings need to be updated after these
patches are introduced. Refer to these libvirt python and perl binding
commits:

commit b0a7747
Author: Pavel Hrdina <phrdina@redhat.com>
Date:   Fri Sep 20 11:14:35 2019 +0200

    virDomainMemoryStats: include disk caches

    Introduced in libvirt 4.6.0 by commit <aee04655089>.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1683516

    Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit e562e58
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed May 22 14:07:57 2019 +0100

    Add new hugetlb memory stats constants

    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

On Wed, Nov 13, 2019 at 6:34 PM Wang Huaqiang <huaqiang.wang@intel.com> wrote:
> RDT is the short for Intel Resource Director Technology, consists
> of four sub-technologies until now:
>
> [full cover letter quoted above; snipped]

--
Best regards,
-----------------------------------
Han Han
Quality Engineer
Redhat.

Email: hhan@redhat.com
Phone: +861065339333
Hi Han,

Thanks for your kind reminder.

I haven't used the 'virsh dommemstat' command for reporting the domain
vcpus' memory bandwidth usage on the host. What I implemented is a new
option, '--memory', under the 'virsh domstats' command.

The reason for this implementation is that I hadn't realized there was
already an interface, the 'dommemstat' command, intended for showing
domain memory related statistics. But after examining the ways the block
device statistics and the network traffic statistics are reported, I
found they are similar; for example, you can find block device
statistics in both 'virsh domblkstat' and 'virsh domstats --block'.

So I tend to keep the approach taken in patch 1/5 and have this memory
information shown by 'virsh domstats --memory'. The reason is that the
memory bandwidth information is associated with the memory bandwidth
monitor (a hardware feature from the CPU manufacturer), and each monitor
can be applied to one or several vcpus. This is very similar to the case
of 'virsh domstats --interface' and 'virsh domstats --block'.

I hope more reviewers join this discussion.

Thanks
Huaqiang

On 2019/11/14 2:43 PM, Han Han wrote:
> Just a reminder that libvirt binds need to be updated after patches
> introduced.
>
> [binding commit references and full cover letter quoted above; snipped]
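[Editorial aside, not part of the patch series: the flat key=value output
of 'virsh domstats --memory' shown in the cover letter can be folded into
a nested structure for programmatic use, e.g. to read per-node byte
counters. A rough sketch in plain Python; the `sample` string is the
example output from the cover letter.]

```python
# Sketch: fold dotted key=value domstats lines into nested dicts.
def parse_domstats(text):
    stats = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if "=" not in line:
            continue  # skip the "Domain: ..." header line
        key, value = line.split("=", 1)
        parts = key.split(".")
        node = stats
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        # Keep counters as ints; names and vcpu ranges stay strings.
        node[parts[-1]] = int(value) if value.isdigit() else value
    return stats

# Sample output from the cover letter.
sample = """\
Domain: 'libvirt-vm'
memory.bandwidth.monitor.count=4
memory.bandwidth.monitor.0.name=vcpus_0-4
memory.bandwidth.monitor.0.vcpus=0-4
memory.bandwidth.monitor.0.node.count=2
memory.bandwidth.monitor.0.node.0.id=0
memory.bandwidth.monitor.0.node.0.bytes.total=14201651200
memory.bandwidth.monitor.0.node.0.bytes.local=7369809920
memory.bandwidth.monitor.0.node.1.id=1
memory.bandwidth.monitor.0.node.1.bytes.total=188897640448
memory.bandwidth.monitor.0.node.1.bytes.local=170044047360
"""

monitors = parse_domstats(sample)["memory"]["bandwidth"]["monitor"]
print(monitors["0"]["node"]["0"]["bytes"])
```

Note that a segment such as "monitor" can carry both a scalar child
("count") and indexed children ("0", "1"), which is why the sketch uses
plain dicts rather than lists.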
On Thu, Nov 14, 2019 at 01:08:18AM +0800, Wang Huaqiang wrote:
> RDT is the short for Intel Resource Director Technology, consists
> of four sub-technologies until now:
>
> [full cover letter quoted above; snipped]

I've pushed patches 2, 3, & 4, so you only need to update 1 & 5 and
resend those two.

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|