[libvirt] [PATCH] virsh emulatorpin, vcpupin: omit offline CPUs from affinity map

Scott Cheloha posted 1 patch 4 years, 9 months ago
Not every host CPU is necessarily online.  Including CPUs that are
known to be offline in the default affinity map doesn't make much
sense.  We can omit those CPUs when the host supports CPU bitmaps,
i.e. when virHostCPUHasBitmap() returns true.  Otherwise we return
a full map, as we do now.

For example, given the following lscpu(1):

Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79,81-87,89-95,97-103,105-111,113-119,121-127
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             4
NUMA node(s):          4
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
CPU max MHz:           4322.0000
CPU min MHz:           2061.0000
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,8,16,24
NUMA node1 CPU(s):     32,40,48,56
NUMA node16 CPU(s):    64,72,80,88
NUMA node17 CPU(s):    96,104,112,120

the current behavior for a guest with no VCPU configuration is:

$ virsh vcpupin myvm
 VCPU   CPU Affinity
----------------------------------
 0      0-127

but with this patch you instead get:

 VCPU   CPU Affinity
----------------------------------
 0      0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120

which is more consistent with the lscpu(1) output.

Fixes: ibm bz174632 (rhbz#1434276)

Signed-off-by: Scott Cheloha <cheloha@linux.vnet.ibm.com>
---
I'm unsure whether it's better to automatically fall back
to the full map if virHostCPUGetOnlineBitmap() fails, or to
fail loudly as I do in this patch.

Preferences?

 src/conf/domain_conf.c | 8 ++++++++
 src/qemu/qemu_driver.c | 4 ++++
 2 files changed, 12 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3323c9a5b1..0ea6f69574 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1989,6 +1989,7 @@ virDomainDefGetVcpuPinInfoHelper(virDomainDefPtr def,
     int maxvcpus = virDomainDefGetVcpusMax(def);
     size_t i;
     VIR_AUTOPTR(virBitmap) allcpumap = NULL;
+    VIR_AUTOPTR(virBitmap) onlinemap = NULL;
 
     if (hostcpus < 0)
         return -1;
@@ -1998,6 +1999,11 @@ virDomainDefGetVcpuPinInfoHelper(virDomainDefPtr def,
 
     virBitmapSetAll(allcpumap);
 
+    if (virHostCPUHasBitmap()) {
+        if (!(onlinemap = virHostCPUGetOnlineBitmap()))
+            return -1;
+    }
+
     for (i = 0; i < maxvcpus && i < ncpumaps; i++) {
         virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(def, i);
         virBitmapPtr bitmap = NULL;
@@ -2009,6 +2015,8 @@ virDomainDefGetVcpuPinInfoHelper(virDomainDefPtr def,
             bitmap = autoCpuset;
         else if (def->cpumask)
             bitmap = def->cpumask;
+        else if (onlinemap)
+            bitmap = onlinemap;
         else
             bitmap = allcpumap;
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5a75f23981..2c59513929 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5377,6 +5377,10 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
     } else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
                autoCpuset) {
         cpumask = autoCpuset;
+    } else if (virHostCPUHasBitmap()) {
+        if (!(bitmap = virHostCPUGetOnlineBitmap()))
+            goto cleanup;
+        cpumask = bitmap;
     } else {
         if (!(bitmap = virBitmapNew(hostcpus)))
             goto cleanup;
-- 
2.20.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list