From: Praveen K Paladugu
To: libvir-list@redhat.com
Subject: [libvirt PATCH v4 1/7] qemu, hypervisor: refactor some cgroup mgmt methods
Date: Tue, 11 Jan 2022 22:43:23 +0000
Message-Id: <20220111224329.2611962-2-prapal@linux.microsoft.com>
In-Reply-To: <20220111224329.2611962-1-prapal@linux.microsoft.com>
References: <20220111224329.2611962-1-prapal@linux.microsoft.com>
Cc: william.douglas@intel.com

Refactor some cgroup management methods from qemu into hypervisor. These
methods will be shared with the ch driver for cgroup management.

Signed-off-by: Praveen K Paladugu
---
 src/hypervisor/domain_cgroup.c | 457 ++++++++++++++++++++++++++++++++-
 src/hypervisor/domain_cgroup.h |  72 ++++++
 src/libvirt_private.syms       |  14 +-
 src/qemu/qemu_cgroup.c         | 413 +----------------------------
 src/qemu/qemu_cgroup.h         |  11 -
 src/qemu/qemu_driver.c         |  14 +-
 src/qemu/qemu_hotplug.c        |   7 +-
 src/qemu/qemu_process.c        |  24 +-
 8 files changed, 580 insertions(+), 432 deletions(-)

diff --git a/src/hypervisor/domain_cgroup.c b/src/hypervisor/domain_cgroup.c
index 61b54f071c..05c53ff7d1 100644
--- a/src/hypervisor/domain_cgroup.c
+++ b/src/hypervisor/domain_cgroup.c
@@ -22,11 +22,12 @@
 
 #include "domain_cgroup.h"
 #include "domain_driver.h"
-
+#include "util/virnuma.h"
+#include "virlog.h"
 #include "virutil.h"
 
 #define VIR_FROM_THIS VIR_FROM_DOMAIN
-
+VIR_LOG_INIT("domain.cgroup");
 
 int
 virDomainCgroupSetupBlkio(virCgroup *cgroup, virDomainBlkiotune blkio)
@@ -269,3 +270,455 @@ virDomainCgroupSetMemoryLimitParameters(virCgroup *cgroup,
 
     return 0;
 }
+
+
+int
+virDomainCgroupSetupBlkioCgroup(virDomainObj *vm,
+                                virCgroup *cgroup)
+{
+
+    if (!virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_BLKIO)) {
+        if (vm->def->blkio.weight || vm->def->blkio.ndevices) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("Block I/O tuning is not available on this host"));
+            return -1;
+        } else {
+            return 0;
+        }
+    }
+
+    return virDomainCgroupSetupBlkio(cgroup, vm->def->blkio);
+}
+
+
+int
+virDomainCgroupSetupMemoryCgroup(virDomainObj *vm,
+                                 virCgroup *cgroup)
+{
+
+    if (!virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_MEMORY)) {
+        if (virMemoryLimitIsSet(vm->def->mem.hard_limit) ||
+            virMemoryLimitIsSet(vm->def->mem.soft_limit) ||
+            virMemoryLimitIsSet(vm->def->mem.swap_hard_limit)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("Memory cgroup is not available on this host"));
+            return -1;
+        } else {
+            return 0;
+        }
+    }
+
+    return virDomainCgroupSetupMemtune(cgroup, vm->def->mem);
+}
+
+
+int
+virDomainCgroupSetupCpusetCgroup(virCgroup *cgroup)
+{
+
+    if (!virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
+        return 0;
+
+    if (virCgroupSetCpusetMemoryMigrate(cgroup, true) < 0)
+        return -1;
+
+    return 0;
+}
+
+
+int
+virDomainCgroupSetupCpuCgroup(virDomainObj *vm,
+                              virCgroup *cgroup)
+{
+
+    if (!virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
+        if (vm->def->cputune.sharesSpecified) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("CPU tuning is not available on this host"));
+            return -1;
+        } else {
+            return 0;
+        }
+    }
+
+    if (vm->def->cputune.sharesSpecified) {
+
+        if (virCgroupSetCpuShares(cgroup, vm->def->cputune.shares) < 0)
+            return -1;
+
+    }
+
+    return 0;
+}
+
+
+int
+virDomainCgroupInitCgroup(const char *prefix,
+                          virDomainObj * vm,
+                          size_t nnicindexes,
+                          int *nicindexes,
+                          virCgroup * cgroup,
+                          int cgroupControllers,
+                          unsigned int maxThreadsPerProc,
+                          bool privileged,
+                          char *machineName)
+{
+    if (!privileged)
+        return 0;
+
+    if (!virCgroupAvailable())
+        return 0;
+
+    virCgroupFree(cgroup);
+    cgroup = NULL;
+
+    if (!vm->def->resource)
+        vm->def->resource = g_new0(virDomainResourceDef, 1);
+
+    if (!vm->def->resource->partition)
+        vm->def->resource->partition = g_strdup("/machine");
+
+    if (!g_path_is_absolute(vm->def->resource->partition)) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("Resource partition '%s' must start with '/'"),
+                       vm->def->resource->partition);
+        return -1;
+    }
+
+    if (virCgroupNewMachine(machineName,
+                            prefix,
+                            vm->def->uuid,
+                            NULL,
+                            vm->pid,
+                            false,
+                            nnicindexes, nicindexes,
+                            vm->def->resource->partition,
+                            cgroupControllers,
+                            maxThreadsPerProc, &cgroup) < 0) {
+        if (virCgroupNewIgnoreError())
+            return 0;
+
+        return -1;
+    }
+
+    return 0;
+}
+
+
+void
+virDomainCgroupRestoreCgroupState(virDomainObj *vm,
+                                  virCgroup *cgroup)
+{
+    g_autofree char *mem_mask = NULL;
+    size_t i = 0;
+
+    g_autoptr(virBitmap) all_nodes = NULL;
+
+    if (!virNumaIsAvailable() ||
+        !virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
+        return;
+
+    if (!(all_nodes = virNumaGetHostMemoryNodeset()))
+        goto error;
+
+    if (!(mem_mask = virBitmapFormat(all_nodes)))
+        goto error;
+
+    if (virCgroupHasEmptyTasks(cgroup, VIR_CGROUP_CONTROLLER_CPUSET) <= 0)
+        goto error;
+
+    if (virCgroupSetCpusetMems(cgroup, mem_mask) < 0)
+        goto error;
+
+    for (i = 0; i < virDomainDefGetVcpusMax(vm->def); i++) {
+        virDomainVcpuDef *vcpu = virDomainDefGetVcpu(vm->def, i);
+
+        if (!vcpu->online)
+            continue;
+
+        if (virDomainCgroupRestoreCgroupThread(cgroup,
+                                               VIR_CGROUP_THREAD_VCPU,
+                                               i) < 0)
+            return;
+    }
+
+    for (i = 0; i < vm->def->niothreadids; i++) {
+        if (virDomainCgroupRestoreCgroupThread(cgroup,
+                                               VIR_CGROUP_THREAD_IOTHREAD,
+                                               vm->def->iothreadids[i]->iothread_id) < 0)
+            return;
+    }
+
+    if (virDomainCgroupRestoreCgroupThread(cgroup,
+                                           VIR_CGROUP_THREAD_EMULATOR,
+                                           0) < 0)
+        return;
+
+    return;
+
+ error:
+    virResetLastError();
+    VIR_DEBUG("Couldn't restore cgroups to meaningful state");
+    return;
+}
+
+
+int
+virDomainCgroupRestoreCgroupThread(virCgroup *cgroup,
+                                   virCgroupThreadName thread,
+                                   int id)
+{
+    g_autoptr(virCgroup) cgroup_temp = NULL;
+    g_autofree char *nodeset = NULL;
+
+    if (virCgroupNewThread(cgroup, thread, id, false, &cgroup_temp) < 0)
+        return -1;
+
+    if (virCgroupSetCpusetMemoryMigrate(cgroup_temp, true) < 0)
+        return -1;
+
+    if (virCgroupGetCpusetMems(cgroup_temp, &nodeset) < 0)
+        return -1;
+
+    if (virCgroupSetCpusetMems(cgroup_temp, nodeset) < 0)
+        return -1;
+
+    return 0;
+}
+
+
+int
+virDomainCgroupConnectCgroup(const char *prefix,
+                             virDomainObj *vm,
+                             virCgroup *cgroup,
+                             int cgroupControllers,
+                             bool privileged,
+                             char *machineName)
+{
+    if (privileged)
+        return 0;
+
+    if (!virCgroupAvailable())
+        return 0;
+
+    virCgroupFree(cgroup);
+
+    if (virCgroupNewDetectMachine(vm->def->name,
+                                  prefix,
+                                  vm->pid,
+                                  cgroupControllers, machineName, &cgroup) < 0)
+        return -1;
+
+    virDomainCgroupRestoreCgroupState(vm, cgroup);
+    return 0;
+}
+
+
+int
+virDomainCgroupSetupCgroup(const char *prefix,
+                           virDomainObj * vm,
+                           size_t nnicindexes,
+                           int *nicindexes,
+                           virCgroup * cgroup,
+                           int cgroupControllers,
+                           unsigned int maxThreadsPerProc,
+                           bool privileged,
+                           char *machineName)
+{
+    if (!vm->pid) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("Cannot setup cgroups until process is started"));
+        return -1;
+    }
+
+    if (virDomainCgroupInitCgroup(prefix,
+                                  vm,
+                                  nnicindexes,
+                                  nicindexes,
+                                  cgroup,
+                                  cgroupControllers,
+                                  maxThreadsPerProc,
+                                  privileged,
+                                  machineName) < 0)
+        return -1;
+
+    if (!cgroup)
+        return 0;
+
+    if (virDomainCgroupSetupBlkioCgroup(vm, cgroup) < 0)
+        return -1;
+
+    if (virDomainCgroupSetupMemoryCgroup(vm, cgroup) < 0)
+        return -1;
+
+    if (virDomainCgroupSetupCpuCgroup(vm, cgroup) < 0)
+        return -1;
+
+    if (virDomainCgroupSetupCpusetCgroup(cgroup) < 0)
+        return -1;
+
+    return 0;
+}
+
+
+int
+virDomainCgroupSetupVcpuBW(virCgroup *cgroup,
+                           unsigned long long period,
+                           long long quota)
+{
+    return virCgroupSetupCpuPeriodQuota(cgroup, period, quota);
+}
+
+
+int
+virDomainCgroupSetupCpusetCpus(virCgroup *cgroup,
+                               virBitmap *cpumask)
+{
+    return virCgroupSetupCpusetCpus(cgroup, cpumask);
+}
+
+
+int
+virDomainCgroupSetupGlobalCpuCgroup(virDomainObj *vm,
+                                    virCgroup *cgroup,
+                                    virBitmap *autoNodeset)
+{
+    unsigned long long period = vm->def->cputune.global_period;
+    long long quota = vm->def->cputune.global_quota;
+    g_autofree char *mem_mask = NULL;
+    virDomainNumatuneMemMode mem_mode;
+
+    if ((period || quota) &&
+        !virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                       _("cgroup cpu is required for scheduler tuning"));
+        return -1;
+    }
+
+    /*
+     * If CPU cgroup controller is not initialized here, then we need
+     * neither period nor quota settings. And if CPUSET controller is
+     * not initialized either, then there's nothing to do anyway.
+     */
+    if (!virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPU) &&
+        !virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
+        return 0;
+
+
+    if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
+        mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT &&
+        virDomainNumatuneMaybeFormatNodeset(vm->def->numa,
+                                            autoNodeset, &mem_mask, -1) < 0)
+        return -1;
+
+    if (period || quota) {
+        if (virDomainCgroupSetupVcpuBW(cgroup, period, quota) < 0)
+            return -1;
+    }
+
+    return 0;
+}
+
+
+int
+virDomainCgroupRemoveCgroup(virDomainObj *vm,
+                            virCgroup *cgroup,
+                            char *machineName)
+{
+    if (cgroup == NULL)
+        return 0; /* Not supported, so claim success */
+
+    if (virCgroupTerminateMachine(machineName) < 0) {
+        if (!virCgroupNewIgnoreError())
+            VIR_DEBUG("Failed to terminate cgroup for %s", vm->def->name);
+    }
+
+    return virCgroupRemove(cgroup);
+}
+
+
+void
+virDomainCgroupEmulatorAllNodesDataFree(virCgroupEmulatorAllNodesData *data)
+{
+    if (!data)
+        return;
+
+    virCgroupFree(data->emulatorCgroup);
+    g_free(data->emulatorMemMask);
+    g_free(data);
+}
+
+
+/**
+ * virDomainCgroupEmulatorAllNodesAllow:
+ * @cgroup: domain cgroup pointer
+ * @retData: filled with structure used to roll back the operation
+ *
+ * Allows all NUMA nodes for the emulator thread temporarily. This is
+ * necessary when hotplugging cpus since it requires memory allocated in the
+ * DMA region. Afterwards the operation can be reverted by
+ * virDomainCgroupEmulatorAllNodesRestore.
+ *
+ * Returns 0 on success -1 on error
+ */
+int
+virDomainCgroupEmulatorAllNodesAllow(virCgroup *cgroup,
+                                     virCgroupEmulatorAllNodesData **retData)
+{
+    virCgroupEmulatorAllNodesData *data = NULL;
+    g_autofree char *all_nodes_str = NULL;
+
+    g_autoptr(virBitmap) all_nodes = NULL;
+    int ret = -1;
+
+    if (!virNumaIsAvailable() ||
+        !virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
+        return 0;
+
+    if (!(all_nodes = virNumaGetHostMemoryNodeset()))
+        goto cleanup;
+
+    if (!(all_nodes_str = virBitmapFormat(all_nodes)))
+        goto cleanup;
+
+    data = g_new0(virCgroupEmulatorAllNodesData, 1);
+
+    if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_EMULATOR, 0,
+                           false, &data->emulatorCgroup) < 0)
+        goto cleanup;
+
+    if (virCgroupGetCpusetMems(data->emulatorCgroup, &data->emulatorMemMask) < 0
+        || virCgroupSetCpusetMems(data->emulatorCgroup, all_nodes_str) < 0)
+        goto cleanup;
+
+    *retData = g_steal_pointer(&data);
+    ret = 0;
+
+ cleanup:
+    virDomainCgroupEmulatorAllNodesDataFree(data);
+
+    return ret;
+}
+
+
+/**
+ * virDomainCgroupEmulatorAllNodesRestore:
+ * @data: data structure created by virDomainCgroupEmulatorAllNodesAllow
+ *
+ * Rolls back the setting done by virDomainCgroupEmulatorAllNodesAllow and frees the
+ * associated data.
+ */
+void
+virDomainCgroupEmulatorAllNodesRestore(virCgroupEmulatorAllNodesData *data)
+{
+    virError *err;
+
+    if (!data)
+        return;
+
+    virErrorPreserveLast(&err);
+    virCgroupSetCpusetMems(data->emulatorCgroup, data->emulatorMemMask);
+    virErrorRestore(&err);
+
+    virDomainCgroupEmulatorAllNodesDataFree(data);
+}
diff --git a/src/hypervisor/domain_cgroup.h b/src/hypervisor/domain_cgroup.h
index f93e5f74fe..dfe7107674 100644
--- a/src/hypervisor/domain_cgroup.h
+++ b/src/hypervisor/domain_cgroup.h
@@ -23,6 +23,11 @@
 #include "vircgroup.h"
 #include "domain_conf.h"
 
+typedef struct _virCgroupEmulatorAllNodesData virCgroupEmulatorAllNodesData;
+struct _virCgroupEmulatorAllNodesData {
+    virCgroup *emulatorCgroup;
+    char *emulatorMemMask;
+};
 
 int virDomainCgroupSetupBlkio(virCgroup *cgroup, virDomainBlkiotune blkio);
 int virDomainCgroupSetupMemtune(virCgroup *cgroup, virDomainMemtune mem);
@@ -36,3 +41,70 @@ int virDomainCgroupSetMemoryLimitParameters(virCgroup *cgroup,
                                             virDomainDef *persistentDef,
                                             virTypedParameterPtr params,
                                             int nparams);
+int
+virDomainCgroupSetupBlkioCgroup(virDomainObj * vm,
+                                virCgroup *cgroup);
+int
+virDomainCgroupSetupMemoryCgroup(virDomainObj * vm,
+                                 virCgroup *cgroup);
+int
+virDomainCgroupSetupCpusetCgroup(virCgroup *cgroup);
+int
+virDomainCgroupSetupCpuCgroup(virDomainObj * vm,
+                              virCgroup *cgroup);
+int
+virDomainCgroupInitCgroup(const char *prefix,
+                          virDomainObj * vm,
+                          size_t nnicindexes,
+                          int *nicindexes,
+                          virCgroup *cgroup,
+                          int cgroupControllers,
+                          unsigned int maxThreadsPerProc,
+                          bool privileged,
+                          char *machineName);
+void
+virDomainCgroupRestoreCgroupState(virDomainObj * vm,
+                                  virCgroup *cgroup);
+int
+virDomainCgroupConnectCgroup(const char *prefix,
+                             virDomainObj * vm,
+                             virCgroup *cgroup,
+                             int cgroupControllers,
+                             bool privileged,
+                             char *machineName);
+int
+virDomainCgroupSetupCgroup(const char *prefix,
+                           virDomainObj * vm,
+                           size_t nnicindexes,
+                           int *nicindexes,
+                           virCgroup *cgroup,
+                           int cgroupControllers,
+                           unsigned int maxThreadsPerProc,
+                           bool privileged,
+                           char *machineName);
+void
+virDomainCgroupEmulatorAllNodesDataFree(virCgroupEmulatorAllNodesData * data);
+int
+virDomainCgroupEmulatorAllNodesAllow(virCgroup * cgroup,
+                                     virCgroupEmulatorAllNodesData ** retData);
+void
+virDomainCgroupEmulatorAllNodesRestore(virCgroupEmulatorAllNodesData * data);
+int
+virDomainCgroupSetupVcpuBW(virCgroup * cgroup,
+                           unsigned long long period,
+                           long long quota);
+int
+virDomainCgroupSetupCpusetCpus(virCgroup * cgroup,
+                               virBitmap * cpumask);
+int
+virDomainCgroupSetupGlobalCpuCgroup(virDomainObj * vm,
+                                    virCgroup * cgroup,
+                                    virBitmap *autoNodeset);
+int
+virDomainCgroupRemoveCgroup(virDomainObj * vm,
+                            virCgroup * cgroup,
+                            char *machineName);
+int
+virDomainCgroupRestoreCgroupThread(virCgroup *cgroup,
+                                   virCgroupThreadName thread,
+                                   int id);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index ee14b99d88..d9d83d2a99 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1541,11 +1541,23 @@ virSetConnectStorage;
 
 
 # hypervisor/domain_cgroup.h
+virDomainCgroupConnectCgroup;
+virDomainCgroupEmulatorAllNodesAllow;
+virDomainCgroupEmulatorAllNodesRestore;
+virDomainCgroupInitCgroup;
+virDomainCgroupRemoveCgroup;
 virDomainCgroupSetMemoryLimitParameters;
 virDomainCgroupSetupBlkio;
+virDomainCgroupSetupBlkioCgroup;
+virDomainCgroupSetupCgroup;
+virDomainCgroupSetupCpuCgroup;
+virDomainCgroupSetupCpusetCgroup;
+virDomainCgroupSetupCpusetCpus;
 virDomainCgroupSetupDomainBlkioParameters;
+virDomainCgroupSetupGlobalCpuCgroup;
+virDomainCgroupSetupMemoryCgroup;
 virDomainCgroupSetupMemtune;
-
+virDomainCgroupSetupVcpuBW;
 
 # hypervisor/domain_driver.h
 virDomainDriverAddIOThreadCheck;
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 308be5b00f..bc0f73cb95 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -593,46 +593,6 @@ qemuSetupVideoCgroup(virDomainObj *vm,
     return ret;
 }
 
-
-static int
-qemuSetupBlkioCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!virCgroupHasController(priv->cgroup,
-                                VIR_CGROUP_CONTROLLER_BLKIO)) {
-        if (vm->def->blkio.weight || vm->def->blkio.ndevices) {
-            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                           _("Block I/O tuning is not available on this host"));
-            return -1;
-        }
-        return 0;
-    }
-
-    return virDomainCgroupSetupBlkio(priv->cgroup, vm->def->blkio);
-}
-
-
-static int
-qemuSetupMemoryCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_MEMORY)) {
-        if (virMemoryLimitIsSet(vm->def->mem.hard_limit) ||
-            virMemoryLimitIsSet(vm->def->mem.soft_limit) ||
-            virMemoryLimitIsSet(vm->def->mem.swap_hard_limit)) {
-            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                           _("Memory cgroup is not available on this host"));
-            return -1;
-        }
-        return 0;
-    }
-
-    return virDomainCgroupSetupMemtune(priv->cgroup, vm->def->mem);
-}
-
-
 static int
 qemuSetupFirmwareCgroup(virDomainObj *vm)
 {
@@ -861,44 +821,6 @@ qemuSetupDevicesCgroup(virDomainObj *vm)
 }
 
 
-static int
-qemuSetupCpusetCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
-        return 0;
-
-    if (virCgroupSetCpusetMemoryMigrate(priv->cgroup, true) < 0)
-        return -1;
-
-    return 0;
-}
-
-
-static int
-qemuSetupCpuCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
-        if (vm->def->cputune.sharesSpecified) {
-            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                           _("CPU tuning is not available on this host"));
-            return -1;
-        }
-        return 0;
-    }
-
-    if (vm->def->cputune.sharesSpecified) {
-        if (virCgroupSetCpuShares(priv->cgroup, vm->def->cputune.shares) < 0)
-            return -1;
-    }
-
-    return 0;
-}
-
-
 static int
 qemuSetupCgroupAppid(virDomainObj *vm)
 {
@@ -927,166 +849,13 @@
 }
 
 
-static int
-qemuInitCgroup(virDomainObj *vm,
-               size_t nnicindexes,
-               int *nicindexes)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
-
-    if (!priv->driver->privileged)
-        return 0;
-
-    if (!virCgroupAvailable())
-        return 0;
-
-    virCgroupFree(priv->cgroup);
-    priv->cgroup = NULL;
-
-    if (!vm->def->resource)
-        vm->def->resource = g_new0(virDomainResourceDef, 1);
-
-    if (!vm->def->resource->partition)
-        vm->def->resource->partition = g_strdup("/machine");
-
-    if (!g_path_is_absolute(vm->def->resource->partition)) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                       _("Resource partition '%s' must start with '/'"),
-                       vm->def->resource->partition);
-        return -1;
-    }
-
-    if (virCgroupNewMachine(priv->machineName,
-                            "qemu",
-                            vm->def->uuid,
-                            NULL,
-                            vm->pid,
-                            false,
-                            nnicindexes, nicindexes,
-                            vm->def->resource->partition,
-                            cfg->cgroupControllers,
-                            cfg->maxThreadsPerProc,
-                            &priv->cgroup) < 0) {
-        if (virCgroupNewIgnoreError())
-            return 0;
-
-        return -1;
-    }
-
-    return 0;
-}
-
-static int
-qemuRestoreCgroupThread(virCgroup *cgroup,
-                        virCgroupThreadName thread,
-                        int id)
-{
-    g_autoptr(virCgroup) cgroup_temp = NULL;
-    g_autofree char *nodeset = NULL;
-
-    if (virCgroupNewThread(cgroup, thread, id, false, &cgroup_temp) < 0)
-        return -1;
-
-    if (virCgroupSetCpusetMemoryMigrate(cgroup_temp, true) < 0)
-        return -1;
-
-    if (virCgroupGetCpusetMems(cgroup_temp, &nodeset) < 0)
-        return -1;
-
-    if (virCgroupSetCpusetMems(cgroup_temp, nodeset) < 0)
-        return -1;
-
-    return 0;
-}
-
-static void
-qemuRestoreCgroupState(virDomainObj *vm)
-{
-    g_autofree char *mem_mask = NULL;
-    qemuDomainObjPrivate *priv = vm->privateData;
-    size_t i = 0;
-    g_autoptr(virBitmap) all_nodes = NULL;
-
-    if (!virNumaIsAvailable() ||
-        !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
-        return;
-
-    if (!(all_nodes = virNumaGetHostMemoryNodeset()))
-        goto error;
-
-    if (!(mem_mask = virBitmapFormat(all_nodes)))
-        goto error;
-
-    if (virCgroupHasEmptyTasks(priv->cgroup,
-                               VIR_CGROUP_CONTROLLER_CPUSET) <= 0)
-        goto error;
-
-    if (virCgroupSetCpusetMems(priv->cgroup, mem_mask) < 0)
-        goto error;
-
-    for (i = 0; i < virDomainDefGetVcpusMax(vm->def); i++) {
-        virDomainVcpuDef *vcpu = virDomainDefGetVcpu(vm->def, i);
-
-        if (!vcpu->online)
-            continue;
-
-        if (qemuRestoreCgroupThread(priv->cgroup,
-                                    VIR_CGROUP_THREAD_VCPU, i) < 0)
-            return;
-    }
-
-    for (i = 0; i < vm->def->niothreadids; i++) {
-        if (qemuRestoreCgroupThread(priv->cgroup, VIR_CGROUP_THREAD_IOTHREAD,
-                                    vm->def->iothreadids[i]->iothread_id) < 0)
-            return;
-    }
-
-    if (qemuRestoreCgroupThread(priv->cgroup,
-                                VIR_CGROUP_THREAD_EMULATOR, 0) < 0)
-        return;
-
-    return;
-
- error:
-    virResetLastError();
-    VIR_DEBUG("Couldn't restore cgroups to meaningful state");
-    return;
-}
-
-int
-qemuConnectCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
-
-    if (!priv->driver->privileged)
-        return 0;
-
-    if (!virCgroupAvailable())
-        return 0;
-
-    virCgroupFree(priv->cgroup);
-    priv->cgroup = NULL;
-
-    if (virCgroupNewDetectMachine(vm->def->name,
-                                  "qemu",
-                                  vm->pid,
-                                  cfg->cgroupControllers,
-                                  priv->machineName,
-                                  &priv->cgroup) < 0)
-        return -1;
-
-    qemuRestoreCgroupState(vm);
-    return 0;
-}
-
 int
 qemuSetupCgroup(virDomainObj *vm,
                 size_t nnicindexes,
                 int *nicindexes)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
 
     if (!vm->pid) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -1094,7 +863,15 @@ qemuSetupCgroup(virDomainObj *vm,
         return -1;
     }
 
-    if (qemuInitCgroup(vm, nnicindexes, nicindexes) < 0)
+    if (virDomainCgroupInitCgroup("qemu",
+                                  vm,
+                                  nnicindexes,
+                                  nicindexes,
+                                  priv->cgroup,
+                                  cfg->cgroupControllers,
+                                  cfg->maxThreadsPerProc,
+                                  priv->driver->privileged,
+                                  priv->machineName) < 0)
         return -1;
 
     if (!priv->cgroup)
@@ -1103,16 +880,16 @@ qemuSetupCgroup(virDomainObj *vm,
     if (qemuSetupDevicesCgroup(vm) < 0)
         return -1;
 
-    if (qemuSetupBlkioCgroup(vm) < 0)
+    if (virDomainCgroupSetupBlkioCgroup(vm, priv->cgroup) < 0)
         return -1;
 
-    if (qemuSetupMemoryCgroup(vm) < 0)
+    if (virDomainCgroupSetupMemoryCgroup(vm, priv->cgroup) < 0)
         return -1;
 
-    if (qemuSetupCpuCgroup(vm) < 0)
+    if (virDomainCgroupSetupCpuCgroup(vm, priv->cgroup) < 0)
         return -1;
 
-    if (qemuSetupCpusetCgroup(vm) < 0)
+    if (virDomainCgroupSetupCpusetCgroup(priv->cgroup) < 0)
         return -1;
 
     if (qemuSetupCgroupAppid(vm) < 0)
@@ -1121,23 +898,6 @@ qemuSetupCgroup(virDomainObj *vm,
     return 0;
 }
 
-int
-qemuSetupCgroupVcpuBW(virCgroup *cgroup,
-                      unsigned long long period,
-                      long long quota)
-{
-    return virCgroupSetupCpuPeriodQuota(cgroup, period, quota);
-}
-
-
-int
-qemuSetupCgroupCpusetCpus(virCgroup *cgroup,
-                          virBitmap *cpumask)
-{
-    return virCgroupSetupCpusetCpus(cgroup, cpumask);
-}
-
-
 int
 qemuSetupCgroupForExtDevices(virDomainObj *vm,
                              virQEMUDriver *driver)
@@ -1164,148 +924,3 @@ qemuSetupCgroupForExtDevices(virDomainObj *vm,
 
     return qemuExtDevicesSetupCgroup(driver, vm, cgroup_temp);
 }
-
-
-int
-qemuSetupGlobalCpuCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-    unsigned long long period = vm->def->cputune.global_period;
-    long long quota = vm->def->cputune.global_quota;
-    g_autofree char *mem_mask = NULL;
-    virDomainNumatuneMemMode mem_mode;
-
-    if ((period || quota) &&
-        !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                       _("cgroup cpu is required for scheduler tuning"));
-        return -1;
-    }
-
-    /*
-     * If CPU cgroup controller is not initialized here, then we need
-     * neither period nor quota settings. And if CPUSET controller is
-     * not initialized either, then there's nothing to do anyway.
-     */
-    if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU) &&
-        !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
-        return 0;
-
-
-    if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
-        mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT &&
-        virDomainNumatuneMaybeFormatNodeset(vm->def->numa,
-                                            priv->autoNodeset,
-                                            &mem_mask, -1) < 0)
-        return -1;
-
-    if (period || quota) {
-        if (qemuSetupCgroupVcpuBW(priv->cgroup, period, quota) < 0)
-            return -1;
-    }
-
-    return 0;
-}
-
-
-int
-qemuRemoveCgroup(virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (priv->cgroup == NULL)
-        return 0; /* Not supported, so claim success */
-
-    if (virCgroupTerminateMachine(priv->machineName) < 0) {
-        if (!virCgroupNewIgnoreError())
-            VIR_DEBUG("Failed to terminate cgroup for %s", vm->def->name);
-    }
-
-    return virCgroupRemove(priv->cgroup);
-}
-
-
-static void
-qemuCgroupEmulatorAllNodesDataFree(qemuCgroupEmulatorAllNodesData *data)
-{
-    if (!data)
-        return;
-
-    virCgroupFree(data->emulatorCgroup);
-    g_free(data->emulatorMemMask);
-    g_free(data);
-}
-
-
-/**
- * qemuCgroupEmulatorAllNodesAllow:
- * @cgroup: domain cgroup pointer
- * @retData: filled with structure used to roll back the operation
- *
- * Allows all NUMA nodes for the qemu emulator thread temporarily. This is
- * necessary when hotplugging cpus since it requires memory allocated in the
- * DMA region. Afterwards the operation can be reverted by
- * qemuCgroupEmulatorAllNodesRestore.
- *
- * Returns 0 on success -1 on error
- */
-int
-qemuCgroupEmulatorAllNodesAllow(virCgroup *cgroup,
-                                qemuCgroupEmulatorAllNodesData **retData)
-{
-    qemuCgroupEmulatorAllNodesData *data = NULL;
-    g_autofree char *all_nodes_str = NULL;
-    g_autoptr(virBitmap) all_nodes = NULL;
-    int ret = -1;
-
-    if (!virNumaIsAvailable() ||
-        !virCgroupHasController(cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
-        return 0;
-
-    if (!(all_nodes = virNumaGetHostMemoryNodeset()))
-        goto cleanup;
-
-    if (!(all_nodes_str = virBitmapFormat(all_nodes)))
-        goto cleanup;
-
-    data = g_new0(qemuCgroupEmulatorAllNodesData, 1);
-
-    if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_EMULATOR, 0,
-                           false, &data->emulatorCgroup) < 0)
-        goto cleanup;
-
-    if (virCgroupGetCpusetMems(data->emulatorCgroup, &data->emulatorMemMask) < 0 ||
-        virCgroupSetCpusetMems(data->emulatorCgroup, all_nodes_str) < 0)
-        goto cleanup;
-
-    *retData = g_steal_pointer(&data);
-    ret = 0;
-
- cleanup:
-    qemuCgroupEmulatorAllNodesDataFree(data);
-
-    return ret;
-}
-
-
-/**
- * qemuCgroupEmulatorAllNodesRestore:
- * @data: data structure created by qemuCgroupEmulatorAllNodesAllow
- *
- * Rolls back the setting done by qemuCgroupEmulatorAllNodesAllow and frees the
- * associated data.
- */
-void
-qemuCgroupEmulatorAllNodesRestore(qemuCgroupEmulatorAllNodesData *data)
-{
-    virErrorPtr err;
-
-    if (!data)
-        return;
-
-    virErrorPreserveLast(&err);
-    virCgroupSetCpusetMems(data->emulatorCgroup, data->emulatorMemMask);
-    virErrorRestore(&err);
-
-    qemuCgroupEmulatorAllNodesDataFree(data);
-}
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index cd537ebd82..f09134947f 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -56,18 +56,11 @@ int qemuSetupChardevCgroup(virDomainObj *vm,
                            virDomainChrDef *dev);
 int qemuTeardownChardevCgroup(virDomainObj *vm,
                               virDomainChrDef *dev);
-int qemuConnectCgroup(virDomainObj *vm);
 int qemuSetupCgroup(virDomainObj *vm,
                     size_t nnicindexes,
                     int *nicindexes);
-int qemuSetupCgroupVcpuBW(virCgroup *cgroup,
-                          unsigned long long period,
-                          long long quota);
-int qemuSetupCgroupCpusetCpus(virCgroup *cgroup, virBitmap *cpumask);
-int qemuSetupGlobalCpuCgroup(virDomainObj *vm);
 int qemuSetupCgroupForExtDevices(virDomainObj *vm,
                                  virQEMUDriver *driver);
-int qemuRemoveCgroup(virDomainObj *vm);
 
 typedef struct _qemuCgroupEmulatorAllNodesData qemuCgroupEmulatorAllNodesData;
 struct _qemuCgroupEmulatorAllNodesData {
@@ -75,8 +68,4 @@ struct _qemuCgroupEmulatorAllNodesData {
     char *emulatorMemMask;
 };
 
-int qemuCgroupEmulatorAllNodesAllow(virCgroup *cgroup,
-                                    qemuCgroupEmulatorAllNodesData **data);
-void qemuCgroupEmulatorAllNodesRestore(qemuCgroupEmulatorAllNodesData *data);
-
 extern const char *const defaultDeviceACL[];
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d3d76c003f..c3d4bc1270 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4424,7 +4424,7 @@ qemuDomainPinVcpuLive(virDomainObj *vm,
             if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, vcpu,
                                    false, &cgroup_vcpu) < 0)
                 goto cleanup;
-            if (qemuSetupCgroupCpusetCpus(cgroup_vcpu, cpumap) < 0)
+            if (virDomainCgroupSetupCpusetCpus(cgroup_vcpu, cpumap) < 0)
                 goto cleanup;
         }
 
@@ -4633,7 +4633,7 @@ qemuDomainPinEmulator(virDomainPtr dom,
@@ qemuDomainPinEmulator(virDomainPtr dom, 0, false, &cgroup_emulator) < 0) goto endjob; =20 - if (qemuSetupCgroupCpusetCpus(cgroup_emulator, pcpumap) < 0) { + if (virDomainCgroupSetupCpusetCpus(cgroup_emulator, pcpumap) <= 0) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("failed to set cpuset.cpus in cgroup" " for emulator threads")); @@ -5038,7 +5038,7 @@ qemuDomainPinIOThread(virDomainPtr dom, if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_IOTHREA= D, iothread_id, false, &cgroup_iothread) <= 0) goto endjob; - if (qemuSetupCgroupCpusetCpus(cgroup_iothread, pcpumap) < 0) { + if (virDomainCgroupSetupCpusetCpus(cgroup_iothread, pcpumap) <= 0) { virReportError(VIR_ERR_OPERATION_INVALID, _("failed to set cpuset.cpus in cgroup" " for iothread %d"), iothread_id); @@ -8938,7 +8938,7 @@ qemuSetGlobalBWLive(virCgroup *cgroup, unsigned long = long period, if (period =3D=3D 0 && quota =3D=3D 0) return 0; =20 - if (qemuSetupCgroupVcpuBW(cgroup, period, quota) < 0) + if (virDomainCgroupSetupVcpuBW(cgroup, period, quota) < 0) return -1; =20 return 0; @@ -9133,7 +9133,7 @@ qemuSetVcpusBWLive(virDomainObj *vm, virCgroup *cgrou= p, false, &cgroup_vcpu) < 0) return -1; =20 - if (qemuSetupCgroupVcpuBW(cgroup_vcpu, period, quota) < 0) + if (virDomainCgroupSetupVcpuBW(cgroup_vcpu, period, quota) < 0) return -1; } =20 @@ -9154,7 +9154,7 @@ qemuSetEmulatorBandwidthLive(virCgroup *cgroup, false, &cgroup_emulator) < 0) return -1; =20 - if (qemuSetupCgroupVcpuBW(cgroup_emulator, period, quota) < 0) + if (virDomainCgroupSetupVcpuBW(cgroup_emulator, period, quota) < 0) return -1; =20 return 0; @@ -9181,7 +9181,7 @@ qemuSetIOThreadsBWLive(virDomainObj *vm, virCgroup *c= group, false, &cgroup_iothread) < 0) return -1; =20 - if (qemuSetupCgroupVcpuBW(cgroup_iothread, period, quota) < 0) + if (virDomainCgroupSetupVcpuBW(cgroup_iothread, period, quota) < 0) return -1; } =20 diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c index 04c6600f26..01d734e294 100644 --- 
a/src/qemu/qemu_hotplug.c +++ b/src/qemu/qemu_hotplug.c @@ -37,6 +37,7 @@ #include "qemu_snapshot.h" #include "qemu_virtiofs.h" #include "domain_audit.h" +#include "domain_cgroup.h" #include "netdev_bandwidth_conf.h" #include "domain_nwfilter.h" #include "virlog.h" @@ -6551,11 +6552,11 @@ qemuDomainSetVcpusLive(virQEMUDriver *driver, bool enable) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuCgroupEmulatorAllNodesData *emulatorCgroup =3D NULL; + virCgroupEmulatorAllNodesData *emulatorCgroup =3D NULL; ssize_t nextvcpu =3D -1; int ret =3D -1; =20 - if (qemuCgroupEmulatorAllNodesAllow(priv->cgroup, &emulatorCgroup) < 0) + if (virDomainCgroupEmulatorAllNodesAllow(priv->cgroup, &emulatorCgroup= ) < 0) goto cleanup; =20 if (enable) { @@ -6576,7 +6577,7 @@ qemuDomainSetVcpusLive(virQEMUDriver *driver, ret =3D 0; =20 cleanup: - qemuCgroupEmulatorAllNodesRestore(emulatorCgroup); + virDomainCgroupEmulatorAllNodesRestore(emulatorCgroup); =20 return ret; } diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 5c9ca0fe4f..605fa2efa2 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -73,6 +73,7 @@ #include "virpidfile.h" #include "virhostcpu.h" #include "domain_audit.h" +#include "domain_cgroup.h" #include "domain_nwfilter.h" #include "domain_validate.h" #include "locking/domain_lock.h" @@ -2686,7 +2687,7 @@ qemuProcessSetupPid(virDomainObj *vm, =20 if (virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU= SET)) { if (use_cpumask && - qemuSetupCgroupCpusetCpus(cgroup, use_cpumask) < 0) + virDomainCgroupSetupCpusetCpus(cgroup, use_cpumask) < 0) goto cleanup; =20 if (mem_mask && virCgroupSetCpusetMems(cgroup, mem_mask) < 0) @@ -2695,7 +2696,7 @@ qemuProcessSetupPid(virDomainObj *vm, } =20 if ((period || quota) && - qemuSetupCgroupVcpuBW(cgroup, period, quota) < 0) + virDomainCgroupSetupVcpuBW(cgroup, period, quota) < 0) goto cleanup; =20 /* Move the thread to the sub dir */ @@ -5964,7 +5965,7 @@ 
qemuProcessSetupHotpluggableVcpus(virQEMUDriver *driv= er, { unsigned int maxvcpus =3D virDomainDefGetVcpusMax(vm->def); qemuDomainObjPrivate *priv =3D vm->privateData; - qemuCgroupEmulatorAllNodesData *emulatorCgroup =3D NULL; + virCgroupEmulatorAllNodesData *emulatorCgroup =3D NULL; virDomainVcpuDef *vcpu; qemuDomainVcpuPrivate *vcpupriv; size_t i; @@ -5992,7 +5993,7 @@ qemuProcessSetupHotpluggableVcpus(virQEMUDriver *driv= er, qsort(bootHotplug, nbootHotplug, sizeof(*bootHotplug), qemuProcessVcpusSortOrder); =20 - if (qemuCgroupEmulatorAllNodesAllow(priv->cgroup, &emulatorCgroup) < 0) + if (virDomainCgroupEmulatorAllNodesAllow(priv->cgroup, &emulatorCgroup= ) < 0) goto cleanup; =20 for (i =3D 0; i < nbootHotplug; i++) { @@ -6016,7 +6017,7 @@ qemuProcessSetupHotpluggableVcpus(virQEMUDriver *driv= er, ret =3D 0; =20 cleanup: - qemuCgroupEmulatorAllNodesRestore(emulatorCgroup); + virDomainCgroupEmulatorAllNodesRestore(emulatorCgroup); return ret; } =20 @@ -7006,7 +7007,7 @@ qemuProcessPrepareHost(virQEMUDriver *driver, /* Ensure no historical cgroup for this VM is lying around bogus * settings */ VIR_DEBUG("Ensuring no historical cgroup is lying around"); - qemuRemoveCgroup(vm); + virDomainCgroupRemoveCgroup(vm, priv->cgroup, priv->machineName); =20 if (g_mkdir_with_parents(cfg->logDir, 0777) < 0) { virReportSystemError(errno, @@ -7615,7 +7616,7 @@ qemuProcessLaunch(virConnectPtr conn, goto cleanup; =20 VIR_DEBUG("Setting global CPU cgroup (if required)"); - if (qemuSetupGlobalCpuCgroup(vm) < 0) + if (virDomainCgroupSetupGlobalCpuCgroup(vm, priv->cgroup, priv->autoNo= deset) < 0) goto cleanup; =20 VIR_DEBUG("Setting vCPU tuning/settings"); @@ -8223,7 +8224,7 @@ void qemuProcessStop(virQEMUDriver *driver, } =20 retry: - if ((ret =3D qemuRemoveCgroup(vm)) < 0) { + if ((ret =3D virDomainCgroupRemoveCgroup(vm, priv->cgroup, priv->machi= neName)) < 0) { if (ret =3D=3D -EBUSY && (retries++ < 5)) { g_usleep(200*1000); goto retry; @@ -8782,7 +8783,12 @@ 
qemuProcessReconnect(void *opaque) if (!priv->machineName) goto error; =20 - if (qemuConnectCgroup(obj) < 0) + if (virDomainCgroupConnectCgroup("qemu", + obj, + priv->cgroup, + cfg->cgroupControllers, + priv->driver->privileged, + priv->machineName) < 0) goto error; =20 if (qemuDomainPerfRestart(obj) < 0) --=20 2.27.0