From: Jiri Denemark
To: libvir-list@redhat.com
Cc: Peter Krempa, Pavel Hrdina
Subject: [libvirt PATCH v2 59/81] qemu: Refactor qemuMigrationDstPrepareFresh
Date: Wed, 1 Jun 2022 14:49:59 +0200
Message-Id: <96ebc7c97bb4a8d22985fde10e5f61d0e4c44a47.1654087150.git.jdenemar@redhat.com>

Offline migration jumps over a big part of qemuMigrationDstPrepareFresh.
Let's move that part into a new qemuMigrationDstPrepareActive function
to make the code easier to follow.
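The shape of the change, reduced to a self-contained sketch (everything
below is an illustrative stand-in, not the real libvirt code; the real
helper takes the full set of migration arguments):

#include <stdio.h>

#define MIGRATE_OFFLINE (1 << 0)   /* stand-in for VIR_MIGRATE_OFFLINE */

/* Everything that only makes sense for a running guest: launching QEMU,
 * applying migration parameters, starting the NBD server. In the patch
 * this role is played by qemuMigrationDstPrepareActive(). */
static int
prepare_active(void)
{
    printf("launch QEMU, apply migration params, start NBD server\n");
    return 0;
}

/* The caller now guards a single helper call with the offline flag
 * instead of jumping over a large block with "goto done". In the patch
 * this role is played by qemuMigrationDstPrepareFresh(). */
static int
prepare_fresh(unsigned long flags)
{
    if (!(flags & MIGRATE_OFFLINE)) {
        if (prepare_active() < 0)
            return -1;
    }

    printf("format migration cookie, keep job active\n"); /* common tail */
    return 0;
}

int
main(void)
{
    return prepare_fresh(MIGRATE_OFFLINE) < 0 ? 1 : 0;
}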
Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
Reviewed-by: Pavel Hrdina
---
Notes:
    Version 2:
    - dropped line breaks from error messages
    - fixed indentation
    - dropped VIR_MIGRATE_OFFLINE check from qemuMigrationDstPrepareActive

 src/qemu/qemu_migration.c | 371 +++++++++++++++++++++-----------------
 1 file changed, 203 insertions(+), 168 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d4683a9922..53236b7239 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3083,6 +3083,197 @@ qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObj *vm,
 }
 
 
+static int
+qemuMigrationDstPrepareActive(virQEMUDriver *driver,
+                              virDomainObj *vm,
+                              virConnectPtr dconn,
+                              qemuMigrationCookie *mig,
+                              virStreamPtr st,
+                              const char *protocol,
+                              unsigned short port,
+                              const char *listenAddress,
+                              size_t nmigrate_disks,
+                              const char **migrate_disks,
+                              int nbdPort,
+                              const char *nbdURI,
+                              qemuMigrationParams *migParams,
+                              unsigned long flags)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    qemuDomainJobPrivate *jobPriv = priv->job.privateData;
+    qemuProcessIncomingDef *incoming = NULL;
+    g_autofree char *tlsAlias = NULL;
+    virObjectEvent *event = NULL;
+    virErrorPtr origErr = NULL;
+    int dataFD[2] = { -1, -1 };
+    bool stopProcess = false;
+    unsigned int startFlags;
+    bool relabel = false;
+    bool tunnel = !!st;
+    int ret = -1;
+    int rv;
+
+    if (STREQ_NULLABLE(protocol, "rdma") &&
+        !virMemoryLimitIsSet(vm->def->mem.hard_limit)) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("cannot start RDMA migration with no memory hard limit set"));
+        goto error;
+    }
+
+    if (qemuMigrationDstPrecreateStorage(vm, mig->nbd,
+                                         nmigrate_disks, migrate_disks,
+                                         !!(flags & VIR_MIGRATE_NON_SHARED_INC)) < 0)
+        goto error;
+
+    if (tunnel &&
+        virPipe(dataFD) < 0)
+        goto error;
+
+    startFlags = VIR_QEMU_PROCESS_START_AUTODESTROY;
+
+    if (qemuProcessInit(driver, vm, mig->cpu, VIR_ASYNC_JOB_MIGRATION_IN,
+                        true, startFlags) < 0)
+        goto error;
+    stopProcess = true;
+
+    if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
+                                             listenAddress, port,
+                                             dataFD[0])))
+        goto error;
+
+    if (qemuProcessPrepareDomain(driver, vm, startFlags) < 0)
+        goto error;
+
+    if (qemuProcessPrepareHost(driver, vm, startFlags) < 0)
+        goto error;
+
+    rv = qemuProcessLaunch(dconn, driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                           incoming, NULL,
+                           VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
+                           startFlags);
+    if (rv < 0) {
+        if (rv == -2)
+            relabel = true;
+        goto error;
+    }
+    relabel = true;
+
+    if (tunnel) {
+        if (virFDStreamOpen(st, dataFD[1]) < 0) {
+            virReportSystemError(errno, "%s",
+                                 _("cannot pass pipe for tunnelled migration"));
+            goto error;
+        }
+        dataFD[1] = -1; /* 'st' owns the FD now & will close it */
+    }
+
+    if (STREQ_NULLABLE(protocol, "rdma") &&
+        vm->def->mem.hard_limit > 0 &&
+        virProcessSetMaxMemLock(vm->pid, vm->def->mem.hard_limit << 10) < 0) {
+        goto error;
+    }
+
+    if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
+        goto error;
+
+    if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 migParams, mig->caps->automatic) < 0)
+        goto error;
+
+    /* Migrations using TLS need to add the "tls-creds-x509" object and
+     * set the migration TLS parameters */
+    if (flags & VIR_MIGRATE_TLS) {
+        if (qemuMigrationParamsEnableTLS(driver, vm, true,
+                                         VIR_ASYNC_JOB_MIGRATION_IN,
+                                         &tlsAlias, NULL,
+                                         migParams) < 0)
+            goto error;
+    } else {
+        if (qemuMigrationParamsDisableTLS(vm, migParams) < 0)
+            goto error;
+    }
+
+    if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 migParams) < 0)
+        goto error;
+
+    if (mig->nbd &&
+        flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
+        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER)) {
+        const char *nbdTLSAlias = NULL;
+
+        if (flags & VIR_MIGRATE_TLS) {
+            if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_TLS)) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                               _("QEMU NBD server does not support TLS transport"));
+                goto error;
+            }
+
+            nbdTLSAlias = tlsAlias;
+        }
+
+        if (qemuMigrationDstStartNBDServer(driver, vm, incoming->address,
+                                           nmigrate_disks, migrate_disks,
+                                           nbdPort, nbdURI,
+                                           nbdTLSAlias) < 0) {
+            goto error;
+        }
+    }
+
+    if (mig->lockState) {
+        VIR_DEBUG("Received lockstate %s", mig->lockState);
+        VIR_FREE(priv->lockState);
+        priv->lockState = g_steal_pointer(&mig->lockState);
+    } else {
+        VIR_DEBUG("Received no lockstate");
+    }
+
+    if (qemuMigrationDstRun(driver, vm, incoming->uri,
+                            VIR_ASYNC_JOB_MIGRATION_IN) < 0)
+        goto error;
+
+    if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 false, VIR_DOMAIN_PAUSED_MIGRATION) < 0)
+        goto error;
+
+    virDomainAuditStart(vm, "migrated", true);
+    event = virDomainEventLifecycleNewFromObj(vm,
+                                              VIR_DOMAIN_EVENT_STARTED,
+                                              VIR_DOMAIN_EVENT_STARTED_MIGRATED);
+
+    ret = 0;
+
+ cleanup:
+    qemuProcessIncomingDefFree(incoming);
+    VIR_FORCE_CLOSE(dataFD[0]);
+    VIR_FORCE_CLOSE(dataFD[1]);
+    virObjectEventStateQueue(driver->domainEventState, event);
+    virErrorRestore(&origErr);
+    return ret;
+
+ error:
+    virErrorPreserveLast(&origErr);
+    qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                             jobPriv->migParams, priv->job.apiFlags);
+
+    if (stopProcess) {
+        unsigned int stopFlags = VIR_QEMU_PROCESS_STOP_MIGRATED;
+        if (!relabel)
+            stopFlags |= VIR_QEMU_PROCESS_STOP_NO_RELABEL;
+        virDomainAuditStart(vm, "migrated", false);
+        qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
+                        VIR_ASYNC_JOB_MIGRATION_IN, stopFlags);
+        /* release if port is auto selected which is not the case if
+         * it is given in parameters
+         */
+        if (nbdPort == 0)
+            virPortAllocatorRelease(priv->nbdPort);
+        priv->nbdPort = 0;
+    }
+    goto cleanup;
+}
+
+
 static int
 qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
                              virConnectPtr dconn,
@@ -3105,32 +3296,20 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
                              unsigned long flags)
 {
     virDomainObj *vm = NULL;
-    virObjectEvent *event = NULL;
     virErrorPtr origErr;
     int ret = -1;
-    int dataFD[2] = { -1, -1 };
     qemuDomainObjPrivate *priv = NULL;
     qemuMigrationCookie *mig = NULL;
-    qemuDomainJobPrivate *jobPriv = NULL;
-    bool tunnel = !!st;
     g_autofree char *xmlout = NULL;
-    unsigned int cookieFlags;
-    unsigned int startFlags;
-    qemuProcessIncomingDef *incoming = NULL;
+    unsigned int cookieFlags = 0;
     bool taint_hook = false;
-    bool stopProcess = false;
-    bool relabel = false;
-    int rv;
-    g_autofree char *tlsAlias = NULL;
 
     VIR_DEBUG("name=%s, origname=%s, protocol=%s, port=%hu, "
               "listenAddress=%s, nbdPort=%d, nbdURI=%s, flags=0x%lx",
               (*def)->name, NULLSTR(origname), protocol, port,
               listenAddress, nbdPort, NULLSTR(nbdURI), flags);
 
-    if (flags & VIR_MIGRATE_OFFLINE) {
-        cookieFlags = 0;
-    } else {
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
         cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS |
                       QEMU_MIGRATION_COOKIE_CAPS;
     }
@@ -3203,7 +3382,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
         goto cleanup;
 
     priv = vm->privateData;
-    jobPriv = priv->job.privateData;
     priv->origname = g_strdup(origname);
 
     if (taint_hook) {
@@ -3211,19 +3389,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
         priv->hookRun = true;
     }
 
-    if (STREQ_NULLABLE(protocol, "rdma") &&
-        !virMemoryLimitIsSet(vm->def->mem.hard_limit)) {
-        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
-                       _("cannot start RDMA migration with no memory hard "
-                         "limit set"));
-        goto cleanup;
-    }
-
-    if (qemuMigrationDstPrecreateStorage(vm, mig->nbd,
-                                         nmigrate_disks, migrate_disks,
-                                         !!(flags & VIR_MIGRATE_NON_SHARED_INC)) < 0)
-        goto cleanup;
-
     if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
                               flags) < 0)
         goto cleanup;
@@ -3234,122 +3399,21 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
     /* Domain starts inactive, even if the domain XML had an id field. */
     vm->def->id = -1;
 
-    if (flags & VIR_MIGRATE_OFFLINE)
-        goto done;
-
-    if (tunnel &&
-        virPipe(dataFD) < 0)
-        goto stopjob;
-
-    startFlags = VIR_QEMU_PROCESS_START_AUTODESTROY;
-
-    if (qemuProcessInit(driver, vm, mig->cpu, VIR_ASYNC_JOB_MIGRATION_IN,
-                        true, startFlags) < 0)
-        goto stopjob;
-    stopProcess = true;
-
-    if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
-                                             listenAddress, port,
-                                             dataFD[0])))
-        goto stopjob;
-
-    if (qemuProcessPrepareDomain(driver, vm, startFlags) < 0)
-        goto stopjob;
-
-    if (qemuProcessPrepareHost(driver, vm, startFlags) < 0)
-        goto stopjob;
-
-    rv = qemuProcessLaunch(dconn, driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                           incoming, NULL,
-                           VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
-                           startFlags);
-    if (rv < 0) {
-        if (rv == -2)
-            relabel = true;
-        goto stopjob;
-    }
-    relabel = true;
-
-    if (tunnel) {
-        if (virFDStreamOpen(st, dataFD[1]) < 0) {
-            virReportSystemError(errno, "%s",
-                                 _("cannot pass pipe for tunnelled migration"));
-            goto stopjob;
-        }
-        dataFD[1] = -1; /* 'st' owns the FD now & will close it */
-    }
-
-    if (STREQ_NULLABLE(protocol, "rdma") &&
-        vm->def->mem.hard_limit > 0 &&
-        virProcessSetMaxMemLock(vm->pid, vm->def->mem.hard_limit << 10) < 0) {
-        goto stopjob;
-    }
-
-    if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
-        goto stopjob;
-
-    if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 migParams, mig->caps->automatic) < 0)
-        goto stopjob;
-
-    /* Migrations using TLS need to add the "tls-creds-x509" object and
-     * set the migration TLS parameters */
-    if (flags & VIR_MIGRATE_TLS) {
-        if (qemuMigrationParamsEnableTLS(driver, vm, true,
-                                         VIR_ASYNC_JOB_MIGRATION_IN,
-                                         &tlsAlias, NULL,
-                                         migParams) < 0)
-            goto stopjob;
-    } else {
-        if (qemuMigrationParamsDisableTLS(vm, migParams) < 0)
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
+        if (qemuMigrationDstPrepareActive(driver, vm, dconn, mig, st,
+                                          protocol, port, listenAddress,
+                                          nmigrate_disks, migrate_disks,
+                                          nbdPort, nbdURI,
+                                          migParams, flags) < 0) {
             goto stopjob;
-    }
-
-    if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 migParams) < 0)
-        goto stopjob;
-
-    if (mig->nbd &&
-        flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
-        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER)) {
-        const char *nbdTLSAlias = NULL;
-
-        if (flags & VIR_MIGRATE_TLS) {
-            if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_TLS)) {
-                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-                               _("QEMU NBD server does not support TLS transport"));
-                goto stopjob;
-            }
-
-            nbdTLSAlias = tlsAlias;
         }
 
-        if (qemuMigrationDstStartNBDServer(driver, vm, incoming->address,
-                                           nmigrate_disks, migrate_disks,
-                                           nbdPort, nbdURI,
-                                           nbdTLSAlias) < 0) {
-            goto stopjob;
-        }
-        cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
+        if (mig->nbd &&
+            flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
+            virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER))
+            cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
     }
 
-    if (mig->lockState) {
-        VIR_DEBUG("Received lockstate %s", mig->lockState);
-        VIR_FREE(priv->lockState);
-        priv->lockState = g_steal_pointer(&mig->lockState);
-    } else {
-        VIR_DEBUG("Received no lockstate");
-    }
-
-    if (qemuMigrationDstRun(driver, vm, incoming->uri,
-                            VIR_ASYNC_JOB_MIGRATION_IN) < 0)
-        goto stopjob;
-
-    if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 false, VIR_DOMAIN_PAUSED_MIGRATION) < 0)
-        goto stopjob;
-
- done:
     if (qemuMigrationCookieFormat(mig, driver, vm,
                                   QEMU_MIGRATION_DESTINATION,
                                   cookieout, cookieoutlen, cookieFlags) < 0) {
@@ -3362,13 +3426,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
 
     qemuDomainCleanupAdd(vm, qemuMigrationDstPrepareCleanup);
 
-    if (!(flags & VIR_MIGRATE_OFFLINE)) {
-        virDomainAuditStart(vm, "migrated", true);
-        event = virDomainEventLifecycleNewFromObj(vm,
-                                                  VIR_DOMAIN_EVENT_STARTED,
-                                                  VIR_DOMAIN_EVENT_STARTED_MIGRATED);
-    }
-
     /* We keep the job active across API calls until the finish() call.
      * This prevents any other APIs being invoked while incoming
      * migration is taking place.
@@ -3386,41 +3443,19 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
 
  cleanup:
     virErrorPreserveLast(&origErr);
-    qemuProcessIncomingDefFree(incoming);
-    VIR_FORCE_CLOSE(dataFD[0]);
-    VIR_FORCE_CLOSE(dataFD[1]);
     if (ret < 0 && priv) {
         /* priv is set right after vm is added to the list of domains
          * and there is no 'goto cleanup;' in the middle of those */
         VIR_FREE(priv->origname);
-        /* release if port is auto selected which is not the case if
-         * it is given in parameters
-         */
-        if (nbdPort == 0)
-            virPortAllocatorRelease(priv->nbdPort);
-        priv->nbdPort = 0;
         virDomainObjRemoveTransientDef(vm);
         qemuDomainRemoveInactive(driver, vm);
     }
     virDomainObjEndAPI(&vm);
-    virObjectEventStateQueue(driver->domainEventState, event);
     qemuMigrationCookieFree(mig);
     virErrorRestore(&origErr);
     return ret;
 
  stopjob:
-    qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                             jobPriv->migParams, priv->job.apiFlags);
-
-    if (stopProcess) {
-        unsigned int stopFlags = VIR_QEMU_PROCESS_STOP_MIGRATED;
-        if (!relabel)
-            stopFlags |= VIR_QEMU_PROCESS_STOP_NO_RELABEL;
-        virDomainAuditStart(vm, "migrated", false);
-        qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
-                        VIR_ASYNC_JOB_MIGRATION_IN, stopFlags);
-    }
-
     qemuMigrationJobFinish(vm);
     goto cleanup;
 }
-- 
2.35.1
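
[Editorial note] For readers who don't live in this file: the new function
keeps the error/cleanup idiom used throughout the qemu driver. Failures jump
to "error", which rolls back partial state (guarded by flags such as
stopProcess and relabel) and then falls through to the unconditional
"cleanup" releases that the success path also uses. A minimal compilable
sketch of that idiom, with hypothetical names (only the control flow mirrors
the patch):

#include <stdlib.h>
#include <unistd.h>

/* Sketch of the error/cleanup structure of qemuMigrationDstPrepareActive():
 * errors undo partial state, then reuse the releases that happen on
 * success as well. All names below are illustrative, not libvirt API. */
static int
prepare_sketch(void)
{
    int dataFD[2] = { -1, -1 };
    int stopProcess = 0;        /* was a process launched yet? */
    int ret = -1;

    if (pipe(dataFD) < 0)
        goto error;             /* nothing to roll back yet */

    stopProcess = 1;            /* pretend qemuProcessInit() succeeded */

    ret = 0;                    /* success falls through to cleanup */

 cleanup:
    if (dataFD[0] >= 0)
        close(dataFD[0]);
    if (dataFD[1] >= 0)
        close(dataFD[1]);
    return ret;

 error:
    if (stopProcess) {
        /* here the patch stops the half-started QEMU process and
         * releases the auto-allocated NBD port */
    }
    goto cleanup;
}

int
main(void)
{
    return prepare_sketch() == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}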