From: Jiri Denemark
To: libvir-list@redhat.com
Subject: [libvirt PATCH 61/80] qemu: Refactor qemuMigrationDstPrepareFresh
Date: Tue, 10 May 2022 17:21:22 +0200
Message-Id: <9624e404bf7ba91ff2fe2543527e69f93270fa69.1652196064.git.jdenemar@redhat.com>

Offline migration jumps over a big part of qemuMigrationDstPrepareFresh.
Let's move that part into a new qemuMigrationDstPrepareActive function to
make the code easier to follow.
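
For orientation, the control flow that results from this split can be
condensed into the following self-contained sketch. It compiles and runs,
but everything in it (prepareFresh, prepareActive, the flag value) is an
illustrative stand-in for the real driver code, not libvirt API:

    /* sketch: offline migration runs only the common head and tail */
    #include <stdio.h>

    #define MIGRATE_OFFLINE (1 << 10)   /* illustrative flag bit */

    static int prepareActive(unsigned long flags)
    {
        /* stands in for qemuMigrationDstPrepareActive(): everything that
         * is only needed when an incoming QEMU process must be started */
        (void)flags;
        printf("starting incoming QEMU process\n");
        return 0;
    }

    static int prepareFresh(unsigned long flags)
    {
        /* common head: hook script, domain definition, migration job */

        if (!(flags & MIGRATE_OFFLINE)) {
            if (prepareActive(flags) < 0)
                return -1;              /* stands in for 'goto stopjob' */
        }

        /* common tail: format the cookie, keep the job open for finish() */
        printf("cookie formatted, job stays active\n");
        return 0;
    }

    int main(void)
    {
        prepareFresh(0);                /* live: both parts run */
        prepareFresh(MIGRATE_OFFLINE);  /* offline: active part skipped */
        return 0;
    }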
Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
---
 src/qemu/qemu_migration.c | 374 +++++++++++++++++++++-----------------
 1 file changed, 206 insertions(+), 168 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index f1e3774034..dc608fb8a4 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3118,6 +3118,200 @@ qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObj *vm,
 }
 
 
+static int
+qemuMigrationDstPrepareActive(virQEMUDriver *driver,
+                              virDomainObj *vm,
+                              virConnectPtr dconn,
+                              qemuMigrationCookie *mig,
+                              virStreamPtr st,
+                              const char *protocol,
+                              unsigned short port,
+                              const char *listenAddress,
+                              size_t nmigrate_disks,
+                              const char **migrate_disks,
+                              int nbdPort,
+                              const char *nbdURI,
+                              qemuMigrationParams *migParams,
+                              unsigned long flags)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    qemuDomainJobPrivate *jobPriv = priv->job.privateData;
+    qemuProcessIncomingDef *incoming = NULL;
+    g_autofree char *tlsAlias = NULL;
+    virObjectEvent *event = NULL;
+    virErrorPtr origErr = NULL;
+    int dataFD[2] = { -1, -1 };
+    bool stopProcess = false;
+    unsigned int startFlags;
+    bool relabel = false;
+    bool tunnel = !!st;
+    int ret = -1;
+    int rv;
+
+    if (STREQ_NULLABLE(protocol, "rdma") &&
+        !virMemoryLimitIsSet(vm->def->mem.hard_limit)) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("cannot start RDMA migration with no memory hard "
+                         "limit set"));
+        goto error;
+    }
+
+    if (qemuMigrationDstPrecreateStorage(vm, mig->nbd,
+                                         nmigrate_disks, migrate_disks,
+                                         !!(flags & VIR_MIGRATE_NON_SHARED_INC)) < 0)
+        goto error;
+
+    if (tunnel &&
+        virPipe(dataFD) < 0)
+        goto error;
+
+    startFlags = VIR_QEMU_PROCESS_START_AUTODESTROY;
+
+    if (qemuProcessInit(driver, vm, mig->cpu, VIR_ASYNC_JOB_MIGRATION_IN,
+                        true, startFlags) < 0)
+        goto error;
+    stopProcess = true;
+
+    if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
+                                             listenAddress, port,
+                                             dataFD[0])))
+        goto error;
+
+    if (qemuProcessPrepareDomain(driver, vm, startFlags) < 0)
+        goto error;
+
+    if (qemuProcessPrepareHost(driver, vm, startFlags) < 0)
+        goto error;
+
+    rv = qemuProcessLaunch(dconn, driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                           incoming, NULL,
+                           VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
+                           startFlags);
+    if (rv < 0) {
+        if (rv == -2)
+            relabel = true;
+        goto error;
+    }
+    relabel = true;
+
+    if (tunnel) {
+        if (virFDStreamOpen(st, dataFD[1]) < 0) {
+            virReportSystemError(errno, "%s",
+                                 _("cannot pass pipe for tunnelled migration"));
+            goto error;
+        }
+        dataFD[1] = -1; /* 'st' owns the FD now & will close it */
+    }
+
+    if (STREQ_NULLABLE(protocol, "rdma") &&
+        vm->def->mem.hard_limit > 0 &&
+        virProcessSetMaxMemLock(vm->pid, vm->def->mem.hard_limit << 10) < 0) {
+        goto error;
+    }
+
+    if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
+        goto error;
+
+    if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 migParams, mig->caps->automatic) < 0)
+        goto error;
+
+    /* Migrations using TLS need to add the "tls-creds-x509" object and
+     * set the migration TLS parameters */
+    if (flags & VIR_MIGRATE_TLS) {
+        if (qemuMigrationParamsEnableTLS(driver, vm, true,
+                                         VIR_ASYNC_JOB_MIGRATION_IN,
+                                         &tlsAlias, NULL,
+                                         migParams) < 0)
+            goto error;
+    } else {
+        if (qemuMigrationParamsDisableTLS(vm, migParams) < 0)
+            goto error;
+    }
+
+    if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 migParams) < 0)
+        goto error;
+
+    if (mig->nbd &&
+        flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
+        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER)) {
+        const char *nbdTLSAlias = NULL;
+
+        if (flags & VIR_MIGRATE_TLS) {
+            if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_TLS)) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                               _("QEMU NBD server does not support TLS transport"));
+                goto error;
+            }
+
+            nbdTLSAlias = tlsAlias;
+        }
+
+        if (qemuMigrationDstStartNBDServer(driver, vm, incoming->address,
+                                           nmigrate_disks, migrate_disks,
+                                           nbdPort, nbdURI,
+                                           nbdTLSAlias) < 0) {
+            goto error;
+        }
+    }
+
+    if (mig->lockState) {
+        VIR_DEBUG("Received lockstate %s", mig->lockState);
+        VIR_FREE(priv->lockState);
+        priv->lockState = g_steal_pointer(&mig->lockState);
+    } else {
+        VIR_DEBUG("Received no lockstate");
+    }
+
+    if (qemuMigrationDstRun(driver, vm, incoming->uri,
+                            VIR_ASYNC_JOB_MIGRATION_IN) < 0)
+        goto error;
+
+    if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                                 false, VIR_DOMAIN_PAUSED_MIGRATION) < 0)
+        goto error;
+
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
+        virDomainAuditStart(vm, "migrated", true);
+        event = virDomainEventLifecycleNewFromObj(vm,
+                                                  VIR_DOMAIN_EVENT_STARTED,
+                                                  VIR_DOMAIN_EVENT_STARTED_MIGRATED);
+    }
+
+    ret = 0;
+
+ cleanup:
+    qemuProcessIncomingDefFree(incoming);
+    VIR_FORCE_CLOSE(dataFD[0]);
+    VIR_FORCE_CLOSE(dataFD[1]);
+    virObjectEventStateQueue(driver->domainEventState, event);
+    virErrorRestore(&origErr);
+    return ret;
+
+ error:
+    virErrorPreserveLast(&origErr);
+    qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
+                             jobPriv->migParams, priv->job.apiFlags);
+
+    if (stopProcess) {
+        unsigned int stopFlags = VIR_QEMU_PROCESS_STOP_MIGRATED;
+        if (!relabel)
+            stopFlags |= VIR_QEMU_PROCESS_STOP_NO_RELABEL;
+        virDomainAuditStart(vm, "migrated", false);
+        qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
+                        VIR_ASYNC_JOB_MIGRATION_IN, stopFlags);
+        /* release if port is auto selected which is not the case if
+         * it is given in parameters
+         */
+        if (nbdPort == 0)
+            virPortAllocatorRelease(priv->nbdPort);
+        priv->nbdPort = 0;
+    }
+    goto cleanup;
+}
+
+
 static int
 qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
                              virConnectPtr dconn,
@@ -3140,32 +3334,20 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
                              unsigned long flags)
 {
     virDomainObj *vm = NULL;
-    virObjectEvent *event = NULL;
     virErrorPtr origErr;
     int ret = -1;
-    int dataFD[2] = { -1, -1 };
     qemuDomainObjPrivate *priv = NULL;
     qemuMigrationCookie *mig = NULL;
-    qemuDomainJobPrivate *jobPriv = NULL;
-    bool tunnel = !!st;
     g_autofree char *xmlout = NULL;
-    unsigned int cookieFlags;
-    unsigned int startFlags;
-    qemuProcessIncomingDef *incoming = NULL;
+    unsigned int cookieFlags = 0;
     bool taint_hook = false;
-    bool stopProcess = false;
-    bool relabel = false;
-    int rv;
-    g_autofree char *tlsAlias = NULL;
 
     VIR_DEBUG("name=%s, origname=%s, protocol=%s, port=%hu, "
               "listenAddress=%s, nbdPort=%d, nbdURI=%s, flags=0x%lx",
               (*def)->name, NULLSTR(origname), protocol, port,
               listenAddress, nbdPort, NULLSTR(nbdURI), flags);
 
-    if (flags & VIR_MIGRATE_OFFLINE) {
-        cookieFlags = 0;
-    } else {
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
         cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS |
                       QEMU_MIGRATION_COOKIE_CAPS;
     }
@@ -3238,7 +3420,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
         goto cleanup;
 
     priv = vm->privateData;
-    jobPriv = priv->job.privateData;
     priv->origname = g_strdup(origname);
 
     if (taint_hook) {
@@ -3246,19 +3427,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
         priv->hookRun = true;
     }
 
-    if (STREQ_NULLABLE(protocol, "rdma") &&
-        !virMemoryLimitIsSet(vm->def->mem.hard_limit)) {
-        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
-                       _("cannot start RDMA migration with no memory hard "
-                         "limit set"));
-        goto cleanup;
-    }
-
-    if (qemuMigrationDstPrecreateStorage(vm, mig->nbd,
-                                         nmigrate_disks, migrate_disks,
-                                         !!(flags & VIR_MIGRATE_NON_SHARED_INC)) < 0)
-        goto cleanup;
-
     if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
                               flags) < 0)
         goto cleanup;
@@ -3269,122 +3437,21 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
     /* Domain starts inactive, even if the domain XML had an id field. */
     vm->def->id = -1;
 
-    if (flags & VIR_MIGRATE_OFFLINE)
-        goto done;
-
-    if (tunnel &&
-        virPipe(dataFD) < 0)
-        goto stopjob;
-
-    startFlags = VIR_QEMU_PROCESS_START_AUTODESTROY;
-
-    if (qemuProcessInit(driver, vm, mig->cpu, VIR_ASYNC_JOB_MIGRATION_IN,
-                        true, startFlags) < 0)
-        goto stopjob;
-    stopProcess = true;
-
-    if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
-                                             listenAddress, port,
-                                             dataFD[0])))
-        goto stopjob;
-
-    if (qemuProcessPrepareDomain(driver, vm, startFlags) < 0)
-        goto stopjob;
-
-    if (qemuProcessPrepareHost(driver, vm, startFlags) < 0)
-        goto stopjob;
-
-    rv = qemuProcessLaunch(dconn, driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                           incoming, NULL,
-                           VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
-                           startFlags);
-    if (rv < 0) {
-        if (rv == -2)
-            relabel = true;
-        goto stopjob;
-    }
-    relabel = true;
-
-    if (tunnel) {
-        if (virFDStreamOpen(st, dataFD[1]) < 0) {
-            virReportSystemError(errno, "%s",
-                                 _("cannot pass pipe for tunnelled migration"));
-            goto stopjob;
-        }
-        dataFD[1] = -1; /* 'st' owns the FD now & will close it */
-    }
-
-    if (STREQ_NULLABLE(protocol, "rdma") &&
-        vm->def->mem.hard_limit > 0 &&
-        virProcessSetMaxMemLock(vm->pid, vm->def->mem.hard_limit << 10) < 0) {
-        goto stopjob;
-    }
-
-    if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
-        goto stopjob;
-
-    if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 migParams, mig->caps->automatic) < 0)
-        goto stopjob;
-
-    /* Migrations using TLS need to add the "tls-creds-x509" object and
-     * set the migration TLS parameters */
-    if (flags & VIR_MIGRATE_TLS) {
-        if (qemuMigrationParamsEnableTLS(driver, vm, true,
-                                         VIR_ASYNC_JOB_MIGRATION_IN,
-                                         &tlsAlias, NULL,
-                                         migParams) < 0)
-            goto stopjob;
-    } else {
-        if (qemuMigrationParamsDisableTLS(vm, migParams) < 0)
-            goto stopjob;
-    }
-
-    if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 migParams) < 0)
-        goto stopjob;
-
-    if (mig->nbd &&
-        flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
-        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER)) {
-        const char *nbdTLSAlias = NULL;
-
-        if (flags & VIR_MIGRATE_TLS) {
-            if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_TLS)) {
-                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-                               _("QEMU NBD server does not support TLS transport"));
-                goto stopjob;
-            }
-
-            nbdTLSAlias = tlsAlias;
-        }
-
-        if (qemuMigrationDstStartNBDServer(driver, vm, incoming->address,
-                                           nmigrate_disks, migrate_disks,
-                                           nbdPort, nbdURI,
-                                           nbdTLSAlias) < 0) {
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
+        if (qemuMigrationDstPrepareActive(driver, vm, dconn, mig, st,
+                                          protocol, port, listenAddress,
+                                          nmigrate_disks, migrate_disks,
+                                          nbdPort, nbdURI,
+                                          migParams, flags) < 0) {
             goto stopjob;
         }
-        cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
-    }
 
-    if (mig->lockState) {
-        VIR_DEBUG("Received lockstate %s", mig->lockState);
-        VIR_FREE(priv->lockState);
-        priv->lockState = g_steal_pointer(&mig->lockState);
-    } else {
-        VIR_DEBUG("Received no lockstate");
+        if (mig->nbd &&
+            flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) &&
+            virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_SERVER))
+            cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
     }
 
-    if (qemuMigrationDstRun(driver, vm, incoming->uri,
-                            VIR_ASYNC_JOB_MIGRATION_IN) < 0)
-        goto stopjob;
-
-    if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                                 false, VIR_DOMAIN_PAUSED_MIGRATION) < 0)
-        goto stopjob;
-
- done:
     if (qemuMigrationCookieFormat(mig, driver, vm,
                                   QEMU_MIGRATION_DESTINATION,
                                   cookieout, cookieoutlen, cookieFlags) < 0) {
@@ -3397,13 +3464,6 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
 
     qemuDomainCleanupAdd(vm, qemuMigrationDstPrepareCleanup);
 
-    if (!(flags & VIR_MIGRATE_OFFLINE)) {
-        virDomainAuditStart(vm, "migrated", true);
-        event = virDomainEventLifecycleNewFromObj(vm,
-                                                  VIR_DOMAIN_EVENT_STARTED,
-                                                  VIR_DOMAIN_EVENT_STARTED_MIGRATED);
-    }
-
     /* We keep the job active across API calls until the finish() call.
      * This prevents any other APIs being invoked while incoming
      * migration is taking place.
@@ -3421,41 +3481,19 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver,
 
  cleanup:
     virErrorPreserveLast(&origErr);
-    qemuProcessIncomingDefFree(incoming);
-    VIR_FORCE_CLOSE(dataFD[0]);
-    VIR_FORCE_CLOSE(dataFD[1]);
     if (ret < 0 && priv) {
         /* priv is set right after vm is added to the list of domains
          * and there is no 'goto cleanup;' in the middle of those */
         VIR_FREE(priv->origname);
-        /* release if port is auto selected which is not the case if
-         * it is given in parameters
-         */
-        if (nbdPort == 0)
-            virPortAllocatorRelease(priv->nbdPort);
-        priv->nbdPort = 0;
         virDomainObjRemoveTransientDef(vm);
         qemuDomainRemoveInactive(driver, vm);
     }
    virDomainObjEndAPI(&vm);
-    virObjectEventStateQueue(driver->domainEventState, event);
     qemuMigrationCookieFree(mig);
     virErrorRestore(&origErr);
     return ret;
 
  stopjob:
-    qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
-                             jobPriv->migParams, priv->job.apiFlags);
-
-    if (stopProcess) {
-        unsigned int stopFlags = VIR_QEMU_PROCESS_STOP_MIGRATED;
-        if (!relabel)
-            stopFlags |= VIR_QEMU_PROCESS_STOP_NO_RELABEL;
-        virDomainAuditStart(vm, "migrated", false);
-        qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
-                        VIR_ASYNC_JOB_MIGRATION_IN, stopFlags);
-    }
-
     qemuMigrationJobFinish(vm);
     goto cleanup;
 }
-- 
2.35.1
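
A side note on the shape of the new function: qemuMigrationDstPrepareActive
keeps the teardown shared by both outcomes under a cleanup: label and funnels
every failure through an error: label that preserves the original error,
undoes the failure-only work, and then falls into cleanup:. A minimal,
compilable model of that pattern, with plain strings standing in for
virErrorPreserveLast()/virErrorRestore(), looks like this:

    #include <stdbool.h>
    #include <stdio.h>

    static int
    prepareModel(bool failLaunch)
    {
        const char *origErr = NULL;    /* stands in for virErrorPtr origErr */
        bool stopProcess = false;
        int ret = -1;

        stopProcess = true;            /* "process init" succeeded */

        if (failLaunch) {
            origErr = "launch failed"; /* the error the caller should see */
            goto error;
        }

        ret = 0;

     cleanup:
        /* teardown shared by the success and failure paths */
        if (origErr)                   /* stands in for virErrorRestore() */
            fprintf(stderr, "error: %s\n", origErr);
        return ret;

     error:
        /* failure-only teardown; origErr was saved first so that anything
         * reported here cannot mask the real cause of the failure */
        if (stopProcess)
            fprintf(stderr, "stopping half-started process\n");
        goto cleanup;
    }

    int main(void)
    {
        prepareModel(true);   /* exercises error: and then cleanup: */
        return prepareModel(false);
    }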