From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH v2] qemu_domainjob: removal of its dependency on other qemu-files
Date: Thu, 9 Jul 2020 00:03:27 +0530
Message-Id: <20200708183327.8560-1-pc44800@gmail.com>
List-Id: Development discussions about the libvirt library & tools
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The `qemu_domain.h` file depended on `qemu_migration_params.h` and
`qemu_monitor.h` because they were required by some qemu_domainjob
structures. This dependency is removed by introducing a `void *privateData`
pointer, which is handled through a structure of callback functions.
Additionally, the patch moves the functions `qemuDomainObjPrivateXMLFormatJob`
and `qemuDomainObjPrivateXMLParseJob` out of `qemu_domain` and handles them
through the domain job callback structure.

Signed-off-by: Prathamesh Chavan
---
The previous version of this patch can be found here[1]. This version adds a
function to the domainJobInfo callback structure specifically to copy the
jobInfo privateData structure. It was also noticed that qemuDomainNamespace
was residing in `qemu_domainjob.c` rather than in its original file,
`qemu_domain.c`, so it has been moved back.
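For reference, below is a minimal, standalone sketch (plain C, compilable on
its own) of the pattern this patch introduces: a generic job object that owns
only a `void *privateData` plus a table of callbacks supplied by the driver,
so the generic code never touches the concrete private types. All names in the
sketch are illustrative placeholders, not the actual `qemuDomainJobObj` or
`qemuDomainObjPrivateJobCallbacks` definitions from the diff.

/*
 * Simplified, standalone sketch of the callback-structure pattern: the
 * generic job object only holds a void *privateData and a set of function
 * pointers, so it does not need to know the concrete layout of that data.
 * Type and field names are placeholders, not the libvirt definitions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct _jobObj jobObj;

typedef void *(*jobPrivateAlloc)(void);
typedef void (*jobPrivateFree)(void *);
typedef void (*jobPrivateFormat)(const jobObj *);

struct _jobObj {
    int phase;                      /* generic state, visible to everyone */
    void *privateData;              /* driver-specific state, opaque here */
    struct {
        jobPrivateAlloc allocJobPrivate;
        jobPrivateFree freeJobPrivate;
        jobPrivateFormat formatJob;
    } cb;                           /* callbacks supplied by the driver */
};

/* "Driver" side: the concrete private data and its callbacks. */
typedef struct {
    int migParamsSet;               /* stand-in for driver-only fields */
} drvJobPrivate;

static void *drvJobAllocPrivate(void)
{
    return calloc(1, sizeof(drvJobPrivate));
}

static void drvJobFreePrivate(void *opaque)
{
    free(opaque);
}

static void drvJobFormat(const jobObj *job)
{
    const drvJobPrivate *priv = job->privateData;
    printf("<job phase='%d' migParamsSet='%d'/>\n",
           job->phase, priv->migParamsSet);
}

/* Generic side: initializes a job with whatever callbacks it was given. */
static int jobInit(jobObj *job)
{
    memset(job, 0, sizeof(*job));
    job->cb.allocJobPrivate = drvJobAllocPrivate;
    job->cb.freeJobPrivate = drvJobFreePrivate;
    job->cb.formatJob = drvJobFormat;
    if (!(job->privateData = job->cb.allocJobPrivate()))
        return -1;
    return 0;
}

int main(void)
{
    jobObj job;

    if (jobInit(&job) < 0)
        return 1;

    ((drvJobPrivate *)job.privateData)->migParamsSet = 1;
    job.phase = 2;

    job.cb.formatJob(&job);         /* generic code goes through the table */
    job.cb.freeJobPrivate(job.privateData);
    return 0;
}

In the patch itself, `qemuDomainObjInitJob()` wires up `allocJobPrivate`,
`freeJobPrivate`, `formatJob` and `parseJob` in the same way, which is what
allows `qemu_domainjob.h` to drop its include of `qemu_migration_params.h`.
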
src/qemu/qemu_backup.c | 13 +- src/qemu/qemu_domain.c | 251 +------------------- src/qemu/qemu_domainjob.c | 386 ++++++++++++++++++++++++++++--- src/qemu/qemu_domainjob.h | 69 ++++-- src/qemu/qemu_driver.c | 21 +- src/qemu/qemu_migration.c | 45 ++-- src/qemu/qemu_migration_cookie.c | 7 +- src/qemu/qemu_migration_params.c | 9 +- src/qemu/qemu_migration_params.h | 28 +++ src/qemu/qemu_process.c | 24 +- 10 files changed, 515 insertions(+), 338 deletions(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index b711f8f623..82dc7797cb 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -529,17 +529,19 @@ qemuBackupJobTerminate(virDomainObjPtr vm, =20 { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobInfoPrivatePtr completedJobInfo; size_t i; =20 qemuDomainJobInfoUpdateTime(priv->job.current); =20 g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); priv->job.completed =3D qemuDomainJobInfoCopy(priv->job.current); + completedJobInfo =3D priv->job.completed->privateData; =20 - priv->job.completed->stats.backup.total =3D priv->backup->push_total; - priv->job.completed->stats.backup.transferred =3D priv->backup->push_t= ransferred; - priv->job.completed->stats.backup.tmp_used =3D priv->backup->pull_tmp_= used; - priv->job.completed->stats.backup.tmp_total =3D priv->backup->pull_tmp= _total; + completedJobInfo->stats.backup.total =3D priv->backup->push_total; + completedJobInfo->stats.backup.transferred =3D priv->backup->push_tran= sferred; + completedJobInfo->stats.backup.tmp_used =3D priv->backup->pull_tmp_use= d; + completedJobInfo->stats.backup.tmp_total =3D priv->backup->pull_tmp_to= tal; =20 priv->job.completed->status =3D jobstatus; priv->job.completed->errmsg =3D g_strdup(priv->backup->errmsg); @@ -1069,7 +1071,8 @@ qemuBackupGetJobInfoStats(virQEMUDriverPtr driver, virDomainObjPtr vm, qemuDomainJobInfoPtr jobInfo) { - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuDomainBackupStats *stats =3D &jobInfoPriv->stats.backup; qemuDomainObjPrivatePtr priv =3D vm->privateData; qemuMonitorJobInfoPtr *blockjobs =3D NULL; size_t nblockjobs =3D 0; diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 42cc78ac1b..c77e809045 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -83,6 +83,11 @@ =20 VIR_LOG_INIT("qemu.qemu_domain"); =20 +VIR_ENUM_IMPL(qemuDomainNamespace, + QEMU_DOMAIN_NS_LAST, + "mount", +); + /** * qemuDomainObjFromDomain: * @domain: Domain pointer that has to be looked up @@ -2162,103 +2167,6 @@ qemuDomainObjPrivateXMLFormatPR(virBufferPtr buf, virBufferAddLit(buf, "\n"); } =20 - -static int -qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, - virStorageSourcePtr src, - virDomainXMLOptionPtr xmlo= pt) -{ - g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - - virBufferAsprintf(&attrBuf, " type=3D'%s' format=3D'%s'", - virStorageTypeToString(src->type), - virStorageFileFormatTypeToString(src->format)); - - if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, - VIR_DOMAIN_DEF_FORMAT_STATUS, - false, false, xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); - - return 0; -} - - -static int -qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, - virDomainObjPtr vm) -{ - qemuDomainObjPrivatePtr priv =3D vm->privateData; - size_t i; - virDomainDiskDefPtr disk; - qemuDomainDiskPrivatePtr 
diskPriv; - - for (i =3D 0; i < vm->def->ndisks; i++) { - g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - disk =3D vm->def->disks[i]; - diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); - - virBufferAsprintf(&attrBuf, " dev=3D'%s' migrating=3D'%s'", - disk->dst, diskPriv->migrating ? "yes" : "no"); - - if (diskPriv->migrSource && - qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, - diskPriv->migr= Source, - priv->driver->= xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); - } - - return 0; -} - - -static int -qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, - virDomainObjPtr vm, - qemuDomainObjPrivatePtr priv) -{ - g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - qemuDomainJob job =3D priv->job.active; - - if (!qemuDomainTrackJob(job)) - job =3D QEMU_JOB_NONE; - - if (job =3D=3D QEMU_JOB_NONE && - priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) - return 0; - - virBufferAsprintf(&attrBuf, " type=3D'%s' async=3D'%s'", - qemuDomainJobTypeToString(job), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); - - if (priv->job.phase) { - virBufferAsprintf(&attrBuf, " phase=3D'%s'", - qemuDomainAsyncJobPhaseToString(priv->job.asyncJ= ob, - priv->job.phase)= ); - } - - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_NONE) - virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", priv->job.apiFlags= ); - - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT && - qemuDomainObjPrivateXMLFormatNBDMigration(&childBuf, vm) < 0) - return -1; - - if (priv->job.migParams) - qemuMigrationParamsFormat(&childBuf, priv->job.migParams); - - virXMLFormatElement(buf, "job", &attrBuf, &childBuf); - - return 0; -} - - static bool qemuDomainHasSlirp(virDomainObjPtr vm) { @@ -2394,7 +2302,7 @@ qemuDomainObjPrivateXMLFormat(virBufferPtr buf, if (priv->lockState) virBufferAsprintf(buf, "%s\n", priv->lockSt= ate); =20 - if (qemuDomainObjPrivateXMLFormatJob(buf, vm, priv) < 0) + if (priv->job.cb.formatJob(buf, vm, &priv->job) < 0) return -1; =20 if (priv->fakeReboot) @@ -2894,151 +2802,6 @@ qemuDomainObjPrivateXMLParsePR(xmlXPathContextPtr c= txt, } =20 =20 -static int -qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, - xmlXPathContextPtr ctxt, - virDomainDiskDefPtr disk, - virDomainXMLOptionPtr xmlopt) -{ - VIR_XPATH_NODE_AUTORESTORE(ctxt); - qemuDomainDiskPrivatePtr diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); - g_autofree char *format =3D NULL; - g_autofree char *type =3D NULL; - g_autoptr(virStorageSource) migrSource =3D NULL; - xmlNodePtr sourceNode; - - ctxt->node =3D node; - - if (!(ctxt->node =3D virXPathNode("./migrationSource", ctxt))) - return 0; - - if (!(type =3D virXMLPropString(ctxt->node, "type"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source type")); - return -1; - } - - if (!(format =3D virXMLPropString(ctxt->node, "format"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source format")); - return -1; - } - - if (!(migrSource =3D virDomainStorageSourceParseBase(type, format, NUL= L))) - return -1; - - /* newer libvirt uses the subelement instead of formatting the - * source directly into */ - if ((sourceNode =3D virXPathNode("./source", ctxt))) - ctxt->node =3D sourceNode; - - if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, - VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) <= 0) - return -1; - - diskPriv->migrSource =3D g_steal_pointer(&migrSource); - return 0; -} - - -static 
int -qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, - qemuDomainObjPrivatePtr priv, - xmlXPathContextPtr ctxt) -{ - g_autofree xmlNodePtr *nodes =3D NULL; - size_t i; - int n; - - if ((n =3D virXPathNodeSet("./disk[@migrating=3D'yes']", ctxt, &nodes)= ) < 0) - return -1; - - if (n > 0) { - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { - VIR_WARN("Found disks marked for migration but we were not " - "migrating"); - n =3D 0; - } - for (i =3D 0; i < n; i++) { - virDomainDiskDefPtr disk; - g_autofree char *dst =3D NULL; - - if ((dst =3D virXMLPropString(nodes[i], "dev")) && - (disk =3D virDomainDiskByTarget(vm->def, dst))) { - QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating =3D true; - - if (qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctx= t, - disk, - priv->driver-= >xmlopt) < 0) - return -1; - } - } - } - - return 0; -} - - -static int -qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, - qemuDomainObjPrivatePtr priv, - xmlXPathContextPtr ctxt) -{ - VIR_XPATH_NODE_AUTORESTORE(ctxt); - g_autofree char *tmp =3D NULL; - - if (!(ctxt->node =3D virXPathNode("./job[1]", ctxt))) - return 0; - - if ((tmp =3D virXPathString("string(@type)", ctxt))) { - int type; - - if ((type =3D qemuDomainJobTypeFromString(tmp)) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, - _("Unknown job type %s"), tmp); - return -1; - } - VIR_FREE(tmp); - priv->job.active =3D type; - } - - if ((tmp =3D virXPathString("string(@async)", ctxt))) { - int async; - - if ((async =3D qemuDomainAsyncJobTypeFromString(tmp)) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, - _("Unknown async job type %s"), tmp); - return -1; - } - VIR_FREE(tmp); - priv->job.asyncJob =3D async; - - if ((tmp =3D virXPathString("string(@phase)", ctxt))) { - priv->job.phase =3D qemuDomainAsyncJobPhaseFromString(async, t= mp); - if (priv->job.phase < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, - _("Unknown job phase %s"), tmp); - return -1; - } - VIR_FREE(tmp); - } - } - - if (virXPathULongHex("string(@flags)", ctxt, &priv->job.apiFlags) =3D= =3D -2) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid job flags"= )); - return -1; - } - - if (qemuDomainObjPrivateXMLParseJobNBD(vm, priv, ctxt) < 0) - return -1; - - if (qemuMigrationParamsParse(ctxt, &priv->job.migParams) < 0) - return -1; - - return 0; -} - - static int qemuDomainObjPrivateXMLParseSlirpFeatures(xmlNodePtr featuresNode, xmlXPathContextPtr ctxt, @@ -3198,7 +2961,7 @@ qemuDomainObjPrivateXMLParse(xmlXPathContextPtr ctxt, =20 priv->lockState =3D virXPathString("string(./lockstate)", ctxt); =20 - if (qemuDomainObjPrivateXMLParseJob(vm, priv, ctxt) < 0) + if (priv->job.cb.parseJob(vm, &priv->job, ctxt, priv->driver->xmlopt) = < 0) goto error; =20 priv->fakeReboot =3D virXPathBoolean("boolean(./fakereboot)", ctxt) = =3D=3D 1; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 7111acadda..389777153b 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -63,11 +63,6 @@ VIR_ENUM_IMPL(qemuDomainAsyncJob, "backup", ); =20 -VIR_ENUM_IMPL(qemuDomainNamespace, - QEMU_DOMAIN_NS_LAST, - "mount", -); - =20 const char * qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, @@ -121,22 +116,68 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob = job, return -1; } =20 +static void +qemuJobInfoFreePrivateData(qemuDomainJobInfoPrivatePtr priv) +{ + g_free(&priv->stats); +} + +static void +qemuJobInfoFreePrivate(void *opaque) +{ + qemuDomainJobInfoPtr jobInfo =3D (qemuDomainJobInfoPtr) opaque; + 
qemuJobInfoFreePrivateData(jobInfo->privateData); +} =20 void qemuDomainJobInfoFree(qemuDomainJobInfoPtr info) { + info->cb.freeJobInfoPrivate(info); g_free(info->errmsg); g_free(info); } =20 +static void * +qemuDomainJobInfoPrivateAlloc(void) +{ + qemuDomainJobInfoPrivatePtr retPriv =3D g_new0(qemuDomainJobInfoPrivat= e, 1); + return (void *)retPriv; +} + +static void +qemuDomainJobInfoPrivateCopy(qemuDomainJobInfoPtr src, + qemuDomainJobInfoPtr dest) +{ + memcpy(dest->privateData, src->privateData, + sizeof(qemuDomainJobInfoPrivate)); +} + +static qemuDomainJobInfoPtr +qemuDomainJobInfoAlloc(void) +{ + qemuDomainJobInfoPtr ret =3D g_new0(qemuDomainJobInfo, 1); + ret->cb.allocJobInfoPrivate =3D &qemuDomainJobInfoPrivateAlloc; + ret->cb.freeJobInfoPrivate =3D &qemuJobInfoFreePrivate; + ret->cb.copyJobInfoPrivate =3D &qemuDomainJobInfoPrivateCopy; + ret->privateData =3D ret->cb.allocJobInfoPrivate(); + return ret; +} + +static void +qemuDomainCurrentJobInfoInit(qemuDomainJobObjPtr job) +{ + job->current =3D qemuDomainJobInfoAlloc(); + job->current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; +} + =20 qemuDomainJobInfoPtr qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info) { - qemuDomainJobInfoPtr ret =3D g_new0(qemuDomainJobInfo, 1); + qemuDomainJobInfoPtr ret =3D qemuDomainJobInfoAlloc(); =20 memcpy(ret, info, sizeof(*info)); - + ret->cb.copyJobInfoPrivate(info, ret); ret->errmsg =3D g_strdup(info->errmsg); =20 return ret; @@ -166,10 +207,43 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driv= er, } =20 =20 +static void * +qemuJobAllocPrivate(void) +{ + qemuDomainJobPrivatePtr priv; + if (VIR_ALLOC(priv) < 0) + return NULL; + return (void *)priv; +} + + +static void +qemuJobFreePrivateData(qemuDomainJobPrivatePtr priv) +{ + priv->spiceMigration =3D false; + priv->spiceMigrated =3D false; + priv->dumpCompleted =3D false; + qemuMigrationParamsFree(priv->migParams); + priv->migParams =3D NULL; +} + +static void +qemuJobFreePrivate(void *opaque) +{ + qemuDomainJobObjPtr job =3D (qemuDomainJobObjPtr) opaque; + qemuJobFreePrivateData(job->privateData); +} + + int qemuDomainObjInitJob(qemuDomainJobObjPtr job) { memset(job, 0, sizeof(*job)); + job->cb.allocJobPrivate =3D &qemuJobAllocPrivate; + job->cb.freeJobPrivate =3D &qemuJobFreePrivate; + job->cb.formatJob =3D &qemuDomainObjPrivateXMLFormatJob; + job->cb.parseJob =3D &qemuDomainObjPrivateXMLParseJob; + job->privateData =3D job->cb.allocJobPrivate(); =20 if (virCondInit(&job->cond) < 0) return -1; @@ -213,13 +287,9 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job) job->phase =3D 0; job->mask =3D QEMU_JOB_DEFAULT_MASK; job->abortJob =3D false; - job->spiceMigration =3D false; - job->spiceMigrated =3D false; - job->dumpCompleted =3D false; VIR_FREE(job->error); g_clear_pointer(&job->current, qemuDomainJobInfoFree); - qemuMigrationParamsFree(job->migParams); - job->migParams =3D NULL; + job->cb.freeJobPrivate(job); job->apiFlags =3D 0; } =20 @@ -235,7 +305,7 @@ qemuDomainObjRestoreJob(virDomainObjPtr obj, job->asyncJob =3D priv->job.asyncJob; job->asyncOwner =3D priv->job.asyncOwner; job->phase =3D priv->job.phase; - job->migParams =3D g_steal_pointer(&priv->job.migParams); + job->privateData =3D g_steal_pointer(&priv->job.privateData); job->apiFlags =3D priv->job.apiFlags; =20 qemuDomainObjResetJob(&priv->job); @@ -285,6 +355,7 @@ int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) { unsigned long long now; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; =20 if (!jobInfo->stopped) return 0; @@ -298,8 +369,8 
@@ qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jo= bInfo) return 0; } =20 - jobInfo->stats.mig.downtime =3D now - jobInfo->stopped; - jobInfo->stats.mig.downtime_set =3D true; + jobInfoPriv->stats.mig.downtime =3D now - jobInfo->stopped; + jobInfoPriv->stats.mig.downtime_set =3D true; return 0; } =20 @@ -334,38 +405,39 @@ int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, virDomainJobInfoPtr info) { + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; info->type =3D qemuDomainJobStatusToType(jobInfo->status); info->timeElapsed =3D jobInfo->timeElapsed; =20 switch (jobInfo->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; - info->fileTotal =3D jobInfo->stats.mig.disk_total + + info->memTotal =3D jobInfoPriv->stats.mig.ram_total; + info->memRemaining =3D jobInfoPriv->stats.mig.ram_remaining; + info->memProcessed =3D jobInfoPriv->stats.mig.ram_transferred; + info->fileTotal =3D jobInfoPriv->stats.mig.disk_total + jobInfo->mirrorStats.total; - info->fileRemaining =3D jobInfo->stats.mig.disk_remaining + + info->fileRemaining =3D jobInfoPriv->stats.mig.disk_remaining + (jobInfo->mirrorStats.total - jobInfo->mirrorStats.transferred); - info->fileProcessed =3D jobInfo->stats.mig.disk_transferred + + info->fileProcessed =3D jobInfoPriv->stats.mig.disk_transferred + jobInfo->mirrorStats.transferred; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + info->memTotal =3D jobInfoPriv->stats.mig.ram_total; + info->memRemaining =3D jobInfoPriv->stats.mig.ram_remaining; + info->memProcessed =3D jobInfoPriv->stats.mig.ram_transferred; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - info->memTotal =3D jobInfo->stats.dump.total; - info->memProcessed =3D jobInfo->stats.dump.completed; + info->memTotal =3D jobInfoPriv->stats.dump.total; + info->memProcessed =3D jobInfoPriv->stats.dump.completed; info->memRemaining =3D info->memTotal - info->memProcessed; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - info->fileTotal =3D jobInfo->stats.backup.total; - info->fileProcessed =3D jobInfo->stats.backup.transferred; + info->fileTotal =3D jobInfoPriv->stats.backup.total; + info->fileProcessed =3D jobInfoPriv->stats.backup.transferred; info->fileRemaining =3D info->fileTotal - info->fileProcessed; break; =20 @@ -387,7 +459,8 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr= jobInfo, virTypedParameterPtr *params, int *nparams) { - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuMonitorMigrationStats *stats =3D &jobInfoPriv->stats.mig; qemuDomainMirrorStatsPtr mirrorStats =3D &jobInfo->mirrorStats; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; @@ -564,7 +637,8 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobI= nfo, virTypedParameterPtr *params, int *nparams) { - qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuMonitorDumpStats *stats =3D &jobInfoPriv->stats.dump; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; int npar =3D 0; @@ -607,7 +681,8 @@ qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jo= bInfo, virTypedParameterPtr *params, int 
*nparams) { - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuDomainBackupStats *stats =3D &jobInfoPriv->stats.backup; g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); =20 if (virTypedParamListAddInt(par, jobInfo->operation, @@ -782,6 +857,7 @@ qemuDomainObjCanSetJob(qemuDomainJobObjPtr job, /* Give up waiting for mutex after 30 seconds */ #define QEMU_JOB_WAIT_TIME (1000ull * 30) =20 + /** * qemuDomainObjBeginJobInternal: * @driver: qemu driver @@ -890,8 +966,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, qemuDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + qemuDomainCurrentJobInfoInit(&priv->job); priv->job.asyncJob =3D asyncJob; priv->job.asyncOwner =3D virThreadSelfID(); priv->job.asyncOwnerAPI =3D virThreadJobGet(); @@ -1190,3 +1265,248 @@ qemuDomainObjAbortAsyncJob(virDomainObjPtr obj) priv->job.abortJob =3D true; virDomainObjBroadcast(obj); } + + +static int +qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, + virStorageSourcePtr src, + virDomainXMLOptionPtr xmlo= pt) +{ + g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); + + virBufferAsprintf(&attrBuf, " type=3D'%s' format=3D'%s'", + virStorageTypeToString(src->type), + virStorageFileFormatTypeToString(src->format)); + + if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, + VIR_DOMAIN_DEF_FORMAT_STATUS, + false, false, xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); + + return 0; +} + + +static int +qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + size_t i; + virDomainDiskDefPtr disk; + qemuDomainDiskPrivatePtr diskPriv; + + for (i =3D 0; i < vm->def->ndisks; i++) { + g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); + disk =3D vm->def->disks[i]; + diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); + + virBufferAsprintf(&attrBuf, " dev=3D'%s' migrating=3D'%s'", + disk->dst, diskPriv->migrating ? 
"yes" : "no"); + + if (diskPriv->migrSource && + qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, + diskPriv->migr= Source, + priv->driver->= xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); + } + + return 0; +} + + +int +qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, + virDomainObjPtr vm, + qemuDomainJobObjPtr jobObj) +{ + g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); + qemuDomainJob job =3D jobObj->active; + qemuDomainJobPrivatePtr jobPriv =3D jobObj->privateData; + + if (!qemuDomainTrackJob(job)) + job =3D QEMU_JOB_NONE; + + if (job =3D=3D QEMU_JOB_NONE && + jobObj->asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) + return 0; + + virBufferAsprintf(&attrBuf, " type=3D'%s' async=3D'%s'", + qemuDomainJobTypeToString(job), + qemuDomainAsyncJobTypeToString(jobObj->asyncJob)); + + if (jobObj->phase) { + virBufferAsprintf(&attrBuf, " phase=3D'%s'", + qemuDomainAsyncJobPhaseToString(jobObj->asyncJob, + jobObj->phase)); + } + + if (jobObj->asyncJob !=3D QEMU_ASYNC_JOB_NONE) + virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", jobObj->apiFlags); + + if (jobObj->asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT && + qemuDomainObjPrivateXMLFormatNBDMigration(&childBuf, vm) < 0) + return -1; + + if (jobPriv->migParams) + qemuMigrationParamsFormat(&childBuf, jobPriv->migParams); + + virXMLFormatElement(buf, "job", &attrBuf, &childBuf); + + return 0; +} + + +static int +qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, + xmlXPathContextPtr ctxt, + virDomainDiskDefPtr disk, + virDomainXMLOptionPtr xmlopt) +{ + VIR_XPATH_NODE_AUTORESTORE(ctxt); + qemuDomainDiskPrivatePtr diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); + g_autofree char *format =3D NULL; + g_autofree char *type =3D NULL; + g_autoptr(virStorageSource) migrSource =3D NULL; + xmlNodePtr sourceNode; + + ctxt->node =3D node; + + if (!(ctxt->node =3D virXPathNode("./migrationSource", ctxt))) + return 0; + + if (!(type =3D virXMLPropString(ctxt->node, "type"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source type")); + return -1; + } + + if (!(format =3D virXMLPropString(ctxt->node, "format"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source format")); + return -1; + } + + if (!(migrSource =3D virDomainStorageSourceParseBase(type, format, NUL= L))) + return -1; + + /* newer libvirt uses the subelement instead of formatting the + * source directly into */ + if ((sourceNode =3D virXPathNode("./source", ctxt))) + ctxt->node =3D sourceNode; + + if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, + VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) <= 0) + return -1; + + diskPriv->migrSource =3D g_steal_pointer(&migrSource); + return 0; +} + + +static int +qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, + qemuDomainJobObjPtr job, + xmlXPathContextPtr ctxt, + virDomainXMLOptionPtr xmlopt) +{ + g_autofree xmlNodePtr *nodes =3D NULL; + size_t i; + int n; + + if ((n =3D virXPathNodeSet("./disk[@migrating=3D'yes']", ctxt, &nodes)= ) < 0) + return -1; + + if (n > 0) { + if (job->asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { + VIR_WARN("Found disks marked for migration but we were not " + "migrating"); + n =3D 0; + } + for (i =3D 0; i < n; i++) { + virDomainDiskDefPtr disk; + g_autofree char *dst =3D NULL; + + if ((dst =3D virXMLPropString(nodes[i], "dev")) && + (disk =3D virDomainDiskByTarget(vm->def, dst))) { + QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating =3D true; + + if 
(qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctx= t, + disk, + xmlopt) < 0) + return -1; + } + } + } + + return 0; +} + + +int +qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, + qemuDomainJobObjPtr job, + xmlXPathContextPtr ctxt, + virDomainXMLOptionPtr xmlopt) +{ + qemuDomainJobPrivatePtr jobPriv =3D job->privateData; + VIR_XPATH_NODE_AUTORESTORE(ctxt); + g_autofree char *tmp =3D NULL; + + if (!(ctxt->node =3D virXPathNode("./job[1]", ctxt))) + return 0; + + if ((tmp =3D virXPathString("string(@type)", ctxt))) { + int type; + + if ((type =3D qemuDomainJobTypeFromString(tmp)) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Unknown job type %s"), tmp); + return -1; + } + VIR_FREE(tmp); + job->active =3D type; + } + + if ((tmp =3D virXPathString("string(@async)", ctxt))) { + int async; + + if ((async =3D qemuDomainAsyncJobTypeFromString(tmp)) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Unknown async job type %s"), tmp); + return -1; + } + VIR_FREE(tmp); + job->asyncJob =3D async; + + if ((tmp =3D virXPathString("string(@phase)", ctxt))) { + job->phase =3D qemuDomainAsyncJobPhaseFromString(async, tmp); + if (job->phase < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Unknown job phase %s"), tmp); + return -1; + } + VIR_FREE(tmp); + } + } + + if (virXPathULongHex("string(@flags)", ctxt, &job->apiFlags) =3D=3D -2= ) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid job flags"= )); + return -1; + } + + if (qemuDomainObjPrivateXMLParseJobNBD(vm, job, ctxt, xmlopt) < 0) + return -1; + + if (qemuMigrationParamsParse(ctxt, &jobPriv->migParams) < 0) + return -1; + + return 0; +} diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 124664354d..ca546c8844 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -19,7 +19,6 @@ #pragma once =20 #include -#include "qemu_migration_params.h" =20 #define JOB_MASK(job) (job =3D=3D 0 ? 
0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ @@ -99,7 +98,6 @@ typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP, } qemuDomainJobStatsType; =20 - typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats; typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr; struct _qemuDomainMirrorStats { @@ -107,16 +105,22 @@ struct _qemuDomainMirrorStats { unsigned long long total; }; =20 -typedef struct _qemuDomainBackupStats qemuDomainBackupStats; -struct _qemuDomainBackupStats { - unsigned long long transferred; - unsigned long long total; - unsigned long long tmp_used; - unsigned long long tmp_total; -}; =20 typedef struct _qemuDomainJobInfo qemuDomainJobInfo; typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; + +typedef void *(*qemuDomainObjJobInfoPrivateAlloc)(void); +typedef void (*qemuDomainObjJobInfoPrivateFree)(void *); +typedef void (*qemuDomainObjJobInfoPrivateCopy)(qemuDomainJobInfoPtr, + qemuDomainJobInfoPtr); + +typedef struct _qemuDomainObjPrivateJobInfoCallbacks qemuDomainObjPrivateJ= obInfoCallbacks; +struct _qemuDomainObjPrivateJobInfoCallbacks { + qemuDomainObjJobInfoPrivateAlloc allocJobInfoPrivate; + qemuDomainObjJobInfoPrivateFree freeJobInfoPrivate; + qemuDomainObjJobInfoPrivateCopy copyJobInfoPrivate; +}; + struct _qemuDomainJobInfo { qemuDomainJobStatus status; virDomainJobOperation operation; @@ -136,16 +140,15 @@ struct _qemuDomainJobInfo { bool timeDeltaSet; /* Raw values from QEMU */ qemuDomainJobStatsType statsType; - union { - qemuMonitorMigrationStats mig; - qemuMonitorDumpStats dump; - qemuDomainBackupStats backup; - } stats; qemuDomainMirrorStats mirrorStats; =20 char *errmsg; /* optional error message for failed completed jobs */ + + void *privateData; /* job specific collection of info */ + qemuDomainObjPrivateJobInfoCallbacks cb; }; =20 + void qemuDomainJobInfoFree(qemuDomainJobInfoPtr info); =20 @@ -156,6 +159,25 @@ qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info); =20 typedef struct _qemuDomainJobObj qemuDomainJobObj; typedef qemuDomainJobObj *qemuDomainJobObjPtr; + +typedef void *(*qemuDomainObjPrivateJobAlloc)(void); +typedef void (*qemuDomainObjPrivateJobFree)(void *); +typedef int (*qemuDomainObjPrivateJobFormat)(virBufferPtr, + virDomainObjPtr, + qemuDomainJobObjPtr); +typedef int (*qemuDomainObjPrivateJobParse)(virDomainObjPtr, + qemuDomainJobObjPtr, + xmlXPathContextPtr, + virDomainXMLOptionPtr); + +typedef struct _qemuDomainObjPrivateJobCallbacks qemuDomainObjPrivateJobCa= llbacks; +struct _qemuDomainObjPrivateJobCallbacks { + qemuDomainObjPrivateJobAlloc allocJobPrivate; + qemuDomainObjPrivateJobFree freeJobPrivate; + qemuDomainObjPrivateJobFormat formatJob; + qemuDomainObjPrivateJobParse parseJob; +}; + struct _qemuDomainJobObj { virCond cond; /* Use to coordinate jobs */ =20 @@ -182,14 +204,10 @@ struct _qemuDomainJobObj { qemuDomainJobInfoPtr current; /* async job progress data */ qemuDomainJobInfoPtr completed; /* statistics data of a recently c= ompleted job */ bool abortJob; /* abort of the job requested */ - bool spiceMigration; /* we asked for spice migration an= d we - * should wait for it to finish */ - bool spiceMigrated; /* spice migration completed */ char *error; /* job event completion error */ - bool dumpCompleted; /* dump completed */ - - qemuMigrationParamsPtr migParams; unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ + void *privateData; /* job specific collection of data= */ + qemuDomainObjPrivateJobCallbacks cb; }; =20 const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, @@ 
-267,3 +285,14 @@ void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); int qemuDomainObjInitJob(qemuDomainJobObjPtr job); =20 bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob); + +int +qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, + virDomainObjPtr vm, + qemuDomainJobObjPtr jobObj); + +int +qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, + qemuDomainJobObjPtr job, + xmlXPathContextPtr ctxt, + virDomainXMLOptionPtr xmlopt); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index cd8d7ffb56..3ff5c5db53 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -3701,14 +3701,16 @@ static int qemuDumpWaitForCompletion(virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D priv->job.current->private= Data; =20 VIR_DEBUG("Waiting for dump completion"); - while (!priv->job.dumpCompleted && !priv->job.abortJob) { + while (!jobPriv->dumpCompleted && !priv->job.abortJob) { if (virDomainObjWait(vm) < 0) return -1; } =20 - if (priv->job.current->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STAT= US_FAILED) { + if (jobInfoPriv->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STATUS_FAI= LED) { if (priv->job.error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), @@ -13554,6 +13556,7 @@ qemuDomainGetJobInfoDumpStats(virQEMUDriverPtr driv= er, qemuDomainJobInfoPtr jobInfo) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; qemuMonitorDumpStats stats =3D { 0 }; int rc; =20 @@ -13565,33 +13568,33 @@ qemuDomainGetJobInfoDumpStats(virQEMUDriverPtr dr= iver, if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0) return -1; =20 - jobInfo->stats.dump =3D stats; + jobInfoPriv->stats.dump =3D stats; =20 if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) return -1; =20 - switch (jobInfo->stats.dump.status) { + switch (jobInfoPriv->stats.dump.status) { case QEMU_MONITOR_DUMP_STATUS_NONE: case QEMU_MONITOR_DUMP_STATUS_FAILED: case QEMU_MONITOR_DUMP_STATUS_LAST: virReportError(VIR_ERR_OPERATION_FAILED, _("dump query failed, status=3D%d"), - jobInfo->stats.dump.status); + jobInfoPriv->stats.dump.status); return -1; break; =20 case QEMU_MONITOR_DUMP_STATUS_ACTIVE: jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; VIR_DEBUG("dump active, bytes written=3D'%llu' remaining=3D'%llu'", - jobInfo->stats.dump.completed, - jobInfo->stats.dump.total - - jobInfo->stats.dump.completed); + jobInfoPriv->stats.dump.completed, + jobInfoPriv->stats.dump.total - + jobInfoPriv->stats.dump.completed); break; =20 case QEMU_MONITOR_DUMP_STATUS_COMPLETED: jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; VIR_DEBUG("dump completed, bytes written=3D'%llu'", - jobInfo->stats.dump.completed); + jobInfoPriv->stats.dump.completed); break; } =20 diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 13427c1203..a45d13aaac 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1422,12 +1422,13 @@ static int qemuMigrationSrcWaitForSpice(virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 - if (!priv->job.spiceMigration) + if (!jobPriv->spiceMigration) return 0; =20 VIR_DEBUG("Waiting for SPICE to finish migration"); - while (!priv->job.spiceMigrated && !priv->job.abortJob) { + while (!jobPriv->spiceMigrated && !priv->job.abortJob) { if (virDomainObjWait(vm) < 0) return 
-1; } @@ -1438,7 +1439,8 @@ qemuMigrationSrcWaitForSpice(virDomainObjPtr vm) static void qemuMigrationUpdateJobType(qemuDomainJobInfoPtr jobInfo) { - switch ((qemuMonitorMigrationStatus) jobInfo->stats.mig.status) { + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + switch ((qemuMonitorMigrationStatus) jobInfoPriv->stats.mig.status) { case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY: jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY; break; @@ -1485,6 +1487,7 @@ qemuMigrationAnyFetchStats(virQEMUDriverPtr driver, char **error) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; qemuMonitorMigrationStats stats; int rv; =20 @@ -1496,7 +1499,7 @@ qemuMigrationAnyFetchStats(virQEMUDriverPtr driver, if (qemuDomainObjExitMonitor(driver, vm) < 0 || rv < 0) return -1; =20 - jobInfo->stats.mig =3D stats; + jobInfoPriv->stats.mig =3D stats; =20 return 0; } @@ -1538,12 +1541,14 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv =3D vm->privateData; qemuDomainJobInfoPtr jobInfo =3D priv->job.current; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + char *error =3D NULL; bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); int ret =3D -1; =20 if (!events || - jobInfo->stats.mig.status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_ERR= OR) { + jobInfoPriv->stats.mig.status =3D=3D QEMU_MONITOR_MIGRATION_STATUS= _ERROR) { if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobInfo, &err= or) < 0) return -1; } @@ -1777,6 +1782,7 @@ qemuMigrationSrcGraphicsRelocate(virQEMUDriverPtr dri= ver, const char *graphicsuri) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; int ret =3D -1; const char *listenAddress =3D NULL; virSocketAddr addr; @@ -1858,7 +1864,7 @@ qemuMigrationSrcGraphicsRelocate(virQEMUDriverPtr dri= ver, QEMU_ASYNC_JOB_MIGRATION_OUT) =3D= =3D 0) { ret =3D qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress, port, tlsPort, tlsSubject); - priv->job.spiceMigration =3D !ret; + jobPriv->spiceMigration =3D !ret; if (qemuDomainObjExitMonitor(driver, vm) < 0) ret =3D -1; } @@ -1993,6 +1999,7 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, { virQEMUDriverPtr driver =3D opaque; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 VIR_DEBUG("vm=3D%s, conn=3D%p, asyncJob=3D%s, phase=3D%s", vm->def->name, conn, @@ -2018,7 +2025,7 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, " domain was successfully started on destination or not", vm->def->name); qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); /* clear the job and let higher levels decide what to do */ qemuDomainObjDiscardAsyncJob(driver, vm); break; @@ -2403,6 +2410,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, bool relabel =3D false; int rv; char *tlsAlias =3D NULL; + qemuDomainJobPrivatePtr jobPriv =3D NULL; =20 virNWFilterReadLockFilterUpdates(); =20 @@ -2410,7 +2418,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, if (flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC)) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", - _("offline migration cannot handle " + _("offlineqemuDomainJobPrivatePtr jobPriv =3D p= riv->job.privateData;priv migration cannot handle " "non-shared storage")); goto cleanup; } @@ -2519,6 +2527,7 @@ 
qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, *def =3D NULL; =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; priv->origname =3D g_strdup(origname); =20 if (taint_hook) { @@ -2726,7 +2735,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, =20 stopjob: qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); =20 if (stopProcess) { unsigned int stopFlags =3D VIR_QEMU_PROCESS_STOP_MIGRATED; @@ -3000,6 +3009,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, virQEMUDriverConfigPtr cfg =3D virQEMUDriverGetConfig(driver); qemuDomainObjPrivatePtr priv =3D vm->privateData; qemuDomainJobInfoPtr jobInfo =3D NULL; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 VIR_DEBUG("driver=3D%p, vm=3D%p, cookiein=3D%s, cookieinlen=3D%d, " "flags=3D0x%x, retcode=3D%d", @@ -3025,6 +3035,8 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, =20 /* Update times with the values sent by the destination daemon */ if (mig->jobInfo && jobInfo) { + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuDomainJobInfoPrivatePtr migJobInfoPriv =3D mig->jobInfo->priva= teData; int reason; =20 /* We need to refresh migration statistics after a completed post-= copy @@ -3040,8 +3052,8 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, qemuDomainJobInfoUpdateTime(jobInfo); jobInfo->timeDeltaSet =3D mig->jobInfo->timeDeltaSet; jobInfo->timeDelta =3D mig->jobInfo->timeDelta; - jobInfo->stats.mig.downtime_set =3D mig->jobInfo->stats.mig.downti= me_set; - jobInfo->stats.mig.downtime =3D mig->jobInfo->stats.mig.downtime; + jobInfoPriv->stats.mig.downtime_set =3D migJobInfoPriv->stats.mig.= downtime_set; + jobInfoPriv->stats.mig.downtime =3D migJobInfoPriv->stats.mig.down= time; } =20 if (flags & VIR_MIGRATE_OFFLINE) @@ -3084,7 +3096,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, qemuMigrationSrcRestoreDomainState(driver, vm); =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); =20 if (virDomainObjSave(vm, driver->xmlopt, cfg->stateDir) < 0) VIR_WARN("Failed to save status on vm %s", vm->def->name); @@ -4681,6 +4693,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver, virErrorPtr orig_err =3D NULL; virQEMUDriverConfigPtr cfg =3D virQEMUDriverGetConfig(driver); qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, flags) < 0) @@ -4738,7 +4751,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver, */ if (!v3proto && ret < 0) qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); =20 qemuMigrationSrcRestoreDomainState(driver, vm); =20 @@ -4780,6 +4793,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, unsigned long resource) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; int ret =3D -1; =20 /* If we didn't start the job in the begin phase, start it now. 
*/ @@ -4814,7 +4828,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, endjob: if (ret < 0) { qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); qemuMigrationJobFinish(driver, vm); } else { qemuMigrationJobContinue(vm); @@ -5019,6 +5033,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, virErrorPtr orig_err =3D NULL; int cookie_flags =3D 0; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; virQEMUDriverConfigPtr cfg =3D virQEMUDriverGetConfig(driver); unsigned short port; unsigned long long timeReceived =3D 0; @@ -5272,7 +5287,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, } =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, - priv->job.migParams, priv->job.apiFlags); + jobPriv->migParams, priv->job.apiFlags); =20 qemuMigrationJobFinish(driver, vm); if (!virDomainObjIsActive(vm)) diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_coo= kie.c index fb8b5bcd92..b5f4647539 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -641,7 +641,8 @@ static void qemuMigrationCookieStatisticsXMLFormat(virBufferPtr buf, qemuDomainJobInfoPtr jobInfo) { - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D jobInfo->privateData; + qemuMonitorMigrationStats *stats =3D &jobInfoPriv->stats.mig; =20 virBufferAddLit(buf, "\n"); virBufferAdjustIndent(buf, 2); @@ -1044,6 +1045,7 @@ static qemuDomainJobInfoPtr qemuMigrationCookieStatisticsXMLParse(xmlXPathContextPtr ctxt) { qemuDomainJobInfoPtr jobInfo =3D NULL; + qemuDomainJobInfoPrivatePtr jobInfoPriv =3D NULL; qemuMonitorMigrationStats *stats; VIR_XPATH_NODE_AUTORESTORE(ctxt); =20 @@ -1051,8 +1053,9 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContext= Ptr ctxt) return NULL; =20 jobInfo =3D g_new0(qemuDomainJobInfo, 1); + jobInfoPriv =3D jobInfo->privateData; =20 - stats =3D &jobInfo->stats.mig; + stats =3D &jobInfoPriv->stats.mig; jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; =20 virXPathULongLong("string(./started[1])", ctxt, &jobInfo->started); diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_par= ams.c index 6953badcfe..ba3eb14831 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -953,6 +953,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriverPtr driver, qemuMigrationParamsPtr migParams) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; virJSONValuePtr tlsProps =3D NULL; virJSONValuePtr secProps =3D NULL; virQEMUDriverConfigPtr cfg =3D virQEMUDriverGetConfig(driver); @@ -965,7 +966,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriverPtr driver, goto error; } =20 - if (!priv->job.migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) { + if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", _("TLS migration is not supported with this " "QEMU binary")); @@ -1038,8 +1039,9 @@ qemuMigrationParamsDisableTLS(virDomainObjPtr vm, qemuMigrationParamsPtr migParams) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 - if (!priv->job.migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) + if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) return 0; =20 if 
(qemuMigrationParamsSetString(migParams, @@ -1168,6 +1170,7 @@ qemuMigrationParamsCheck(virQEMUDriverPtr driver, virBitmapPtr remoteCaps) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; qemuMigrationCapability cap; qemuMigrationParty party; size_t i; @@ -1221,7 +1224,7 @@ qemuMigrationParamsCheck(virQEMUDriverPtr driver, * to ask QEMU for their current settings. */ =20 - return qemuMigrationParamsFetch(driver, vm, asyncJob, &priv->job.migPa= rams); + return qemuMigrationParamsFetch(driver, vm, asyncJob, &jobPriv->migPar= ams); } =20 =20 diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_par= ams.h index 9aea24725f..381eabbe4a 100644 --- a/src/qemu/qemu_migration_params.h +++ b/src/qemu/qemu_migration_params.h @@ -70,6 +70,34 @@ typedef enum { QEMU_MIGRATION_DESTINATION =3D (1 << 1), } qemuMigrationParty; =20 +typedef struct _qemuDomainJobPrivate qemuDomainJobPrivate; +typedef qemuDomainJobPrivate *qemuDomainJobPrivatePtr; +struct _qemuDomainJobPrivate { + bool spiceMigration; /* we asked for spice migration an= d we + * should wait for it to finish */ + bool spiceMigrated; /* spice migration completed */ + bool dumpCompleted; /* dump completed */ + qemuMigrationParamsPtr migParams; +}; + + +typedef struct _qemuDomainBackupStats qemuDomainBackupStats; +struct _qemuDomainBackupStats { + unsigned long long transferred; + unsigned long long total; + unsigned long long tmp_used; + unsigned long long tmp_total; +}; + +typedef struct _qemuDomainJobInfoPrivate qemuDomainJobInfoPrivate; +typedef qemuDomainJobInfoPrivate *qemuDomainJobInfoPrivatePtr; +struct _qemuDomainJobInfoPrivate { + union { + qemuMonitorMigrationStats mig; + qemuMonitorDumpStats dump; + qemuDomainBackupStats backup; + } stats; +}; =20 virBitmapPtr qemuMigrationParamsGetAlwaysOnCaps(qemuMigrationParty party); diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index d36088ba98..97c6b2ec27 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -1608,6 +1608,7 @@ qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon G_G= NUC_UNUSED, void *opaque G_GNUC_UNUSED) { qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; =20 virObjectLock(vm); =20 @@ -1615,12 +1616,13 @@ qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon G= _GNUC_UNUSED, vm, vm->def->name); =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { VIR_DEBUG("got SPICE_MIGRATE_COMPLETED event without a migration j= ob"); goto cleanup; } =20 - priv->job.spiceMigrated =3D true; + jobPriv->spiceMigrated =3D true; virDomainObjBroadcast(vm); =20 cleanup: @@ -1636,6 +1638,7 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon G= _GNUC_UNUSED, void *opaque) { qemuDomainObjPrivatePtr priv; + qemuDomainJobInfoPrivatePtr jobInfoPriv; virQEMUDriverPtr driver =3D opaque; virObjectEventPtr event =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); @@ -1648,12 +1651,13 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon= G_GNUC_UNUSED, qemuMonitorMigrationStatusTypeToString(status)); =20 priv =3D vm->privateData; + jobInfoPriv =3D priv->job.current->privateData; if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } =20 - priv->job.current->stats.mig.status =3D status; + jobInfoPriv->stats.mig.status =3D status; virDomainObjBroadcast(vm); =20 if (status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY 
&& @@ -1720,6 +1724,8 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G_G= NUC_UNUSED, void *opaque G_GNUC_UNUSED) { qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; + qemuDomainJobInfoPrivatePtr jobInfoPriv; =20 virObjectLock(vm); =20 @@ -1727,18 +1733,20 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G= _GNUC_UNUSED, vm, vm->def->name, stats, NULLSTR(error)); =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; + jobInfoPriv =3D priv->job.current->privateData; if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } - priv->job.dumpCompleted =3D true; - priv->job.current->stats.dump =3D *stats; + jobPriv->dumpCompleted =3D true; + jobInfoPriv->stats.dump =3D *stats; priv->job.error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { priv->job.error =3D g_strdup(virGetLastErrorMessage()); - priv->job.current->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_= FAILED; + jobInfoPriv->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_FAILED; } =20 virDomainObjBroadcast(vm); @@ -3411,6 +3419,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver, virDomainState state, int reason) { + qemuDomainJobPrivatePtr jobPriv =3D job->privateData; bool postcopy =3D (state =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY_FAILED) || (state =3D=3D VIR_DOMAIN_RUNNING && @@ -3459,7 +3468,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver, } =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE, - job->migParams, job->apiFlags); + jobPriv->migParams, job->apiFlags); return 0; } =20 @@ -3471,6 +3480,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr drive= r, int reason, unsigned int *stopFlags) { + qemuDomainJobPrivatePtr jobPriv =3D job->privateData; bool postcopy =3D state =3D=3D VIR_DOMAIN_PAUSED && (reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY || reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY_FAILED); @@ -3554,7 +3564,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr drive= r, } =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE, - job->migParams, job->apiFlags); + jobPriv->migParams, job->apiFlags); return 0; } =20 --=20 2.17.1