From: Eric Blake <eblake@redhat.com>
To: libvir-list@redhat.com
Cc: nsoffer@redhat.com, eshenitz@redhat.com, pkrempa@redhat.com
Date: Wed, 21 Aug 2019 20:42:47 -0500
Message-Id: <20190822014249.8325-9-eblake@redhat.com>
In-Reply-To: <20190822014249.8325-1-eblake@redhat.com>
References: <20190822014249.8325-1-eblake@redhat.com>
Subject: [libvirt] [PATCH v10 08/10] backup: Wire up qemu full pull backup commands over QMP

Time to actually issue the QMP transactions that start and stop backup
commands (for now, just pull mode, not push).  Starting a job has to
kick off several prerequisite steps, then a transaction, and
additionally spawn an NBD server for pull mode; ending a job, as well
as failing partway through beginning one, has to unwind the earlier
steps.

Implementing push mode, as well as incremental pull and checkpoint
creation, is deferred to later patches.

Signed-off-by: Eric Blake <eblake@redhat.com>
---
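
Reviewer note (not part of the commit message): for orientation, a rough
sketch of the wire-level QMP conversation this patch drives for a
single-disk, full, pull-mode backup.  The node names, scratch file path,
and port below are illustrative only -- the real values come from the
domain definition and the <domainbackup> XML -- and the JSON is built by
the qemuMonitorBlockdevAdd / qemuMonitorTransaction / qemuMonitorNBDServer*
calls added in qemu_driver.c rather than hand-written like this.

Begin (scratch overlay, then the backup job, then the NBD export):

  {"execute": "blockdev-add",
   "arguments": {"driver": "qcow2", "node-name": "tmp-vda",
                 "file": {"driver": "file",
                          "filename": "/var/lib/libvirt/images/scratch-vda.qcow2"},
                 "backing": "#block157"}}
  {"execute": "transaction",
   "arguments": {"actions": [
     {"type": "blockdev-backup",
      "data": {"device": "#block157", "target": "tmp-vda",
               "sync": "none", "job-id": "vda"}}]}}
  {"execute": "nbd-server-start",
   "arguments": {"addr": {"type": "inet",
                          "data": {"host": "localhost", "port": "10809"}}}}
  {"execute": "nbd-server-add",
   "arguments": {"device": "tmp-vda", "name": "vda", "writable": false}}

Ending the job (or unwinding a partial failure) reverses those steps:

  {"execute": "nbd-server-stop"}
  {"execute": "block-job-cancel", "arguments": {"device": "vda"}}
  {"execute": "blockdev-del", "arguments": {"node-name": "tmp-vda"}}
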
 src/qemu/qemu_domain.c       |  17 +-
 src/qemu/qemu_driver.c       | 310 ++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_monitor_json.c |   4 +
 src/qemu/qemu_process.c      |   8 +
 4 files changed, 325 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 333a3df247..92f55b507a 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3071,11 +3071,18 @@ qemuDomainObjPrivateXMLParseBlockjobs(virQEMUDriverPtr driver,
         }
     }
 
-    if ((node = virXPathNode("./domainbackup", ctxt)) &&
-        !(priv->backup = virDomainBackupDefParseNode(ctxt->doc, node,
-                                                     driver->xmlopt,
-                                                     VIR_DOMAIN_BACKUP_PARSE_INTERNAL)))
-        return -1;
+    if ((node = virXPathNode("./domainbackup", ctxt))) {
+        if (!(priv->backup = virDomainBackupDefParseNode(ctxt->doc, node,
+                                                         driver->xmlopt,
+                                                         VIR_DOMAIN_BACKUP_PARSE_INTERNAL)))
+            return -1;
+        /* The backup job is only stored in XML if backupBegin
+         * succeeded at exporting the disk, so no need to store disk
+         * state when we can just force-reset it to a known-good
+         * value. */
+        for (i = 0; i < priv->backup->ndisks; i++)
+            priv->backup->disks[i].state = VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT;
+    }
 
     return 0;
 }
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ba8190e8c4..26171c9b9f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17592,8 +17592,80 @@ qemuDomainCheckpointDelete(virDomainCheckpointPtr checkpoint,
     return ret;
 }
 
-static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
-                                 const char *checkpointXml, unsigned int flags)
+static int
+qemuDomainBackupPrepare(virQEMUDriverPtr driver, virDomainObjPtr vm,
+                        virDomainBackupDefPtr def)
+{
+    int ret = -1;
+    size_t i;
+
+    if (qemuBlockNodeNamesDetect(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
+        goto cleanup;
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainBackupDiskDef *disk = &def->disks[i];
+        virStorageSourcePtr src = vm->def->disks[disk->idx]->src;
+
+        if (!disk->store)
+            continue;
+        if (virAsprintf(&disk->store->nodeformat, "tmp-%s", disk->name) < 0)
+            goto cleanup;
+        if (!disk->store->format)
+            disk->store->format = VIR_STORAGE_FILE_QCOW2;
+        if (def->incremental) {
+            if (src->format != VIR_STORAGE_FILE_QCOW2) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+                               _("incremental backup of %s requires qcow2"),
+                               disk->name);
+                goto cleanup;
+            }
+        }
+    }
+    ret = 0;
+ cleanup:
+    return ret;
+}
+
+/* Called while monitor lock is held.  Best-effort cleanup. */
+static int
+qemuDomainBackupDiskCleanup(virQEMUDriverPtr driver, virDomainObjPtr vm,
+                            virDomainBackupDiskDef *disk, bool incremental)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    const char *node = vm->def->disks[disk->idx]->src->nodeformat;
+    int ret = 0;
+
+    if (!disk->store)
+        return 0;
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT) {
+        /* No real need to use nbd-server-remove, since we will
+         * shortly be calling nbd-server-stop. */
+    }
+    if (incremental && disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_BITMAP &&
+        qemuMonitorDeleteBitmap(priv->mon, node, disk->store->nodeformat) < 0) {
+        VIR_WARN("Unable to remove temp bitmap for disk %s after backup",
+                 disk->name);
+        ret = -1;
+    }
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_READY &&
+        qemuMonitorBlockdevDel(priv->mon, disk->store->nodeformat) < 0) {
+        VIR_WARN("Unable to remove temp disk %s after backup",
+                 disk->name);
+        ret = -1;
+    }
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_LABEL)
+        qemuDomainStorageSourceAccessRevoke(driver, vm, disk->store);
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_CREATED &&
+        disk->store->detected && unlink(disk->store->path) < 0) {
+        VIR_WARN("Unable to unlink temp disk %s after backup",
+                 disk->store->path);
+        ret = -1;
+    }
+    return ret;
+}
+
+static int
+qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
+                      const char *checkpointXml, unsigned int flags)
 {
     virQEMUDriverPtr driver = domain->conn->privateData;
     virDomainObjPtr vm = NULL;
@@ -17602,8 +17674,14 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     virCapsPtr caps = NULL;
     qemuDomainObjPrivatePtr priv;
     int ret = -1;
+    virJSONValuePtr json = NULL;
+    bool job_started = false;
+    bool nbd_running = false;
+    size_t i;
     struct timeval tv;
     char *suffix = NULL;
+    virCommandPtr cmd = NULL;
+    const char *qemuImgPath;
 
     virCheckFlags(VIR_DOMAIN_BACKUP_BEGIN_NO_METADATA, -1);
     /* TODO: VIR_DOMAIN_BACKUP_BEGIN_QUIESCE */
@@ -17624,6 +17702,7 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
 
     if (!(vm = qemuDomObjFromDomain(domain)))
         goto cleanup;
+    priv = vm->privateData;
     cfg = virQEMUDriverGetConfig(driver);
 
     if (virDomainBackupBeginEnsureACL(domain->conn, vm->def, flags) < 0)
@@ -17646,25 +17725,145 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     if (!(def = virDomainBackupDefParseString(diskXml, driver->xmlopt, 0)))
         goto cleanup;
 
+    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_BITMAP)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("qemu binary lacks pull-mode backup support"));
+            goto cleanup;
+        }
+        if (!def->server) {
+            if (VIR_ALLOC(def->server) < 0)
+                goto cleanup;
+            def->server->transport = VIR_STORAGE_NET_HOST_TRANS_TCP;
+            if (VIR_STRDUP(def->server->name, "localhost") < 0)
+                goto cleanup;
+        }
+        switch ((virStorageNetHostTransport)def->server->transport) {
+        case VIR_STORAGE_NET_HOST_TRANS_TCP:
+            /* TODO: Update qemu.conf to provide a port range,
+             * probably starting at 10809, for obtaining automatic
+             * port via virPortAllocatorAcquire, as well as store
+             * somewhere if we need to call virPortAllocatorRelease
+             * during BackupEnd.  Until then, user must provide port */
+            if (!def->server->port) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                               _("<domainbackup> must specify TCP port for now"));
+                goto cleanup;
+            }
+            break;
+        case VIR_STORAGE_NET_HOST_TRANS_UNIX:
+            /* TODO: Do we need to mess with selinux? */
+            break;
+        case VIR_STORAGE_NET_HOST_TRANS_RDMA:
+        case VIR_STORAGE_NET_HOST_TRANS_LAST:
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("unexpected transport in <domainbackup>"));
+            goto cleanup;
+        }
+    } else {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("push mode backups not supported yet"));
+        goto cleanup;
+    }
+    if (def->incremental) {
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BITMAP_MERGE)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("qemu binary lacks persistent bitmaps support"));
+            goto cleanup;
+        }
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("cannot create incremental backups yet"));
+        goto cleanup;
+    }
+
+    if (!(qemuImgPath = qemuFindQemuImgBinary(driver)))
+        goto cleanup;
+
     /* We are going to modify the domain below. */
     if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
-    priv = vm->privateData;
 
     if (priv->backup) {
         virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                        _("another backup job is already running"));
         goto endjob;
     }
 
-    if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0)
+    if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0 ||
+        qemuDomainBackupPrepare(driver, vm, def) < 0)
         goto endjob;
 
     /* actually start the checkpoint. 2x2 array of push/pull, full/incr,
        plus additional tweak if checkpoint requested */
-    /* TODO: issue QMP commands:
-       - pull: nbd-server-start with <server> from user (or autogenerate server)
-       - push/pull: blockdev-add per <disk>
+    qemuDomainObjEnterMonitor(driver, vm);
+    /* - push/pull: blockdev-add per <disk> */
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainBackupDiskDef *disk = &def->disks[i];
+        virJSONValuePtr file;
+        virStorageSourcePtr src = vm->def->disks[disk->idx]->src;
+        const char *node = src->nodeformat;
+
+        if (!disk->store)
+            continue;
+        if (qemuDomainStorageFileInit(driver, vm, disk->store, src) < 0)
+            goto endmon;
+        if (disk->store->detected) {
+            if (virStorageFileCreate(disk->store) < 0) {
+                virReportSystemError(errno,
+                                     _("failed to create image file '%s'"),
+                                     NULLSTR(disk->store->path));
+                goto endmon;
+            }
+            disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_CREATED;
+        }
+        if (qemuDomainStorageSourceAccessAllow(driver, vm, disk->store, false,
+                                               true) < 0)
+            goto endmon;
+        disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_LABEL;
+        if (disk->store->detected) {
+            virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+            /* Force initialization of scratch/target file to new qcow2 */
+            if (!(cmd = virCommandNewArgList(qemuImgPath,
+                                             "create",
+                                             "-f",
+                                             virStorageFileFormatTypeToString(disk->store->format),
+                                             "-o",
+                                             NULL)))
+                goto endmon;
+            virBufferAsprintf(&buf, "backing_fmt=%s,backing_file=",
+                              virStorageFileFormatTypeToString(src->format));
+            virQEMUBuildBufferEscapeComma(&buf, src->path);
+            virCommandAddArgBuffer(cmd, &buf);
+
+            virQEMUBuildBufferEscapeComma(&buf, disk->store->path);
+            virCommandAddArgBuffer(cmd, &buf);
+            if (virCommandRun(cmd, NULL) < 0)
+                goto endmon;
+            virCommandFree(cmd);
+            cmd = NULL;
+        }
+
+        if (virJSONValueObjectCreate(&file,
+                                     "s:driver", "file",
+                                     "s:filename", disk->store->path,
+                                     NULL) < 0)
+            goto endmon;
+        if (virJSONValueObjectCreate(&json,
+                                     "s:driver", virStorageFileFormatTypeToString(disk->store->format),
+                                     "s:node-name", disk->store->nodeformat,
+                                     "a:file", &file,
+                                     "s:backing", node, NULL) < 0) {
"s:backing", node, NULL) < 0) { + virJSONValueFree(file); + goto endmon; + } + if (qemuMonitorBlockdevAdd(priv->mon, json) < 0) + goto endmon; + json =3D NULL; + disk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_READY; + } + + /* TODO: - incr: bitmap-add of tmp, bitmap-merge per - transaction, containing: - push+full: blockdev-backup sync:full @@ -17672,8 +17871,76 @@ static int qemuDomainBackupBegin(virDomainPtr doma= in, const char *diskXml, - pull+full: blockdev-backup sync:none - pull+incr: blockdev-backup sync:none, bitmap-disable of tmp - if checkpoint: bitmap-disable of old, bitmap-add of new + */ + if (!(json =3D virJSONValueNewArray())) + goto endmon; + for (i =3D 0; i < def->ndisks; i++) { + virDomainBackupDiskDef *disk =3D &def->disks[i]; + virStorageSourcePtr src =3D vm->def->disks[disk->idx]->src; + + if (!disk->store) + continue; + if (qemuMonitorJSONTransactionAdd(json, + "blockdev-backup", + "s:device", src->nodeformat, + "s:target", disk->store->nodefor= mat, + "s:sync", "none", + "s:job-id", disk->name, + NULL) < 0) + goto endmon; + } + if (qemuMonitorTransaction(priv->mon, &json) < 0) + goto endmon; + job_started =3D true; + + /* + - pull: nbd-server-start with from user (or autogenerate s= erver) - pull: nbd-server-add per , including bitmap for incr */ + if (def->type =3D=3D VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (qemuMonitorNBDServerStart(priv->mon, def->server, NULL) < 0) + goto endmon; + nbd_running =3D true; + for (i =3D 0; i < def->ndisks; i++) { + virDomainBackupDiskDef *disk =3D &def->disks[i]; + + if (!disk->store) + continue; + if (qemuMonitorNBDServerAdd(priv->mon, disk->store->nodeformat, + disk->name, false, + def->incremental ? disk->name : + NULL) < 0) + goto endmon; + disk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT; + } + } + + ret =3D 0; + endmon: + /* Best effort cleanup if we fail partway through */ + if (ret < 0) { + virErrorPtr save_err =3D virSaveLastError(); + + if (nbd_running && + qemuMonitorNBDServerStop(priv->mon) < 0) + VIR_WARN("Unable to stop NBD server on vm %s after backup job", + vm->def->name); + for (i =3D 0; i < def->ndisks; i++) { + virDomainBackupDiskDef *disk =3D &def->disks[i]; + + if (job_started && + qemuMonitorBlockJobCancel(priv->mon, disk->name) < 0) + VIR_WARN("Unable to stop backup job %s on vm %s after fail= ure", + disk->store->nodeformat, vm->def->name); + qemuDomainBackupDiskCleanup(driver, vm, disk, !!def->increment= al); + } + virSetError(save_err); + virFreeError(save_err); + } + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret =3D -1; + if (ret < 0) + goto endjob; VIR_STEAL_PTR(priv->backup, def); ret =3D priv->backup->id =3D 1; /* Hard-coded job id for now */ @@ -17686,7 +17953,9 @@ static int qemuDomainBackupBegin(virDomainPtr domai= n, const char *diskXml, qemuDomainObjEndJob(driver, vm); cleanup: + virCommandFree(cmd); VIR_FREE(suffix); + virJSONValueFree(json); virDomainObjEndAPI(&vm); virDomainBackupDefFree(def); virObjectUnref(caps); @@ -17736,6 +18005,8 @@ static int qemuDomainBackupEnd(virDomainPtr domain,= int id, unsigned int flags) virDomainBackupDefPtr backup =3D NULL; qemuDomainObjPrivatePtr priv; bool want_abort =3D flags & VIR_DOMAIN_BACKUP_END_ABORT; + virDomainBackupDefPtr def; + size_t i; virCheckFlags(VIR_DOMAIN_BACKUP_END_ABORT, -1); @@ -17756,9 +18027,27 @@ static int qemuDomainBackupEnd(virDomainPtr domain= , int id, unsigned int flags) if (priv->backup->type !=3D VIR_DOMAIN_BACKUP_TYPE_PUSH) want_abort =3D false; + def =3D priv->backup; + + /* We are going to modify the domain below. 
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    qemuDomainObjEnterMonitor(driver, vm);
+    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL)
+        ret = qemuMonitorNBDServerStop(priv->mon);
+    for (i = 0; i < def->ndisks; i++) {
+        if (qemuMonitorBlockJobCancel(priv->mon,
+                                      def->disks[i].name) < 0 ||
+            qemuDomainBackupDiskCleanup(driver, vm, &def->disks[i],
+                                        !!def->incremental) < 0)
+            ret = -1;
+    }
+    if (qemuDomainObjExitMonitor(driver, vm) < 0 || ret < 0) {
+        ret = -1;
+        goto endjob;
+    }
 
-    /* TODO: QMP commands to actually cancel the pending job, and on
-     * pull, also tear down the NBD server */
     VIR_STEAL_PTR(backup, priv->backup);
     if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm,
                             driver->caps) < 0)
@@ -17767,6 +18056,9 @@ static int qemuDomainBackupEnd(virDomainPtr domain, int id, unsigned int flags)
 
     ret = want_abort ? 0 : 1;
 
+ endjob:
+    qemuDomainObjEndJob(driver, vm);
+
  cleanup:
     virDomainBackupDefFree(backup);
     virDomainObjEndAPI(&vm);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 0dadcce609..5d22a65dd2 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1153,6 +1153,8 @@ qemuMonitorJSONHandleBlockJobImpl(qemuMonitorPtr mon,
         type = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT;
     else if (STREQ(type_str, "mirror"))
         type = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY;
+    else if (STREQ(type_str, "backup"))
+        type = VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP;
 
     switch ((virConnectDomainEventBlockJobStatus) event) {
     case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
@@ -4966,6 +4968,8 @@ qemuMonitorJSONParseBlockJobInfo(virHashTablePtr blockJobs,
         info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT;
     else if (STREQ(type, "mirror"))
         info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY;
+    else if (STREQ(type, "backup"))
+        info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP;
     else
         info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN;
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index c9921646e9..d66212781f 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -89,6 +89,7 @@
 #include "virresctrl.h"
 #include "virvsock.h"
 #include "viridentity.h"
+#include "backup_conf.h"
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -950,6 +951,13 @@ qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
     VIR_DEBUG("Block job for device %s (domain: %p,%s) type %d status %d",
               diskAlias, vm, vm->def->name, type, status);
 
+    priv = vm->privateData;
+    if (type == VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP &&
+        (!priv->backup || priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL)) {
+        /* Event for canceling a pull-mode backup is side-effect that
+         * should not be forwarded on to user */
+        goto cleanup;
+    }
     if (!(disk = qemuProcessFindDomainDiskByAliasOrQOM(vm, diskAlias, NULL)))
         goto cleanup;
 
-- 
2.21.0

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list