From: Eric Blake <eblake@redhat.com>
To: libvir-list@redhat.com
Cc: nsoffer@redhat.com
Subject: [libvirt] [PATCH v7 21/23] backup: Wire up qemu full pull backup commands over QMP
Date: Wed, 27 Mar 2019 05:10:52 -0500
Message-Id: <20190327101054.10648-21-eblake@redhat.com>
In-Reply-To: <20190327100734.10225-1-eblake@redhat.com>
References: <20190327100734.10225-1-eblake@redhat.com>

Time to actually issue the QMP commands that start and stop a backup job
(for now, just pull mode, not push). Starting a job has to kick off
several prerequisite steps, then a transaction, and additionally spawn
an NBD server for pull mode; ending a job, as well as failing partway
through beginning one, has to unwind those earlier steps. Implementing
push mode, as well as incremental pull and checkpoint creation, is
deferred to later patches.
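In rough terms, for a pull-mode full backup of a single disk vda,
backupBegin ends up driving QMP traffic along these lines. The tmp-vda
scratch node name, the vda job id, and the vda export name follow the
naming used in this patch; the source format node name (fmt-vda), the
scratch file path, host, and port are illustrative placeholders, not
values taken from the patch:

  {"execute": "blockdev-add",
   "arguments": {"driver": "qcow2", "node-name": "tmp-vda",
                 "file": {"driver": "file",
                          "filename": "/var/lib/libvirt/images/scratch-vda.qcow2"},
                 "backing": "fmt-vda"}}

  {"execute": "transaction",
   "arguments": {"actions": [
       {"type": "blockdev-backup",
        "data": {"device": "fmt-vda", "target": "tmp-vda",
                 "sync": "none", "job-id": "vda"}}]}}

  {"execute": "nbd-server-start",
   "arguments": {"addr": {"type": "inet",
                          "data": {"host": "example.com", "port": "10809"}}}}

  {"execute": "nbd-server-add",
   "arguments": {"device": "tmp-vda", "name": "vda", "writable": false}}

The scratch qcow2 is created with the active image as its backing file,
blockdev-backup sync:none copies old clusters into it just before the
guest overwrites them, so reading the NBD export of the scratch node
yields a stable point-in-time view of the disk for the duration of the
job.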
Signed-off-by: Eric Blake <eblake@redhat.com>
---
 src/qemu/qemu_domain.c  |  18 ++-
 src/qemu/qemu_driver.c  | 288 ++++++++++++++++++++++++++++++++++++++--
 src/qemu/qemu_process.c |   7 +
 3 files changed, 299 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 6648240dc4..9840be546e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2687,16 +2687,24 @@ qemuDomainObjPrivateXMLParseBlockjobs(virQEMUDriverPtr driver,
     char *active;
     int tmp;
     int ret = -1;
+    size_t i;
 
     if ((active = virXPathString("string(./blockjobs/@active)", ctxt)) &&
         (tmp = virTristateBoolTypeFromString(active)) > 0)
         priv->reconnectBlockjobs = tmp;
 
-    if ((node = virXPathNode("./domainbackup", ctxt)) &&
-        !(priv->backup = virDomainBackupDefParseNode(ctxt->doc, node,
-                                                     driver->xmlopt,
-                                                     VIR_DOMAIN_BACKUP_PARSE_INTERNAL)))
-        goto cleanup;
+    if ((node = virXPathNode("./domainbackup", ctxt))) {
+        if (!(priv->backup = virDomainBackupDefParseNode(ctxt->doc, node,
+                                                         driver->xmlopt,
+                                                         VIR_DOMAIN_BACKUP_PARSE_INTERNAL)))
+            goto cleanup;
+        /* The backup job is only stored in XML if backupBegin
+         * succeeded at exporting the disk, so no need to store disk
+         * state when we can just force-reset it to a known-good
+         * value. */
+        for (i = 0; i < priv->backup->ndisks; i++)
+            priv->backup->disks[i].state = VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT;
+    }
 
     ret = 0;
  cleanup:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 78c9f963ca..e6e7ba7330 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17722,8 +17722,80 @@ qemuDomainCheckpointDelete(virDomainCheckpointPtr checkpoint,
     return ret;
 }
 
-static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
-                                 const char *checkpointXml, unsigned int flags)
+static int
+qemuDomainBackupPrepare(virQEMUDriverPtr driver, virDomainObjPtr vm,
+                        virDomainBackupDefPtr def)
+{
+    int ret = -1;
+    size_t i;
+
+    if (qemuBlockNodeNamesDetect(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
+        goto cleanup;
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainBackupDiskDef *disk = &def->disks[i];
+        virStorageSourcePtr src = vm->def->disks[disk->idx]->src;
+
+        if (!disk->store)
+            continue;
+        if (virAsprintf(&disk->store->nodeformat, "tmp-%s", disk->name) < 0)
+            goto cleanup;
+        if (!disk->store->format)
+            disk->store->format = VIR_STORAGE_FILE_QCOW2;
+        if (def->incremental) {
+            if (src->format != VIR_STORAGE_FILE_QCOW2) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+                               _("incremental backup of %s requires qcow2"),
+                               disk->name);
+                goto cleanup;
+            }
+        }
+    }
+    ret = 0;
+ cleanup:
+    return ret;
+}
+
+/* Called while monitor lock is held. Best-effort cleanup. */
+static int
+qemuDomainBackupDiskCleanup(virQEMUDriverPtr driver, virDomainObjPtr vm,
+                            virDomainBackupDiskDef *disk, bool incremental)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    const char *node = vm->def->disks[disk->idx]->src->nodeformat;
+    int ret = 0;
+
+    if (!disk->store)
+        return 0;
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT) {
+        /* No real need to use nbd-server-remove, since we will
+         * shortly be calling nbd-server-stop. */
+    }
+    if (incremental && disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_BITMAP &&
+        qemuMonitorDeleteBitmap(priv->mon, node, disk->store->nodeformat) < 0) {
+        VIR_WARN("Unable to remove temp bitmap for disk %s after backup",
+                 disk->name);
+        ret = -1;
+    }
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_READY &&
+        qemuMonitorBlockdevDel(priv->mon, disk->store->nodeformat) < 0) {
+        VIR_WARN("Unable to remove temp disk %s after backup",
+                 disk->name);
+        ret = -1;
+    }
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_LABEL)
+        qemuDomainDiskChainElementRevoke(driver, vm, disk->store);
+    if (disk->state >= VIR_DOMAIN_BACKUP_DISK_STATE_CREATED &&
+        disk->store->detected && unlink(disk->store->path) < 0) {
+        VIR_WARN("Unable to unlink temp disk %s after backup",
+                 disk->store->path);
+        ret = -1;
+    }
+    return ret;
+}
+
+static int
+qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
+                      const char *checkpointXml, unsigned int flags)
 {
     virQEMUDriverPtr driver = domain->conn->privateData;
     virDomainObjPtr vm = NULL;
@@ -17732,8 +17804,14 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     virCapsPtr caps = NULL;
     qemuDomainObjPrivatePtr priv;
     int ret = -1;
+    virJSONValuePtr json = NULL;
+    bool job_started = false;
+    bool nbd_running = false;
+    size_t i;
     struct timeval tv;
     char *suffix = NULL;
+    virCommandPtr cmd = NULL;
+    const char *qemuImgPath;
 
     virCheckFlags(VIR_DOMAIN_BACKUP_BEGIN_NO_METADATA, -1);
     /* TODO: VIR_DOMAIN_BACKUP_BEGIN_QUIESCE */
@@ -17754,6 +17832,7 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     if (!(vm = qemuDomObjFromDomain(domain)))
         goto cleanup;
 
+    priv = vm->privateData;
     cfg = virQEMUDriverGetConfig(driver);
 
     if (virDomainBackupBeginEnsureACL(domain->conn, vm->def, flags) < 0)
@@ -17776,25 +17855,122 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     if (!(def = virDomainBackupDefParseString(diskXml, driver->xmlopt, 0)))
         goto cleanup;
 
+    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NBD_BITMAP)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("qemu binary lacks pull-mode backup support"));
+            goto cleanup;
+        }
+        if (!def->server ||
+            def->server->transport != VIR_STORAGE_NET_HOST_TRANS_TCP) {
+            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                           _("<domainbackup> must specify TCP server for now"));
+            goto cleanup;
+        }
+    } else {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("push mode backups not supported yet"));
+        goto cleanup;
+    }
+    if (def->incremental) {
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BITMAP_MERGE)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("qemu binary lacks persistent bitmaps support"));
+            goto cleanup;
+        }
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("cannot create incremental backups yet"));
+        goto cleanup;
+    }
+
+    if (!(qemuImgPath = qemuFindQemuImgBinary(driver)))
+        goto cleanup;
+
     /* We are going to modify the domain below. */
     if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
-    priv = vm->privateData;
     if (priv->backup) {
         virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                        _("another backup job is already running"));
        goto endjob;
     }
 
-    if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0)
+    if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0 ||
+        qemuDomainBackupPrepare(driver, vm, def) < 0)
         goto endjob;
 
     /* actually start the checkpoint. 2x2 array of push/pull, full/incr,
        plus additional tweak if checkpoint requested */
-    /* TODO: issue QMP commands:
-       - pull: nbd-server-start with <server> from user (or autogenerate server)
-       - push/pull: blockdev-add per <disk>
+    qemuDomainObjEnterMonitor(driver, vm);
+    /* - push/pull: blockdev-add per <disk> */
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainBackupDiskDef *disk = &def->disks[i];
+        virJSONValuePtr file;
+        virStorageSourcePtr src = vm->def->disks[disk->idx]->src;
+        const char *node = src->nodeformat;
+
+        if (!disk->store)
+            continue;
+        if (qemuDomainStorageFileInit(driver, vm, disk->store, src) < 0)
+            goto endmon;
+        if (disk->store->detected) {
+            if (virStorageFileCreate(disk->store) < 0) {
+                virReportSystemError(errno,
+                                     _("failed to create image file '%s'"),
+                                     NULLSTR(disk->store->path));
+                goto endmon;
+            }
+            disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_CREATED;
+        }
+        if (qemuDomainDiskChainElementPrepare(driver, vm, disk->store, false,
+                                              true) < 0)
+            goto endmon;
+        disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_LABEL;
+        if (disk->store->detected) {
+            virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+            /* Force initialization of scratch/target file to new qcow2 */
+            if (!(cmd = virCommandNewArgList(qemuImgPath,
+                                             "create",
+                                             "-f",
+                                             virStorageFileFormatTypeToString(disk->store->format),
+                                             "-o",
+                                             NULL)))
+                goto endmon;
+            virBufferAsprintf(&buf, "backing_fmt=%s,backing_file=",
+                              virStorageFileFormatTypeToString(src->format));
+            virQEMUBuildBufferEscapeComma(&buf, src->path);
+            virCommandAddArgBuffer(cmd, &buf);
+
+            virQEMUBuildBufferEscapeComma(&buf, disk->store->path);
+            virCommandAddArgBuffer(cmd, &buf);
+            if (virCommandRun(cmd, NULL) < 0)
+                goto endmon;
+            virCommandFree(cmd);
+            cmd = NULL;
+        }
+
+        if (virJSONValueObjectCreate(&file,
+                                     "s:driver", "file",
+                                     "s:filename", disk->store->path,
+                                     NULL) < 0)
+            goto endmon;
+        if (virJSONValueObjectCreate(&json,
+                                     "s:driver", virStorageFileFormatTypeToString(disk->store->format),
+                                     "s:node-name", disk->store->nodeformat,
+                                     "a:file", &file,
+                                     "s:backing", node, NULL) < 0) {
+            virJSONValueFree(file);
+            goto endmon;
+        }
+        if (qemuMonitorBlockdevAdd(priv->mon, json) < 0)
+            goto endmon;
+        json = NULL;
+        disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_READY;
+    }
+
+    /* TODO: - incr: bitmap-add of tmp, bitmap-merge per <disk>
        - transaction, containing:
        - push+full: blockdev-backup sync:full
@@ -17802,8 +17978,77 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
        - pull+full: blockdev-backup sync:none
        - pull+incr: blockdev-backup sync:none, bitmap-disable of tmp
        - if checkpoint: bitmap-disable of old, bitmap-add of new
+     */
+    if (!(json = virJSONValueNewArray()))
+        goto endmon;
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainBackupDiskDef *disk = &def->disks[i];
+        virStorageSourcePtr src = vm->def->disks[disk->idx]->src;
+
+        if (!disk->store)
+            continue;
+        if (qemuMonitorJSONTransactionAdd(json,
+                                          "blockdev-backup",
+                                          "s:device", src->nodeformat,
+                                          "s:target", disk->store->nodeformat,
+                                          "s:sync", "none",
+                                          "s:job-id", disk->name,
+                                          NULL) < 0)
+            goto endmon;
+    }
+    if (qemuMonitorTransaction(priv->mon, &json) < 0)
+        goto endmon;
+    job_started = true;
+
+    /*
+       - pull: nbd-server-start with <server> from user (or autogenerate server)
       - pull: nbd-server-add per <disk>, including bitmap for incr
     */
+    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
+        if (qemuMonitorNBDServerStart(priv->mon, def->server->name,
+                                      def->server->port, NULL) < 0)
+            goto endmon;
+        nbd_running = true;
+        for (i = 0; i < def->ndisks; i++) {
+            virDomainBackupDiskDef *disk = &def->disks[i];
+
+            if (!disk->store)
+                continue;
+            if (qemuMonitorNBDServerAdd(priv->mon, disk->store->nodeformat,
+                                        disk->name, false,
+                                        def->incremental ? disk->name :
+                                        NULL) < 0)
+                goto endmon;
+            disk->state = VIR_DOMAIN_BACKUP_DISK_STATE_EXPORT;
+        }
+    }
+
+    ret = 0;
+ endmon:
+    /* Best effort cleanup if we fail partway through */
+    if (ret < 0) {
+        virErrorPtr save_err = virSaveLastError();
+
+        if (nbd_running &&
+            qemuMonitorNBDServerStop(priv->mon) < 0)
+            VIR_WARN("Unable to stop NBD server on vm %s after backup job",
+                     vm->def->name);
+        for (i = 0; i < def->ndisks; i++) {
+            virDomainBackupDiskDef *disk = &def->disks[i];
+
+            if (job_started &&
+                qemuMonitorBlockJobCancel(priv->mon, disk->name) < 0)
+                VIR_WARN("Unable to stop backup job %s on vm %s after failure",
+                         disk->store->nodeformat, vm->def->name);
+            qemuDomainBackupDiskCleanup(driver, vm, disk, !!def->incremental);
+        }
+        virSetError(save_err);
+        virFreeError(save_err);
+    }
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
+        ret = -1;
+    if (ret < 0)
+        goto endjob;
 
     VIR_STEAL_PTR(priv->backup, def);
     ret = priv->backup->id = 1; /* Hard-coded job id for now */
@@ -17816,7 +18061,9 @@ static int qemuDomainBackupBegin(virDomainPtr domain, const char *diskXml,
     qemuDomainObjEndJob(driver, vm);
 
  cleanup:
+    virCommandFree(cmd);
     VIR_FREE(suffix);
+    virJSONValueFree(json);
     virDomainObjEndAPI(&vm);
     virDomainBackupDefFree(def);
     virObjectUnref(caps);
@@ -17866,6 +18113,8 @@ static int qemuDomainBackupEnd(virDomainPtr domain, int id, unsigned int flags)
     virDomainBackupDefPtr backup = NULL;
     qemuDomainObjPrivatePtr priv;
     bool want_abort = flags & VIR_DOMAIN_BACKUP_END_ABORT;
+    virDomainBackupDefPtr def;
+    size_t i;
 
     virCheckFlags(VIR_DOMAIN_BACKUP_END_ABORT, -1);
 
@@ -17886,9 +18135,27 @@ static int qemuDomainBackupEnd(virDomainPtr domain, int id, unsigned int flags)
     if (priv->backup->type != VIR_DOMAIN_BACKUP_TYPE_PUSH)
         want_abort = false;
 
+    def = priv->backup;
+
+    /* We are going to modify the domain below. */
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    qemuDomainObjEnterMonitor(driver, vm);
+    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL)
+        ret = qemuMonitorNBDServerStop(priv->mon);
+    for (i = 0; i < def->ndisks; i++) {
+        if (qemuMonitorBlockJobCancel(priv->mon,
+                                      def->disks[i].name) < 0 ||
+            qemuDomainBackupDiskCleanup(driver, vm, &def->disks[i],
+                                        !!def->incremental) < 0)
+            ret = -1;
+    }
+    if (qemuDomainObjExitMonitor(driver, vm) < 0 || ret < 0) {
+        ret = -1;
+        goto endjob;
+    }
 
-    /* TODO: QMP commands to actually cancel the pending job, and on
-     * pull, also tear down the NBD server */
     VIR_STEAL_PTR(backup, priv->backup);
     if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm,
                            driver->caps) < 0)
@@ -17897,6 +18164,9 @@ static int qemuDomainBackupEnd(virDomainPtr domain, int id, unsigned int flags)
 
     ret = want_abort ? 0 : 1;
 
+ endjob:
+    qemuDomainObjEndJob(driver, vm);
+
 cleanup:
     virDomainBackupDefFree(backup);
     virDomainObjEndAPI(&vm);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index dc7317b723..215bc3ba53 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -930,6 +930,7 @@ qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                           void *opaque)
 {
     virQEMUDriverPtr driver = opaque;
+    qemuDomainObjPrivatePtr priv;
     struct qemuProcessEvent *processEvent = NULL;
     virDomainDiskDefPtr disk;
     qemuBlockJobDataPtr job = NULL;
@@ -940,6 +941,12 @@ qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
     VIR_DEBUG("Block job for device %s (domain: %p,%s) type %d status %d",
               diskAlias, vm, vm->def->name, type, status);
 
+    priv = vm->privateData;
+    if (type == VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP && priv->backup) {
+        /* The event emitted when canceling a pull-mode backup is a
+         * side effect that should not be forwarded on to the user */
+        goto cleanup;
+    }
     if (!(disk = qemuProcessFindDomainDiskByAliasOrQOM(vm, diskAlias, NULL)))
         goto cleanup;
 
-- 
2.20.1
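
For symmetry, the teardown that backupEnd now performs for the same
pull-mode job corresponds roughly to the following QMP commands, reusing
the illustrative names from the sketch in the commit message (job id
vda, scratch node tmp-vda):

  {"execute": "nbd-server-stop"}

  {"execute": "block-job-cancel", "arguments": {"device": "vda"}}

  {"execute": "blockdev-del", "arguments": {"node-name": "tmp-vda"}}

Between begin and end, a client can pull the point-in-time data from the
export (for example via an NBD URI such as nbd://example.com:10809/vda)
before the scratch file is torn down and deleted.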