From: Peter Krempa <pkrempa@redhat.com>
To: libvir-list@redhat.com
Date: Tue, 26 Nov 2019 22:40:04 +0100
Message-Id: <0d499b81453c05bd1d03ee9c0282a5dfa8010193.1574803736.git.pkrempa@redhat.com>
Subject: [libvirt] [PATCH 18/21] qemu: Implement backup job APIs and qemu handling

This allows starting and managing the backup job.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 po/POTFILES.in           |   1 +
 src/qemu/Makefile.inc.am |   2 +
 src/qemu/qemu_backup.c   | 919 +++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_backup.h   |  41 ++
 src/qemu/qemu_driver.c   |  47 ++
 5 files changed, 1010 insertions(+)
 create mode 100644 src/qemu/qemu_backup.c
 create mode 100644 src/qemu/qemu_backup.h

diff --git a/po/POTFILES.in b/po/POTFILES.in
index 48f3f431ec..5afecf21ba 100644
--- a/po/POTFILES.in
+++ b/po/POTFILES.in
@@ -140,6 +140,7 @@
 @SRCDIR@/src/phyp/phyp_driver.c
 @SRCDIR@/src/qemu/qemu_agent.c
 @SRCDIR@/src/qemu/qemu_alias.c
+@SRCDIR@/src/qemu/qemu_backup.c
 @SRCDIR@/src/qemu/qemu_block.c
 @SRCDIR@/src/qemu/qemu_blockjob.c
 @SRCDIR@/src/qemu/qemu_capabilities.c
diff --git a/src/qemu/Makefile.inc.am b/src/qemu/Makefile.inc.am
index bf30f8a3c5..839b1cacb8 100644
--- a/src/qemu/Makefile.inc.am
+++ b/src/qemu/Makefile.inc.am
@@ -69,6 +69,8 @@ QEMU_DRIVER_SOURCES = \
 qemu/qemu_vhost_user_gpu.h \
 qemu/qemu_checkpoint.c \
 qemu/qemu_checkpoint.h \
+ qemu/qemu_backup.c \
+ qemu/qemu_backup.h \
 $(NULL)
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
new file mode 100644
index 0000000000..771e22ccd7
--- /dev/null
+++ b/src/qemu/qemu_backup.c
@@ -0,0 +1,919 @@
+/*
+ * qemu_backup.c: Implementation and handling of the backup jobs
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */ + +#include + +#include "qemu_block.h" +#include "qemu_conf.h" +#include "qemu_capabilities.h" +#include "qemu_monitor.h" +#include "qemu_process.h" +#include "qemu_backup.h" +#include "qemu_monitor_json.h" +#include "qemu_checkpoint.h" +#include "qemu_command.h" + +#include "virerror.h" +#include "virlog.h" +#include "virbuffer.h" +#include "viralloc.h" +#include "virxml.h" +#include "virstoragefile.h" +#include "virstring.h" +#include "backup_conf.h" +#include "virdomaincheckpointobjlist.h" + +#define VIR_FROM_THIS VIR_FROM_QEMU + +VIR_LOG_INIT("qemu.qemu_backup"); + + +static virDomainBackupDefPtr +qemuDomainGetBackup(virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + + if (!priv->backup) { + virReportError(VIR_ERR_NO_DOMAIN_BACKUP, "%s", + _("no domain backup job present")); + return NULL; + } + + return priv->backup; +} + + +static int +qemuBackupPrepare(virDomainBackupDefPtr def) +{ + + if (def->type =3D=3D VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (!def->server) { + def->server =3D g_new(virStorageNetHostDef, 1); + + def->server->transport =3D VIR_STORAGE_NET_HOST_TRANS_TCP; + def->server->name =3D g_strdup("localhost"); + } + + switch ((virStorageNetHostTransport) def->server->transport) { + case VIR_STORAGE_NET_HOST_TRANS_TCP: + /* TODO: Update qemu.conf to provide a port range, + * probably starting at 10809, for obtaining automatic + * port via virPortAllocatorAcquire, as well as store + * somewhere if we need to call virPortAllocatorRelease + * during BackupEnd. Until then, user must provide port */ + if (!def->server->port) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _(" must specify TCP port for= now")); + return -1; + } + break; + + case VIR_STORAGE_NET_HOST_TRANS_UNIX: + /* TODO: Do we need to mess with selinux? 
*/ + break; + + case VIR_STORAGE_NET_HOST_TRANS_RDMA: + case VIR_STORAGE_NET_HOST_TRANS_LAST: + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("unexpected transport in ")); + return -1; + } + } + + return 0; +} + + +struct qemuBackupDiskData { + virDomainBackupDiskDefPtr backupdisk; + virDomainDiskDefPtr domdisk; + qemuBlockJobDataPtr blockjob; + virStorageSourcePtr store; + char *incrementalBitmap; + qemuBlockStorageSourceChainDataPtr crdata; + bool labelled; + bool initialized; + bool created; + bool added; + bool started; + bool done; +}; + + +static void +qemuBackupDiskDataCleanupOne(virDomainObjPtr vm, + struct qemuBackupDiskData *dd) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + + if (dd->started) + return; + + if (dd->added) { + qemuDomainObjEnterMonitor(priv->driver, vm); + qemuBlockStorageSourceAttachRollback(priv->mon, dd->crdata->srcdat= a[0]); + ignore_value(qemuDomainObjExitMonitor(priv->driver, vm)); + } + + if (dd->created) { + if (virStorageFileUnlink(dd->store) < 0) + VIR_WARN("Unable to remove just-created %s", NULLSTR(dd->store= ->path)); + } + + if (dd->initialized) + virStorageFileDeinit(dd->store); + + if (dd->labelled) + qemuDomainStorageSourceAccessRevoke(priv->driver, vm, dd->store); + + if (dd->blockjob) + qemuBlockJobStartupFinalize(vm, dd->blockjob); + + qemuBlockStorageSourceChainDataFree(dd->crdata); +} + + +static void +qemuBackupDiskDataCleanup(virDomainObjPtr vm, + struct qemuBackupDiskData *dd, + size_t ndd) +{ + virErrorPtr orig_err; + size_t i; + + if (!dd) + return; + + virErrorPreserveLast(&orig_err); + + for (i =3D 0; i < ndd; i++) + qemuBackupDiskDataCleanupOne(vm, dd + i); + + g_free(dd); + virErrorRestore(&orig_err); +} + + + +static int +qemuBackupDiskPrepareOneBitmaps(struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental) +{ + g_autoptr(virJSONValue) mergebitmapsdisk =3D NULL; + g_autoptr(virJSONValue) mergebitmapsstore =3D NULL; + + if (!(mergebitmapsdisk =3D virJSONValueNewArray())) + return -1; + + if (!(mergebitmapsstore =3D virJSONValueNewArray())) + return -1; + + /* TODO: this code works only if the bitmaps are present on a single n= ode. 
+ * The algorithm needs to be changed so that it looks into the backing= chain + * so that we can combine all relevant bitmaps for a given backing cha= in */ + while (*incremental) { + if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmapsd= isk, + dd->domdisk->= src->nodeformat, + (*incremental= )->def->name) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmapss= tore, + dd->domdisk->= src->nodeformat, + (*incremental= )->def->name) < 0) + return -1; + + incremental++; + } + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmapsdisk) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1; + + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm, + virDomainBackupDiskDefPtr backupdisk, + struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental, + virQEMUDriverConfigPtr cfg) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + + /* set data structure */ + dd->backupdisk =3D backupdisk; + dd->store =3D dd->backupdisk->store; + + if (!(dd->domdisk =3D virDomainDiskByTarget(vm->def, dd->backupdisk->n= ame))) { + virReportError(VIR_ERR_INVALID_ARG, + _("no disk named '%s'"), dd->backupdisk->name); + return -1; + } + + if (!dd->store->format) + dd->store->format =3D VIR_STORAGE_FILE_QCOW2; + + if (qemuDomainStorageFileInit(priv->driver, vm, dd->store, dd->domdisk= ->src) < 0) + return -1; + + if (qemuDomainPrepareStorageSourceBlockdev(NULL, dd->store, priv, cfg)= < 0) + return -1; + + if (incremental) { + dd->incrementalBitmap =3D g_strdup_printf("backup-%s", dd->domdisk= ->dst); + + if (qemuBackupDiskPrepareOneBitmaps(dd, actions, incremental) < 0) + return -1; + } + + if (!(dd->blockjob =3D qemuBlockJobDiskNewBackup(vm, dd->domdisk, dd->= store, + dd->incrementalBitmap))) + return -1; + + if (!(dd->crdata =3D qemuBuildStorageSourceChainAttachPrepareBlockdevT= op(dd->store, + = NULL, + = priv->qemuCaps))) + return -1; + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOnePush(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + qemuMonitorTransactionBackupSyncMode syncmode =3D QEMU_MONITOR_TRANSAC= TION_BACKUP_SYNC_MODE_FULL; + + if (dd->incrementalBitmap) + syncmode =3D QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL; + + if (qemuMonitorTransactionBackup(actions, + dd->domdisk->src->nodeformat, + dd->blockjob->name, + dd->store->nodeformat, + dd->incrementalBitmap, + syncmode) < 0) + return -1; + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOnePull(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + if (qemuMonitorTransactionBackup(actions, + dd->domdisk->src->nodeformat, + dd->blockjob->name, + dd->store->nodeformat, + NULL, + QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_= MODE_NONE) < 0) + return -1; + + return 0; +} + + +static ssize_t +qemuBackupDiskPrepareData(virDomainObjPtr vm, + virDomainBackupDefPtr def, + virDomainMomentObjPtr *incremental, + virJSONValuePtr actions, + virQEMUDriverConfigPtr cfg, + struct qemuBackupDiskData **rdd) +{ + struct qemuBackupDiskData *disks =3D NULL; + ssize_t ndisks 
=3D 0; + size_t i; + + disks =3D g_new0(struct qemuBackupDiskData, def->ndisks); + + for (i =3D 0; i < def->ndisks; i++) { + virDomainBackupDiskDef *backupdisk =3D &def->disks[i]; + struct qemuBackupDiskData *dd =3D disks + ndisks; + + if (!backupdisk->store) + continue; + + ndisks++; + + if (qemuBackupDiskPrepareDataOne(vm, backupdisk, dd, actions, + incremental, cfg) < 0) + goto error; + + if (def->type =3D=3D VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (qemuBackupDiskPrepareDataOnePull(actions, dd) < 0) + goto error; + } else { + if (qemuBackupDiskPrepareDataOnePush(actions, dd) < 0) + goto error; + } + } + + *rdd =3D g_steal_pointer(&disks); + + return ndisks; + + error: + qemuBackupDiskDataCleanup(vm, disks, ndisks); + return -1; +} + + +static int +qemuBackupDiskPrepareOneStorage(virDomainObjPtr vm, + virHashTablePtr blockNamedNodeData, + struct qemuBackupDiskData *dd) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + + if (virStorageSourceIsLocalStorage(dd->store) && + !virFileExists(dd->store->path) && + virStorageFileSupportsCreate(dd->store)) { + + if (qemuDomainStorageFileInit(priv->driver, vm, dd->store, NULL) <= 0) + return -1; + + dd->initialized =3D true; + + if (virStorageFileCreate(dd->store) < 0) { + virReportSystemError(errno, + _("failed to create image file '%s'"), + NULLSTR(dd->store->path)); + return -1; + } + + dd->created =3D true; + } + + if (qemuDomainStorageSourceAccessAllow(priv->driver, vm, dd->store, fa= lse, + true) < 0) + return -1; + + dd->labelled =3D true; + + if (qemuBlockStorageSourceCreateDetectSize(blockNamedNodeData, + dd->store, dd->domdisk->src= ) < 0) + return -1; + + if (qemuBlockStorageSourceCreate(vm, dd->store, NULL, NULL, + dd->crdata->srcdata[0], + QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + + dd->added =3D true; + + return 0; +} + + +static int +qemuBackupDiskPrepareStorage(virDomainObjPtr vm, + struct qemuBackupDiskData *disks, + size_t ndisks, + virHashTablePtr blockNamedNodeData) +{ + size_t i; + + for (i =3D 0; i < ndisks; i++) { + if (qemuBackupDiskPrepareOneStorage(vm, blockNamedNodeData, disks = + i) < 0) + return -1; + } + + return 0; +} + + +static void +qemuBackupDiskStarted(virDomainObjPtr vm, + struct qemuBackupDiskData *dd, + size_t ndd) +{ + size_t i; + + for (i =3D 0; i < ndd; i++) { + dd[i].started =3D true; + dd[i].backupdisk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING; + qemuBlockJobStarted(dd->blockjob, vm); + } +} + + +/** + * qemuBackupBeginPullExportDisks: + * @vm: domain object + * @disks: backup disk data list + * @ndisks: number of valid disks in @disks + * + * Exports all disks from @dd when doing a pull backup in the NBD server. = This + * function must be called while in the monitor context. + */ +static int +qemuBackupBeginPullExportDisks(virDomainObjPtr vm, + struct qemuBackupDiskData *disks, + size_t ndisks) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + size_t i; + + for (i =3D 0; i < ndisks; i++) { + struct qemuBackupDiskData *dd =3D disks + i; + + if (qemuMonitorNBDServerAdd(priv->mon, + dd->store->nodeformat, + dd->domdisk->dst, + false, + dd->incrementalBitmap) < 0) + return -1; + } + + return 0; +} + + +/** + * qemuBackupBeginCollectIncrementalCheckpoints: + * @vm: domain object + * @incrFrom: name of checkpoint representing starting point of incrementa= l backup + * + * Returns a NULL terminated list of pointers to checkpoints in chronologi= cal + * order starting from the 'current' checkpoint until reaching @incrFrom. 
+ */ +static virDomainMomentObjPtr * +qemuBackupBeginCollectIncrementalCheckpoints(virDomainObjPtr vm, + const char *incrFrom) +{ + virDomainMomentObjPtr n =3D virDomainCheckpointGetCurrent(vm->checkpoi= nts); + g_autofree virDomainMomentObjPtr *incr =3D NULL; + size_t nincr =3D 0; + + while (n) { + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, n) < 0) + return NULL; + + if (STREQ(n->def->name, incrFrom)) { + virDomainMomentObjPtr terminator =3D NULL; + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, terminator) < 0) + return NULL; + + return g_steal_pointer(&incr); + } + + if (!n->def->parent_name) + break; + + n =3D virDomainCheckpointFindByName(vm->checkpoints, n->def->paren= t_name); + } + + virReportError(VIR_ERR_OPERATION_INVALID, + _("could not locate checkpoint '%s' for incremental bac= kup"), + incrFrom); + return NULL; +} + + +static void +qemuBackupJobTerminate(virDomainObjPtr vm, + qemuDomainJobStatus jobstatus) + +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + + qemuDomainJobInfoUpdateTime(priv->job.current); + + g_free(priv->job.completed); + priv->job.completed =3D g_new0(qemuDomainJobInfo, 1); + *priv->job.completed =3D *priv->job.current; + + priv->job.completed->stats.backup.total =3D priv->backup->push_total; + priv->job.completed->stats.backup.transferred =3D priv->backup->push_t= ransferred; + priv->job.completed->stats.backup.tmp_used =3D priv->backup->pull_tmp_= used; + priv->job.completed->stats.backup.tmp_total =3D priv->backup->pull_tmp= _total; + + priv->job.completed->status =3D jobstatus; + + qemuDomainEventEmitJobCompleted(priv->driver, vm); + + virDomainBackupDefFree(priv->backup); + priv->backup =3D NULL; + qemuDomainObjEndAsyncJob(priv->driver, vm); +} + + +/** + * qemuBackupJobCancelBlockjobs: + * @vm: domain object + * @backup: backup definition + * @terminatebackup: flag whether to terminate and unregister the backup + * + * Sends all active blockjobs which are part of @backup of @vm a signal to + * cancel. If @terminatebackup is true qemuBackupJobTerminate is also call= ed + * if there are no outstanding active blockjobs. 
+ */ +void +qemuBackupJobCancelBlockjobs(virDomainObjPtr vm, + virDomainBackupDefPtr backup, + bool terminatebackup) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + size_t i; + int rc =3D 0; + bool has_active =3D false; + + if (!backup) + return; + + for (i =3D 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk =3D backup->disks + i; + virDomainDiskDefPtr disk; + g_autoptr(qemuBlockJobData) job =3D NULL; + + if (!backupdisk->store) + continue; + + /* Look up corresponding disk as backupdisk->idx is no longer reli= able */ + if (!(disk =3D virDomainDiskByTarget(vm->def, backupdisk->name))) + continue; + + if (!(job =3D qemuBlockJobDiskGetJob(disk))) + continue; + + if (backupdisk->state !=3D VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING && + backupdisk->state !=3D VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING) + continue; + + has_active =3D true; + + if (backupdisk->state !=3D VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING) + continue; + + qemuDomainObjEnterMonitor(priv->driver, vm); + + rc =3D qemuMonitorJobCancel(priv->mon, job->name, false); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + if (rc =3D=3D 0) { + backupdisk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING; + job->state =3D QEMU_BLOCKJOB_STATE_ABORTING; + } + } + + if (terminatebackup && !has_active) + qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED); +} + + +int +qemuBackupBegin(virDomainObjPtr vm, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(priv->dr= iver); + g_autoptr(virDomainBackupDef) def =3D NULL; + g_autoptr(virCaps) caps =3D NULL; + g_autofree char *suffix =3D NULL; + struct timeval tv; + bool pull =3D false; + virDomainMomentObjPtr chk =3D NULL; + g_autoptr(virDomainCheckpointDef) chkdef =3D NULL; + g_autofree virDomainMomentObjPtr *incremental =3D NULL; + g_autoptr(virJSONValue) actions =3D NULL; + struct qemuBackupDiskData *dd =3D NULL; + ssize_t ndd =3D 0; + g_autoptr(virHashTable) blockNamedNodeData =3D NULL; + bool job_started =3D false; + bool nbd_running =3D false; + int rc =3D 0; + int ret =3D -1; + + virCheckFlags(0, -1); + + if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("incremental backup is not supported yet")); + return -1; + } + + if (!(caps =3D virQEMUDriverGetCapabilities(priv->driver, false))) + return -1; + + if (!(def =3D virDomainBackupDefParseString(backupXML, priv->driver->x= mlopt, 0))) + return -1; + + if (checkpointXML) { + if (!(chkdef =3D virDomainCheckpointDefParseString(checkpointXML, = caps, + priv->driver->xml= opt, + priv->qemuCaps, 0= ))) + return -1; + + suffix =3D g_strdup(chkdef->parent.name); + } else { + gettimeofday(&tv, NULL); + suffix =3D g_strdup_printf("%lld", (long long)tv.tv_sec); + } + + if (def->type =3D=3D VIR_DOMAIN_BACKUP_TYPE_PULL) + pull =3D true; + + /* we'll treat this kind of backup job as an asyncjob as it uses some = of the + * infrastructure for async jobs. 
We'll allow standard modify-type jobs + * as the interlocking of conflicting operations is handled on the blo= ck + * job level */ + if (qemuDomainObjBeginAsyncJob(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP, + VIR_DOMAIN_JOB_OPERATION_BACKUP, flags)= < 0) + return -1; + + qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | + JOB_MASK(QEMU_JOB_SUSPEND) | + JOB_MASK(QEMU_JOB_MODIFY))); + priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + + + if (!virDomainObjIsActive(vm)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("cannot perform disk backup for inactive domain")= ); + goto endjob; + } + + if (priv->backup) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("another backup job is already running")); + goto endjob; + } + + if (qemuBackupPrepare(def) < 0) + goto endjob; + + if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0) + goto endjob; + + if (def->incremental && + !(incremental =3D qemuBackupBeginCollectIncrementalCheckpoints(vm,= def->incremental))) + goto endjob; + + if (!(actions =3D virJSONValueNewArray())) + goto endjob; + + if (chkdef) { + if (qemuCheckpointCreateCommon(priv->driver, vm, caps, &chkdef, + &actions, &chk) < 0) + goto endjob; + } + + if ((ndd =3D qemuBackupDiskPrepareData(vm, def, incremental, actions, = cfg, &dd)) <=3D 0) { + if (ndd =3D=3D 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("no disks selected for backup")); + } + + goto endjob; + } + + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BA= CKUP) < 0) + goto endjob; + blockNamedNodeData =3D qemuMonitorBlockGetNamedNodeData(priv->mon); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || !blockNamedNodeD= ata) + goto endjob; + + if (qemuBackupDiskPrepareStorage(vm, dd, ndd, blockNamedNodeData) < 0) + goto endjob; + + priv->backup =3D g_steal_pointer(&def); + + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BA= CKUP) < 0) + goto endjob; + + /* TODO: TLS is a must-have for the modern age */ + if (pull) { + if ((rc =3D qemuMonitorNBDServerStart(priv->mon, priv->backup->ser= ver, NULL)) =3D=3D 0) + nbd_running =3D true; + } + + if (rc =3D=3D 0) + rc =3D qemuMonitorTransaction(priv->mon, &actions); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + goto endjob; + + job_started =3D true; + qemuBackupDiskStarted(vm, dd, ndd); + + if (chk && + qemuCheckpointCreateFinalize(priv->driver, vm, cfg, chk, true) < 0) + goto endjob; + + if (pull) { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JO= B_BACKUP) < 0) + goto endjob; + /* note that if the export fails we've already created the checkpo= int + * and we will not delete it */ + rc =3D qemuBackupBeginPullExportDisks(vm, dd, ndd); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + goto endjob; + + if (rc < 0) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, false); + goto endjob; + } + } + + ret =3D 0; + + endjob: + qemuBackupDiskDataCleanup(vm, dd, ndd); + if (!job_started && nbd_running && + qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BA= CKUP) < 0) { + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + ignore_value(qemuDomainObjExitMonitor(priv->driver, vm)); + } + + if (ret < 0 && !job_started) + def =3D g_steal_pointer(&priv->backup); + + if (ret =3D=3D 0) + qemuDomainObjReleaseAsyncJob(vm); + else + qemuDomainObjEndAsyncJob(priv->driver, vm); + + return ret; +} + + +char * +qemuBackupGetXMLDesc(virDomainObjPtr vm, + unsigned int flags) +{ + g_auto(virBuffer) buf =3D VIR_BUFFER_INITIALIZER; + 
virDomainBackupDefPtr backup; + + virCheckFlags(0, NULL); + + if (!(backup =3D qemuDomainGetBackup(vm))) + return NULL; + + if (virDomainBackupDefFormat(&buf, backup, false) < 0) + return NULL; + + return virBufferContentAndReset(&buf); +} + + +void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + bool has_running =3D false; + bool has_cancelling =3D false; + bool has_cancelled =3D false; + bool has_failed =3D false; + qemuDomainJobStatus jobstatus =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + virDomainBackupDefPtr backup =3D priv->backup; + size_t i; + + VIR_DEBUG("vm: '%s', disk:'%s', state:'%d'", + vm->def->name, disk->dst, state); + + if (!backup) + return; + + if (backup->type =3D=3D VIR_DOMAIN_BACKUP_TYPE_PULL) { + qemuDomainObjEnterMonitor(priv->driver, vm); + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + /* update the final statistics with the current job's data */ + backup->pull_tmp_used +=3D cur; + backup->pull_tmp_total +=3D end; + } else { + backup->push_transferred +=3D cur; + backup->push_total +=3D end; + } + + for (i =3D 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk =3D backup->disks + i; + + if (!backupdisk->store) + continue; + + if (STREQ(disk->dst, backupdisk->name)) { + switch (state) { + case QEMU_BLOCKJOB_STATE_COMPLETED: + backupdisk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_COMPLET= E; + break; + + case QEMU_BLOCKJOB_STATE_CONCLUDED: + case QEMU_BLOCKJOB_STATE_FAILED: + backupdisk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_FAILED; + break; + + case QEMU_BLOCKJOB_STATE_CANCELLED: + backupdisk->state =3D VIR_DOMAIN_BACKUP_DISK_STATE_CANCELL= ED; + break; + + case QEMU_BLOCKJOB_STATE_READY: + case QEMU_BLOCKJOB_STATE_NEW: + case QEMU_BLOCKJOB_STATE_RUNNING: + case QEMU_BLOCKJOB_STATE_ABORTING: + case QEMU_BLOCKJOB_STATE_PIVOTING: + case QEMU_BLOCKJOB_STATE_LAST: + default: + break; + } + } + + switch (backupdisk->state) { + case VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE: + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING: + has_running =3D true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING: + has_cancelling =3D true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_FAILED: + has_failed =3D true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED: + has_cancelled =3D true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_NONE: + case VIR_DOMAIN_BACKUP_DISK_STATE_LAST: + break; + } + } + + if (has_running && (has_failed || has_cancelled)) { + /* cancel the rest of the jobs */ + qemuBackupJobCancelBlockjobs(vm, backup, false); + } else if (!has_running && !has_cancelling) { + /* all sub-jobs have stopped */ + + if (has_failed) + jobstatus =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + else if (has_cancelled && backup->type =3D=3D VIR_DOMAIN_BACKUP_TY= PE_PUSH) + jobstatus =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + + qemuBackupJobTerminate(vm, jobstatus); + } + + /* otherwise we must wait for the jobs to end */ +} diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h new file mode 100644 index 0000000000..96297fc9e4 --- /dev/null +++ b/src/qemu/qemu_backup.h @@ -0,0 +1,41 @@ +/* + * qemu_backup.h: Implementation and handling of the backup jobs + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software 
Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * . + */ + +#pragma once + +int +qemuBackupBegin(virDomainObjPtr vm, + const char *backupXML, + const char *checkpointXML, + unsigned int flags); + +char * +qemuBackupGetXMLDesc(virDomainObjPtr vm, + unsigned int flags); + +void +qemuBackupJobCancelBlockjobs(virDomainObjPtr vm, + virDomainBackupDefPtr backup, + bool terminatebackup); + +void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index b955c3efe1..4cf516a083 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -52,6 +52,7 @@ #include "qemu_blockjob.h" #include "qemu_security.h" #include "qemu_checkpoint.h" +#include "qemu_backup.h" #include "virerror.h" #include "virlog.h" @@ -17206,6 +17207,50 @@ qemuDomainCheckpointDelete(virDomainCheckpointPtr = checkpoint, } +static int +qemuDomainBackupBegin(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + virDomainObjPtr vm =3D NULL; + int ret =3D -1; + + if (!(vm =3D qemuDomainObjFromDomain(domain))) + goto cleanup; + + if (virDomainBackupBeginEnsureACL(domain->conn, vm->def) < 0) + goto cleanup; + + ret =3D qemuBackupBegin(vm, backupXML, checkpointXML, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + +static char * +qemuDomainBackupGetXMLDesc(virDomainPtr domain, + unsigned int flags) +{ + virDomainObjPtr vm =3D NULL; + char *ret =3D NULL; + + if (!(vm =3D qemuDomainObjFromDomain(domain))) + return NULL; + + if (virDomainBackupGetXMLDescEnsureACL(domain->conn, vm->def) < 0) + goto cleanup; + + ret =3D qemuBackupGetXMLDesc(vm, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + static int qemuDomainQemuMonitorCommand(virDomainPtr domain, const char *c= md, char **result, unsigned int flags) { @@ -22941,6 +22986,8 @@ static virHypervisorDriver qemuHypervisorDriver =3D= { .domainCheckpointDelete =3D qemuDomainCheckpointDelete, /* 5.6.0 */ .domainGetGuestInfo =3D qemuDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout =3D qemuDomainAgentSetResponseTimeout, = /* 5.10.0 */ + .domainBackupBegin =3D qemuDomainBackupBegin, /* 5.10.0 */ + .domainBackupGetXMLDesc =3D qemuDomainBackupGetXMLDesc, /* 5.10.0 */ }; --=20 2.23.0 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
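
As a quick way to exercise the new driver entry points, a minimal client
program along the following lines can start a pull-mode backup and print the
job XML. This is an illustrative sketch only: the public
virDomainBackupBegin/virDomainBackupGetXMLDesc entry points are added by the
API patches earlier in this series, and the domain name, NBD port and exact
<domainbackup> element layout shown below are assumptions for demonstration,
not something defined by this patch.

/* Illustrative client sketch (not part of this patch): start a pull-mode
 * backup and print the job XML.  The <domainbackup> layout, domain name
 * and NBD port below are assumptions for demonstration only. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = NULL;
    virDomainPtr dom = NULL;
    char *jobxml = NULL;
    int ret = EXIT_FAILURE;

    /* Pull mode: qemu exports the backup via its embedded NBD server and
     * the scratch file holds copy-on-write data while the client pulls
     * the image; the TCP port must be given explicitly for now. */
    const char *backupXML =
        "<domainbackup mode='pull'>\n"
        "  <server transport='tcp' name='localhost' port='10809'/>\n"
        "  <disks>\n"
        "    <disk name='vda' type='file'>\n"
        "      <scratch file='/var/lib/libvirt/images/vda.scratch'/>\n"
        "    </disk>\n"
        "  </disks>\n"
        "</domainbackup>";

    if (!(conn = virConnectOpen("qemu:///system")))
        goto cleanup;

    if (!(dom = virDomainLookupByName(conn, "demo-vm")))
        goto cleanup;

    /* Dispatches to qemuDomainBackupBegin() -> qemuBackupBegin(). */
    if (virDomainBackupBegin(dom, backupXML, NULL, 0) < 0)
        goto cleanup;

    /* Dispatches to qemuDomainBackupGetXMLDesc() -> qemuBackupGetXMLDesc(). */
    if ((jobxml = virDomainBackupGetXMLDesc(dom, 0)))
        printf("%s\n", jobxml);

    ret = EXIT_SUCCESS;

 cleanup:
    free(jobxml);
    if (dom)
        virDomainFree(dom);
    if (conn)
        virConnectClose(conn);
    return ret;
}

With a push-mode <domainbackup> the same call writes the backup images
directly to the configured targets and no NBD export is set up.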