From: Nikolai Barybin <nikolai.barybin@virtuozzo.com>
To: devel@lists.libvirt.org
Cc: den@openvz.org
Subject: [PATCH V2 4/4] qemu snapshot: use QMP snapshot-save/delete for internal snapshots
Date: Wed, 17 Jul 2024 21:21:41 +0300
Message-ID: <20240717183332.150095-9-nikolai.barybin@virtuozzo.com>
In-Reply-To: <20240717183332.150095-1-nikolai.barybin@virtuozzo.com>
References: <20240717183332.150095-1-nikolai.barybin@virtuozzo.com>

The use of HMP commands is strongly discouraged by QEMU. Moreover, the
current snapshot creation routine does not provide any flexibility in
choosing the target device for the VM state snapshot.

This patch switches internal snapshot creation/deletion to the QMP
commands snapshot-save/snapshot-delete and by default picks the first
writable non-shared qcow2 disk (if present) as the target for the VM
state. On QEMU binaries that lack these commands (< 6.0) the old HMP
code path is kept as a legacy fallback.

Signed-off-by: Nikolai Barybin <nikolai.barybin@virtuozzo.com>
---
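Notes (kept out of the commit message on purpose): below is a rough,
untested sketch of how this code path can be exercised from the public
API. The connection URI, the domain name "demo", and the snapshot name
"snap1" are placeholders, not anything this patch mandates. With a
QEMU >= 6.0 binary the calls go through the new QMP snapshot-save /
snapshot-delete jobs; on older QEMU they fall back to the legacy HMP
path kept below.

  /* Create and then delete an internal snapshot of a running domain.
   * Build: cc snap-demo.c $(pkg-config --cflags --libs libvirt) */
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virDomainPtr dom = NULL;
      virDomainSnapshotPtr snap = NULL;
      int ret = 1;
      /* No explicit target device in the XML: the driver now picks the
       * first writable non-shared qcow2 disk for the VM state. */
      const char *xml = "<domainsnapshot><name>snap1</name></domainsnapshot>";

      if (!conn)
          return 1;

      if (!(dom = virDomainLookupByName(conn, "demo")))
          goto cleanup;

      /* For a running domain this is routed to
       * qemuSnapshotCreateActiveInternal(). */
      if (!(snap = virDomainSnapshotCreateXML(dom, xml, 0)))
          goto cleanup;

      /* Deletion is routed to qemuSnapshotDiscardImpl(). */
      if (virDomainSnapshotDelete(snap, 0) < 0)
          goto cleanup;

      ret = 0;

   cleanup:
      if (snap)
          virDomainSnapshotFree(snap);
      if (dom)
          virDomainFree(dom);
      virConnectClose(conn);
      return ret;
  }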
 src/qemu/qemu_snapshot.c | 207 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 194 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index f5260c4a22..42d9385fd5 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -308,6 +308,114 @@ qemuSnapshotCreateInactiveExternal(virQEMUDriver *driver,
     return ret;
 }
 
+static int
+qemuSnapshotActiveInternalGetWrdevListHelper(virDomainObj *vm,
+                                             char ***wrdevs)
+{
+    size_t wrdevCount = 0;
+    size_t i = 0;
+    g_auto(GStrv) wrdevsInternal = NULL;
+
+    *wrdevs = NULL;
+    for (i = 0; i < vm->def->ndisks; i++) {
+        virDomainDiskDef *disk = vm->def->disks[i];
+        if (!disk->src->readonly && disk->src->shared) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("found shared writable disk, VM snapshotting makes no sense"));
+            return -1;
+        }
+
+        if (!disk->src->readonly && !disk->src->shared &&
+            disk->src->format != VIR_STORAGE_FILE_QCOW2) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("found writable non-qcow2 disk, snapshotting is forbidden"));
+            return -1;
+        }
+
+        if (!disk->src->readonly && !disk->src->shared &&
+            disk->src->format == VIR_STORAGE_FILE_QCOW2) {
+            if (wrdevCount == 0)
+                wrdevsInternal = g_new0(char *, vm->def->ndisks + 2);
+
+            wrdevsInternal[wrdevCount++] = g_strdup(disk->src->nodenameformat);
+        }
+    }
+
+    if (wrdevCount > 0 && vm->def->os.loader && vm->def->os.loader->nvram &&
+        vm->def->os.loader->nvram->format == VIR_STORAGE_FILE_QCOW2)
+        wrdevsInternal[wrdevCount] = g_strdup(vm->def->os.loader->nvram->nodenameformat);
+
+    if (wrdevCount == 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("no writable device found for internal snapshot creation/deletion"));
+        return -1;
+    }
+
+    *wrdevs = g_steal_pointer(&wrdevsInternal);
+    return 0;
+}
+
+
+static int
+qemuSnapshotCreateActiveInternalDone(virDomainObj *vm)
+{
+    qemuBlockJobData *job = NULL;
+    qemuDomainObjPrivate *priv = vm->privateData;
+
+    if (!(job = virHashLookup(priv->blockjobs, g_strdup_printf("snapsave%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("failed to lookup blockjob 'snapsave%1$d'"), vm->def->id);
+        return -1;
+    }
+
+    qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
+    if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("snapshot-save job failed: %1$s"), NULLSTR(job->errmsg));
+        return -1;
+    }
+
+    return job->state == VIR_DOMAIN_BLOCK_JOB_COMPLETED ? 1 : 0;
+}
+
+
+static int
+qemuSnapshotCreateActiveInternalStart(virDomainObj *vm,
+                                      const char *name)
+{
+    qemuBlockJobData *job = NULL;
+    g_auto(GStrv) wrdevs = NULL;
+    int ret = -1;
+    int rc = 0;
+
+    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, &wrdevs) < 0)
+        return -1;
+
+    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_SAVE,
+                                    g_strdup_printf("snapsave%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("failed to create new blockjob"));
+        return -1;
+    }
+
+    qemuBlockJobSyncBegin(job);
+    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
+        ret = -1;
+        goto cleanup;
+    }
+
+    rc = qemuMonitorSnapshotSave(qemuDomainGetMonitor(vm), job->name,
+                                 name, wrdevs[0], wrdevs);
+    qemuDomainObjExitMonitor(vm);
+    if (rc == 0) {
+        qemuBlockJobStarted(job, vm);
+        ret = 0;
+    }
+
+ cleanup:
+    qemuBlockJobStartupFinalize(vm, job);
+    return ret;
+}
+
 
 /* The domain is expected to be locked and active. */
 static int
@@ -321,6 +429,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
     bool resume = false;
     virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
     int ret = -1;
+    int rv = 0;
 
     if (!qemuMigrationSrcIsAllowed(vm, false, VIR_ASYNC_JOB_SNAPSHOT, 0))
         goto cleanup;
@@ -342,15 +451,30 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
         }
     }
 
-    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
-        resume = false;
-        goto cleanup;
-    }
+    if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_SAVE)) {
+        if ((ret = qemuSnapshotCreateActiveInternalStart(vm, snap->def->name)) < 0) {
+            resume = false;
+            goto cleanup;
+        }
 
-    ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
-    qemuDomainObjExitMonitor(vm);
-    if (ret < 0)
-        goto cleanup;
+        while ((rv = qemuSnapshotCreateActiveInternalDone(vm)) != 1) {
+            if (rv < 0 || qemuDomainObjWait(vm) < 0) {
+                ret = -1;
+                goto cleanup;
+            }
+        }
+    } else {
+        /* Legacy support for QEMU versions < 6.0. */
+        if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
+            resume = false;
+            goto cleanup;
+        }
+
+        ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
+        qemuDomainObjExitMonitor(vm);
+        if (ret < 0)
+            goto cleanup;
+    }
 
     if (!(snapdef->cookie = (virObject *) qemuDomainSaveCookieNew(vm)))
         goto cleanup;
@@ -3603,6 +3727,54 @@ qemuSnapshotDiscardMetadata(virDomainObj *vm,
 }
 
 
+static int
+qemuSnapshotDiscardActiveInternal(virDomainObj *vm,
+                                  const char *name)
+{
+    qemuBlockJobData *job = NULL;
+    g_auto(GStrv) wrdevs = NULL;
+    int ret = -1;
+    int rc = 0;
+
+    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, &wrdevs) < 0)
+        return -1;
+
+    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_DELETE,
+                                    g_strdup_printf("snapdelete%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("failed to create new blockjob"));
+        return -1;
+    }
+
+    qemuBlockJobSyncBegin(job);
+    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+        goto cleanup;
+
+    rc = qemuMonitorSnapshotDelete(qemuDomainGetMonitor(vm), job->name, name, wrdevs);
+    qemuDomainObjExitMonitor(vm);
+    if (rc == 0) {
+        qemuBlockJobStarted(job, vm);
+        ret = 0;
+    } else {
+        goto cleanup;
+    }
+
+    while (job->state != VIR_DOMAIN_BLOCK_JOB_COMPLETED) {
+        qemuDomainObjWait(vm);
+        qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
+        if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("snapshot-delete job failed: %1$s"), NULLSTR(job->errmsg));
+            ret = -1;
+            break;
+        }
+    }
+
+ cleanup:
+    qemuBlockJobStartupFinalize(vm, job);
+    return ret;
+}
+
+
 /* Discard one snapshot (or its metadata), without reparenting any children. */
 static int
 qemuSnapshotDiscardImpl(virQEMUDriver *driver,
@@ -3642,14 +3814,23 @@ qemuSnapshotDiscardImpl(virQEMUDriver *driver,
                 if (qemuSnapshotDiscardExternal(vm, snap, externalData) < 0)
                     return -1;
             } else {
+                qemuDomainObjPrivate *priv = vm->privateData;
+
                 /* Similarly as internal snapshot creation we would use a regular job
                  * here so set a mask to forbid any other job. */
                 qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE);
-                if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
-                    return -1;
-                /* we continue on even in the face of error */
-                qemuMonitorDeleteSnapshot(qemuDomainGetMonitor(vm), snap->def->name);
-                qemuDomainObjExitMonitor(vm);
+
+                if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_DELETE)) {
+                    qemuSnapshotDiscardActiveInternal(vm, snap->def->name);
+                } else {
+                    /* Legacy support for QEMU versions < 6.0. */
+                    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+                        return -1;
+                    /* we continue on even in the face of error */
+                    qemuMonitorDeleteSnapshot(qemuDomainGetMonitor(vm), snap->def->name);
+                    qemuDomainObjExitMonitor(vm);
+                }
+
                 qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_DEFAULT_MASK);
             }
         }
-- 
2.43.5