From: Nikolai Barybin <nikolai.barybin@virtuozzo.com>
To: devel@lists.libvirt.org
Cc: den@openvz.org, Nikolai Barybin <nikolai.barybin@virtuozzo.com>
Subject: [PATCH 3/4] qemu snapshot: use QMP snapshot-save/delete for internal snapshots
Date: Tue, 16 Jul 2024 01:42:27 +0300
Message-ID: <20240715225013.100847-5-nikolai.barybin@virtuozzo.com>
In-Reply-To: <20240715225013.100847-2-nikolai.barybin@virtuozzo.com>
References: <20240715225013.100847-2-nikolai.barybin@virtuozzo.com>

The use of HMP commands is strongly discouraged by QEMU. Moreover,
the current snapshot creation routine does not provide flexibility
in choosing the target device for the VM state snapshot.

This patch makes use of the QMP snapshot-save/snapshot-delete commands
and by default chooses the first writable disk (if present) as the
target for the VM state, falling back to the NVRAM image otherwise.

Signed-off-by: Nikolai Barybin <nikolai.barybin@virtuozzo.com>
---
 src/qemu/qemu_snapshot.c | 158 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 148 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index f5260c4a22..83949a9a27 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -308,6 +308,96 @@ qemuSnapshotCreateInactiveExternal(virQEMUDriver *driver,
     return ret;
 }
 
+static int
+qemuSnapshotActiveInternalGetWrdevListHelper(virDomainObj *vm,
+                                             char **wrdevs)
+{
+    size_t wrdevCount = 0;
+    size_t i = 0;
+
+    for (i = 0; i < vm->def->ndisks; i++) {
+        virDomainDiskDef *disk = vm->def->disks[i];
+        if (!disk->src->readonly) {
+            wrdevs[wrdevCount] = g_strdup(disk->src->nodenameformat);
+            wrdevCount++;
+        }
+    }
+
+    if (wrdevCount == 0) {
+        if (vm->def->os.loader->nvram) {
+            wrdevs[0] = g_strdup(vm->def->os.loader->nvram->nodenameformat);
+        } else {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("no writable device for internal snapshot creation/deletion"));
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+
+static int
+qemuSnapshotCreateActiveInternalDone(virDomainObj *vm)
+{
+    qemuBlockJobData *job = NULL;
+    qemuDomainObjPrivate *priv = vm->privateData;
+
+    if (!(job = virHashLookup(priv->blockjobs, g_strdup_printf("snapsave%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("failed to lookup blockjob 'snapsave%1$d'"), vm->def->id);
+        return -1;
+    }
+
+    qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
+    if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("snapshot-save job failed: %1$s"), NULLSTR(job->errmsg));
+        return -1;
+    }
+
+    return job->state == VIR_DOMAIN_BLOCK_JOB_COMPLETED ? 1 : 0;
+}
+
+
+static int
+qemuSnapshotCreateActiveInternalStart(virDomainObj *vm,
+                                      const char *name)
+{
+    qemuBlockJobData *job = NULL;
+    g_autofree char** wrdevs = NULL;
+    int ret = -1;
+    int rc = 0;
+
+    wrdevs = g_new0(char *, vm->def->ndisks + 1);
+    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, wrdevs) < 0)
+        return -1;
+
+    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_SAVE,
+                                    g_strdup_printf("snapsave%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("failed to create new blockjob"));
+        return -1;
+    }
+
+    qemuBlockJobSyncBegin(job);
+    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
+        ret = -1;
+        goto cleanup;
+    }
+
+    rc = qemuMonitorSnapshotSave(qemuDomainGetMonitor(vm), job->name,
+                                 name, wrdevs[0], wrdevs);
+    qemuDomainObjExitMonitor(vm);
+    if (rc == 0) {
+        qemuBlockJobStarted(job, vm);
+        ret = 0;
+    }
+
+ cleanup:
+    qemuBlockJobStartupFinalize(vm, job);
+    return ret;
+}
+
 
 /* The domain is expected to be locked and active. */
 static int
@@ -316,11 +406,11 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
                                  virDomainMomentObj *snap,
                                  unsigned int flags)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     virObjectEvent *event = NULL;
     bool resume = false;
     virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
     int ret = -1;
+    int rv = 0;
 
     if (!qemuMigrationSrcIsAllowed(vm, false, VIR_ASYNC_JOB_SNAPSHOT, 0))
         goto cleanup;
@@ -342,15 +432,17 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
         }
     }
 
-    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
+    if ((ret = qemuSnapshotCreateActiveInternalStart(vm, snap->def->name)) < 0) {
         resume = false;
         goto cleanup;
     }
 
-    ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
-    qemuDomainObjExitMonitor(vm);
-    if (ret < 0)
-        goto cleanup;
+    while ((rv = qemuSnapshotCreateActiveInternalDone(vm)) != 1) {
+        if (rv < 0 || qemuDomainObjWait(vm) < 0) {
+            ret = -1;
+            goto cleanup;
+        }
+    }
 
     if (!(snapdef->cookie = (virObject *) qemuDomainSaveCookieNew(vm)))
         goto cleanup;
@@ -3603,6 +3695,55 @@ qemuSnapshotDiscardMetadata(virDomainObj *vm,
 }
 
 
+static int
+qemuSnapshotDiscardActiveInternal(virDomainObj *vm,
+                                  const char *name)
+{
+    qemuBlockJobData *job = NULL;
+    g_autofree char** wrdevs = NULL;
+    int ret = -1;
+    int rc = 0;
+
+    wrdevs = g_new0(char *, vm->def->ndisks + 1);
+    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, wrdevs) < 0)
+        return -1;
+
+    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_DELETE,
+                                    g_strdup_printf("snapdelete%d", vm->def->id)))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("failed to create new blockjob"));
+        return -1;
+    }
+
+    qemuBlockJobSyncBegin(job);
+    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+        goto cleanup;
+
+    rc = qemuMonitorSnapshotDelete(qemuDomainGetMonitor(vm), job->name, name, wrdevs);
+    qemuDomainObjExitMonitor(vm);
+    if (rc == 0) {
+        qemuBlockJobStarted(job, vm);
+        ret = 0;
+    } else {
+        goto cleanup;
+    }
+
+    while (job->state != VIR_DOMAIN_BLOCK_JOB_COMPLETED) {
+        qemuDomainObjWait(vm);
+        qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
+        if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("snapshot-delete job failed: %1$s"), NULLSTR(job->errmsg));
+            ret = -1;
+            break;
+        }
+    }
+
+ cleanup:
+    qemuBlockJobStartupFinalize(vm, job);
+    return ret;
+}
+
+
 /* Discard one snapshot (or its metadata), without reparenting any children. */
 static int
 qemuSnapshotDiscardImpl(virQEMUDriver *driver,
@@ -3645,11 +3786,8 @@ qemuSnapshotDiscardImpl(virQEMUDriver *driver,
             /* Similarly as internal snapshot creation we would use a regular job
              * here so set a mask to forbid any other job. */
             qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE);
-            if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
-                return -1;
             /* we continue on even in the face of error */
-            qemuMonitorDeleteSnapshot(qemuDomainGetMonitor(vm), snap->def->name);
-            qemuDomainObjExitMonitor(vm);
+            qemuSnapshotDiscardActiveInternal(vm, snap->def->name);
             qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_DEFAULT_MASK);
         }
     }
-- 
2.43.5
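
[Editor's note, not part of the patch] For reviewers less familiar with the
QMP side: snapshot-save and snapshot-delete are the job-based QMP commands
(QEMU >= 6.0) that replace the legacy HMP savevm/delvm, and the patch reaches
them through qemuMonitorSnapshotSave()/qemuMonitorSnapshotDelete(). A minimal
sketch of the wire exchange follows; the tag, job-id and node names are
illustrative only (libvirt's usual libvirt-N-format node naming is assumed
here, they are not values taken from the patch):

  -> { "execute": "snapshot-save",
       "arguments": { "job-id": "snapsave1",
                      "tag": "snap1",
                      "vmstate": "libvirt-1-format",
                      "devices": [ "libvirt-1-format", "libvirt-2-format" ] } }
  <- { "return": {} }

  -> { "execute": "snapshot-delete",
       "arguments": { "job-id": "snapdelete1",
                      "tag": "snap1",
                      "devices": [ "libvirt-1-format", "libvirt-2-format" ] } }
  <- { "return": {} }

Unlike the old HMP wrappers, both commands return immediately and run as
background jobs whose completion is reported asynchronously (via
JOB_STATUS_CHANGE events / query-jobs), which is why the patch has to poll
the block job state in qemuSnapshotCreateActiveInternalDone() and in the
wait loop of qemuSnapshotDiscardActiveInternal().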