From: Peter Krempa <pkrempa@redhat.com>
To: devel@lists.libvirt.org
Subject: [PATCH v3 07/10] qemu snapshot: use QMP snapshot-delete for internal snapshots deletion
Date: Thu, 3 Oct 2024 15:46:33 +0200
Message-ID: <0763f538eb89aa7c54d1fae8745b845ba0b96a13.1727962906.git.pkrempa@redhat.com>
CC: nikolai.barybin@virtuozzo.com, den@openvz.org

Switch to using the modern QMP 'snapshot-delete' command. The
user-visible logic of deleting internal snapshots via the old 'delvm'
HMP command was very lax about catching inconsistencies between the
snapshot metadata and the on-disk state, so we re-implement that
behaviour even with the new command. We could tighten the validation,
but that would come at the cost of failures which users might not
expect.

Since 'delvm' simply ignored any kind of failure, the devices to
delete the snapshot from are selected by first querying qemu for which
top-level images actually contain the internal snapshot and then
operating only on those.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
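For illustration, the monitor interaction issued via
qemuMonitorSnapshotDelete() should look roughly as follows; the job id
and the format node names below are made-up examples, the real values
come from the 'jobname' built in qemuSnapshotDiscardActiveInternal()
and from the node names collected by
qemuSnapshotActiveInternalDeleteGetDevices():

  -> { "execute": "snapshot-delete",
       "arguments": { "job-id": "internal-snapshot-delete-snap1",
                      "tag": "snap1",
                      "devices": [ "libvirt-1-format", "libvirt-2-format" ] } }
  <- { "return": {} }

The command only kicks off a job; completion or failure is then
observed through the block job machinery, which is what the polling
loop in qemuSnapshotDiscardActiveInternal() waits on.
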
 src/qemu/qemu_snapshot.c | 148 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 143 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index d4602d386f..4927ca0bfc 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -3711,6 +3711,134 @@ qemuSnapshotDiscardMetadata(virDomainObj *vm,
 }
 
+static char **
+qemuSnapshotActiveInternalDeleteGetDevices(virDomainObj *vm,
+                                           virDomainSnapshotDef *snapdef)
+{
+    /* In contrast to internal snapshot *creation* we can't always rely on the
+     * metadata to give us the full status of the disk images we'd need to
+     * delete the snapshot from, as users might have edited the VM XML,
+     * unplugged or plugged in disks and/or did many other kinds of modification.
+     *
+     * In order for this code to behave the same as it did with the 'delvm' HMP
+     * command, which simply deleted the snapshot where it found them and
+     * ignored any failures we'll detect the images in qemu first and use
+     * that information as source of truth for now instead of introducing
+     * other failure scenarios.
+     */
+    g_autoptr(GHashTable) blockNamedNodeData = NULL;
+    const char *snapname = snapdef->parent.name;
+    g_auto(GStrv) devices = g_new0(char *, vm->def->ndisks + 2);
+    size_t ndevs = 0;
+    size_t i = 0;
+
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_SNAPSHOT)))
+        return NULL;
+
+    for (i = 0; i < vm->def->ndisks; i++) {
+        virDomainDiskDef *domdisk = vm->def->disks[i];
+        const char *format_nodename;
+        qemuBlockNamedNodeData *d;
+
+        /* readonly disks will not have permissions to delete the snapshot, and
+         * in fact should not have an internal snapshot */
+        if (domdisk->src->readonly)
+            continue;
+
+        /* This effectively filters out empty and 'raw' disks */
+        if (!(format_nodename = qemuBlockStorageSourceGetFormatNodename(domdisk->src)))
+            continue;
+
+        /* the data should always be present */
+        if (!(d = virHashLookup(blockNamedNodeData, format_nodename)))
+            continue;
+
+        /* there might be no snapshot for given disk with given name */
+        if (!d->snapshots ||
+            !g_strv_contains((const char **) d->snapshots, snapname))
+            continue;
+
+        devices[ndevs++] = g_strdup(format_nodename);
+    }
+
+    if (vm->def->os.loader &&
+        vm->def->os.loader->nvram &&
+        vm->def->os.loader->nvram->format == VIR_STORAGE_FILE_QCOW2) {
+        const char *format_nodename;
+        qemuBlockNamedNodeData *d;
+
+        if ((format_nodename = qemuBlockStorageSourceGetFormatNodename(vm->def->os.loader->nvram)) &&
+            (d = virHashLookup(blockNamedNodeData, format_nodename)) &&
+            d->snapshots &&
+            g_strv_contains((const char **) d->snapshots, snapname)) {
+            devices[ndevs++] = g_strdup(format_nodename);
+        }
+    }
+
+    return g_steal_pointer(&devices);
+}
+
+
+static int
+qemuSnapshotDiscardActiveInternal(virDomainObj *vm,
+                                  virDomainSnapshotDef *snapdef)
+{
+    g_autofree char *jobname = g_strdup_printf("internal-snapshot-delete-%s", snapdef->parent.name);
+    qemuBlockJobData *job = NULL;
+    g_auto(GStrv) devices = NULL;
+    int ret = -1;
+    int rc = 0;
+
+    if (!(devices = qemuSnapshotActiveInternalDeleteGetDevices(vm, snapdef)))
+        return -1;
+
+    /* It's possible that no devices were selected */
+    if (devices[0] == NULL)
+        return 0;
+
+    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_DELETE,
+                                    jobname)))
+        return -1;
+
+    qemuBlockJobSyncBegin(job);
+
+    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+        goto cleanup;
+
+    rc = qemuMonitorSnapshotDelete(qemuDomainGetMonitor(vm), job->name,
+                                   snapdef->parent.name, (const char **) devices);
+    qemuDomainObjExitMonitor(vm);
+
+    if (rc < 0)
+        goto cleanup;
+
+    qemuBlockJobStarted(job, vm);
+
+    while (true) {
+        qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_SNAPSHOT);
+
+        if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("deletion of internal snapshot '%1$s' job failed: %2$s"),
+                           snapdef->parent.name, NULLSTR(job->errmsg));
+            goto cleanup;
+        }
+
+        if (job->state == VIR_DOMAIN_BLOCK_JOB_COMPLETED)
+            break;
+
+        if (qemuDomainObjWait(vm) < 0)
+            goto cleanup;
+    }
+
+    ret = 0;
+
+ cleanup:
+    qemuBlockJobStartupFinalize(vm, job);
+    return ret;
+}
+
+
 /* Discard one snapshot (or its metadata), without reparenting any children.
  */
 static int
 qemuSnapshotDiscardImpl(virQEMUDriver *driver,
@@ -3750,14 +3878,24 @@ qemuSnapshotDiscardImpl(virQEMUDriver *driver,
                 if (qemuSnapshotDiscardExternal(vm, snap, externalData) < 0)
                     return -1;
             } else {
+                virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+                qemuDomainObjPrivate *priv = vm->privateData;
+
                 /* Similarly as internal snapshot creation we would use a regular job
                  * here so set a mask to forbid any other job. */
                 qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE);
-                if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
-                    return -1;
-                /* we continue on even in the face of error */
-                qemuMonitorDeleteSnapshot(qemuDomainGetMonitor(vm), snap->def->name);
-                qemuDomainObjExitMonitor(vm);
+
+                if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_INTERNAL_QMP)) {
+                    if (qemuSnapshotDiscardActiveInternal(vm, snapdef) < 0)
+                        return -1;
+                } else {
+                    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+                        return -1;
+                    /* we continue on even in the face of error */
+                    qemuMonitorDeleteSnapshot(qemuDomainGetMonitor(vm), snap->def->name);
+                    qemuDomainObjExitMonitor(vm);
+                }
+
                 qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_DEFAULT_MASK);
             }
         }
-- 
2.46.0