From: Peter Krempa <pkrempa@redhat.com>
To: devel@lists.libvirt.org
Subject: [PATCH 4/9] qemuSnapshotForEachQcow2: Refactor
Date: Fri, 15 Nov 2024 09:39:25 +0100

Refactor the function to avoid the recursive call used for rollback and to
simplify the calling parameters.

To achieve that, most of the fatal checks are extracted into a dedicated
loop that runs before the disk state is modified, which removes the need
to roll back those failures altogether. Rollback is still necessary when
creation of the snapshot fails half-way through, so it is extracted into a
separate path that handles only that scenario.

Additionally, callers passed the old 'try_all' argument as true in all
modes except creation ("-c"), so it can be inferred from the operation
instead of being passed as an extra argument.

This refactor will also make it much simpler to implement handling of the
NVRAM pflash backing file (in case it's qcow2) for internal snapshots.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_snapshot.c | 141 ++++++++++++++++++++++++---------------
 1 file changed, 88 insertions(+), 53 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 222afe62e1..f560cf270c 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -249,86 +249,121 @@ qemuSnapshotCreateQcow2Files(virDomainDef *def,
 }
 
 
-/* The domain is expected to be locked and inactive. Return -1 on normal
- * failure, 1 if we skipped a disk due to try_all. */
 static int
-qemuSnapshotForEachQcow2Internal(virDomainDef *def,
-                                 virDomainMomentObj *snap,
-                                 const char *op,
-                                 bool try_all,
-                                 int ndisks)
+qemuSnapshotForEachQcow2One(virStorageSource *src,
+                            const char *op,
+                            const char *snapname)
+{
+    g_autoptr(virCommand) cmd = NULL;
+
+    cmd = virCommandNewArgList("qemu-img", "snapshot",
+                               op, snapname, src->path, NULL);
+
+    if (virCommandRun(cmd, NULL) < 0)
+        return -1;
+
+    return 0;
+}
+
+
+/**
+ * qemuSnapshotForEachQcow2:
+ *
+ * @def: domain definition
+ * @snap: snapshot object
+ * @op: 'qemu-img snapshot' operation flag, one of "-c", "-d", "-a"
+ *
+ * Applies the selected 'qemu-img snapshot' operation @op on all relevant QCOW2
+ * images of @def. When @op is "-c" (create) any failure is fatal and rolled
+ * back. Otherwise the selected operation @op is applied on all images
+ * regardless of failure.
+ *
+ * Returns: -1 on error; 0 on complete success; 1 on partial success in
+ *          permissive modes.
+ */
+static int
+qemuSnapshotForEachQcow2(virDomainDef *def,
+                         virDomainMomentObj *snap,
+                         const char *op)
 {
     virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
     size_t i;
     bool skipped = false;
+    bool create = STREQ(op, "-c");
+    size_t nrollback = -1;
+    virErrorPtr orig_err;
 
-    for (i = 0; i < ndisks; i++) {
-        g_autoptr(virCommand) cmd = virCommandNewArgList("qemu-img", "snapshot",
-                                                         op, snap->def->name, NULL);
-        int format = virDomainDiskGetFormat(def->disks[i]);
+    /* pre-checks */
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainDiskDef *disk = def->disks[i];
 
-        /* FIXME: we also need to handle LVM here */
         if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK ||
             snapdef->disks[i].snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL)
             continue;
 
-        if (!virStorageSourceIsLocalStorage(def->disks[i]->src)) {
+        if (!virStorageSourceIsLocalStorage(disk->src)) {
             virReportError(VIR_ERR_OPERATION_INVALID,
                            _("can't manipulate inactive snapshots of disk '%1$s'"),
-                           def->disks[i]->dst);
+                           disk->dst);
             return -1;
         }
 
-        if (format > 0 && format != VIR_STORAGE_FILE_QCOW2) {
-            if (try_all) {
-                /* Continue on even in the face of error, since other
-                 * disks in this VM may have the same snapshot name.
-                 */
-                VIR_WARN("skipping snapshot action on %s",
-                         def->disks[i]->dst);
-                skipped = true;
-                continue;
-            } else if (STREQ(op, "-c") && i) {
-                /* We must roll back partial creation by deleting
-                 * all earlier snapshots. */
-                qemuSnapshotForEachQcow2Internal(def, snap, "-d", false, i);
-            }
+        if (create &&
+            disk->src->format > VIR_STORAGE_FILE_NONE &&
+            disk->src->format != VIR_STORAGE_FILE_QCOW2) {
             virReportError(VIR_ERR_OPERATION_INVALID,
                            _("Disk device '%1$s' does not support snapshotting"),
-                           def->disks[i]->dst);
+                           disk->dst);
             return -1;
         }
+    }
+
+    for (i = 0; i < def->ndisks; i++) {
+        virDomainDiskDef *disk = def->disks[i];
+
+        if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK ||
+            snapdef->disks[i].snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL)
+            continue;
 
-        virCommandAddArg(cmd, virDomainDiskGetSource(def->disks[i]));
+        if (disk->src->format > VIR_STORAGE_FILE_NONE &&
+            disk->src->format != VIR_STORAGE_FILE_QCOW2) {
+            VIR_WARN("skipping 'qemu-img snapshot %s' action on non-qcow2 disk '%s'",
+                     op, disk->dst);
+            skipped = true;
+            continue;
+        }
 
-        if (virCommandRun(cmd, NULL) < 0) {
-            if (try_all) {
-                VIR_WARN("skipping snapshot action on %s",
-                         def->disks[i]->dst);
+        if (qemuSnapshotForEachQcow2One(disk->src, op, snap->def->name) < 0) {
+            if (create) {
+                nrollback = i;
+                virErrorPreserveLast(&orig_err);
+                goto rollback;
+            } else {
+                VIR_WARN("failed 'qemu-img snapshot %s' action on '%s'",
+                         op, disk->dst);
                 skipped = true;
-                continue;
-            } else if (STREQ(op, "-c") && i) {
-                /* We must roll back partial creation by deleting
-                 * all earlier snapshots. */
-                qemuSnapshotForEachQcow2Internal(def, snap, "-d", false, i);
+                virResetLastError();
             }
-            return -1;
         }
     }
 
     return skipped ? 1 : 0;
-}
 
+ rollback:
+    for (i = 0; i < nrollback; i++) {
+        virDomainDiskDef *disk = def->disks[i];
 
-/* The domain is expected to be locked and inactive. Return -1 on normal
- * failure, 1 if we skipped a disk due to try_all.
- */
-static int
-qemuSnapshotForEachQcow2(virDomainDef *def,
-                         virDomainMomentObj *snap,
-                         const char *op,
-                         bool try_all)
-{
-    return qemuSnapshotForEachQcow2Internal(def, snap, op, try_all, def->ndisks);
+        if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK ||
+            snapdef->disks[i].snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL ||
+            (disk->src->format > VIR_STORAGE_FILE_NONE &&
+             disk->src->format != VIR_STORAGE_FILE_QCOW2))
+            continue;
+
+        ignore_value(qemuSnapshotForEachQcow2One(disk->src, "-d", snap->def->name));
+    }
+
+    virErrorRestore(&orig_err);
+    return -1;
 }
 
 
@@ -337,7 +372,7 @@ static int
 qemuSnapshotCreateInactiveInternal(virDomainObj *vm,
                                    virDomainMomentObj *snap)
 {
-    return qemuSnapshotForEachQcow2(vm->def, snap, "-c", false);
+    return qemuSnapshotForEachQcow2(vm->def, snap, "-c");
 }
 
 
@@ -2634,7 +2669,7 @@ qemuSnapshotInternalRevertInactive(virDomainObj *vm,
     }
 
     /* Try all disks, but report failure if we skipped any. */
-    if (qemuSnapshotForEachQcow2(def, snap, "-a", true) != 0)
+    if (qemuSnapshotForEachQcow2(def, snap, "-a") != 0)
         return -1;
 
     return 0;
@@ -4002,7 +4037,7 @@ qemuSnapshotDiscardImpl(virDomainObj *vm,
             if (qemuSnapshotDiscardExternal(vm, snap, externalData) < 0)
                 return -1;
         } else {
-            if (qemuSnapshotForEachQcow2(def, snap, "-d", true) < 0)
+            if (qemuSnapshotForEachQcow2(def, snap, "-d") < 0)
                 return -1;
         }
     } else {
-- 
2.47.0
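
For reviewers who want to see what the refactored helper ultimately runs per
image: qemuSnapshotForEachQcow2One() builds a 'qemu-img snapshot <op> <name>
<path>' command via virCommand. The standalone sketch below is not libvirt
code -- it substitutes plain fork()/execlp() for virCommand and uses a
made-up image path -- but it illustrates the three operation flags the
wrapper accepts ("-c" create, "-a" apply/revert, "-d" delete) being issued
against a single qcow2 file:

/* Minimal standalone sketch of the 'qemu-img snapshot' invocations that
 * qemuSnapshotForEachQcow2One() wraps via virCommand.  Not libvirt code:
 * plain fork()/execlp() is used instead, and the image path is invented. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int
run_qemu_img_snapshot(const char *op, const char *snapname, const char *path)
{
    pid_t pid = fork();

    if (pid < 0)
        return -1;

    if (pid == 0) {
        /* child: exec 'qemu-img snapshot <op> <snapname> <path>' */
        execlp("qemu-img", "qemu-img", "snapshot", op, snapname, path,
               (char *) NULL);
        _exit(127);
    }

    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;

    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}

int
main(void)
{
    const char *image = "/var/lib/libvirt/images/demo.qcow2"; /* hypothetical */

    /* "-c" creates the internal snapshot, "-a" reverts to it, "-d" deletes
     * it; these are the three flags qemuSnapshotForEachQcow2() accepts. */
    if (run_qemu_img_snapshot("-c", "snap1", image) < 0)
        fprintf(stderr, "creating snapshot failed\n");
    if (run_qemu_img_snapshot("-a", "snap1", image) < 0)
        fprintf(stderr, "reverting to snapshot failed\n");
    if (run_qemu_img_snapshot("-d", "snap1", image) < 0)
        fprintf(stderr, "deleting snapshot failed\n");

    return 0;
}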