From: Pavel Hrdina
To: libvir-list@redhat.com
Subject: [libvirt PATCH v2 21/24] qemu_snapshot: update backing store after deleting external snapshot
Date: Tue, 27 Jun 2023 17:07:24 +0200

With the introduction of external snapshot revert we will have to
manually update the backing store of qcow2 images that are not actively
used by QEMU. The need for this patch comes from the fact that we stop
and start the QEMU process, so after reverting to a non-leaf snapshot,
or when there are multiple branches, not all existing snapshots will be
known to that QEMU process. We need to loop over all existing snapshots
and check all disks to see whether they have the image we are deleting
as their backing store, and update them to point to the new image,
except for images currently used by the running QEMU process doing the
merge operation.
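Concretely, the update is delegated to qemu-img. As a rough sketch with
hypothetical image names, for each affected image the new code below
effectively runs:

    qemu-img rebase -u -F qcow2 -f qcow2 -b new-parent.qcow2 overlay.qcow2

The -u (unsafe) mode only rewrites the backing-file reference in the
qcow2 header without copying any data, which should be sufficient here
because the merge has already folded the deleted image's contents into
its parent.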
Signed-off-by: Pavel Hrdina
---
 src/qemu/qemu_snapshot.c | 95 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 2950ad7d77..337c83f151 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2530,6 +2530,8 @@ typedef struct _qemuSnapshotDeleteExternalData {
     virStorageSource *parentDiskSrc; /* backing disk source of the @diskSrc */
     virStorageSource *prevDiskSrc; /* source of disk for which @diskSrc
                                       is backing disk */
+    GSList *disksWithBacking; /* list of storage sources for which the
+                                 deleted storage source is the backing store */
     qemuBlockJobData *job;
     bool merge;
 } qemuSnapshotDeleteExternalData;
@@ -2542,6 +2544,7 @@ qemuSnapshotDeleteExternalDataFree(qemuSnapshotDeleteExternalData *data)
         return;
 
     virObjectUnref(data->job);
+    g_slist_free(data->disksWithBacking);
 
     g_free(data);
 }
@@ -2591,6 +2594,60 @@ qemuSnapshotFindParentSnapForDisk(virDomainMomentObj *snap,
 }
 
 
+struct _qemuSnapshotDisksWithBackingStoreData {
+    virDomainMomentObj *current;
+    virStorageSource *diskSrc;
+    GSList **disksWithBacking;
+};
+
+
+static int
+qemuSnapshotDiskHasBackingDisk(void *payload,
+                               const char *name G_GNUC_UNUSED,
+                               void *opaque)
+{
+    virDomainMomentObj *snap = payload;
+    virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+    struct _qemuSnapshotDisksWithBackingStoreData *data = opaque;
+    ssize_t i;
+
+    /* skip snapshots that are within the active snapshot tree as they
+     * will be handled by QEMU */
+    if (virDomainMomentIsAncestor(data->current, snap) || data->current == snap)
+        return 0;
+
+    for (i = 0; i < snapdef->parent.dom->ndisks; i++) {
+        virDomainDiskDef *disk = snapdef->parent.dom->disks[i];
+
+        if (!virStorageSourceIsLocalStorage(disk->src))
+            continue;
+
+        if (!disk->src->backingStore)
+            ignore_value(virStorageSourceGetMetadata(disk->src, -1, -1, 1, false));
+
+        if (virStorageSourceIsSameLocation(disk->src->backingStore, data->diskSrc))
+            *data->disksWithBacking = g_slist_prepend(*data->disksWithBacking, disk->src);
+    }
+
+    return 0;
+}
+
+
+static void
+qemuSnapshotGetDisksWithBackingStore(virDomainObj *vm,
+                                     virDomainMomentObj *snap,
+                                     qemuSnapshotDeleteExternalData *data)
+{
+    struct _qemuSnapshotDisksWithBackingStoreData iterData;
+
+    iterData.current = virDomainSnapshotGetCurrent(vm->snapshots);
+    iterData.diskSrc = data->diskSrc;
+    iterData.disksWithBacking = &data->disksWithBacking;
+
+    virDomainMomentForEachDescendant(snap, qemuSnapshotDiskHasBackingDisk, &iterData);
+}
+
+
 /**
  * qemuSnapshotDeleteExternalPrepareData:
  * @vm: domain object
@@ -2682,6 +2739,8 @@ qemuSnapshotDeleteExternalPrepareData(virDomainObj *vm,
                                _("snapshot VM disk source and parent disk source are not the same"));
                 return -1;
             }
+
+            qemuSnapshotGetDisksWithBackingStore(vm, snap, data);
         }
 
         data->parentSnap = qemuSnapshotFindParentSnapForDisk(snap, data->snapDisk);
@@ -3063,6 +3122,40 @@ qemuSnapshotSetInvalid(virDomainObj *vm,
 }
 
 
+static void
+qemuSnapshotUpdateBackingStore(virDomainObj *vm,
+                               qemuSnapshotDeleteExternalData *data)
+{
+    GSList *cur = NULL;
+    const char *qemuImgPath;
+    virQEMUDriver *driver = QEMU_DOMAIN_PRIVATE(vm)->driver;
+
+    if (!(qemuImgPath = qemuFindQemuImgBinary(driver)))
+        return;
+
+    for (cur = data->disksWithBacking; cur; cur = g_slist_next(cur)) {
+        virStorageSource *diskSrc = cur->data;
+        g_autoptr(virCommand) cmd = NULL;
+
+        /* builds cmd line args: qemu-img rebase -u -F <parent format>
+         * -f <format> -b <parent path> <path> */
+        if (!(cmd = virCommandNewArgList(qemuImgPath,
+                                         "rebase",
+                                         "-u",
+                                         "-F",
+                                         virStorageFileFormatTypeToString(data->parentDiskSrc->format),
+                                         "-f",
+                                         virStorageFileFormatTypeToString(diskSrc->format),
+                                         "-b",
+                                         data->parentDiskSrc->path,
+                                         diskSrc->path,
+                                         NULL)))
+            continue;
+
+        ignore_value(virCommandRun(cmd, NULL));
+    }
+}
+
+
 static int
 qemuSnapshotDiscardExternal(virDomainObj *vm,
                             virDomainMomentObj *snap,
@@ -3149,6 +3242,8 @@ qemuSnapshotDiscardExternal(virDomainObj *vm,
 
         qemuBlockJobSyncEnd(vm, data->job, VIR_ASYNC_JOB_SNAPSHOT);
 
+        qemuSnapshotUpdateBackingStore(vm, data);
+
         if (qemuSnapshotSetInvalid(vm, data->parentSnap, data->snapDisk, false) < 0)
             goto error;
     }
-- 
2.41.0
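
As a follow-up check with the same hypothetical image names as above,
the rewritten chain can be inspected outside libvirt with:

    qemu-img info --backing-chain overlay.qcow2

which prints every image in the backing chain and should now list the
deleted image's parent as the immediate backing file.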