From: Pavel Hrdina
To: libvir-list@redhat.com
Subject: [libvirt PATCH v3 22/25] qemu_snapshot: update backing store after deleting external snapshot
Date: Mon, 14 Aug 2023 11:36:14 +0200
Message-ID: <398d88dc84373b1173b435266c9dc6677c352578.1692005543.git.phrdina@redhat.com>
List-Id: Development discussions about the libvirt library & tools

With the introduction of external snapshot revert we will have to manually
update the backing store of qcow2 images that are not actively used by QEMU.
This is necessary because we stop and start the QEMU process, so after a
revert not all existing snapshots will be known to that QEMU process, whether
because we reverted to a non-leaf snapshot or because the snapshot tree has
multiple branches.

Loop over all existing snapshots and check every disk to see whether it
happens to have the image we are deleting as its backing store, and update it
to point to the new image. Images currently used by the running QEMU process
that performs the merge operation are skipped, as QEMU updates those itself.
Signed-off-by: Pavel Hrdina
---
 src/qemu/qemu_snapshot.c | 122 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 0238ab2249..8d0581d33b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2588,6 +2588,8 @@ typedef struct _qemuSnapshotDeleteExternalData {
     virStorageSource *parentDiskSrc; /* backing disk source of the @diskSrc */
     virStorageSource *prevDiskSrc; /* source of disk for which @diskSrc
                                       is backing disk */
+    GSList *disksWithBacking; /* list of storage source data for which the
+                                 deleted storage source is backing store */
     qemuBlockJobData *job;
     bool merge;
 } qemuSnapshotDeleteExternalData;
@@ -2600,6 +2602,7 @@ qemuSnapshotDeleteExternalDataFree(qemuSnapshotDeleteExternalData *data)
         return;
 
     virObjectUnref(data->job);
+    g_slist_free_full(data->disksWithBacking, g_free);
 
     g_free(data);
 }
@@ -2649,6 +2652,84 @@ qemuSnapshotFindParentSnapForDisk(virDomainMomentObj *snap,
 }
 
 
+struct _qemuSnapshotDisksWithBackingStoreData {
+    virStorageSource *diskSrc;
+    uid_t uid;
+    gid_t gid;
+};
+
+
+struct _qemuSnapshotDisksWithBackingStoreIterData {
+    virDomainMomentObj *current;
+    virStorageSource *diskSrc;
+    GSList **disksWithBacking;
+    virQEMUDriverConfig *cfg;
+};
+
+
+static int
+qemuSnapshotDiskHasBackingDisk(void *payload,
+                               const char *name G_GNUC_UNUSED,
+                               void *opaque)
+{
+    virDomainMomentObj *snap = payload;
+    virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+    struct _qemuSnapshotDisksWithBackingStoreIterData *iterdata = opaque;
+    ssize_t i;
+
+    /* skip snapshots that are within the active snapshot tree as it will
+     * be handled by qemu */
+    if (virDomainMomentIsAncestor(iterdata->current, snap) || iterdata->current == snap)
+        return 0;
+
+    for (i = 0; i < snapdef->parent.dom->ndisks; i++) {
+        virDomainDiskDef *disk = snapdef->parent.dom->disks[i];
+        uid_t uid;
+        gid_t gid;
+
+        if (!virStorageSourceIsLocalStorage(disk->src))
+            continue;
+
+        qemuDomainGetImageIds(iterdata->cfg, snapdef->parent.dom, disk->src,
+                              NULL, &uid, &gid);
+
+        if (!disk->src->backingStore)
+            ignore_value(virStorageSourceGetMetadata(disk->src, uid, gid, 1, false));
+
+        if (virStorageSourceIsSameLocation(disk->src->backingStore, iterdata->diskSrc)) {
+            struct _qemuSnapshotDisksWithBackingStoreData *data =
+                g_new0(struct _qemuSnapshotDisksWithBackingStoreData, 1);
+
+            data->diskSrc = disk->src;
+            data->uid = uid;
+            data->gid = gid;
+
+            *iterdata->disksWithBacking = g_slist_prepend(*iterdata->disksWithBacking, data);
+        }
+    }
+
+    return 0;
+}
+
+
+static void
+qemuSnapshotGetDisksWithBackingStore(virDomainObj *vm,
+                                     virDomainMomentObj *snap,
+                                     qemuSnapshotDeleteExternalData *data)
+{
+    struct _qemuSnapshotDisksWithBackingStoreIterData iterData;
+    virQEMUDriver *driver = QEMU_DOMAIN_PRIVATE(vm)->driver;
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+
+    iterData.current = virDomainSnapshotGetCurrent(vm->snapshots);
+    iterData.diskSrc = data->diskSrc;
+    iterData.disksWithBacking = &data->disksWithBacking;
+    iterData.cfg = cfg;
+
+    virDomainMomentForEachDescendant(snap, qemuSnapshotDiskHasBackingDisk, &iterData);
+}
+
+
 /**
  * qemuSnapshotDeleteExternalPrepareData:
  * @vm: domain object
@@ -2733,6 +2814,8 @@ qemuSnapshotDeleteExternalPrepareData(virDomainObj *vm,
                                _("snapshot VM disk source and parent disk source are not the same"));
                 return -1;
             }
+
+            qemuSnapshotGetDisksWithBackingStore(vm, snap, data);
         }
 
         data->parentSnap = qemuSnapshotFindParentSnapForDisk(snap, data->snapDisk);
@@ -3129,6 +3212,43 @@ qemuSnapshotSetInvalid(virDomainObj *vm,
 }
 
 
+static void
+qemuSnapshotUpdateBackingStore(virDomainObj *vm,
+                               qemuSnapshotDeleteExternalData *data)
+{
+    GSList *cur = NULL;
+    const char *qemuImgPath;
+    virQEMUDriver *driver = QEMU_DOMAIN_PRIVATE(vm)->driver;
+
+    if (!(qemuImgPath = qemuFindQemuImgBinary(driver)))
+        return;
+
+    for (cur = data->disksWithBacking; cur; cur = g_slist_next(cur)) {
+        struct _qemuSnapshotDisksWithBackingStoreData *backingData = cur->data;
+        g_autoptr(virCommand) cmd = NULL;
+
+        /* creates cmd line args: qemu-img rebase -u -F <parent format>
+         * -f <image format> -b <parent path> <image path> */
+        if (!(cmd = virCommandNewArgList(qemuImgPath,
+                                         "rebase",
+                                         "-u",
+                                         "-F",
+                                         virStorageFileFormatTypeToString(data->parentDiskSrc->format),
+                                         "-f",
+                                         virStorageFileFormatTypeToString(backingData->diskSrc->format),
+                                         "-b",
+                                         data->parentDiskSrc->path,
+                                         backingData->diskSrc->path,
+                                         NULL)))
+            continue;
+
+        virCommandSetUID(cmd, backingData->uid);
+        virCommandSetGID(cmd, backingData->gid);
+
+        ignore_value(virCommandRun(cmd, NULL));
+    }
+}
+
+
 static int
 qemuSnapshotDiscardExternal(virDomainObj *vm,
                             virDomainMomentObj *snap,
@@ -3215,6 +3335,8 @@ qemuSnapshotDiscardExternal(virDomainObj *vm,
 
             qemuBlockJobSyncEnd(vm, data->job, VIR_ASYNC_JOB_SNAPSHOT);
 
+            qemuSnapshotUpdateBackingStore(vm, data);
+
             if (qemuSnapshotSetInvalid(vm, data->parentSnap, data->snapDisk, false) < 0)
                 goto error;
         }
-- 
2.41.0
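[Editor's note: the repointing logic the commit message describes can be sketched in isolation as below. This is a minimal standalone model, not libvirt code: `struct image` and `repoint_backing` are illustrative names, and the in-memory pointer update stands in for the on-disk `qemu-img rebase -u` that the patch actually runs.]

```c
#include <stddef.h>
#include <string.h>

/* Illustrative model of one disk image in a snapshot tree: each image
 * records the path of its backing file (NULL for a base image). */
struct image {
    const char *path;
    const char *backing;     /* path of the image's backing file */
    int in_active_chain;     /* handled by the running QEMU, so skipped */
};

/* After deleting the image 'deleted' (whose own backing file is 'parent'),
 * repoint every image that used 'deleted' as its backing store to 'parent'
 * instead -- except images in the active chain, which the running QEMU
 * process updates itself during the block job. Returns how many images
 * were repointed. */
static size_t
repoint_backing(struct image *imgs, size_t n,
                const char *deleted, const char *parent)
{
    size_t changed = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (imgs[i].in_active_chain)
            continue;
        if (imgs[i].backing && strcmp(imgs[i].backing, deleted) == 0) {
            /* in the patch this is 'qemu-img rebase -u ...' on disk */
            imgs[i].backing = parent;
            changed++;
        }
    }
    return changed;
}
```

In the patch itself the update happens on disk rather than in memory: for each collected image, `qemuSnapshotUpdateBackingStore()` runs `qemu-img rebase -u -F <parent format> -f <image format> -b <parent path> <image path>`, where `-u` performs an unsafe rebase (only the backing-file pointer in the qcow2 header is rewritten, no data is copied).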