From: Or Ozeri
To: libvir-list@redhat.com
Cc: oro@il.ibm.com, dannyh@il.ibm.com
Subject: [PATCH v2 3/6] qemu: Add internal support for active disk internal snapshots
Date: Wed, 15 Feb 2023 05:28:19 -0600
Message-Id: <20230215112822.1887694-4-oro@il.ibm.com>
In-Reply-To: <20230215112822.1887694-1-oro@il.ibm.com>
References: <20230215112822.1887694-1-oro@il.ibm.com>
libvirt supports taking external disk snapshots of a running VM using
qemu's "blockdev-snapshot" command. qemu also provides
"blockdev-snapshot-internal-sync", which does the same for internal
snapshots. This commit wraps that (long-available) qemu capability so
that future libvirt callers can take internal disk snapshots of a
running VM.

This will only work for disk types that support internal snapshots, so
we require the disk type to be part of a whitelist of known types. For
this commit, the list of supported formats is empty; an upcoming commit
will allow RBD disks to use this new capability.

Signed-off-by: Or Ozeri
---
 src/qemu/qemu_monitor.c         |  9 +++
 src/qemu/qemu_monitor.h         |  5 ++
 src/qemu/qemu_monitor_json.c    | 14 ++++
 src/qemu/qemu_monitor_json.h    |  5 ++
 src/qemu/qemu_snapshot.c        | 72 ++++++++++++-------
 .../disk_snapshot.xml           |  2 +-
 .../disk_snapshot.xml           |  2 +-
 .../disk_snapshot_redefine.xml  |  2 +-
 tests/qemumonitorjsontest.c     |  1 +
 9 files changed, 82 insertions(+), 30 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 38f89167e0..f6dab34243 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4225,6 +4225,15 @@ qemuMonitorTransactionSnapshotBlockdev(virJSONValue *actions,
 }
 
 
+int
+qemuMonitorTransactionInternalSnapshotBlockdev(virJSONValue *actions,
+                                               const char *device,
+                                               const char *name)
+{
+    return qemuMonitorJSONTransactionInternalSnapshotBlockdev(actions, device, name);
+}
+
+
 int
 qemuMonitorTransactionBackup(virJSONValue *actions,
                              const char *device,
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 2d16214ba2..1bfd1ccbc2 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1411,6 +1411,11 @@ qemuMonitorTransactionSnapshotBlockdev(virJSONValue *actions,
                                        const char *node,
                                        const char *overlay);
 
+int
+qemuMonitorTransactionInternalSnapshotBlockdev(virJSONValue *actions,
+                                               const char *device,
+                                               const char *name);
+
 typedef enum {
     QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE = 0,
     QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index db99017555..002a6caa52 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8307,6 +8307,20 @@ qemuMonitorJSONTransactionSnapshotBlockdev(virJSONValue *actions,
                                          NULL);
 }
 
+
+int
+qemuMonitorJSONTransactionInternalSnapshotBlockdev(virJSONValue *actions,
+                                                   const char *device,
+                                                   const char *name)
+{
+    return qemuMonitorJSONTransactionAdd(actions,
+                                         "blockdev-snapshot-internal-sync",
+                                         "s:device", device,
+                                         "s:name", name,
+                                         NULL);
+}
+
+
 VIR_ENUM_DECL(qemuMonitorTransactionBackupSyncMode);
 VIR_ENUM_IMPL(qemuMonitorTransactionBackupSyncMode,
               QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_LAST,
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 6f376cf9b7..313004f327 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -779,6 +779,11 @@ qemuMonitorJSONTransactionSnapshotBlockdev(virJSONValue *actions,
                                            const char *node,
                                            const char *overlay);
 
+int
+qemuMonitorJSONTransactionInternalSnapshotBlockdev(virJSONValue *actions,
+                                                   const char *device,
+                                                   const char *name);
+
 int
 qemuMonitorJSONTransactionBackup(virJSONValue *actions,
                                  const char *device,
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index c1855b3028..e82352ba7d 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -736,7 +736,7 @@ qemuSnapshotPrepare(virDomainObj *vm,
     }
 
     /* disk snapshot requires at least one disk */
-    if (def->state == VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT && !external) {
+    if (def->state == VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT && !external && !found_internal) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("disk-only snapshots require at least "
                          "one disk to be selected for snapshot"));
@@ -852,6 +852,7 @@ qemuSnapshotDiskCleanup(qemuSnapshotDiskData *data,
 struct _qemuSnapshotDiskContext {
     qemuSnapshotDiskData *dd;
     size_t ndd;
+    bool has_internal;
 
     virJSONValue *actions;
 
@@ -1070,17 +1071,17 @@ qemuSnapshotDiskPrepareOne(qemuSnapshotDiskContext *snapctxt,
 
 
 /**
- * qemuSnapshotDiskPrepareActiveExternal:
+ * qemuSnapshotDiskPrepareActive:
  *
  * Collects and prepares a list of structures that hold information about disks
  * that are selected for the snapshot.
  */
 static qemuSnapshotDiskContext *
-qemuSnapshotDiskPrepareActiveExternal(virDomainObj *vm,
-                                      virDomainMomentObj *snap,
-                                      bool reuse,
-                                      GHashTable *blockNamedNodeData,
-                                      virDomainAsyncJob asyncJob)
+qemuSnapshotDiskPrepareActive(virDomainObj *vm,
+                              virDomainMomentObj *snap,
+                              bool reuse,
+                              GHashTable *blockNamedNodeData,
+                              virDomainAsyncJob asyncJob)
 {
     g_autoptr(qemuSnapshotDiskContext) snapctxt = NULL;
     size_t i;
@@ -1089,16 +1090,33 @@ qemuSnapshotDiskPrepareActive(virDomainObj *vm,
     snapctxt = qemuSnapshotDiskContextNew(snapdef->ndisks, vm, asyncJob);
 
     for (i = 0; i < snapdef->ndisks; i++) {
-        if (snapdef->disks[i].snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL)
-            continue;
+        switch (snapdef->disks[i].snapshot) {
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL: {
+            if (qemuSnapshotDiskPrepareOne(snapctxt,
+                                           vm->def->disks[i],
+                                           snapdef->disks + i,
+                                           blockNamedNodeData,
+                                           reuse,
+                                           true) < 0)
+                return NULL;
+            break;
+        }
 
-        if (qemuSnapshotDiskPrepareOne(snapctxt,
-                                       vm->def->disks[i],
-                                       snapdef->disks + i,
-                                       blockNamedNodeData,
-                                       reuse,
-                                       true) < 0)
-            return NULL;
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL: {
+            snapctxt->has_internal = true;
+            if (qemuMonitorTransactionInternalSnapshotBlockdev(snapctxt->actions,
+                                                               vm->def->disks[i]->src->nodeformat,
+                                                               snapdef->disks[i].snapshot_name) < 0)
+                return NULL;
+            break;
+        }
+
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_DEFAULT:
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_NO:
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_MANUAL:
+        case VIR_DOMAIN_SNAPSHOT_LOCATION_LAST:
+            continue;
+        }
     }
 
     return g_steal_pointer(&snapctxt);
@@ -1182,7 +1200,7 @@ qemuSnapshotDiskCreate(qemuSnapshotDiskContext *snapctxt)
     int rc;
 
     /* check whether there's anything to do */
-    if (snapctxt->ndd == 0)
+    if (snapctxt->ndd == 0 && !snapctxt->has_internal)
         return 0;
 
     if (qemuDomainObjEnterMonitorAsync(snapctxt->vm, snapctxt->asyncJob) < 0)
@@ -1215,11 +1233,11 @@ qemuSnapshotDiskCreate(qemuSnapshotDiskContext *snapctxt)
 
 /* The domain is expected to be locked and active. */
 static int
-qemuSnapshotCreateActiveExternalDisks(virDomainObj *vm,
-                                      virDomainMomentObj *snap,
-                                      GHashTable *blockNamedNodeData,
-                                      unsigned int flags,
-                                      virDomainAsyncJob asyncJob)
+qemuSnapshotCreateActiveDisks(virDomainObj *vm,
+                              virDomainMomentObj *snap,
+                              GHashTable *blockNamedNodeData,
+                              unsigned int flags,
+                              virDomainAsyncJob asyncJob)
 {
     bool reuse = (flags & VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) != 0;
     g_autoptr(qemuSnapshotDiskContext) snapctxt = NULL;
@@ -1229,8 +1247,8 @@ qemuSnapshotCreateActiveDisks(virDomainObj *vm,
 
     /* prepare a list of objects to use in the vm definition so that we don't
      * have to roll back later */
-    if (!(snapctxt = qemuSnapshotDiskPrepareActiveExternal(vm, snap, reuse,
-                                                           blockNamedNodeData, asyncJob)))
+    if (!(snapctxt = qemuSnapshotDiskPrepareActive(vm, snap, reuse,
+                                                   blockNamedNodeData, asyncJob)))
         return -1;
 
     if (qemuSnapshotDiskCreate(snapctxt) < 0)
@@ -1370,9 +1388,9 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
 
     /* the domain is now paused if a memory snapshot was requested */
 
-    if ((ret = qemuSnapshotCreateActiveExternalDisks(vm, snap,
-                                                     blockNamedNodeData, flags,
-                                                     VIR_ASYNC_JOB_SNAPSHOT)) < 0)
+    if ((ret = qemuSnapshotCreateActiveDisks(vm, snap,
+                                             blockNamedNodeData, flags,
+                                             VIR_ASYNC_JOB_SNAPSHOT)) < 0)
         goto cleanup;
 
     /* the snapshot is complete now */
diff --git a/tests/qemudomainsnapshotxml2xmlin/disk_snapshot.xml b/tests/qemudomainsnapshotxml2xmlin/disk_snapshot.xml
index cf5ea0814e..87b6251a7f 100644
--- a/tests/qemudomainsnapshotxml2xmlin/disk_snapshot.xml
+++ b/tests/qemudomainsnapshotxml2xmlin/disk_snapshot.xml
@@ -4,7 +4,7 @@
-
+
diff --git a/tests/qemudomainsnapshotxml2xmlout/disk_snapshot.xml b/tests/qemudomainsnapshotxml2xmlout/disk_snapshot.xml
index 76c543d25c..6cf93183d5 100644
--- a/tests/qemudomainsnapshotxml2xmlout/disk_snapshot.xml
+++ b/tests/qemudomainsnapshotxml2xmlout/disk_snapshot.xml
@@ -5,7 +5,7 @@
-
+
diff --git a/tests/qemudomainsnapshotxml2xmlout/disk_snapshot_redefine.xml b/tests/qemudomainsnapshotxml2xmlout/disk_snapshot_redefine.xml
index 24b41ba7c5..f574793edf 100644
--- a/tests/qemudomainsnapshotxml2xmlout/disk_snapshot_redefine.xml
+++ b/tests/qemudomainsnapshotxml2xmlout/disk_snapshot_redefine.xml
@@ -10,7 +10,7 @@
-
+
diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
index 1db1f2b949..1269c74e43 100644
--- a/tests/qemumonitorjsontest.c
+++ b/tests/qemumonitorjsontest.c
@@ -2587,6 +2587,7 @@ testQemuMonitorJSONTransaction(const void *opaque)
         qemuMonitorTransactionBitmapDisable(actions, "node4", "bitmap4") < 0 ||
         qemuMonitorTransactionBitmapMerge(actions, "node5", "bitmap5", &mergebitmaps) < 0 ||
         qemuMonitorTransactionSnapshotBlockdev(actions, "node7", "overlay7") < 0 ||
+        qemuMonitorTransactionInternalSnapshotBlockdev(actions, "device1", "snapshot1") < 0 ||
         qemuMonitorTransactionBackup(actions, "dev8", "job8", "target8", "bitmap8",
                                      QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE) < 0 ||
         qemuMonitorTransactionBackup(actions, "dev9", "job9", "target9", "bitmap9",
-- 
2.25.1
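For reviewers who want to see what this produces on the wire: the new helper contributes one "blockdev-snapshot-internal-sync" action to a QMP "transaction" command. A minimal Python sketch of the resulting payload follows; the "device1"/"snapshot1" values mirror the qemumonitorjsontest.c test case above and are illustrative, not a real block node name.

```python
import json

def internal_snapshot_action(device: str, name: str) -> dict:
    """Build one 'blockdev-snapshot-internal-sync' transaction action,
    mirroring what qemuMonitorJSONTransactionAdd() emits for this command."""
    return {
        "type": "blockdev-snapshot-internal-sync",
        "data": {"device": device, "name": name},
    }

# The full QMP command that qemuSnapshotDiskCreate() ultimately issues is a
# "transaction" whose actions array carries entries like the one above.
transaction = {
    "execute": "transaction",
    "arguments": {"actions": [internal_snapshot_action("device1", "snapshot1")]},
}

print(json.dumps(transaction, sort_keys=True))
```

This is why `has_internal` is needed in `_qemuSnapshotDiskContext`: internal-snapshot actions go straight into `snapctxt->actions` without creating a `qemuSnapshotDiskData` entry, so `ndd == 0` alone no longer means the transaction is empty.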