From: Jim Fehlig <jfehlig@suse.com>
To: devel@lists.libvirt.org
Cc: farosas@suse.de
Subject: [PATCH V3 13/19] qemu: Add support for mapped-ram on restore
Date: Fri, 7 Feb 2025 11:27:24 -0700
Message-ID: <20250207183730.21686-14-jfehlig@suse.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250207183730.21686-1-jfehlig@suse.com>
References: <20250207183730.21686-1-jfehlig@suse.com>

Add support for the mapped-ram migration capability on restore.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/qemu/qemu_driver.c    | 27 +++++++++++++++++++-------
 src/qemu/qemu_migration.c | 12 ++++++------
 src/qemu/qemu_process.c   | 41 ++++++++++++++++++++++++++++-----------
 src/qemu/qemu_process.h   | 15 +++++++++-----
 src/qemu/qemu_saveimage.c | 29 ++++++++++++++++-----------
 src/qemu/qemu_saveimage.h |  2 ++
 src/qemu/qemu_snapshot.c  |  8 ++++----
 7 files changed, 90 insertions(+), 44 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f77516a4f4..0f363849ba 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1614,7 +1614,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
     }
 
     if (qemuProcessStart(conn, driver, vm, NULL, VIR_ASYNC_JOB_START,
-                         NULL, -1, NULL, NULL,
+                         NULL, -1, NULL, NULL, NULL,
                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                          start_flags) < 0) {
         virDomainAuditStart(vm, "booted", false);
@@ -5783,6 +5783,8 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     virFileWrapperFd *wrapperFd = NULL;
     bool hook_taint = false;
     bool reset_nvram = false;
+    bool sparse = false;
+    g_autoptr(qemuMigrationParams) restoreParams = NULL;
 
     virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE |
                   VIR_DOMAIN_SAVE_RUNNING |
@@ -5795,9 +5797,13 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data) < 0)
         goto cleanup;
 
+    sparse = data->header.format == QEMU_SAVE_FORMAT_SPARSE;
+    if (!(restoreParams = qemuMigrationParamsForSave(sparse)))
+        goto cleanup;
+
     fd = qemuSaveImageOpen(driver, path,
                            (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) != 0,
-                           &wrapperFd, false);
+                           sparse, &wrapperFd, false);
     if (fd < 0)
         goto cleanup;
 
@@ -5851,7 +5857,7 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_RESTORE, flags) < 0)
         goto cleanup;
 
-    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path,
+    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restoreParams,
                                false, reset_nvram, VIR_ASYNC_JOB_START);
 
     qemuProcessEndJob(vm);
@@ -5966,7 +5972,8 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path,
     if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data) < 0)
         goto cleanup;
 
-    fd = qemuSaveImageOpen(driver, path, 0, NULL, false);
+    fd = qemuSaveImageOpen(driver, path, 0, false, NULL, false);
+
     if (fd < 0)
         goto cleanup;
 
@@ -6105,6 +6112,8 @@ qemuDomainObjRestore(virConnectPtr conn,
     g_autofree char *xmlout = NULL;
     virQEMUSaveData *data = NULL;
     virFileWrapperFd *wrapperFd = NULL;
+    bool sparse = false;
+    g_autoptr(qemuMigrationParams) restoreParams = NULL;
 
     ret = qemuSaveImageGetMetadata(driver, NULL, path, &def, &data);
     if (ret < 0) {
@@ -6122,7 +6131,11 @@ qemuDomainObjRestore(virConnectPtr conn,
         goto cleanup;
     }
 
-    fd = qemuSaveImageOpen(driver, path, bypass_cache, &wrapperFd, false);
+    sparse = data->header.format == QEMU_SAVE_FORMAT_SPARSE;
+    if (!(restoreParams = qemuMigrationParamsForSave(sparse)))
+        return -1;
+
+    fd = qemuSaveImageOpen(driver, path, bypass_cache, sparse, &wrapperFd, false);
     if (fd < 0)
         goto cleanup;
 
@@ -6164,7 +6177,7 @@ qemuDomainObjRestore(virConnectPtr conn,
 
     virDomainObjAssignDef(vm, &def, true, NULL);
 
-    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path,
+    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restoreParams,
                                start_paused, reset_nvram, asyncJob);
 
  cleanup:
@@ -6370,7 +6383,7 @@ qemuDomainObjStart(virConnectPtr conn,
     }
 
     ret = qemuProcessStart(conn, driver, vm, NULL, asyncJob,
-                           NULL, -1, NULL, NULL,
+                           NULL, -1, NULL, NULL, NULL,
                            VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags);
     virDomainAuditStart(vm, "booted", ret >= 0);
     if (ret >= 0) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index bd46143717..9665b523af 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3068,9 +3068,8 @@ qemuMigrationDstPrepare(virDomainObj *vm,
                         const char *protocol,
                         const char *listenAddress,
                         unsigned short port,
-                        int fd)
+                        int *fd)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     g_autofree char *migrateFrom = NULL;
 
     if (tunnel) {
@@ -3124,8 +3123,9 @@ qemuMigrationDstPrepare(virDomainObj *vm,
         migrateFrom = g_strdup_printf(incFormat, protocol, listenAddress, port);
     }
 
-    return qemuProcessIncomingDefNew(priv->qemuCaps, listenAddress,
-                                     migrateFrom, fd, NULL);
+    return qemuProcessIncomingDefNew(vm, listenAddress,
+                                     migrateFrom, fd,
+                                     NULL, NULL);
 }
 
 
@@ -3267,7 +3267,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver,
 
     if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
                                              listenAddress, port,
-                                             dataFD[0])))
+                                             &dataFD[0])))
         goto error;
 
     qemuMigrationDstPrepareDiskSeclabels(vm, migrate_disks, flags);
@@ -3638,7 +3638,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver,
     priv->origname = g_strdup(origname);
 
     if (!(incoming = qemuMigrationDstPrepare(vm, false, protocol,
-                                             listenAddress, port, -1)))
+                                             listenAddress, port, NULL)))
         goto cleanup;
 
     if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_IN) < 0)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 063033cb95..94e7e90d28 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4823,6 +4823,7 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
 
     g_free(inc->address);
     g_free(inc->uri);
+    qemuFDPassFree(inc->fdPassMigrate);
     g_free(inc);
 }
 
@@ -4836,26 +4837,38 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
  * qemuProcessIncomingDefFree will NOT close it.
  */
 qemuProcessIncomingDef *
-qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
+qemuProcessIncomingDefNew(virDomainObj *vm,
                           const char *listenAddress,
                           const char *migrateFrom,
-                          int fd,
-                          const char *path)
+                          int *fd,
+                          const char *path,
+                          virQEMUSaveData *data)
 {
+    qemuDomainObjPrivate *priv = vm->privateData;
     qemuProcessIncomingDef *inc = NULL;
 
-    if (qemuMigrationDstCheckProtocol(qemuCaps, migrateFrom) < 0)
+    if (qemuMigrationDstCheckProtocol(priv->qemuCaps, migrateFrom) < 0)
         return NULL;
 
     inc = g_new0(qemuProcessIncomingDef, 1);
 
     inc->address = g_strdup(listenAddress);
 
-    inc->uri = qemuMigrationDstGetURI(migrateFrom, fd);
+    if (data && data->header.format == QEMU_SAVE_FORMAT_SPARSE) {
+        size_t offset = sizeof(virQEMUSaveHeader) + data->header.data_len;
+
+        inc->fdPassMigrate = qemuFDPassNew("libvirt-incoming-migrate", priv);
+        qemuFDPassAddFD(inc->fdPassMigrate, fd, "-fd");
+        inc->uri = g_strdup_printf("file:%s,offset=%#lx",
+                                   qemuFDPassGetPath(inc->fdPassMigrate), offset);
+    } else {
+        inc->uri = qemuMigrationDstGetURI(migrateFrom, *fd);
+    }
+
     if (!inc->uri)
         goto error;
 
-    inc->fd = fd;
+    inc->fd = *fd;
     inc->path = path;
 
     return inc;
@@ -7899,8 +7912,11 @@ qemuProcessLaunch(virConnectPtr conn,
                                      &nnicindexes, &nicindexes)))
         goto cleanup;
 
-    if (incoming && incoming->fd != -1)
-        virCommandPassFD(cmd, incoming->fd, 0);
+    if (incoming) {
+        if (incoming->fd != -1)
+            virCommandPassFD(cmd, incoming->fd, 0);
+        qemuFDPassTransferCommand(incoming->fdPassMigrate, cmd);
+    }
 
     /* now that we know it is about to start call the hook if present */
     if (qemuProcessStartHook(driver, vm,
@@ -8319,6 +8335,7 @@ qemuProcessStart(virConnectPtr conn,
                  int migrateFd,
                  const char *migratePath,
                  virDomainMomentObj *snapshot,
+                 qemuMigrationParams *migParams,
                  virNetDevVPortProfileOp vmop,
                  unsigned int flags)
 {
@@ -8372,7 +8389,7 @@ qemuProcessStart(virConnectPtr conn,
         relabel = true;
 
     if (incoming) {
-        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, NULL, 0) < 0)
+        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, migParams, 0) < 0)
             goto stop;
     } else {
         /* Refresh state of devices from QEMU. During migration this happens
@@ -8426,6 +8443,7 @@ qemuProcessStart(virConnectPtr conn,
  * @path: path to memory state file
 * @snapshot: internal snapshot to load when starting QEMU process or NULL
 * @data: data from memory state file or NULL
+ * @migParams: Migration params to use on restore or NULL
 * @asyncJob: type of asynchronous job
 * @start_flags: flags to start QEMU process with
 * @reason: audit log reason
@@ -8452,6 +8470,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
                                 const char *path,
                                 virDomainMomentObj *snapshot,
                                 virQEMUSaveData *data,
+                                qemuMigrationParams *migParams,
                                 virDomainAsyncJob asyncJob,
                                 unsigned int start_flags,
                                 const char *reason,
@@ -8480,7 +8499,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
     /* The fd passed to qemuProcessIncomingDefNew is used to create the migration
      * URI, so it must be called after starting the decompression program.
      */
-    incoming = qemuProcessIncomingDefNew(priv->qemuCaps, NULL, "stdio", *fd, path);
+    incoming = qemuProcessIncomingDefNew(vm, NULL, "stdio", fd, path, data);
     if (!incoming)
         return -1;
 
@@ -8495,7 +8514,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
 
     if (qemuProcessStart(conn, driver, vm, cookie ? cookie->cpu : NULL,
                          asyncJob, incoming, *fd, path, snapshot,
-                         VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
+                         migParams, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
                          start_flags) == 0)
         *started = true;
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index a9e0a03a21..c51335ad7a 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -53,14 +53,17 @@ struct _qemuProcessIncomingDef {
     char *address; /* address where QEMU is supposed to listen */
     char *uri; /* used when calling migrate-incoming QMP command */
     int fd; /* for fd:N URI */
+    qemuFDPass *fdPassMigrate; /* for file:/dev/fdset/n,offset=x URI */
     const char *path; /* path associated with fd */
 };
 
-qemuProcessIncomingDef *qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
-                                                  const char *listenAddress,
-                                                  const char *migrateFrom,
-                                                  int fd,
-                                                  const char *path);
+qemuProcessIncomingDef *qemuProcessIncomingDefNew(virDomainObj *vm,
+                                                  const char *listenAddress,
+                                                  const char *migrateFrom,
+                                                  int *fd,
+                                                  const char *path,
+                                                  virQEMUSaveData *data);
+
 void qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc);
 
 int qemuProcessBeginJob(virDomainObj *vm,
@@ -87,6 +90,7 @@ int qemuProcessStart(virConnectPtr conn,
                     int stdin_fd,
                     const char *stdin_path,
                     virDomainMomentObj *snapshot,
+                    qemuMigrationParams *migParams,
                     virNetDevVPortProfileOp vmop,
                     unsigned int flags);
 
@@ -97,6 +101,7 @@ int qemuProcessStartWithMemoryState(virConnectPtr conn,
                                    const char *path,
                                    virDomainMomentObj *snapshot,
                                    virQEMUSaveData *data,
+                                   qemuMigrationParams *migParams,
                                    virDomainAsyncJob asyncJob,
                                    unsigned int start_flags,
                                    const char *reason,
diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
index 0ffbe03f24..3f20ffd58c 100644
--- a/src/qemu/qemu_saveimage.c
+++ b/src/qemu/qemu_saveimage.c
@@ -348,7 +348,8 @@ qemuSaveImageDecompressionStart(virQEMUSaveData *data,
     if (header->version != 2)
         return 0;
 
-    if (header->format == QEMU_SAVE_FORMAT_RAW)
+    if (header->format == QEMU_SAVE_FORMAT_RAW ||
+        header->format == QEMU_SAVE_FORMAT_SPARSE)
         return 0;
 
     if (!(cmd = qemuSaveImageGetCompressionCommand(header->format)))
@@ -697,6 +698,7 @@ qemuSaveImageGetMetadata(virQEMUDriver *driver,
 * @driver: qemu driver data
 * @path: path of the save image
 * @bypass_cache: bypass cache when opening the file
+ * @sparse: Image contains mapped-ram save format
 * @wrapperFd: returns the file wrapper structure
 * @open_write: open the file for writing (for updates)
 *
@@ -706,6 +708,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool sparse,
                   virFileWrapperFd **wrapperFd,
                   bool open_write)
 {
@@ -727,15 +730,18 @@ qemuSaveImageOpen(virQEMUDriver *driver,
     if ((fd = qemuDomainOpenFile(cfg, NULL, path, oflags, NULL)) < 0)
         return -1;
 
-    if (bypass_cache &&
-        !(*wrapperFd = virFileWrapperFdNew(&fd, path,
-                                           VIR_FILE_WRAPPER_BYPASS_CACHE)))
-        return -1;
+    /* If sparse, no need for the iohelper or positioning the file pointer. */
+    if (!sparse) {
+        if (bypass_cache &&
+            !(*wrapperFd = virFileWrapperFdNew(&fd, path,
+                                               VIR_FILE_WRAPPER_BYPASS_CACHE)))
+            return -1;
 
-    /* Read the header to position the file pointer for QEMU. Unfortunately we
-     * can't use lseek with virFileWrapperFD. */
-    if (qemuSaveImageReadHeader(fd, NULL) < 0)
-        return -1;
+        /* Read the header to position the file pointer for QEMU. Unfortunately we
+         * can't use lseek with virFileWrapperFD. */
+        if (qemuSaveImageReadHeader(fd, NULL) < 0)
+            return -1;
+    }
 
     ret = fd;
     fd = -1;
@@ -751,6 +757,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                     int *fd,
                     virQEMUSaveData *data,
                     const char *path,
+                    qemuMigrationParams *restoreParams,
                     bool start_paused,
                     bool reset_nvram,
                     virDomainAsyncJob asyncJob)
@@ -767,8 +774,8 @@ qemuSaveImageStartVM(virConnectPtr conn,
         start_flags |= VIR_QEMU_PROCESS_START_RESET_NVRAM;
 
     if (qemuProcessStartWithMemoryState(conn, driver, vm, fd, path, NULL, data,
-                                        asyncJob, start_flags, "restored",
-                                        &started) < 0) {
+                                        restoreParams, asyncJob, start_flags,
+                                        "restored", &started) < 0) {
         goto cleanup;
     }
 
diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h
index 2b3d839e5b..b3992de126 100644
--- a/src/qemu/qemu_saveimage.h
+++ b/src/qemu/qemu_saveimage.h
@@ -84,6 +84,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                     int *fd,
                     virQEMUSaveData *data,
                     const char *path,
+                    qemuMigrationParams *restoreParams,
                     bool start_paused,
                     bool reset_nvram,
                     virDomainAsyncJob asyncJob)
@@ -106,6 +107,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool sparse,
                   virFileWrapperFd **wrapperFd,
                   bool open_write)
     ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(4);
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 3088b28716..6f8ba5697b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2407,7 +2407,7 @@ qemuSnapshotRevertExternalPrepare(virDomainObj *vm,
         return -1;
 
     memdata->fd = qemuSaveImageOpen(driver, memdata->path,
-                                    false, NULL, false);
+                                    false, false, NULL, false);
     if (memdata->fd < 0)
         return -1;
 
@@ -2647,7 +2647,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
 
     if (qemuProcessStartWithMemoryState(snapshot->domain->conn, driver, vm,
                                         &memdata.fd, memdata.path, loadSnap,
-                                        memdata.data, VIR_ASYNC_JOB_SNAPSHOT,
+                                        memdata.data, NULL, VIR_ASYNC_JOB_SNAPSHOT,
                                         start_flags, "from-snapshot",
                                         &started) < 0) {
         if (started) {
@@ -2801,7 +2801,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm,
 
     rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL,
                           VIR_ASYNC_JOB_SNAPSHOT, NULL, -1, NULL, NULL,
-                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
+                          NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                           start_flags);
     virDomainAuditStart(vm, "from-snapshot", rc >= 0);
     if (rc < 0) {
@@ -3277,7 +3277,7 @@ qemuSnapshotDeleteExternalPrepare(virDomainObj *vm,
 
     if (!virDomainObjIsActive(vm)) {
         if (qemuProcessStart(NULL, driver, vm, NULL, VIR_ASYNC_JOB_SNAPSHOT,
-                             NULL, -1, NULL, NULL,
+                             NULL, -1, NULL, NULL, NULL,
                              VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                              VIR_QEMU_PROCESS_START_PAUSED) < 0) {
             return -1;
-- 
2.43.0
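
The restore-side detail worth calling out in qemuProcessIncomingDefNew() above is how the incoming URI is built for a sparse (mapped-ram) image: the save-file fd is exposed to QEMU through an fdset, and the migration stream is pointed at a "file:" URI whose offset skips libvirt's save header plus metadata blob. The standalone sketch below only illustrates that offset/URI computation; DemoSaveHeader and the "/dev/fdset/1" path are hypothetical stand-ins for libvirt's virQEMUSaveHeader and the qemuFDPass helpers, not the real structures or API.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for virQEMUSaveHeader: a fixed-size header
 * followed by data_len bytes of metadata, after which the mapped-ram
 * RAM region begins in the save file. */
typedef struct {
    char magic[16];
    uint32_t version;
    uint32_t data_len;   /* length of the metadata blob after the header */
    uint32_t pad[8];
} DemoSaveHeader;

/* Build a QEMU "file:" migration URI that starts reading at the first
 * byte after the header and metadata, mirroring the offset computation
 * in the patch. @fdset_path stands in for what qemuFDPassGetPath()
 * would return, e.g. "/dev/fdset/1". */
static int
demo_incoming_uri(const DemoSaveHeader *hdr, const char *fdset_path,
                  char *buf, size_t buflen)
{
    size_t offset = sizeof(DemoSaveHeader) + hdr->data_len;

    return snprintf(buf, buflen, "file:%s,offset=%#zx", fdset_path, offset);
}

int main(void)
{
    DemoSaveHeader hdr = { .data_len = 0x2000 };
    char uri[128];

    demo_incoming_uri(&hdr, "/dev/fdset/1", uri, sizeof(uri));
    printf("%s\n", uri);   /* prints file:/dev/fdset/1,offset=0x2038 for this demo layout */
    return 0;
}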