To: devel@lists.libvirt.org
Subject: [PATCH V2 14/20] qemu: Add support for mapped-ram on restore
Date: Wed, 22 Jan 2025 16:16:48 -0700
Message-ID: <20250122232228.19306-15-jfehlig@suse.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250122232228.19306-1-jfehlig@suse.com>
References: <20250122232228.19306-1-jfehlig@suse.com>
CC: farosas@suse.de
From: Jim Fehlig via Devel
Reply-To: Jim Fehlig

Add support for the mapped-ram migration capability on restore.

Signed-off-by: Jim Fehlig
---
 src/qemu/qemu_driver.c    | 26 +++++++++++++-----
 src/qemu/qemu_migration.c | 12 ++++----
 src/qemu/qemu_process.c   | 58 +++++++++++++++++++++++++++++++--------
 src/qemu/qemu_process.h   | 15 ++++++----
 src/qemu/qemu_saveimage.c | 12 +++++---
 src/qemu/qemu_saveimage.h |  2 ++
 src/qemu/qemu_snapshot.c  |  8 +++---
 7 files changed, 95 insertions(+), 38 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5afc2ea846..ad58ec92f1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1614,7 +1614,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
     }
 
     if (qemuProcessStart(conn, driver, vm, NULL, VIR_ASYNC_JOB_START,
-                         NULL, -1, NULL, NULL,
+                         NULL, -1, NULL, NULL, NULL,
                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                          start_flags) < 0) {
         virDomainAuditStart(vm, "booted", false);
@@ -5783,6 +5783,8 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     virFileWrapperFd *wrapperFd = NULL;
     bool hook_taint = false;
     bool reset_nvram = false;
+    bool sparse = false;
+    g_autoptr(qemuMigrationParams) restoreParams = NULL;
 
     virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE |
                   VIR_DOMAIN_SAVE_RUNNING |
@@ -5795,9 +5797,13 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, false) < 0)
         goto cleanup;
 
+    sparse = data->header.format == QEMU_SAVE_FORMAT_SPARSE;
+    if (!(restoreParams = qemuMigrationParamsForSave(sparse)))
+        goto cleanup;
+
     fd = qemuSaveImageOpen(driver, path,
                            (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) != 0,
-                           &wrapperFd, false);
+                           sparse, &wrapperFd, false);
     if (fd < 0)
         goto cleanup;
 
@@ -5851,7 +5857,7 @@ qemuDomainRestoreInternal(virConnectPtr conn,
     if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_RESTORE, flags) < 0)
         goto cleanup;
 
-    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path,
+    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restoreParams,
                                false, reset_nvram, VIR_ASYNC_JOB_START);
 
     qemuProcessEndJob(vm);
@@ -5966,7 +5972,7 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path,
     if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, false) < 0)
         goto cleanup;
 
-    fd = qemuSaveImageOpen(driver, path, 0, NULL, false);
+    fd = qemuSaveImageOpen(driver, path, 0, false, NULL, false);
 
     if (fd < 0)
         goto cleanup;
@@ -6107,6 +6113,8 @@ qemuDomainObjRestore(virConnectPtr conn,
     g_autofree char *xmlout = NULL;
     virQEMUSaveData *data = NULL;
     virFileWrapperFd *wrapperFd = NULL;
+    bool sparse = false;
+    g_autoptr(qemuMigrationParams) restoreParams = NULL;
 
     ret = qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, true);
     if (ret < 0) {
@@ -6115,7 +6123,11 @@ qemuDomainObjRestore(virConnectPtr conn,
         goto cleanup;
     }
 
-    fd = qemuSaveImageOpen(driver, path, bypass_cache, &wrapperFd, false);
+    sparse = data->header.format == QEMU_SAVE_FORMAT_SPARSE;
+    if (!(restoreParams = qemuMigrationParamsForSave(sparse)))
+        return -1;
+
+    fd = qemuSaveImageOpen(driver, path, bypass_cache, sparse, &wrapperFd, false);
     if (fd < 0)
         goto cleanup;
 
@@ -6157,7 +6169,7 @@ qemuDomainObjRestore(virConnectPtr conn,
 
     virDomainObjAssignDef(vm, &def, true, NULL);
 
-    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path,
+    ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restoreParams,
                                start_paused, reset_nvram, asyncJob);
 
  cleanup:
@@ -6363,7 +6375,7 @@ qemuDomainObjStart(virConnectPtr conn,
     }
 
     ret = qemuProcessStart(conn, driver, vm, NULL, asyncJob,
-                           NULL, -1, NULL, NULL,
+                           NULL, -1, NULL, NULL, NULL,
                            VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags);
     virDomainAuditStart(vm, "booted", ret >= 0);
     if (ret >= 0) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index b2f17f8006..8af6d1d3e2 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3064,9 +3064,8 @@ qemuMigrationDstPrepare(virDomainObj *vm,
                         const char *protocol,
                         const char *listenAddress,
                         unsigned short port,
-                        int fd)
+                        int *fd)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     g_autofree char *migrateFrom = NULL;
 
@@ -3120,8 +3119,9 @@ qemuMigrationDstPrepare(virDomainObj *vm,
         migrateFrom = g_strdup_printf(incFormat, protocol, listenAddress, port);
     }
 
-    return qemuProcessIncomingDefNew(priv->qemuCaps, listenAddress,
-                                     migrateFrom, fd, NULL);
+    return qemuProcessIncomingDefNew(vm, listenAddress,
+                                     migrateFrom, fd,
+                                     NULL, NULL);
 }
 
 
@@ -3263,7 +3263,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver,
 
     if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
                                              listenAddress, port,
-                                             dataFD[0])))
+                                             &dataFD[0])))
         goto error;
 
     qemuMigrationDstPrepareDiskSeclabels(vm, migrate_disks, flags);
@@ -3634,7 +3634,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver,
     priv->origname = g_strdup(origname);
 
     if (!(incoming = qemuMigrationDstPrepare(vm, false, protocol,
-                                             listenAddress, port, -1)))
+                                             listenAddress, port, NULL)))
         goto cleanup;
 
     if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_IN) < 0)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8d51da77c5..e5ad6f0528 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4804,6 +4804,7 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
 
     g_free(inc->address);
     g_free(inc->uri);
+    qemuFDPassFree(inc->fdPassMigrate);
     g_free(inc);
 }
 
@@ -4817,26 +4818,54 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
  * qemuProcessIncomingDefFree will NOT close it.
  */
 qemuProcessIncomingDef *
-qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
+qemuProcessIncomingDefNew(virDomainObj *vm,
                           const char *listenAddress,
                           const char *migrateFrom,
-                          int fd,
-                          const char *path)
+                          int *fd,
+                          const char *path,
+                          virQEMUSaveData *data)
 {
+    qemuDomainObjPrivate *priv = vm->privateData;
     qemuProcessIncomingDef *inc = NULL;
 
-    if (qemuMigrationDstCheckProtocol(qemuCaps, migrateFrom) < 0)
+    if (qemuMigrationDstCheckProtocol(priv->qemuCaps, migrateFrom) < 0)
         return NULL;
 
     inc = g_new0(qemuProcessIncomingDef, 1);
 
     inc->address = g_strdup(listenAddress);
 
-    inc->uri = qemuMigrationDstGetURI(migrateFrom, fd);
+    if (data) {
+        size_t offset = sizeof(virQEMUSaveHeader) + data->header.data_len;
+
+        if (data->header.format == QEMU_SAVE_FORMAT_SPARSE) {
+            unsigned int fdsetId;
+
+            inc->fdPassMigrate = qemuFDPassNew("migrate", priv);
+            qemuFDPassAddFD(inc->fdPassMigrate, fd, "-fd");
+            qemuFDPassGetId(inc->fdPassMigrate, &fdsetId);
+            inc->uri = g_strdup_printf("file:/dev/fdset/%u,offset=%#lx", fdsetId, offset);
+        } else {
+            g_autofree char *tmp = g_new(char, offset);
+
+            /* HACK:
+             * We provide qemu the offset in case of mapped-ram, but must set
+             * the file offset for the legacy save format. Unfortunately we
+             * can't lseek when the fd is wrapped by virFileWrapperFd, so
+             * we do a needless read instead.
+             */
+            if (saferead(*fd, tmp, offset) != offset)
+                VIR_DEBUG("failed to read from save file");
+            inc->uri = qemuMigrationDstGetURI(migrateFrom, *fd);
+        }
+    } else {
+        inc->uri = qemuMigrationDstGetURI(migrateFrom, *fd);
+    }
+
     if (!inc->uri)
         goto error;
 
-    inc->fd = fd;
+    inc->fd = *fd;
     inc->path = path;
 
     return inc;
@@ -7879,8 +7908,11 @@ qemuProcessLaunch(virConnectPtr conn,
                                      &nnicindexes, &nicindexes)))
         goto cleanup;
 
-    if (incoming && incoming->fd != -1)
-        virCommandPassFD(cmd, incoming->fd, 0);
+    if (incoming) {
+        if (incoming->fd != -1)
+            virCommandPassFD(cmd, incoming->fd, 0);
+        qemuFDPassTransferCommand(incoming->fdPassMigrate, cmd);
+    }
 
     /* now that we know it is about to start call the hook if present */
     if (qemuProcessStartHook(driver, vm,
@@ -8299,6 +8331,7 @@ qemuProcessStart(virConnectPtr conn,
                      int migrateFd,
                      const char *migratePath,
                      virDomainMomentObj *snapshot,
+                     qemuMigrationParams *migParams,
                      virNetDevVPortProfileOp vmop,
                      unsigned int flags)
 {
@@ -8352,7 +8385,7 @@ qemuProcessStart(virConnectPtr conn,
     relabel = true;
 
     if (incoming) {
-        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, NULL, 0) < 0)
+        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, migParams, 0) < 0)
             goto stop;
     } else {
         /* Refresh state of devices from QEMU. During migration this happens
@@ -8406,6 +8439,7 @@ qemuProcessStart(virConnectPtr conn,
  * @path: path to memory state file
  * @snapshot: internal snapshot to load when starting QEMU process or NULL
  * @data: data from memory state file or NULL
+ * @migParams: Migration params to use on restore or NULL
  * @asyncJob: type of asynchronous job
  * @start_flags: flags to start QEMU process with
  * @reason: audit log reason
@@ -8432,6 +8466,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
                                 const char *path,
                                 virDomainMomentObj *snapshot,
                                 virQEMUSaveData *data,
+                                qemuMigrationParams *migParams,
                                 virDomainAsyncJob asyncJob,
                                 unsigned int start_flags,
                                 const char *reason,
@@ -8446,8 +8481,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
     int rc = 0;
     int ret = -1;
 
-    incoming = qemuProcessIncomingDefNew(priv->qemuCaps, NULL, "stdio",
-                                         *fd, path);
+    incoming = qemuProcessIncomingDefNew(vm, NULL, "stdio", fd, path, data);
     if (!incoming)
         return -1;
 
@@ -8473,7 +8507,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
 
     if (qemuProcessStart(conn, driver, vm, cookie ? cookie->cpu : NULL,
                          asyncJob, incoming, *fd, path, snapshot,
-                         VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
+                         migParams, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
                          start_flags) == 0)
         *started = true;
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 1dd6ae23c8..a6258b075c 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -53,14 +53,17 @@ struct _qemuProcessIncomingDef {
     char *address; /* address where QEMU is supposed to listen */
     char *uri; /* used when calling migrate-incoming QMP command */
     int fd; /* for fd:N URI */
+    qemuFDPass *fdPassMigrate; /* for file:/dev/fdset/n,offset=x URI */
     const char *path; /* path associated with fd */
 };
 
-qemuProcessIncomingDef *qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
-                                                  const char *listenAddress,
-                                                  const char *migrateFrom,
-                                                  int fd,
-                                                  const char *path);
+qemuProcessIncomingDef *qemuProcessIncomingDefNew(virDomainObj *vm,
+                                                  const char *listenAddress,
+                                                  const char *migrateFrom,
+                                                  int *fd,
+                                                  const char *path,
+                                                  virQEMUSaveData *data);
+
 void qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc);
 
 int qemuProcessBeginJob(virDomainObj *vm,
@@ -87,6 +90,7 @@ int qemuProcessStart(virConnectPtr conn,
                      int stdin_fd,
                      const char *stdin_path,
                      virDomainMomentObj *snapshot,
+                     qemuMigrationParams *migParams,
                      virNetDevVPortProfileOp vmop,
                      unsigned int flags);
 
@@ -97,6 +101,7 @@ int qemuProcessStartWithMemoryState(virConnectPtr conn,
                                     const char *path,
                                     virDomainMomentObj *snapshot,
                                     virQEMUSaveData *data,
+                                    qemuMigrationParams *migParams,
                                     virDomainAsyncJob asyncJob,
                                     unsigned int start_flags,
                                     const char *reason,
diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
index 3a71c23c01..3e89f88301 100644
--- a/src/qemu/qemu_saveimage.c
+++ b/src/qemu/qemu_saveimage.c
@@ -264,7 +264,8 @@ qemuSaveImageDecompressionStart(virQEMUSaveData *data,
     if (header->version != 2)
         return 0;
 
-    if (header->format == QEMU_SAVE_FORMAT_RAW)
+    if (header->format == QEMU_SAVE_FORMAT_RAW ||
+        header->format == QEMU_SAVE_FORMAT_SPARSE)
         return 0;
 
     if (!(cmd = qemuSaveImageGetCompressionCommand(header->format)))
@@ -671,6 +672,7 @@ qemuSaveImageGetMetadata(virQEMUDriver *driver,
  * @driver: qemu driver data
  * @path: path of the save image
  * @bypass_cache: bypass cache when opening the file
+ * @sparse: Image contains mapped-ram save format
  * @wrapperFd: returns the file wrapper structure
  * @open_write: open the file for writing (for updates)
  *
@@ -680,6 +682,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool sparse,
                   virFileWrapperFd **wrapperFd,
                   bool open_write)
 {
@@ -701,7 +704,7 @@ qemuSaveImageOpen(virQEMUDriver *driver,
     if ((fd = qemuDomainOpenFile(cfg, NULL, path, oflags, NULL)) < 0)
         return -1;
 
-    if (bypass_cache &&
+    if (!sparse && bypass_cache &&
         !(*wrapperFd = virFileWrapperFdNew(&fd, path,
                                            VIR_FILE_WRAPPER_BYPASS_CACHE)))
         return -1;
@@ -720,6 +723,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                      int *fd,
                      virQEMUSaveData *data,
                      const char *path,
+                     qemuMigrationParams *restoreParams,
                      bool start_paused,
                      bool reset_nvram,
                      virDomainAsyncJob asyncJob)
@@ -736,8 +740,8 @@ qemuSaveImageStartVM(virConnectPtr conn,
         start_flags |= VIR_QEMU_PROCESS_START_RESET_NVRAM;
 
     if (qemuProcessStartWithMemoryState(conn, driver, vm, fd, path, NULL, data,
-                                        asyncJob, start_flags, "restored",
-                                        &started) < 0) {
+                                        restoreParams, asyncJob, start_flags,
+                                        "restored", &started) < 0) {
         goto cleanup;
     }
 
diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h
index 7340925274..320728145a 100644
--- a/src/qemu/qemu_saveimage.h
+++ b/src/qemu/qemu_saveimage.h
@@ -84,6 +84,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                      int *fd,
                      virQEMUSaveData *data,
                      const char *path,
+                     qemuMigrationParams *restoreParams,
                      bool start_paused,
                      bool reset_nvram,
                      virDomainAsyncJob asyncJob)
@@ -102,6 +103,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool sparse,
                   virFileWrapperFd **wrapperFd,
                   bool open_write);
 
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 5991455a4e..cc304add88 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2394,7 +2394,7 @@ qemuSnapshotRevertExternalPrepare(virDomainObj *vm,
         return -1;
 
     memdata->fd = qemuSaveImageOpen(driver, memdata->path,
-                                    false, NULL, false);
+                                    false, false, NULL, false);
     if (memdata->fd < 0)
         return -1;
 
@@ -2634,7 +2634,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
 
     if (qemuProcessStartWithMemoryState(snapshot->domain->conn, driver, vm,
                                         &memdata.fd, memdata.path, loadSnap,
-                                        memdata.data, VIR_ASYNC_JOB_SNAPSHOT,
+                                        memdata.data, NULL, VIR_ASYNC_JOB_SNAPSHOT,
                                         start_flags, "from-snapshot",
                                         &started) < 0) {
         if (started) {
@@ -2787,7 +2787,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm,
 
     rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL,
                           VIR_ASYNC_JOB_SNAPSHOT, NULL, -1, NULL, NULL,
-                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
+                          NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                           start_flags);
     virDomainAuditStart(vm, "from-snapshot", rc >= 0);
     if (rc < 0) {
@@ -3264,7 +3264,7 @@ qemuSnapshotDeleteExternalPrepare(virDomainObj *vm,
 
     if (!virDomainObjIsActive(vm)) {
         if (qemuProcessStart(NULL, driver, vm, NULL, VIR_ASYNC_JOB_SNAPSHOT,
-                             NULL, -1, NULL, NULL,
+                             NULL, -1, NULL, NULL, NULL,
                              VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                              VIR_QEMU_PROCESS_START_PAUSED) < 0) {
             return -1;
-- 
2.43.0
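
[Editor's note, not part of the patch: from an API client's perspective the new code path is transparent. qemuDomainRestoreInternal() reads the image header, and if the format is QEMU_SAVE_FORMAT_SPARSE it builds the matching migration parameters and a file:/dev/fdset URI instead of the legacy fd: URI. A minimal caller-side sketch follows; the save image path is hypothetical and the restore call shown is the existing public libvirt API, unchanged by this series.]

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Restoring a save image looks the same to the caller whether the
     * image was written in the legacy or the mapped-ram (sparse) layout;
     * the format is detected from the image header inside the QEMU driver. */
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* Hypothetical path to a previously created save image. */
    if (virDomainRestore(conn, "/var/lib/libvirt/images/vm1.save") < 0)
        fprintf(stderr, "restore failed\n");

    virConnectClose(conn);
    return 0;
}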