From: Jim Fehlig <jfehlig@suse.com>
To: devel@lists.libvirt.org
Cc: farosas@suse.de
Subject: [PATCH 14/20] qemu: Add support for mapped-ram on restore
Date: Thu, 8 Aug 2024 17:38:07 -0600
Message-Id: <20240808233813.22905-15-jfehlig@suse.com>
In-Reply-To: <20240808233813.22905-1-jfehlig@suse.com>
References: <20240808233813.22905-1-jfehlig@suse.com>

Add support for the mapped-ram migration capability on restore. Using
mapped-ram with QEMU to restore an image requires the same steps as
saving:

- The 'mapped-ram' migration capability must be set to true
- The 'multifd' migration capability must be set to true and the
  'multifd-channels' migration parameter must be set to a value >= 1
- QEMU must be provided an fdset containing the migration fd(s)
- The 'migrate-incoming' QMP command is invoked with a URI referencing
  the fdset and an offset at which to start reading the data stream, e.g.

  {"execute":"migrate-incoming",
   "arguments":{"uri":"file:/dev/fdset/0,offset=0x119eb"}}
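
The steps above correspond roughly to the following QMP commands, issued
before the 'migrate-incoming' call shown above. This is an illustrative
sketch only: the fdset id and the opaque name are example values, and the
migration fd itself is passed out-of-band over the monitor socket along
with the 'add-fd' command.

  {"execute":"migrate-set-capabilities",
   "arguments":{"capabilities":[
     {"capability":"mapped-ram","state":true},
     {"capability":"multifd","state":true}]}}
  {"execute":"migrate-set-parameters",
   "arguments":{"multifd-channels":1}}
  {"execute":"add-fd",
   "arguments":{"fdset-id":0,"opaque":"migrate-buffered-fd"}}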
{"execute":"migrate-incoming", "arguments":{"uri":"file:/dev/fdset/0,offset=3D0x119eb"}} Signed-off-by: Jim Fehlig --- src/qemu/qemu_driver.c | 26 +++++++++++++----- src/qemu/qemu_migration.c | 11 ++++---- src/qemu/qemu_process.c | 58 +++++++++++++++++++++++++++++++-------- src/qemu/qemu_process.h | 15 ++++++---- src/qemu/qemu_saveimage.c | 9 ++++-- src/qemu/qemu_saveimage.h | 2 ++ src/qemu/qemu_snapshot.c | 8 +++--- 7 files changed, 92 insertions(+), 37 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 87d75b6baa..6d0f52951c 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1611,7 +1611,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr= conn, } =20 if (qemuProcessStart(conn, driver, vm, NULL, VIR_ASYNC_JOB_START, - NULL, -1, NULL, NULL, + NULL, -1, NULL, NULL, NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { virDomainAuditStart(vm, "booted", false); @@ -5774,6 +5774,8 @@ qemuDomainRestoreInternal(virConnectPtr conn, virFileWrapperFd *wrapperFd =3D NULL; bool hook_taint =3D false; bool reset_nvram =3D false; + bool mapped_ram =3D false; + g_autoptr(qemuMigrationParams) restoreParams =3D NULL; =20 virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE | VIR_DOMAIN_SAVE_RUNNING | @@ -5786,9 +5788,13 @@ qemuDomainRestoreInternal(virConnectPtr conn, if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, false) <= 0) goto cleanup; =20 + mapped_ram =3D data->header.features & QEMU_SAVE_FEATURE_MAPPED_RAM; + if (!(restoreParams =3D qemuMigrationParamsForSave(mapped_ram))) + return -1; + fd =3D qemuSaveImageOpen(driver, path, (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) !=3D 0, - &wrapperFd, false); + mapped_ram, &wrapperFd, false); if (fd < 0) goto cleanup; =20 @@ -5842,7 +5848,7 @@ qemuDomainRestoreInternal(virConnectPtr conn, if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_RESTORE, flags) <= 0) goto cleanup; =20 - ret =3D qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, + ret =3D qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restor= eParams, false, reset_nvram, VIR_ASYNC_JOB_START); =20 qemuProcessEndJob(vm); @@ -5957,7 +5963,7 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, cons= t char *path, if (qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, false) <= 0) goto cleanup; =20 - fd =3D qemuSaveImageOpen(driver, path, 0, NULL, false); + fd =3D qemuSaveImageOpen(driver, path, 0, false, NULL, false); =20 if (fd < 0) goto cleanup; @@ -6098,6 +6104,8 @@ qemuDomainObjRestore(virConnectPtr conn, g_autofree char *xmlout =3D NULL; virQEMUSaveData *data =3D NULL; virFileWrapperFd *wrapperFd =3D NULL; + bool mapped_ram =3D false; + g_autoptr(qemuMigrationParams) restoreParams =3D NULL; =20 ret =3D qemuSaveImageGetMetadata(driver, NULL, path, &def, &data, true= ); if (ret < 0) { @@ -6106,7 +6114,11 @@ qemuDomainObjRestore(virConnectPtr conn, goto cleanup; } =20 - fd =3D qemuSaveImageOpen(driver, path, bypass_cache, &wrapperFd, false= ); + mapped_ram =3D data->header.features & QEMU_SAVE_FEATURE_MAPPED_RAM; + if (!(restoreParams =3D qemuMigrationParamsForSave(mapped_ram))) + return -1; + + fd =3D qemuSaveImageOpen(driver, path, bypass_cache, mapped_ram, &wrap= perFd, false); if (fd < 0) goto cleanup; =20 @@ -6148,7 +6160,7 @@ qemuDomainObjRestore(virConnectPtr conn, =20 virDomainObjAssignDef(vm, &def, true, NULL); =20 - ret =3D qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, + ret =3D qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, restor= eParams, start_paused, reset_nvram, asyncJob); =20 cleanup: 
@@ -6349,7 +6361,7 @@ qemuDomainObjStart(virConnectPtr conn,
     }
 
     ret = qemuProcessStart(conn, driver, vm, NULL, asyncJob,
-                           NULL, -1, NULL, NULL,
+                           NULL, -1, NULL, NULL, NULL,
                            VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags);
     virDomainAuditStart(vm, "booted", ret >= 0);
     if (ret >= 0) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index fd897b4ed1..35d3e26908 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2953,9 +2953,8 @@ qemuMigrationDstPrepare(virDomainObj *vm,
                         const char *protocol,
                         const char *listenAddress,
                         unsigned short port,
-                        int fd)
+                        int *fd)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     g_autofree char *migrateFrom = NULL;
 
     if (tunnel) {
@@ -3009,8 +3008,8 @@ qemuMigrationDstPrepare(virDomainObj *vm,
         migrateFrom = g_strdup_printf(incFormat, protocol, listenAddress, port);
     }
 
-    return qemuProcessIncomingDefNew(priv->qemuCaps, listenAddress,
-                                     migrateFrom, fd, NULL);
+    return qemuProcessIncomingDefNew(vm, listenAddress,
+                                     migrateFrom, fd, NULL, NULL);
 }
 
 
@@ -3154,7 +3153,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver,
 
     if (!(incoming = qemuMigrationDstPrepare(vm, tunnel, protocol,
                                              listenAddress, port,
-                                             dataFD[0])))
+                                             &dataFD[0])))
         goto error;
 
     if (qemuProcessPrepareDomain(driver, vm, startFlags) < 0)
@@ -3524,7 +3523,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver,
     priv->origname = g_strdup(origname);
 
     if (!(incoming = qemuMigrationDstPrepare(vm, false, protocol,
-                                             listenAddress, port, -1)))
+                                             listenAddress, port, NULL)))
         goto cleanup;
 
     if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_IN) < 0)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1d7f214212..b02cd84aff 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4680,6 +4680,7 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
 
     g_free(inc->address);
     g_free(inc->uri);
+    qemuFDPassFree(inc->fdPassMigrate);
     g_free(inc);
 }
 
@@ -4693,26 +4694,54 @@ qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc)
  * qemuProcessIncomingDefFree will NOT close it.
  */
 qemuProcessIncomingDef *
-qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
+qemuProcessIncomingDefNew(virDomainObj *vm,
                           const char *listenAddress,
                           const char *migrateFrom,
-                          int fd,
-                          const char *path)
+                          int *fd,
+                          const char *path,
+                          virQEMUSaveData *data)
 {
+    qemuDomainObjPrivate *priv = vm->privateData;
     qemuProcessIncomingDef *inc = NULL;
 
-    if (qemuMigrationDstCheckProtocol(qemuCaps, migrateFrom) < 0)
+    if (qemuMigrationDstCheckProtocol(priv->qemuCaps, migrateFrom) < 0)
         return NULL;
 
     inc = g_new0(qemuProcessIncomingDef, 1);
 
     inc->address = g_strdup(listenAddress);
 
-    inc->uri = qemuMigrationDstGetURI(migrateFrom, fd);
+    if (data) {
+        size_t offset = sizeof(virQEMUSaveHeader) + data->header.data_len;
+
+        if (data->header.features & QEMU_SAVE_FEATURE_MAPPED_RAM) {
+            unsigned int fdsetId;
+
+            inc->fdPassMigrate = qemuFDPassNew("migrate", priv);
+            qemuFDPassAddFD(inc->fdPassMigrate, fd, "-buffered-fd");
+            qemuFDPassGetId(inc->fdPassMigrate, &fdsetId);
+            inc->uri = g_strdup_printf("file:/dev/fdset/%u,offset=%#lx", fdsetId, offset);
+        } else {
+            g_autofree char *tmp = g_new(char, offset);
+
+            /* HACK:
+             * We provide qemu the offset in case of mapped-ram, but must set
+             * the file offset for the legacy save format. Unfortunately we
+             * can't lseek when the fd is wrapped by virFileWrapperFd, so
+             * we do a needless read instead.
+             */
+            if (saferead(*fd, tmp, offset) != offset)
+                VIR_DEBUG("failed to read from save file");
+            inc->uri = qemuMigrationDstGetURI(migrateFrom, *fd);
+        }
+    } else {
+        inc->uri = qemuMigrationDstGetURI(migrateFrom, *fd);
+    }
+
     if (!inc->uri)
         goto error;
 
-    inc->fd = fd;
+    inc->fd = *fd;
     inc->path = path;
 
     return inc;
@@ -7782,8 +7811,11 @@ qemuProcessLaunch(virConnectPtr conn,
                                      &nnicindexes, &nicindexes)))
         goto cleanup;
 
-    if (incoming && incoming->fd != -1)
-        virCommandPassFD(cmd, incoming->fd, 0);
+    if (incoming) {
+        if (incoming->fd != -1)
+            virCommandPassFD(cmd, incoming->fd, 0);
+        qemuFDPassTransferCommand(incoming->fdPassMigrate, cmd);
+    }
 
     /* now that we know it is about to start call the hook if present */
     if (qemuProcessStartHook(driver, vm,
@@ -8195,6 +8227,7 @@ qemuProcessStart(virConnectPtr conn,
                  int migrateFd,
                  const char *migratePath,
                  virDomainMomentObj *snapshot,
+                 qemuMigrationParams *migParams,
                  virNetDevVPortProfileOp vmop,
                  unsigned int flags)
 {
@@ -8248,7 +8281,7 @@ qemuProcessStart(virConnectPtr conn,
         relabel = true;
 
     if (incoming) {
-        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, NULL, 0) < 0)
+        if (qemuMigrationDstRun(vm, incoming->uri, asyncJob, migParams, 0) < 0)
             goto stop;
     } else {
         /* Refresh state of devices from QEMU. During migration this happens
@@ -8302,6 +8335,7 @@ qemuProcessStart(virConnectPtr conn,
  * @path: path to memory state file
  * @snapshot: internal snapshot to load when starting QEMU process or NULL
  * @data: data from memory state file or NULL
+ * @migParams: Migration params to use on restore or NULL
  * @asyncJob: type of asynchronous job
  * @start_flags: flags to start QEMU process with
  * @reason: audit log reason
@@ -8328,6 +8362,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
                                 const char *path,
                                 virDomainMomentObj *snapshot,
                                 virQEMUSaveData *data,
+                                qemuMigrationParams *migParams,
                                 virDomainAsyncJob asyncJob,
                                 unsigned int start_flags,
                                 const char *reason,
@@ -8342,8 +8377,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
     int rc = 0;
     int ret = -1;
 
-    incoming = qemuProcessIncomingDefNew(priv->qemuCaps, NULL, "stdio",
-                                         *fd, path);
+    incoming = qemuProcessIncomingDefNew(vm, NULL, "stdio", fd, path, data);
     if (!incoming)
         return -1;
 
@@ -8369,7 +8403,7 @@ qemuProcessStartWithMemoryState(virConnectPtr conn,
 
     if (qemuProcessStart(conn, driver, vm, cookie ?
                          cookie->cpu : NULL, asyncJob, incoming, *fd, path, snapshot,
-                         VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
+                         migParams, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
                          start_flags) == 0)
         *started = true;
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index e48d53dc46..93699da8bd 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -54,14 +54,17 @@ struct _qemuProcessIncomingDef {
     char *address; /* address where QEMU is supposed to listen */
     char *uri; /* used when calling migrate-incoming QMP command */
     int fd; /* for fd:N URI */
+    qemuFDPass *fdPassMigrate; /* for file:/dev/fdset/n,offset=x URI */
     const char *path; /* path associated with fd */
 };
 
-qemuProcessIncomingDef *qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps,
-                                                  const char *listenAddress,
-                                                  const char *migrateFrom,
-                                                  int fd,
-                                                  const char *path);
+qemuProcessIncomingDef *qemuProcessIncomingDefNew(virDomainObj *vm,
+                                                  const char *listenAddress,
+                                                  const char *migrateFrom,
+                                                  int *fd,
+                                                  const char *path,
+                                                  virQEMUSaveData *data);
+
 void qemuProcessIncomingDefFree(qemuProcessIncomingDef *inc);
 
 int qemuProcessBeginJob(virDomainObj *vm,
@@ -88,6 +91,7 @@ int qemuProcessStart(virConnectPtr conn,
                      int stdin_fd,
                      const char *stdin_path,
                      virDomainMomentObj *snapshot,
+                     qemuMigrationParams *migParams,
                      virNetDevVPortProfileOp vmop,
                      unsigned int flags);
 
@@ -98,6 +102,7 @@ int qemuProcessStartWithMemoryState(virConnectPtr conn,
                                     const char *path,
                                     virDomainMomentObj *snapshot,
                                     virQEMUSaveData *data,
+                                    qemuMigrationParams *migParams,
                                     virDomainAsyncJob asyncJob,
                                     unsigned int start_flags,
                                     const char *reason,
diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
index 125064ab66..b99e0de1ff 100644
--- a/src/qemu/qemu_saveimage.c
+++ b/src/qemu/qemu_saveimage.c
@@ -711,6 +711,7 @@ qemuSaveImageGetMetadata(virQEMUDriver *driver,
 * @driver: qemu driver data
 * @path: path of the save image
 * @bypass_cache: bypass cache when opening the file
+ * @mapped_ram: Image contains mapped-ram save format
 * @wrapperFd: returns the file wrapper structure
 * @open_write: open the file for writing (for updates)
 *
@@ -720,6 +721,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool mapped_ram,
                   virFileWrapperFd **wrapperFd,
                   bool open_write)
 {
@@ -741,7 +743,7 @@ qemuSaveImageOpen(virQEMUDriver *driver,
     if ((fd = qemuDomainOpenFile(cfg, NULL, path, oflags, NULL)) < 0)
         return -1;
 
-    if (bypass_cache &&
+    if (!mapped_ram && bypass_cache &&
         !(*wrapperFd = virFileWrapperFdNew(&fd, path,
                                            VIR_FILE_WRAPPER_BYPASS_CACHE)))
         return -1;
@@ -760,6 +762,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                      int *fd,
                      virQEMUSaveData *data,
                      const char *path,
+                     qemuMigrationParams *restoreParams,
                      bool start_paused,
                      bool reset_nvram,
                      virDomainAsyncJob asyncJob)
@@ -776,8 +779,8 @@ qemuSaveImageStartVM(virConnectPtr conn,
         start_flags |= VIR_QEMU_PROCESS_START_RESET_NVRAM;
 
     if (qemuProcessStartWithMemoryState(conn, driver, vm, fd, path, NULL, data,
-                                        asyncJob, start_flags, "restored",
-                                        &started) < 0) {
+                                        restoreParams, asyncJob, start_flags,
+                                        "restored", &started) < 0) {
         goto cleanup;
     }
 
diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h
index e101fdba6e..2007784e88 100644
--- a/src/qemu/qemu_saveimage.h
+++ b/src/qemu/qemu_saveimage.h
@@ -69,6 +69,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                      int *fd,
                      virQEMUSaveData *data,
                      const char *path,
+                     qemuMigrationParams *restoreParams,
                      bool start_paused,
                      bool reset_nvram,
                      virDomainAsyncJob asyncJob)
@@ -87,6 +88,7 @@ int
 qemuSaveImageOpen(virQEMUDriver *driver,
                   const char *path,
                   bool bypass_cache,
+                  bool mapped_ram,
                   virFileWrapperFd **wrapperFd,
                   bool open_write);
 
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 7d05ce76f4..79c20d48f4 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2123,7 +2123,7 @@ qemuSnapshotRevertExternalPrepare(virDomainObj *vm,
         return -1;
 
     memdata->fd = qemuSaveImageOpen(driver, memdata->path,
-                                    false, NULL, false);
+                                    false, false, NULL, false);
     if (memdata->fd < 0)
         return -1;
 
@@ -2366,7 +2366,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
 
     if (qemuProcessStartWithMemoryState(snapshot->domain->conn, driver, vm,
                                         &memdata.fd, memdata.path, loadSnap,
-                                        memdata.data, VIR_ASYNC_JOB_SNAPSHOT,
+                                        memdata.data, NULL, VIR_ASYNC_JOB_SNAPSHOT,
                                         start_flags, "from-snapshot",
                                         &started) < 0) {
         if (started) {
@@ -2520,7 +2520,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm,
 
     rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL,
                           VIR_ASYNC_JOB_SNAPSHOT, NULL, -1, NULL, NULL,
-                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
+                          NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                           start_flags);
     virDomainAuditStart(vm, "from-snapshot", rc >= 0);
     if (rc < 0) {
@@ -2997,7 +2997,7 @@ qemuSnapshotDeleteExternalPrepare(virDomainObj *vm,
 
     if (!virDomainObjIsActive(vm)) {
         if (qemuProcessStart(NULL, driver, vm, NULL, VIR_ASYNC_JOB_SNAPSHOT,
-                             NULL, -1, NULL, NULL,
+                             NULL, -1, NULL, NULL, NULL,
                              VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                              VIR_QEMU_PROCESS_START_PAUSED) < 0) {
             return -1;
-- 
2.35.3