From: Claudio Fontana <cfontana@suse.de>
To: Daniel P. Berrangé
Subject: [libvirt RFCv8 17/27] qemu: implement qemuSaveImageLoadMultiFd
Date: Sat, 7 May 2022 15:43:10 +0200
Message-Id: <20220507134320.6289-18-cfontana@suse.de>
In-Reply-To: <20220507134320.6289-1-cfontana@suse.de>
References: <20220507134320.6289-1-cfontana@suse.de>
Cc: libvir-list@redhat.com, Dr. David Alan Gilbert, Claudio Fontana

Use QEMU's multifd migration support to restore a domain from the
images produced by a parallel save.
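For illustration only (not part of this patch): a client reaches the new
qemuSaveImageLoadMultiFd() path through the existing public restore API by
passing the VIR_DOMAIN_SAVE_PARALLEL flag used below in
qemuDomainRestoreInternal(). A minimal sketch, assuming a libvirt build with
this series applied and a hypothetical image /tmp/guest.sav produced by an
earlier parallel save:

  /* restore-parallel.c: cc restore-parallel.c -lvirt -o restore-parallel */
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      if (!conn) {
          fprintf(stderr, "failed to connect to qemu:///system\n");
          return 1;
      }
      /* With VIR_DOMAIN_SAVE_PARALLEL set (added by this series),
       * qemuDomainRestoreInternal() dispatches to
       * qemuSaveImageLoadMultiFd() instead of the single-fd
       * qemuSaveImageStartVM() path. */
      if (virDomainRestoreFlags(conn, "/tmp/guest.sav", NULL,
                                VIR_DOMAIN_SAVE_PARALLEL) < 0) {
          fprintf(stderr, "parallel restore failed\n");
          virConnectClose(conn);
          return 1;
      }
      virConnectClose(conn);
      return 0;
  }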
Signed-off-by: Claudio Fontana <cfontana@suse.de>
---
 src/qemu/qemu_driver.c    |  10 +++-
 src/qemu/qemu_migration.c |   2 +-
 src/qemu/qemu_migration.h |   6 ++
 src/qemu/qemu_saveimage.c | 119 +++++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_saveimage.h |   8 ++-
 5 files changed, 139 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0e8dd7748c..72ab679336 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5905,8 +5905,14 @@ qemuDomainRestoreInternal(virConnectPtr conn,
                               flags) < 0)
         goto cleanup;
 
-    ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path,
-                               false, reset_nvram, true, VIR_ASYNC_JOB_START);
+    if (flags & VIR_DOMAIN_SAVE_PARALLEL) {
+        ret = qemuSaveImageLoadMultiFd(conn, vm, oflags, data, reset_nvram,
+                                       &saveFd, VIR_ASYNC_JOB_START);
+
+    } else {
+        ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path,
+                                   false, reset_nvram, true, VIR_ASYNC_JOB_START);
+    }
 
     qemuProcessEndJob(vm);
 
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 93cd446b23..12b7e84f25 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1933,7 +1933,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver,
 }
 
 
-static int
+int
 qemuMigrationDstWaitForCompletion(virQEMUDriver *driver,
                                   virDomainObj *vm,
                                   virDomainAsyncJob asyncJob,
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index c3c48c19c0..38f4877cf0 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -191,6 +191,12 @@ qemuMigrationDstFinish(virQEMUDriver *driver,
                        int retcode,
                        bool v3proto);
 
+int
+qemuMigrationDstWaitForCompletion(virQEMUDriver *driver,
+                                  virDomainObj *vm,
+                                  virDomainAsyncJob asyncJob,
+                                  bool postcopy);
+
 int
 qemuMigrationSrcConfirm(virQEMUDriver *driver,
                         virDomainObj *vm,
diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
index 83dea78718..753e297226 100644
--- a/src/qemu/qemu_saveimage.c
+++ b/src/qemu/qemu_saveimage.c
@@ -579,6 +579,114 @@ qemuSaveImageCreate(virQEMUDriver *driver,
 }
 
 
+int qemuSaveImageLoadMultiFd(virConnectPtr conn, virDomainObj *vm, int oflags,
+                             virQEMUSaveData *data, bool reset_nvram,
+                             virQEMUSaveFd *saveFd, virDomainAsyncJob asyncJob)
+{
+    virQEMUDriver *driver = conn->privateData;
+    qemuDomainObjPrivate *priv = vm->privateData;
+    virQEMUSaveFd *multiFd = NULL;
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+    g_autoptr(virCommand) cmd = NULL;
+    g_autofree char *helper_path = NULL;
+    g_autofree char *sun_path = g_strdup_printf("%s/restore-multifd.sock", cfg->saveDir);
+    bool qemu_started = false;
+    int ret = -1;
+    int nchannels = data->header.multifd_channels;
+
+    if (!(helper_path = virFileFindResource("libvirt_multifd_helper",
+                                            abs_top_builddir "/src",
+                                            LIBEXECDIR)))
+        goto cleanup;
+    cmd = virCommandNewArgList(helper_path, sun_path, NULL);
+    virCommandAddArgFormat(cmd, "%d", nchannels);
+    virCommandAddArgFormat(cmd, "%d", saveFd->fd);
+    virCommandPassFD(cmd, saveFd->fd, 0);
+
+    /* Perform parallel multifd migration from files (main fd + channels) */
+    if (!(multiFd = qemuSaveImageCreateMultiFd(driver, vm, cmd, saveFd->path, oflags, cfg, nchannels)))
+        goto cleanup;
+    if (qemuSaveImageStartVM(conn, driver, vm, NULL, data, sun_path,
+                             false, reset_nvram, false, asyncJob) < 0)
+        goto cleanup;
+
+    qemu_started = true;
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_MULTIFD)) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
_("QEMU does not seem to support multifd migration,= required for parallel migration from files")); + goto cleanup; + } else { + g_autoptr(qemuMigrationParams) migParams =3D qemuMigrationParamsNe= w(); + bool bwParam =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATIO= N_PARAM_BANDWIDTH); + + if (bwParam) { + if (qemuMigrationParamsSetULL(migParams, + QEMU_MIGRATION_PARAM_MAX_BANDWID= TH, + QEMU_DOMAIN_MIG_BANDWIDTH_MAX * = 1024 * 1024) < 0) + goto cleanup; + priv->migMaxBandwidth =3D QEMU_DOMAIN_MIG_BANDWIDTH_MAX; + } else { + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) =3D= =3D 0) { + qemuMonitorSetMigrationSpeed(priv->mon, + QEMU_DOMAIN_MIG_BANDWIDTH_MAX= ); + priv->migMaxBandwidth =3D QEMU_DOMAIN_MIG_BANDWIDTH_MAX; + qemuDomainObjExitMonitor(vm); + } + } + qemuMigrationParamsSetCap(migParams, QEMU_MIGRATION_CAP_MULTIFD); + if (qemuMigrationParamsSetInt(migParams, + QEMU_MIGRATION_PARAM_MULTIFD_CHANNEL= S, + nchannels) < 0) + goto cleanup; + if (qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("guest unexpectedly quit")); + goto cleanup; + } + /* multifd helper can now connect, then wait for migration to comp= lete */ + if (virCommandRunAsync(cmd, NULL) < 0) + goto cleanup; + + if (qemuMigrationDstWaitForCompletion(driver, vm, asyncJob, false)= < 0) + goto cleanup; + + if (qemuSaveImageCloseMultiFd(multiFd, nchannels, vm) < 0) + goto cleanup; + + if (qemuProcessRefreshState(driver, vm, asyncJob) < 0) + goto cleanup; + + /* run 'cont' on the destination */ + if (qemuProcessStartCPUs(driver, vm, + VIR_DOMAIN_RUNNING_RESTORED, + asyncJob) < 0) { + if (virGetLastErrorCode() =3D=3D VIR_ERR_OK) + virReportError(VIR_ERR_OPERATION_FAILED, + "%s", _("failed to resume domain")); + goto cleanup; + } + if (virDomainObjSave(vm, driver->xmlopt, cfg->stateDir) < 0) { + VIR_WARN("Failed to save status on vm %s", vm->def->name); + goto cleanup; + } + } + qemuDomainEventEmitJobCompleted(driver, vm); + ret =3D 0; + + cleanup: + if (ret < 0 && qemu_started) { + qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, + asyncJob, VIR_QEMU_PROCESS_STOP_MIGRATED); + } + ret =3D qemuSaveImageFreeMultiFd(multiFd, vm, nchannels, ret); + return ret; +} + + /* qemuSaveImageGetCompressionProgram: * @imageFormat: String representation from qemu.conf for the compression * image format being used (dump, save, or snapshot). @@ -789,6 +897,7 @@ qemuSaveImageStartVM(virConnectPtr conn, bool started =3D false; virObjectEvent *event; VIR_AUTOCLOSE intermediatefd =3D -1; + g_autofree char *migrate_from =3D NULL; g_autoptr(virCommand) cmd =3D NULL; g_autofree char *errbuf =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); @@ -835,8 +944,14 @@ qemuSaveImageStartVM(virConnectPtr conn, if (cookie && !cookie->slirpHelper) priv->disableSlirp =3D true; =20 + if (fd) { + migrate_from =3D g_strdup("stdio"); + } else { + migrate_from =3D g_strdup_printf("unix://%s", path); + } + if (qemuProcessStart(conn, driver, vm, cookie ? cookie->cpu : NULL, - asyncJob, "stdio", *fd, path, wait_incoming, + asyncJob, migrate_from, fd ? 
                          NULL, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
                          start_flags) == 0)
@@ -860,7 +975,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
             VIR_DEBUG("Decompression binary stderr: %s", NULLSTR(errbuf));
         virErrorRestore(&orig_err);
     }
-    if (VIR_CLOSE(*fd) < 0) {
+    if (fd && VIR_CLOSE(*fd) < 0) {
         virReportSystemError(errno,
                              _("cannot close file: %s"), path);
         rc = -1;
     }
diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h
index 412624b968..719e6506a5 100644
--- a/src/qemu/qemu_saveimage.h
+++ b/src/qemu/qemu_saveimage.h
@@ -101,7 +101,7 @@ qemuSaveImageStartVM(virConnectPtr conn,
                      bool reset_nvram,
                      bool wait_incoming,
                      virDomainAsyncJob asyncJob)
-    ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6);
+    ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6);
 
 int
 qemuSaveImageOpen(virQEMUDriver *driver,
@@ -119,6 +119,12 @@ qemuSaveImageGetCompressionProgram(const char *imageFormat,
                                    bool use_raw_on_fail)
     ATTRIBUTE_NONNULL(2);
 
+int qemuSaveImageLoadMultiFd(virConnectPtr conn, virDomainObj *vm, int oflags,
+                             virQEMUSaveData *data, bool reset_nvram,
+                             virQEMUSaveFd *saveFd, virDomainAsyncJob asyncJob)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(4)
+    ATTRIBUTE_NONNULL(6) G_GNUC_WARN_UNUSED_RESULT;
+
 int
 qemuSaveImageCreate(virQEMUDriver *driver,
                     virDomainObj *vm,
-- 
2.35.3