From: Peter Krempa <pkrempa@redhat.com>
To: devel@lists.libvirt.org
Subject: [PATCH 5/5] qemu: migration: Reactivate block nodes after migration if VM is left paused
Date: Tue, 11 Feb 2025 14:20:27 +0100
Message-ID: <3993de9b13068bb2ef975a4440ca82ed79af9bd2.1739279953.git.pkrempa@redhat.com>

On incoming migration qemu doesn't activate the block graph nodes right
away. This is done to properly facilitate locking of the images.

The block nodes are normally re-activated when starting the CPUs after
migration, but in cases when the VM is left paused (e.g. when a paused
VM was migrated) the block nodes are not re-activated by qemu. This
means that blockjobs which want to write to an existing backing chain
member would fail. Read-only jobs would generally succeed with older
qemu versions, but this was not intended. With new qemu you'll always
get an error when attempting to access an inactive node:

 error: internal error: unable to execute QEMU command 'blockdev-mirror':
 Inactive 'libvirt-1-storage' can't be a backing child of active '#block052'

This is the case for explicit blockjobs (virsh blockcopy) but also for
non-shared-storage migration (virsh migrate --copy-storage-all).

Since qemu now provides the 'blockdev-set-active' QMP command, which
can re-activate the nodes on demand, we can re-activate them in the
same situations in which we'd be starting the vCPUs if the VM weren't
left paused. The only exception is the source side of a failed
post-copy migration: the VM already ran on the destination, so it will
never run on the source again even when the migration is recovered.

Resolves: https://issues.redhat.com/browse/RHEL-78398
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_migration.c | 46 +++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 53bbbee629..159521fbf5 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -220,6 +220,39 @@ qemuMigrationSrcStoreDomainState(virDomainObj *vm)
 }
 
 
+/**
+ * qemuMigrationBlockNodesReactivate:
+ *
+ * In case when we're keeping the VM paused qemu will not re-activate the block
+ * device backend tree so blockjobs would fail. In case when qemu supports the
+ * 'blockdev-set-active' command this function will re-activate the block nodes.
+ */
+static void
+qemuMigrationBlockNodesReactivate(virDomainObj *vm,
+                                  virDomainAsyncJob asyncJob)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    int rc;
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SET_ACTIVE))
+        return;
+
+    VIR_DEBUG("re-activating block nodes");
+
+    if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
+        return;
+
+    rc = qemuMonitorBlockdevSetActive(priv->mon, NULL, true);
+
+    qemuDomainObjExitMonitor(vm);
+
+    if (rc < 0) {
+        VIR_WARN("failed to re-activate block nodes after migration of VM '%s'", vm->def->name);
+        virResetLastError();
+    }
+}
+
+
 static void
 qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
 {
@@ -238,11 +271,14 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
 
     if (preMigrationState != VIR_DOMAIN_RUNNING ||
         state != VIR_DOMAIN_PAUSED ||
-        reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED)
-        return;
+        reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED) {
+        if (reason == VIR_DOMAIN_PAUSED_IOERROR)
+            VIR_DEBUG("Domain is paused due to I/O error, skipping resume");
+
+        /* Don't reactivate disks on post-copy failure */
+        if (reason != VIR_DOMAIN_PAUSED_POSTCOPY_FAILED)
+            qemuMigrationBlockNodesReactivate(vm, VIR_ASYNC_JOB_MIGRATION_OUT);
 
-    if (reason == VIR_DOMAIN_PAUSED_IOERROR) {
-        VIR_DEBUG("Domain is paused due to I/O error, skipping resume");
         return;
     }
 
@@ -6795,6 +6831,8 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver,
 
         if (*inPostCopy)
             *doKill = false;
+    } else {
+        qemuMigrationBlockNodesReactivate(vm, VIR_ASYNC_JOB_MIGRATION_IN);
     }
 
     if (mig->jobData) {
-- 
2.48.1
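
For reference, the monitor call in the new helper,
qemuMonitorBlockdevSetActive(priv->mon, NULL, true), issues a single QMP
command. As a minimal sketch of reproducing the re-activation by hand on a
paused, freshly migrated domain, assuming 'blockdev-set-active' takes a
boolean 'active' argument and applies to all block nodes when 'node-name'
is omitted (which is what the NULL node name in the patch suggests):

  # hypothetical domain name; requires a qemu with blockdev-set-active
  virsh qemu-monitor-command mydomain \
      '{"execute": "blockdev-set-active", "arguments": {"active": true}}'

Adding a 'node-name' argument would presumably restrict the command to a
single node, e.g. the 'libvirt-1-storage' node from the error message
quoted in the commit message.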