From: Jiri Denemark
To: libvir-list@redhat.com
Cc: Peter Krempa, Pavel Hrdina
Subject: [libvirt PATCH v2 63/81] qemu: Refactor qemuMigrationAnyConnectionClosed
Date: Wed, 1 Jun 2022 14:50:03 +0200
Message-Id: <30b5095993919d0c22256442668c8f4b4408ffaf.1654087150.git.jdenemar@redhat.com>

To prepare the code for handling incoming migration too.
Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
Reviewed-by: Pavel Hrdina
---

Notes:
    Version 2:
    - moved relevant cases directly to "all done; unreachable" section
      to avoid unnecessary churn

 src/qemu/qemu_migration.c | 68 +++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 31 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index eee73d47ca..92833ccbe3 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2302,11 +2302,12 @@ qemuMigrationDstRun(virQEMUDriver *driver,
 }
 
 
-/* This is called for outgoing non-p2p migrations when a connection to the
- * client which initiated the migration was closed but we were waiting for it
- * to follow up with the next phase, that is, in between
- * qemuDomainMigrateBegin3 and qemuDomainMigratePerform3 or
- * qemuDomainMigratePerform3 and qemuDomainMigrateConfirm3.
+/* This is called for outgoing non-p2p and incoming migrations when a
+ * connection to the client which controls the migration was closed but we
+ * were waiting for it to follow up with the next phase, that is, in between
+ * qemuDomainMigrateBegin3 and qemuDomainMigratePerform3,
+ * qemuDomainMigratePerform3 and qemuDomainMigrateConfirm3, or
+ * qemuDomainMigratePrepare3 and qemuDomainMigrateFinish3.
  */
 static void
 qemuMigrationAnyConnectionClosed(virDomainObj *vm,
@@ -2315,6 +2316,7 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm,
     qemuDomainObjPrivate *priv = vm->privateData;
     virQEMUDriver *driver = priv->driver;
     qemuDomainJobPrivate *jobPriv = priv->job.privateData;
+    bool postcopy = false;
 
     VIR_DEBUG("vm=%s, conn=%p, asyncJob=%s, phase=%s",
               vm->def->name, conn,
@@ -2322,64 +2324,68 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm,
               qemuDomainAsyncJobPhaseToString(priv->job.asyncJob,
                                               priv->job.phase));
 
-    if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT))
+    if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_IN) &&
+        !qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT))
         return;
 
-    VIR_DEBUG("The connection which started outgoing migration of domain %s"
-              " was closed; canceling the migration",
+    VIR_WARN("The connection which controls migration of domain %s was closed",
              vm->def->name);
 
     switch ((qemuMigrationJobPhase) priv->job.phase) {
     case QEMU_MIGRATION_PHASE_BEGIN3:
-        /* just forget we were about to migrate */
-        qemuMigrationJobFinish(vm);
+        VIR_DEBUG("Aborting outgoing migration after Begin phase");
         break;
 
     case QEMU_MIGRATION_PHASE_PERFORM3_DONE:
-        VIR_WARN("Migration of domain %s finished but we don't know if the"
-                 " domain was successfully started on destination or not",
-                 vm->def->name);
-
         if (virDomainObjIsPostcopy(vm, VIR_DOMAIN_JOB_OPERATION_MIGRATION_OUT)) {
-            ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_POSTCOPY_FAILED));
-            qemuMigrationSrcPostcopyFailed(vm);
-            qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
-            qemuMigrationJobContinue(vm);
+            VIR_DEBUG("Migration protocol interrupted in post-copy mode");
+            postcopy = true;
         } else {
-            qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT,
-                                     jobPriv->migParams, priv->job.apiFlags);
-            /* clear the job and let higher levels decide what to do */
-            qemuMigrationJobFinish(vm);
+            VIR_WARN("Migration of domain %s finished but we don't know if the "
+                     "domain was successfully started on destination or not",
+                     vm->def->name);
         }
         break;
 
     case QEMU_MIGRATION_PHASE_POSTCOPY_FAILED:
     case QEMU_MIGRATION_PHASE_BEGIN_RESUME:
     case QEMU_MIGRATION_PHASE_PERFORM_RESUME:
-        ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_POSTCOPY_FAILED));
-        qemuMigrationSrcPostcopyFailed(vm);
-        qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
-        qemuMigrationJobContinue(vm);
+        VIR_DEBUG("Connection closed while resuming failed post-copy migration");
+        postcopy = true;
         break;
 
+    case QEMU_MIGRATION_PHASE_PREPARE:
+    case QEMU_MIGRATION_PHASE_PREPARE_RESUME:
+        /* incoming migration; the domain will be autodestroyed */
+        return;
+
     case QEMU_MIGRATION_PHASE_PERFORM3:
         /* cannot be seen without an active migration API; unreachable */
     case QEMU_MIGRATION_PHASE_CONFIRM3:
     case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED:
     case QEMU_MIGRATION_PHASE_CONFIRM_RESUME:
-        /* all done; unreachable */
-    case QEMU_MIGRATION_PHASE_PREPARE:
     case QEMU_MIGRATION_PHASE_FINISH2:
     case QEMU_MIGRATION_PHASE_FINISH3:
-    case QEMU_MIGRATION_PHASE_PREPARE_RESUME:
     case QEMU_MIGRATION_PHASE_FINISH_RESUME:
-        /* incoming migration; unreachable */
+        /* all done; unreachable */
     case QEMU_MIGRATION_PHASE_PERFORM2:
         /* single phase outgoing migration; unreachable */
     case QEMU_MIGRATION_PHASE_NONE:
     case QEMU_MIGRATION_PHASE_LAST:
         /* unreachable */
-        ;
+        return;
+    }
+
+    if (postcopy) {
+        if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT)
+            qemuMigrationSrcPostcopyFailed(vm);
+        ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_POSTCOPY_FAILED));
+        qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
+        qemuMigrationJobContinue(vm);
+    } else {
+        qemuMigrationParamsReset(driver, vm, priv->job.asyncJob,
+                                 jobPriv->migParams, priv->job.apiFlags);
+        qemuMigrationJobFinish(vm);
     }
 }
 
-- 
2.35.1