From: Jiri Denemark
To: libvir-list@redhat.com
Subject: [libvirt PATCH v2 45/81] qemu: Make qemuMigrationCheckPhase failure fatal
Date: Wed, 1 Jun 2022 14:49:45 +0200
Message-Id: <72abb82fcc238b35ef94d6b073af0e9e5282de95.1654087150.git.jdenemar@redhat.com>

The check can reveal a serious bug in our migration code and we should
not silently ignore it.

Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
---

Notes:
    Version 2:
    - no change
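
A minimal standalone sketch of the pattern this patch applies
(illustration only, not libvirt code: phase_set(), phase_t, and the
local ignore_value() definition are hypothetical stand-ins for
qemuMigrationJobSetPhase(), qemuMigrationJobPhase, and libvirt's
ignore_value() helper; G_GNUC_WARN_UNUSED_RESULT is GLib's attribute
macro):

    /* Compile with: gcc $(pkg-config --cflags glib-2.0) sketch.c */
    #include <glib.h>   /* G_GNUC_WARN_UNUSED_RESULT */
    #include <stdio.h>

    /* Local stand-in for libvirt's ignore_value(); the (void)! form
     * silences gcc's -Wunused-result. */
    #define ignore_value(x) ((void) !(x))

    typedef enum { PHASE_BEGIN = 1, PHASE_PERFORM, PHASE_FINISH } phase_t;

    /* Returning int instead of void and tagging the function with
     * G_GNUC_WARN_UNUSED_RESULT turns a silently dropped failure into
     * a compiler warning at every call site. */
    static int G_GNUC_WARN_UNUSED_RESULT
    phase_set(phase_t *current, phase_t next)
    {
        if (next < *current) {
            /* the "serious bug": the protocol going backwards */
            fprintf(stderr, "phase going backwards %d => %d\n",
                    *current, next);
            return -1;
        }
        *current = next;
        return 0;
    }

    int main(void)
    {
        phase_t cur = PHASE_BEGIN;

        /* Most call sites in the patch treat the failure as fatal. */
        if (phase_set(&cur, PHASE_PERFORM) < 0)
            return 1;

        /* Call sites that deliberately drop the result silence the
         * warning explicitly instead of implicitly. */
        ignore_value(phase_set(&cur, PHASE_FINISH));
        return 0;
    }

This mirrors the two kinds of call sites in the diff below: most
callers now propagate the error, while the peer-to-peer Perform
phases wrap the call in ignore_value() to make ignoring the result
explicit.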
 src/qemu/qemu_migration.c | 58 ++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 22 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index edd3ac2a87..4c09caeace 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -147,9 +147,10 @@ qemuMigrationCheckPhase(virDomainObj *vm,
 
     if (phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED &&
         phase < priv->job.phase) {
-        VIR_ERROR(_("migration protocol going backwards %s => %s"),
-                  qemuMigrationJobPhaseTypeToString(priv->job.phase),
-                  qemuMigrationJobPhaseTypeToString(phase));
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("migration protocol going backwards %s => %s"),
+                       qemuMigrationJobPhaseTypeToString(priv->job.phase),
+                       qemuMigrationJobPhaseTypeToString(phase));
         return -1;
     }
 
@@ -157,22 +158,23 @@ qemuMigrationCheckPhase(virDomainObj *vm,
 }
 
 
-static void ATTRIBUTE_NONNULL(1)
+static int G_GNUC_WARN_UNUSED_RESULT
 qemuMigrationJobSetPhase(virDomainObj *vm,
                          qemuMigrationJobPhase phase)
 {
     if (qemuMigrationCheckPhase(vm, phase) < 0)
-        return;
+        return -1;
 
     qemuDomainObjSetJobPhase(vm, phase);
+    return 0;
 }
 
 
-static void ATTRIBUTE_NONNULL(1)
+static int G_GNUC_WARN_UNUSED_RESULT
 qemuMigrationJobStartPhase(virDomainObj *vm,
                            qemuMigrationJobPhase phase)
 {
-    qemuMigrationJobSetPhase(vm, phase);
+    return qemuMigrationJobSetPhase(vm, phase);
 }
 
 
@@ -2567,8 +2569,9 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver,
      * Otherwise we will start the async job later in the perform phase losing
      * change protection.
      */
-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT)
-        qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_BEGIN3);
+    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT &&
+        qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_BEGIN3) < 0)
+        return NULL;
 
     if (!qemuMigrationSrcIsAllowed(driver, vm, true, flags))
         return NULL;
@@ -3117,7 +3120,9 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver,
     if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN,
                               flags) < 0)
         goto cleanup;
-    qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PREPARE);
+
+    if (qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PREPARE) < 0)
+        goto stopjob;
 
     /* Domain starts inactive, even if the domain XML had an id field. */
     vm->def->id = -1;
@@ -3637,7 +3642,8 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver,
     else
         phase = QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED;
 
-    qemuMigrationJobSetPhase(vm, phase);
+    if (qemuMigrationJobSetPhase(vm, phase) < 0)
+        return -1;
 
     if (!(mig = qemuMigrationCookieParse(driver, vm->def, priv->origname,
                                          priv, cookiein, cookieinlen,
@@ -3722,7 +3728,9 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver,
     else
         phase = QEMU_MIGRATION_PHASE_CONFIRM3;
 
-    qemuMigrationJobStartPhase(vm, phase);
+    if (qemuMigrationJobStartPhase(vm, phase) < 0)
+        goto cleanup;
+
     virCloseCallbacksUnset(driver->closeCallbacks, vm,
                           qemuMigrationSrcCleanup);
     qemuDomainCleanupRemove(vm, qemuProcessCleanupMigrationJob);
@@ -4885,7 +4893,7 @@ qemuMigrationSrcPerformPeer2Peer2(virQEMUDriver *driver,
      * until the migration is complete.
      */
     VIR_DEBUG("Perform %p", sconn);
-    qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM2);
+    ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM2));
     if (flags & VIR_MIGRATE_TUNNELLED)
         ret = qemuMigrationSrcPerformTunnel(driver, vm, st, NULL,
                                             NULL, 0, NULL, NULL,
@@ -5129,7 +5137,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriver *driver,
      * confirm migration completion.
*/ VIR_DEBUG("Perform3 %p uri=3D%s", sconn, NULLSTR(uri)); - qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM3); + ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM= 3)); VIR_FREE(cookiein); cookiein =3D g_steal_pointer(&cookieout); cookieinlen =3D cookieoutlen; @@ -5154,7 +5162,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriver *driv= er, if (ret < 0) { virErrorPreserveLast(&orig_err); } else { - qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM3_DONE); + ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PER= FORM3_DONE)); } =20 /* If Perform returns < 0, then we need to cancel the VM @@ -5530,7 +5538,9 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, migParams, flags, dname, re= source, &v3proto); } else { - qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM2); + if (qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM2) < = 0) + goto endjob; + ret =3D qemuMigrationSrcPerformNative(driver, vm, persist_xml, uri= , cookiein, cookieinlen, cookieout, cookieoutlen, flags, resource, NULL, NULL, 0= , NULL, @@ -5622,7 +5632,9 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, return ret; } =20 - qemuMigrationJobStartPhase(vm, QEMU_MIGRATION_PHASE_PERFORM3); + if (qemuMigrationJobStartPhase(vm, QEMU_MIGRATION_PHASE_PERFORM3) < 0) + goto endjob; + virCloseCallbacksUnset(driver->closeCallbacks, vm, qemuMigrationSrcCleanup); =20 @@ -5636,7 +5648,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, goto endjob; } =20 - qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM3_DONE); + ignore_value(qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PERFORM= 3_DONE)); =20 if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuMigrationSrcCleanup) < 0) @@ -6203,9 +6215,10 @@ qemuMigrationDstFinish(virQEMUDriver *driver, =20 ignore_value(virTimeMillisNow(&timeReceived)); =20 - qemuMigrationJobStartPhase(vm, - v3proto ? QEMU_MIGRATION_PHASE_FINISH3 - : QEMU_MIGRATION_PHASE_FINISH2); + if (qemuMigrationJobStartPhase(vm, + v3proto ? QEMU_MIGRATION_PHASE_FINISH3 + : QEMU_MIGRATION_PHASE_FINISH2)= < 0) + goto cleanup; =20 qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); g_clear_pointer(&priv->job.completed, virDomainJobDataFree); @@ -6277,7 +6290,8 @@ qemuMigrationProcessUnattended(virQEMUDriver *driver, else phase =3D QEMU_MIGRATION_PHASE_CONFIRM3; =20 - qemuMigrationJobStartPhase(vm, phase); + if (qemuMigrationJobStartPhase(vm, phase) < 0) + return; =20 if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) qemuMigrationDstComplete(driver, vm, true, job, &priv->job); --=20 2.35.1