From: Jiri Denemark
To: libvir-list@redhat.com
Subject: [libvirt PATCH 16/80] qemu: Restore failed migration job on reconnect
Date: Tue, 10 May 2022 17:20:37 +0200

Since we keep the migration job active when post-copy migration fails,
we need to restore it when reconnecting to running domains.

Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
---
 src/qemu/qemu_process.c | 128 ++++++++++++++++++++++++++++++----------
 1 file changed, 96 insertions(+), 32 deletions(-)
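Note before the diff: the patch collapses the separate incoming/outgoing
recovery paths into a single qemuProcessRecoverMigration() dispatcher with
a three-way return contract (-1: kill the domain, 0: keep it running and
discard the job, 1: restore the job after a post-copy failure). The
minimal, self-contained C sketch below models just that contract; every
identifier in it is a hypothetical stand-in, not libvirt API.

/* Toy model of the three-way recovery contract introduced by this patch.
 * recover_phase() stands in for qemuProcessRecoverMigrationIn/Out(). */
#include <stdio.h>

enum recover_result {
    RECOVER_KILL = -1,    /* error: the domain will be killed */
    RECOVER_DISCARD = 0,  /* domain keeps running, migration job dropped */
    RECOVER_RESTORE = 1,  /* daemon restarted in post-copy: keep the job */
};

static enum recover_result
recover_phase(int postcopy_failed, int domain_running)
{
    if (postcopy_failed)
        return RECOVER_RESTORE;  /* job must survive for later recovery */
    if (!domain_running)
        return RECOVER_KILL;
    return RECOVER_DISCARD;
}

int main(void)
{
    switch (recover_phase(1, 1)) {
    case RECOVER_KILL:
        printf("killing domain\n");
        break;
    case RECOVER_RESTORE:
        /* the real code calls qemuProcessRestoreMigrationJob() here */
        printf("restoring migration job\n");
        break;
    case RECOVER_DISCARD:
        /* the real code calls qemuMigrationParamsReset() here */
        printf("resetting migration parameters\n");
        break;
    }
    return 0;
}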
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index e83668e088..3d73c716f1 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3456,20 +3456,48 @@ qemuProcessCleanupMigrationJob(virQEMUDriver *driver,
 }
 
 
+static void
+qemuProcessRestoreMigrationJob(virDomainObj *vm,
+                               qemuDomainJobObj *job)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    qemuDomainJobPrivate *jobPriv = job->privateData;
+    virDomainJobOperation op;
+    unsigned long long allowedJobs;
+
+    if (job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN) {
+        op = VIR_DOMAIN_JOB_OPERATION_MIGRATION_IN;
+        allowedJobs = VIR_JOB_NONE;
+    } else {
+        op = VIR_DOMAIN_JOB_OPERATION_MIGRATION_OUT;
+        allowedJobs = VIR_JOB_DEFAULT_MASK | JOB_MASK(VIR_JOB_MIGRATION_OP);
+    }
+
+    qemuDomainObjRestoreAsyncJob(vm, job->asyncJob, job->phase, op,
+                                 QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION,
+                                 VIR_DOMAIN_JOB_STATUS_PAUSED,
+                                 allowedJobs);
+
+    job->privateData = g_steal_pointer(&priv->job.privateData);
+    priv->job.privateData = jobPriv;
+    priv->job.apiFlags = job->apiFlags;
+
+    qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
+}
+
+
+/*
+ * Returns
+ *     -1 on error, the domain will be killed,
+ *      0 the domain should remain running with the migration job discarded,
+ *      1 the daemon was restarted during post-copy phase
+ */
 static int
 qemuProcessRecoverMigrationIn(virQEMUDriver *driver,
                               virDomainObj *vm,
-                              const qemuDomainJobObj *job,
-                              virDomainState state,
-                              int reason)
+                              qemuDomainJobObj *job,
+                              virDomainState state)
 {
-
-    qemuDomainJobPrivate *jobPriv = job->privateData;
-    bool postcopy = (state == VIR_DOMAIN_PAUSED &&
-                     reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED) ||
-                    (state == VIR_DOMAIN_RUNNING &&
-                     reason == VIR_DOMAIN_RUNNING_POSTCOPY);
-
     VIR_DEBUG("Active incoming migration in phase %s",
               qemuMigrationJobPhaseTypeToString(job->phase));
 
@@ -3506,32 +3534,37 @@ qemuProcessRecoverMigrationIn(virQEMUDriver *driver,
         /* migration finished, we started resuming the domain but didn't
          * confirm success or failure yet; killing it seems safest unless
          * we already started guest CPUs or we were in post-copy mode */
-        if (postcopy) {
+        if (virDomainObjIsPostcopy(vm, VIR_DOMAIN_JOB_OPERATION_MIGRATION_IN)) {
             qemuMigrationDstPostcopyFailed(vm);
-        } else if (state != VIR_DOMAIN_RUNNING) {
+            return 1;
+        }
+
+        if (state != VIR_DOMAIN_RUNNING) {
             VIR_DEBUG("Killing migrated domain %s", vm->def->name);
             return -1;
         }
         break;
     }
 
-    qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_NONE,
-                             jobPriv->migParams, job->apiFlags);
     return 0;
 }
 
+
+/*
+ * Returns
+ *     -1 on error, the domain will be killed,
+ *      0 the domain should remain running with the migration job discarded,
+ *      1 the daemon was restarted during post-copy phase
+ */
 static int
 qemuProcessRecoverMigrationOut(virQEMUDriver *driver,
                                virDomainObj *vm,
-                               const qemuDomainJobObj *job,
+                               qemuDomainJobObj *job,
                                virDomainState state,
                                int reason,
                                unsigned int *stopFlags)
 {
-    qemuDomainJobPrivate *jobPriv = job->privateData;
-    bool postcopy = state == VIR_DOMAIN_PAUSED &&
-                    (reason == VIR_DOMAIN_PAUSED_POSTCOPY ||
-                     reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED);
+    bool postcopy = virDomainObjIsPostcopy(vm, VIR_DOMAIN_JOB_OPERATION_MIGRATION_OUT);
     bool resume = false;
 
     VIR_DEBUG("Active outgoing migration in phase %s",
@@ -3571,8 +3604,10 @@ qemuProcessRecoverMigrationOut(virQEMUDriver *driver,
          * of Finish3 step; third party needs to check what to do next; in
          * post-copy mode we can use PAUSED_POSTCOPY_FAILED state for this */
-        if (postcopy)
+        if (postcopy) {
             qemuMigrationSrcPostcopyFailed(vm);
+            return 1;
+        }
         break;
 
     case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED:
@@ -3582,11 +3617,12 @@ qemuProcessRecoverMigrationOut(virQEMUDriver *driver,
          */
         if (postcopy) {
             qemuMigrationSrcPostcopyFailed(vm);
-        } else {
-            VIR_DEBUG("Resuming domain %s after failed migration",
-                      vm->def->name);
-            resume = true;
+            return 1;
         }
+
+        VIR_DEBUG("Resuming domain %s after failed migration",
+                  vm->def->name);
+        resume = true;
         break;
 
     case QEMU_MIGRATION_PHASE_CONFIRM3:
@@ -3610,15 +3646,49 @@ qemuProcessRecoverMigrationOut(virQEMUDriver *driver,
         }
     }
 
+    return 0;
+}
+
+
+static int
+qemuProcessRecoverMigration(virQEMUDriver *driver,
+                            virDomainObj *vm,
+                            qemuDomainJobObj *job,
+                            unsigned int *stopFlags)
+{
+    qemuDomainJobPrivate *jobPriv = job->privateData;
+    virDomainState state;
+    int reason;
+    int rc;
+
+    state = virDomainObjGetState(vm, &reason);
+
+    if (job->asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT) {
+        rc = qemuProcessRecoverMigrationOut(driver, vm, job,
+                                            state, reason, stopFlags);
+    } else {
+        rc = qemuProcessRecoverMigrationIn(driver, vm, job, state);
+    }
+
+    if (rc < 0)
+        return -1;
+
+    if (rc > 0) {
+        qemuProcessRestoreMigrationJob(vm, job);
+        return 0;
+    }
+
     qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_NONE,
                              jobPriv->migParams, job->apiFlags);
+
     return 0;
 }
 
+
 static int
 qemuProcessRecoverJob(virQEMUDriver *driver,
                       virDomainObj *vm,
-                      const qemuDomainJobObj *job,
+                      qemuDomainJobObj *job,
                       unsigned int *stopFlags)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
@@ -3636,14 +3706,8 @@ qemuProcessRecoverJob(virQEMUDriver *driver,
 
     switch (job->asyncJob) {
     case VIR_ASYNC_JOB_MIGRATION_OUT:
-        if (qemuProcessRecoverMigrationOut(driver, vm, job,
-                                           state, reason, stopFlags) < 0)
-            return -1;
-        break;
-
     case VIR_ASYNC_JOB_MIGRATION_IN:
-        if (qemuProcessRecoverMigrationIn(driver, vm, job,
-                                          state, reason) < 0)
+        if (qemuProcessRecoverMigration(driver, vm, job, stopFlags) < 0)
             return -1;
         break;
 
-- 
2.35.1
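Side note on the allowedJobs values used by qemuProcessRestoreMigrationJob()
above: incoming migration allows no concurrent jobs (VIR_JOB_NONE), while
outgoing migration allows the default jobs plus migration operations. The
sketch below illustrates the bitmask arithmetic with a toy enum; the
JOB_MASK definition mirrors the common libvirt pattern but is an assumption
here, as are the enum values and the contents of the default mask.

/* Toy model of the job-mask computation; all names are stand-ins. */
#include <stdio.h>

enum job {
    JOB_NONE = 0,  /* no job bit: mask 0 allows nothing */
    JOB_QUERY,
    JOB_DESTROY,
    JOB_MIGRATION_OP,
};

/* Assumed to match libvirt's JOB_MASK(): bit (job - 1), nothing for 0. */
#define JOB_MASK(job) ((job) == 0 ? 0 : 1 << ((job) - 1))

/* Stand-in for VIR_JOB_DEFAULT_MASK with a reduced set of jobs. */
#define JOB_DEFAULT_MASK (JOB_MASK(JOB_QUERY) | JOB_MASK(JOB_DESTROY))

int main(void)
{
    /* Incoming migration: no other job may start. */
    unsigned long long in_mask = JOB_NONE;

    /* Outgoing migration: default jobs plus migration operations. */
    unsigned long long out_mask =
        JOB_DEFAULT_MASK | JOB_MASK(JOB_MIGRATION_OP);

    printf("incoming allowed jobs: 0x%llx\n", in_mask);
    printf("outgoing allowed jobs: 0x%llx\n", out_mask);
    return 0;
}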