From: Jiri Denemark
To: libvir-list@redhat.com
Cc: Peter Krempa, Pavel Hrdina
Subject: [libvirt PATCH v2 38/81] qemu: Finish completed unattended migration
Date: Wed, 1 Jun 2022 14:49:38 +0200
Message-Id: <81afadffa4365c9e364398a6f806eb79f49ac175.1654087150.git.jdenemar@redhat.com>

So far, a migration could only be completed while a migration API was
running and waiting for the migration to finish. If no such API could be
called (because the connection that initiated the migration was broken),
the migration would just be aborted or left in a "don't know what to do"
state. But this will change soon: we will be able to successfully complete
such a migration once we get the corresponding event from QEMU. This is
specific to post-copy migration, where vCPUs are already running on the
destination and we're only waiting for all memory pages to be transferred.
Such a post-copy migration (which no one is actively watching) is called
an unattended migration.
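[Editor's sketch, not part of the patch] To make the flow concrete before
the diff: a "completed" migration event arriving for a failed post-copy
migration that no thread owns gets handed off to a worker, which picks the
right migration phase and finishes the job. The following compilable toy
model illustrates this; every type and function here is a simplified
stand-in for the real libvirt code shown in the diff below:

/* Toy model of the handling added by this patch; all names below are
 * stand-ins, not the real libvirt implementation. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { MIG_STATUS_ACTIVE, MIG_STATUS_COMPLETED } MigStatus;
typedef enum { JOB_MIGRATION_IN, JOB_MIGRATION_OUT } AsyncJob;

/* Stand-in for a domain whose post-copy migration was marked failed
 * because the controlling connection went away. */
typedef struct {
    const char *name;
    bool failedPostcopy;   /* models virDomainObjIsFailedPostcopy() */
    int asyncOwner;        /* 0 means no thread is watching the job */
    AsyncJob job;
} Domain;

/* Models qemuMigrationProcessUnattended(): pick the phase matching the
 * migration direction and finish the job. */
static void
processUnattended(Domain *dom, MigStatus status)
{
    if (status != MIG_STATUS_COMPLETED)
        return;

    const char *dir = dom->job == JOB_MIGRATION_IN ? "incoming" : "outgoing";
    const char *phase = dom->job == JOB_MIGRATION_IN ? "FINISH3" : "CONFIRM3";

    printf("unattended %s migration of %s: entering phase %s, finishing job\n",
           dir, dom->name, phase);
}

/* Models the new COMPLETED case in qemuProcessHandleMigrationStatus():
 * only a failed post-copy migration that nobody is watching is handed
 * off for unattended completion. */
static void
handleMigrationStatus(Domain *dom, MigStatus status)
{
    if (status == MIG_STATUS_COMPLETED &&
        dom->failedPostcopy && dom->asyncOwner == 0)
        processUnattended(dom, status); /* real code queues an event instead */
}

int
main(void)
{
    Domain dom = { "guest1", true, 0, JOB_MIGRATION_IN };
    handleMigrationStatus(&dom, MIG_STATUS_COMPLETED);
    return 0;
}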
Signed-off-by: Jiri Denemark
Reviewed-by: Peter Krempa
Reviewed-by: Pavel Hrdina
---
Notes:
    Version 2:
    - no change

 src/qemu/qemu_domain.c    |  1 +
 src/qemu/qemu_domain.h    |  1 +
 src/qemu/qemu_driver.c    |  5 +++++
 src/qemu/qemu_migration.c | 43 +++++++++++++++++++++++++++++++++++++--
 src/qemu/qemu_migration.h |  6 ++++++
 src/qemu/qemu_process.c   | 12 ++++++++++-
 6 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 5dee9c6f26..d04ec6cd0c 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11114,6 +11114,7 @@ qemuProcessEventFree(struct qemuProcessEvent *event)
         qemuMonitorMemoryDeviceSizeChangeFree(event->data);
         break;
     case QEMU_PROCESS_EVENT_PR_DISCONNECT:
+    case QEMU_PROCESS_EVENT_UNATTENDED_MIGRATION:
     case QEMU_PROCESS_EVENT_LAST:
         break;
     }
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index ce2dba499c..153dfe3a23 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -426,6 +426,7 @@ typedef enum {
     QEMU_PROCESS_EVENT_RDMA_GID_STATUS_CHANGED,
     QEMU_PROCESS_EVENT_GUEST_CRASHLOADED,
     QEMU_PROCESS_EVENT_MEMORY_DEVICE_SIZE_CHANGE,
+    QEMU_PROCESS_EVENT_UNATTENDED_MIGRATION,

     QEMU_PROCESS_EVENT_LAST
 } qemuProcessEventType;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 28cb454ab7..4edf5635c0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4307,6 +4307,11 @@ static void qemuProcessEventHandler(void *data, void *opaque)
     case QEMU_PROCESS_EVENT_MEMORY_DEVICE_SIZE_CHANGE:
         processMemoryDeviceSizeChange(driver, vm, processEvent->data);
         break;
+    case QEMU_PROCESS_EVENT_UNATTENDED_MIGRATION:
+        qemuMigrationProcessUnattended(driver, vm,
+                                       processEvent->action,
+                                       processEvent->status);
+        break;
     case QEMU_PROCESS_EVENT_LAST:
         break;
     }
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 95b69108dc..d427840d14 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -5811,8 +5811,11 @@ qemuMigrationDstComplete(virQEMUDriver *driver,

     qemuDomainSaveStatus(vm);

-    /* Guest is successfully running, so cancel previous auto destroy */
-    qemuProcessAutoDestroyRemove(driver, vm);
+    /* Guest is successfully running, so cancel previous auto destroy. There's
+     * nothing to remove when we are resuming post-copy migration.
+     */
+    if (!virDomainObjIsFailedPostcopy(vm))
+        qemuProcessAutoDestroyRemove(driver, vm);

     /* Remove completed stats for post-copy, everything but timing fields
      * is obsolete anyway.
@@ -6179,6 +6182,42 @@ qemuMigrationDstFinish(virQEMUDriver *driver,
 }


+void
+qemuMigrationProcessUnattended(virQEMUDriver *driver,
+                               virDomainObj *vm,
+                               virDomainAsyncJob job,
+                               qemuMonitorMigrationStatus status)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    qemuMigrationJobPhase phase;
+
+    if (!qemuMigrationJobIsActive(vm, job) ||
+        status != QEMU_MONITOR_MIGRATION_STATUS_COMPLETED)
+        return;
+
+    VIR_DEBUG("Unattended %s migration of domain %s successfully finished",
+              job == VIR_ASYNC_JOB_MIGRATION_IN ? "incoming" : "outgoing",
+              vm->def->name);
+
+    if (job == VIR_ASYNC_JOB_MIGRATION_IN)
+        phase = QEMU_MIGRATION_PHASE_FINISH3;
+    else
+        phase = QEMU_MIGRATION_PHASE_CONFIRM3;
+
+    qemuMigrationJobStartPhase(vm, phase);
+
+    if (job == VIR_ASYNC_JOB_MIGRATION_IN)
+        qemuMigrationDstComplete(driver, vm, true, job, &priv->job);
+    else
+        qemuMigrationSrcComplete(driver, vm, job);
+
+    qemuMigrationJobFinish(vm);
+
+    if (!virDomainObjIsActive(vm))
+        qemuDomainRemoveInactive(driver, vm);
+}
+
+
 /* Helper function called while vm is active. */
 int
 qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm,
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index c099cf99cf..eeb69a52bf 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -211,6 +211,12 @@ qemuMigrationSrcComplete(virQEMUDriver *driver,
                          virDomainObj *vm,
                          virDomainAsyncJob asyncJob);

+void
+qemuMigrationProcessUnattended(virQEMUDriver *driver,
+                               virDomainObj *vm,
+                               virDomainAsyncJob job,
+                               qemuMonitorMigrationStatus status);
+
 bool
 qemuMigrationSrcIsAllowed(virQEMUDriver *driver,
                           virDomainObj *vm,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d3769de496..97d84893be 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1549,12 +1549,22 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_GNUC_UNUSED,
         }
         break;

+    case QEMU_MONITOR_MIGRATION_STATUS_COMPLETED:
+        /* A post-copy migration marked as failed when reconnecting to a domain
+         * with running migration may actually still be running, but we're not
+         * watching it in any thread. Let's make sure the migration is properly
+         * finished in case we get a "completed" event.
+         */
+        if (virDomainObjIsFailedPostcopy(vm) && priv->job.asyncOwner == 0)
+            qemuProcessEventSubmit(vm, QEMU_PROCESS_EVENT_UNATTENDED_MIGRATION,
+                                   priv->job.asyncJob, status, NULL);
+        break;
+
     case QEMU_MONITOR_MIGRATION_STATUS_INACTIVE:
     case QEMU_MONITOR_MIGRATION_STATUS_SETUP:
     case QEMU_MONITOR_MIGRATION_STATUS_ACTIVE:
     case QEMU_MONITOR_MIGRATION_STATUS_PRE_SWITCHOVER:
     case QEMU_MONITOR_MIGRATION_STATUS_DEVICE:
-    case QEMU_MONITOR_MIGRATION_STATUS_COMPLETED:
     case QEMU_MONITOR_MIGRATION_STATUS_ERROR:
     case QEMU_MONITOR_MIGRATION_STATUS_CANCELLING:
     case QEMU_MONITOR_MIGRATION_STATUS_CANCELLED:
--
2.35.1