From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com
Date: Thu, 11 May 2017 16:32:31 +0200
Message-Id: <1494513181-7900-29-git-send-email-kwolf@redhat.com>
In-Reply-To: <1494513181-7900-1-git-send-email-kwolf@redhat.com>
References: <1494513181-7900-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 28/58] migration: Unify block node activation error handling

Migration code activates all block driver nodes on the destination when
the migration completes. It does so by calling bdrv_invalidate_cache_all()
and blk_resume_after_migration().

There is one code path for precopy and one for postcopy migration,
resulting in four function calls, which used to have three different
failure modes. This patch unifies the behaviour so that failure to
activate all block nodes is non-fatal, but the error message is logged
and the VM isn't automatically started. 'cont' will retry activating the
block nodes.

Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
---
 migration/migration.c | 16 +++++-----------
 migration/savevm.c    | 12 +++++-------
 qmp.c                 | 18 +++++++++---------
 3 files changed, 19 insertions(+), 27 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 799952c..04af719 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -338,20 +338,14 @@ static void process_incoming_migration_bh(void *opaque)
     Error *local_err = NULL;
     MigrationIncomingState *mis = opaque;
 
-    /* Make sure all file formats flush their mutable metadata */
+    /* Make sure all file formats flush their mutable metadata.
+     * If we get an error here, just don't restart the VM yet. */
     bdrv_invalidate_cache_all(&local_err);
-    if (local_err) {
-        migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
-                          MIGRATION_STATUS_FAILED);
-        error_report_err(local_err);
-        migrate_decompress_threads_join();
-        exit(EXIT_FAILURE);
+    if (!local_err) {
+        blk_resume_after_migration(&local_err);
     }
-
-    /* If we get an error here, just don't restart the VM yet. */
-    blk_resume_after_migration(&local_err);
     if (local_err) {
-        error_free(local_err);
+        error_report_err(local_err);
         local_err = NULL;
         autostart = false;
     }
diff --git a/migration/savevm.c b/migration/savevm.c
index 352a8f2..3ca8d11 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1612,16 +1612,14 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
 
     qemu_announce_self();
 
-    /* Make sure all file formats flush their mutable metadata */
+    /* Make sure all file formats flush their mutable metadata.
+     * If we get an error here, just don't restart the VM yet. */
     bdrv_invalidate_cache_all(&local_err);
-    if (local_err) {
-        error_report_err(local_err);
+    if (!local_err) {
+        blk_resume_after_migration(&local_err);
     }
-
-    /* If we get an error here, just don't restart the VM yet. */
-    blk_resume_after_migration(&local_err);
     if (local_err) {
-        error_free(local_err);
+        error_report_err(local_err);
         local_err = NULL;
         autostart = false;
     }
diff --git a/qmp.c b/qmp.c
index ab74cd7..25b5050 100644
--- a/qmp.c
+++ b/qmp.c
@@ -196,15 +196,15 @@ void qmp_cont(Error **errp)
     }
 
     /* Continuing after completed migration. Images have been inactivated to
-     * allow the destination to take control. Need to get control back now. */
-    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
-        runstate_check(RUN_STATE_POSTMIGRATE))
-    {
-        bdrv_invalidate_cache_all(&local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
-            return;
-        }
+     * allow the destination to take control. Need to get control back now.
+     *
+     * If there are no inactive block nodes (e.g. because the VM was just
+     * paused rather than completing a migration), bdrv_inactivate_all() simply
+     * doesn't do anything. */
+    bdrv_invalidate_cache_all(&local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
     }
 
     blk_resume_after_migration(&local_err);
-- 
1.8.3.1
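
For readers skimming the pull request, the unified pattern that both the
precopy and postcopy completion paths now share boils down to the sketch
below. It is not part of the patch itself; the function and variable names
are simply lifted from the hunks above.

    /* Sketch of the shared activation/error path after this patch:
     * activate the block nodes, and if either step fails, report the
     * error and keep the VM stopped instead of exiting the process.
     * 'cont' can retry the activation later. */
    Error *local_err = NULL;

    bdrv_invalidate_cache_all(&local_err);
    if (!local_err) {
        blk_resume_after_migration(&local_err);
    }
    if (local_err) {
        error_report_err(local_err);    /* non-fatal: just log it */
        local_err = NULL;
        autostart = false;              /* don't auto-start the VM */
    }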