From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, armband@enea.com, jcody@redhat.com, Ciprian.Barbu@enea.com, qemu-devel@nongnu.org, mreitz@redhat.com, Alexandru.Avadanii@enea.com, pbonzini@redhat.com
Date: Thu, 6 Apr 2017 19:59:53 +0200
Message-Id: <1491501593-23613-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH for-2.9] block: Ignore guest dev permissions during incoming migration

Usually guest devices don't like other writers to the same image, so they
use blk_set_perm() to prevent this from happening. In the migration phase
before the VM is actually running, though, they don't have a problem with
writes to the image. On the other hand, storage migration needs to be able
to write to the image in this phase, so the restrictive blk_set_perm()
call of qdev devices breaks it.

This patch flags all BlockBackends with a qdev device as
blk->disable_perm during incoming migration, which means that the
requested permissions are stored in the BlockBackend, but not actually
applied to its root node yet.
Once migration has finished and the VM should be resumed, the
permissions are applied. If they cannot be applied (e.g. because the NBD
server used for block migration hasn't been shut down), resuming the VM
fails.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Kashyap Chamarthy
---
RFC -> v1:
- s/blk_next/blk_all_next/g [Max]

 block/block-backend.c | 40 +++++++++++++++++++++++++++++++++++++++-
 include/block/block.h |  2 ++
 migration/migration.c |  8 ++++++++
 qmp.c                 |  6 ++++++
 4 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 0b63773..18ece99 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -61,6 +61,7 @@ struct BlockBackend {

     uint64_t perm;
     uint64_t shared_perm;
+    bool disable_perm;

     bool allow_write_beyond_eof;

@@ -578,7 +579,7 @@ int blk_set_perm(BlockBackend *blk, uint64_t perm, uint64_t shared_perm,
 {
     int ret;

-    if (blk->root) {
+    if (blk->root && !blk->disable_perm) {
         ret = bdrv_child_try_set_perm(blk->root, perm, shared_perm, errp);
         if (ret < 0) {
             return ret;
@@ -597,15 +598,52 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
     *shared_perm = blk->shared_perm;
 }

+/*
+ * Notifies the user of all BlockBackends that migration has completed. qdev
+ * devices can tighten their permissions in response (specifically revoke
+ * shared write permissions that we needed for storage migration).
+ *
+ * If an error is returned, the VM cannot be allowed to be resumed.
+ */
+void blk_resume_after_migration(Error **errp)
+{
+    BlockBackend *blk;
+    Error *local_err = NULL;
+
+    for (blk = blk_all_next(NULL); blk; blk = blk_all_next(blk)) {
+        if (!blk->disable_perm) {
+            continue;
+        }
+
+        blk->disable_perm = false;
+
+        blk_set_perm(blk, blk->perm, blk->shared_perm, &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
+            blk->disable_perm = true;
+            return;
+        }
+    }
+}
+
 static int blk_do_attach_dev(BlockBackend *blk, void *dev)
 {
     if (blk->dev) {
         return -EBUSY;
     }
+
+    /* While migration is still incoming, we don't need to apply the
+     * permissions of guest device BlockBackends. We might still have a block
+     * job or NBD server writing to the image for storage migration. */
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        blk->disable_perm = true;
+    }
+
     blk_ref(blk);
     blk->dev = dev;
     blk->legacy_dev = false;
     blk_iostatus_reset(blk);
+
     return 0;
 }

diff --git a/include/block/block.h b/include/block/block.h
index 5149260..3e09222 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -366,6 +366,8 @@ void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp);
 void bdrv_invalidate_cache_all(Error **errp);
 int bdrv_inactivate_all(void);

+void blk_resume_after_migration(Error **errp);
+
 /* Ensure contents are flushed to disk. */
 int bdrv_flush(BlockDriverState *bs);
 int coroutine_fn bdrv_co_flush(BlockDriverState *bs);

diff --git a/migration/migration.c b/migration/migration.c
index 54060f7..ad4036f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -349,6 +349,14 @@ static void process_incoming_migration_bh(void *opaque)
         exit(EXIT_FAILURE);
     }

+    /* If we get an error here, just don't restart the VM yet. */
+    blk_resume_after_migration(&local_err);
+    if (local_err) {
+        error_free(local_err);
+        local_err = NULL;
+        autostart = false;
+    }
+
     /*
      * This must happen after all error conditions are dealt with and
      * we're sure the VM is going to be running on this host.
diff --git a/qmp.c b/qmp.c
index fa82b59..a744e44 100644
--- a/qmp.c
+++ b/qmp.c
@@ -207,6 +207,12 @@ void qmp_cont(Error **errp)
         }
     }

+    blk_resume_after_migration(&local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
     if (runstate_check(RUN_STATE_INMIGRATE)) {
         autostart = 1;
     } else {
--
1.8.3.1