From: Fam Zheng <famz@redhat.com>
To: qemu-devel@nongnu.org
Date: Wed, 29 Nov 2017 22:49:54 +0800
Message-Id: <20171129144956.11409-8-famz@redhat.com>
In-Reply-To: <20171129144956.11409-1-famz@redhat.com>
References: <20171129144956.11409-1-famz@redhat.com>
Subject: [Qemu-devel] [PATCH RFC 7/9] block: Switch to use AIO drained begin/end API
Cc: Kevin Wolf, Fam Zheng, qemu-block@nongnu.org, jcody@redhat.com, Max Reitz, Stefan Hajnoczi, pbonzini@redhat.com

Instead of recursively applying the "disable/enable external requests" operations across the graph, switch to AioContext's API to disable/enable requests on the whole AioContext at once. Strictly speaking this is a bit more than necessary, but since all drained sections are short, it is not a big problem.

Drained end can get away with just that. The other half of drained begin is waiting for in-flight requests to finish, which we can do with BDRV_POLL_WHILE() in a loop.
Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/io.c | 116 ++++++--------------------------------------------------------
 1 file changed, 10 insertions(+), 106 deletions(-)

diff --git a/block/io.c b/block/io.c
index 7f07972489..914037b21a 100644
--- a/block/io.c
+++ b/block/io.c
@@ -40,28 +40,6 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags);
 
-void bdrv_parent_drained_begin(BlockDriverState *bs)
-{
-    BdrvChild *c;
-
-    QLIST_FOREACH(c, &bs->parents, next_parent) {
-        if (c->role->drained_begin) {
-            c->role->drained_begin(c);
-        }
-    }
-}
-
-void bdrv_parent_drained_end(BlockDriverState *bs)
-{
-    BdrvChild *c;
-
-    QLIST_FOREACH(c, &bs->parents, next_parent) {
-        if (c->role->drained_end) {
-            c->role->drained_end(c);
-        }
-    }
-}
-
 static void bdrv_merge_limits(BlockLimits *dst, const BlockLimits *src)
 {
     dst->opt_transfer = MAX(dst->opt_transfer, src->opt_transfer);
@@ -141,71 +119,6 @@ typedef struct {
     bool begin;
 } BdrvCoDrainData;
 
-static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
-{
-    BdrvCoDrainData *data = opaque;
-    BlockDriverState *bs = data->bs;
-
-    if (data->begin) {
-        bs->drv->bdrv_co_drain_begin(bs);
-    } else {
-        bs->drv->bdrv_co_drain_end(bs);
-    }
-
-    /* Set data->done before reading bs->wakeup. */
-    atomic_mb_set(&data->done, true);
-    bdrv_wakeup(bs);
-}
-
-static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
-{
-    BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
-
-    if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
-            (!begin && !bs->drv->bdrv_co_drain_end)) {
-        return;
-    }
-
-    data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
-    bdrv_coroutine_enter(bs, data.co);
-    BDRV_POLL_WHILE(bs, !data.done);
-}
-
-static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
-{
-    BdrvChild *child, *tmp;
-    bool waited;
-
-    /* Ensure any pending metadata writes are submitted to bs->file. */
-    bdrv_drain_invoke(bs, begin);
-
-    /* Wait for drained requests to finish */
-    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
-
-    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
-        BlockDriverState *bs = child->bs;
-        bool in_main_loop =
-            qemu_get_current_aio_context() == qemu_get_aio_context();
-        assert(bs->refcnt > 0);
-        if (in_main_loop) {
-            /* In case the recursive bdrv_drain_recurse processes a
-             * block_job_defer_to_main_loop BH and modifies the graph,
-             * let's hold a reference to bs until we are done.
-             *
-             * IOThread doesn't have such a BH, and it is not safe to call
-             * bdrv_unref without BQL, so skip doing it there.
-             */
-            bdrv_ref(bs);
-        }
-        waited |= bdrv_drain_recurse(bs, begin);
-        if (in_main_loop) {
-            bdrv_unref(bs);
-        }
-    }
-
-    return waited;
-}
-
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
@@ -256,12 +169,13 @@ void bdrv_drained_begin(BlockDriverState *bs)
         return;
     }
 
-    if (atomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
-        bdrv_parent_drained_begin(bs);
+    if (atomic_fetch_inc(&bs->quiesce_counter) > 0) {
+        return;
+    }
+    aio_context_drained_begin(bdrv_get_aio_context(bs));
+    while (BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0)) {
+        /* Loop until no progress is made. */
     }
-
-    bdrv_drain_recurse(bs, true);
 }
 
 void bdrv_drained_end(BlockDriverState *bs)
@@ -275,9 +189,7 @@ void bdrv_drained_end(BlockDriverState *bs)
         return;
     }
 
-    bdrv_parent_drained_end(bs);
-    bdrv_drain_recurse(bs, false);
-    aio_enable_external(bdrv_get_aio_context(bs));
+    aio_context_drained_end(bdrv_get_aio_context(bs));
 }
 
 /*
@@ -324,14 +236,11 @@ void bdrv_drain_all_begin(void)
     BdrvNextIterator it;
     GSList *aio_ctxs = NULL, *ctx;
 
-    block_job_pause_all();
-
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        bdrv_parent_drained_begin(bs);
-        aio_disable_external(aio_context);
+        aio_context_drained_begin(aio_context);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -347,14 +256,13 @@ void bdrv_drain_all_begin(void)
      */
     while (waited) {
         waited = false;
-
         for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
             AioContext *aio_context = ctx->data;
 
             aio_context_acquire(aio_context);
             for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                 if (aio_context == bdrv_get_aio_context(bs)) {
-                    waited |= bdrv_drain_recurse(bs, true);
+                    waited |= BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
                 }
             }
             aio_context_release(aio_context);
@@ -373,13 +281,9 @@ void bdrv_drain_all_end(void)
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        aio_enable_external(aio_context);
-        bdrv_parent_drained_end(bs);
-        bdrv_drain_recurse(bs, false);
+        aio_context_drained_end(aio_context);
         aio_context_release(aio_context);
     }
-
-    block_job_resume_all();
 }
 
 void bdrv_drain_all(void)
-- 
2.14.3