From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:22 +0200
Message-Id: <20180411163940.2523-2-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 01/19] test-bdrv-drain: bdrv_drain() works with cross-AioContext events

As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext
we're polling (which it always is for bdrv_drain()) so that aio_poll()
is woken up in the end.

Add a test case that shows that it works. Remove the comment in
bdrv_drain() that claims otherwise.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c              |   4 --
 tests/test-bdrv-drain.c | 187 +++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 186 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index bd9a19a9c4..28e71221b0 100644
--- a/block/io.c
+++ b/block/io.c
@@ -369,10 +369,6 @@ void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent)
  *
  * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
  * AioContext.
- *
- * Only this BlockDriverState's AioContext is run, so in-flight requests must
- * not depend on events in other AioContexts. In that case, use
- * bdrv_drain_all() instead.
  */
 void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
 {
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 7673de1062..29634102d8 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -27,9 +27,13 @@
 #include "block/blockjob_int.h"
 #include "sysemu/block-backend.h"
 #include "qapi/error.h"
+#include "iothread.h"
+
+static QemuEvent done_event;
 
 typedef struct BDRVTestState {
     int drain_count;
+    AioContext *bh_indirection_ctx;
 } BDRVTestState;
 
 static void coroutine_fn bdrv_test_co_drain_begin(BlockDriverState *bs)
@@ -50,16 +54,29 @@ static void bdrv_test_close(BlockDriverState *bs)
     g_assert_cmpint(s->drain_count, >, 0);
 }
 
+static void co_reenter_bh(void *opaque)
+{
+    aio_co_wake(opaque);
+}
+
 static int coroutine_fn bdrv_test_co_preadv(BlockDriverState *bs,
                                             uint64_t offset, uint64_t bytes,
                                             QEMUIOVector *qiov, int flags)
 {
+    BDRVTestState *s = bs->opaque;
+
     /* We want this request to stay until the polling loop in drain waits for
      * it to complete. We need to sleep a while as bdrv_drain_invoke() comes
      * first and polls its result, too, but it shouldn't accidentally complete
      * this request yet. */
     qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000);
 
+    if (s->bh_indirection_ctx) {
+        aio_bh_schedule_oneshot(s->bh_indirection_ctx, co_reenter_bh,
+                                qemu_coroutine_self());
+        qemu_coroutine_yield();
+    }
+
     return 0;
 }
 
@@ -490,6 +507,164 @@ static void test_graph_change(void)
     blk_unref(blk_b);
 }
 
+struct test_iothread_data {
+    BlockDriverState *bs;
+    enum drain_type drain_type;
+    int *aio_ret;
+};
+
+static void test_iothread_drain_entry(void *opaque)
+{
+    struct test_iothread_data *data = opaque;
+
+    aio_context_acquire(bdrv_get_aio_context(data->bs));
+    do_drain_begin(data->drain_type, data->bs);
+    g_assert_cmpint(*data->aio_ret, ==, 0);
+    do_drain_end(data->drain_type, data->bs);
+    aio_context_release(bdrv_get_aio_context(data->bs));
+
+    qemu_event_set(&done_event);
+}
+
+static void test_iothread_aio_cb(void *opaque, int ret)
+{
+    int *aio_ret = opaque;
+    *aio_ret = ret;
+    qemu_event_set(&done_event);
+}
+
+/*
+ * Starts an AIO request on a BDS that runs in the AioContext of iothread 1.
+ * The request involves a BH on iothread 2 before it can complete.
+ *
+ * @drain_thread = 0 means that do_drain_begin/end are called from the main
+ * thread, @drain_thread = 1 means that they are called from iothread 1. Drain
+ * for this BDS cannot be called from iothread 2 because only the main thread
+ * may do cross-AioContext polling.
+ */
+static void test_iothread_common(enum drain_type drain_type, int drain_thread)
+{
+    BlockBackend *blk;
+    BlockDriverState *bs;
+    BDRVTestState *s;
+    BlockAIOCB *acb;
+    int aio_ret;
+    struct test_iothread_data data;
+
+    IOThread *a = iothread_new();
+    IOThread *b = iothread_new();
+    AioContext *ctx_a = iothread_get_aio_context(a);
+    AioContext *ctx_b = iothread_get_aio_context(b);
+
+    QEMUIOVector qiov;
+    struct iovec iov = {
+        .iov_base = NULL,
+        .iov_len = 0,
+    };
+    qemu_iovec_init_external(&qiov, &iov, 1);
+
+    /* bdrv_drain_all() may only be called from the main loop thread */
+    if (drain_type == BDRV_DRAIN_ALL && drain_thread != 0) {
+        goto out;
+    }
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs = bdrv_new_open_driver(&bdrv_test, "test-node", BDRV_O_RDWR,
+                              &error_abort);
+    s = bs->opaque;
+    blk_insert_bs(blk, bs, &error_abort);
+
+    blk_set_aio_context(blk, ctx_a);
+    aio_context_acquire(ctx_a);
+
+    s->bh_indirection_ctx = ctx_b;
+
+    aio_ret = -EINPROGRESS;
+    if (drain_thread == 0) {
+        acb = blk_aio_preadv(blk, 0, &qiov, 0, test_iothread_aio_cb, &aio_ret);
+    } else {
+        acb = blk_aio_preadv(blk, 0, &qiov, 0, aio_ret_cb, &aio_ret);
+    }
+    g_assert(acb != NULL);
+    g_assert_cmpint(aio_ret, ==, -EINPROGRESS);
+
+    aio_context_release(ctx_a);
+
+    data = (struct test_iothread_data) {
+        .bs         = bs,
+        .drain_type = drain_type,
+        .aio_ret    = &aio_ret,
+    };
+
+    switch (drain_thread) {
+    case 0:
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_acquire(ctx_a);
+        }
+
+        /* The request is running on the IOThread a. Draining its block device
+         * will make sure that it has completed as far as the BDS is concerned,
+         * but the drain in this thread can continue immediately after
+         * bdrv_dec_in_flight() and aio_ret might be assigned only slightly
+         * later. */
+        qemu_event_reset(&done_event);
+        do_drain_begin(drain_type, bs);
+        g_assert_cmpint(bs->in_flight, ==, 0);
+
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_release(ctx_a);
+        }
+        qemu_event_wait(&done_event);
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_acquire(ctx_a);
+        }
+
+        g_assert_cmpint(aio_ret, ==, 0);
+        do_drain_end(drain_type, bs);
+
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_release(ctx_a);
+        }
+        break;
+    case 1:
+        qemu_event_reset(&done_event);
+        aio_bh_schedule_oneshot(ctx_a, test_iothread_drain_entry, &data);
+        qemu_event_wait(&done_event);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    aio_context_acquire(ctx_a);
+    blk_set_aio_context(blk, qemu_get_aio_context());
+    aio_context_release(ctx_a);
+
+    bdrv_unref(bs);
+    blk_unref(blk);
+
+out:
+    iothread_join(a);
+    iothread_join(b);
+}
+
+static void test_iothread_drain_all(void)
+{
+    test_iothread_common(BDRV_DRAIN_ALL, 0);
+    test_iothread_common(BDRV_DRAIN_ALL, 1);
+}
+
+static void test_iothread_drain(void)
+{
+    test_iothread_common(BDRV_DRAIN, 0);
+    test_iothread_common(BDRV_DRAIN, 1);
+}
+
+static void test_iothread_drain_subtree(void)
+{
+    test_iothread_common(BDRV_SUBTREE_DRAIN, 0);
+    test_iothread_common(BDRV_SUBTREE_DRAIN, 1);
+}
+
 
 typedef struct TestBlockJob {
     BlockJob common;
@@ -613,10 +788,13 @@ static void test_blockjob_drain_subtree(void)
 
 int main(int argc, char **argv)
 {
+    int ret;
+
     bdrv_init();
     qemu_init_main_loop(&error_abort);
 
     g_test_init(&argc, &argv, NULL);
+    qemu_event_init(&done_event, false);
 
     g_test_add_func("/bdrv-drain/driver-cb/drain_all", test_drv_cb_drain_all);
     g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
@@ -643,10 +821,17 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/multiparent", test_multiparent);
     g_test_add_func("/bdrv-drain/graph-change", test_graph_change);
 
+    g_test_add_func("/bdrv-drain/iothread/drain_all", test_iothread_drain_all);
+    g_test_add_func("/bdrv-drain/iothread/drain", test_iothread_drain);
+    g_test_add_func("/bdrv-drain/iothread/drain_subtree",
+                    test_iothread_drain_subtree);
+
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
     g_test_add_func("/bdrv-drain/blockjob/drain_subtree",
                     test_blockjob_drain_subtree);
 
-    return g_test_run();
+    ret = g_test_run();
+    qemu_event_destroy(&done_event);
+    return ret;
 }
-- 
2.13.6
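The scenario this patch exercises can be reduced to a toy model: a request completes through a detour in a second event loop while the drainer polls only its own loop. The sketch below is standalone C with pthreads, not QEMU code; all names (Ctx, ctx_post, ctx_poll, ...) are invented for illustration. It terminates for the same reason the commit message gives: the final completion is posted back to the context that is being polled.

    /* Toy model of cross-context drain: two "contexts" are one-slot work
     * queues, and "drain" polls only the queue it owns. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Ctx {
        pthread_mutex_t lock;
        void (*bh)(void *);   /* at most one pending "bottom half" */
        void *opaque;
    } Ctx;

    static Ctx ctx_a = { PTHREAD_MUTEX_INITIALIZER };
    static Ctx ctx_b = { PTHREAD_MUTEX_INITIALIZER };
    static int in_flight = 1; /* one request, rooted in ctx_a */

    static void ctx_post(Ctx *ctx, void (*bh)(void *), void *opaque)
    {
        pthread_mutex_lock(&ctx->lock);
        ctx->bh = bh;
        ctx->opaque = opaque;
        pthread_mutex_unlock(&ctx->lock);
    }

    /* Run one pending BH, if any; returns true if progress was made. */
    static bool ctx_poll(Ctx *ctx)
    {
        void (*bh)(void *);
        void *opaque;

        pthread_mutex_lock(&ctx->lock);
        bh = ctx->bh;
        opaque = ctx->opaque;
        ctx->bh = NULL;
        pthread_mutex_unlock(&ctx->lock);

        if (bh) {
            bh(opaque);
            return true;
        }
        return false;
    }

    static void complete_in_a(void *opaque)
    {
        in_flight = 0;        /* completion lands in the polled context */
    }

    static void detour_in_b(void *opaque)
    {
        /* The request takes a detour through context B, but its final
         * completion is posted back to A, which drain is polling. */
        ctx_post(&ctx_a, complete_in_a, NULL);
    }

    static void *thread_b(void *arg)
    {
        while (!ctx_poll(&ctx_b)) {
            /* spin until the detour BH arrives */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tb;
        pthread_create(&tb, NULL, thread_b, NULL);

        ctx_post(&ctx_b, detour_in_b, NULL); /* request hops to B */

        /* "drain": poll only our own context until the request is done */
        while (in_flight) {
            ctx_poll(&ctx_a);
        }

        pthread_join(tb, NULL);
        printf("drained, in_flight == %d\n", in_flight);
        return 0;
    }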
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:23 +0200
Message-Id: <20180411163940.2523-3-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 02/19] block: Use bdrv_do_drain_begin/end in bdrv_drain_all()

bdrv_do_drain_begin/end() already implement everything that
bdrv_drain_all_begin/end() need and currently still do manually:
disabling external events, calling the parent drain callbacks, and
calling the block driver callbacks.

They also do two more things:

The first is incrementing bs->quiesce_counter. bdrv_drain_all() already
stood out in the test case by behaving differently from the other drain
variants. Adding this is not only safe, but in fact a bug fix.

The second is calling bdrv_drain_recurse(). We already do that later in
the same function in a loop, so doing an early first iteration doesn't
hurt.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c              | 10 ++--------
 tests/test-bdrv-drain.c | 14 ++++----------
 2 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/block/io.c b/block/io.c
index 28e71221b0..cad59db2f4 100644
--- a/block/io.c
+++ b/block/io.c
@@ -412,11 +412,8 @@ void bdrv_drain_all_begin(void)
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
-        /* Stop things in parent-to-child order */
         aio_context_acquire(aio_context);
-        aio_disable_external(aio_context);
-        bdrv_parent_drained_begin(bs, NULL);
-        bdrv_drain_invoke(bs, true, true);
+        bdrv_do_drained_begin(bs, true, NULL);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -457,11 +454,8 @@ void bdrv_drain_all_end(void)
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
-        /* Re-enable things in child-to-parent order */
         aio_context_acquire(aio_context);
-        bdrv_drain_invoke(bs, false, true);
-        bdrv_parent_drained_end(bs, NULL);
-        aio_enable_external(aio_context);
+        bdrv_do_drained_end(bs, true, NULL);
         aio_context_release(aio_context);
     }
 }
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 29634102d8..cd870609c5 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -276,8 +276,7 @@ static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 
 static void test_quiesce_drain_all(void)
 {
-    // XXX drain_all doesn't quiesce
-    //test_quiesce_common(BDRV_DRAIN_ALL, true);
+    test_quiesce_common(BDRV_DRAIN_ALL, true);
 }
 
 static void test_quiesce_drain(void)
@@ -319,12 +318,7 @@ static void test_nested(void)
 
     for (outer = 0; outer < DRAIN_TYPE_MAX; outer++) {
         for (inner = 0; inner < DRAIN_TYPE_MAX; inner++) {
-            /* XXX bdrv_drain_all() doesn't increase the quiesce_counter */
-            int bs_quiesce      = (outer != BDRV_DRAIN_ALL) +
-                                  (inner != BDRV_DRAIN_ALL);
-            int backing_quiesce = (outer == BDRV_SUBTREE_DRAIN) +
-                                  (inner == BDRV_SUBTREE_DRAIN);
-            int backing_cb_cnt  = (outer != BDRV_DRAIN) +
+            int backing_quiesce = (outer != BDRV_DRAIN) +
                                   (inner != BDRV_DRAIN);
 
             g_assert_cmpint(bs->quiesce_counter, ==, 0);
@@ -335,10 +329,10 @@ static void test_nested(void)
             do_drain_begin(outer, bs);
             do_drain_begin(inner, bs);
 
-            g_assert_cmpint(bs->quiesce_counter, ==, bs_quiesce);
+            g_assert_cmpint(bs->quiesce_counter, ==, 2);
             g_assert_cmpint(backing->quiesce_counter, ==, backing_quiesce);
             g_assert_cmpint(s->drain_count, ==, 2);
-            g_assert_cmpint(backing_s->drain_count, ==, backing_cb_cnt);
+            g_assert_cmpint(backing_s->drain_count, ==, backing_quiesce);
 
             do_drain_end(inner, bs);
             do_drain_end(outer, bs);
-- 
2.13.6
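The quiesce-counter behaviour that the updated test_nested() now asserts (a counter of 2 for any pair of nested drain variants) boils down to a counter with side effects on its 0<->1 transitions. A minimal standalone sketch with invented types, assuming the semantics described in the commit message above:

    #include <assert.h>
    #include <stdbool.h>

    typedef struct Node {
        int quiesce_counter;
        bool external_enabled;
    } Node;

    static void drained_begin(Node *n)
    {
        if (n->quiesce_counter++ == 0) {
            n->external_enabled = false;  /* aio_disable_external() in QEMU */
        }
    }

    static void drained_end(Node *n)
    {
        assert(n->quiesce_counter > 0);
        if (--n->quiesce_counter == 0) {
            n->external_enabled = true;   /* aio_enable_external() in QEMU */
        }
    }

    int main(void)
    {
        Node n = { 0, true };

        /* Nested sections, e.g. drain_all inside drain: the counter reaches
         * 2 and external events stay disabled until the outermost end. */
        drained_begin(&n);
        drained_begin(&n);
        assert(n.quiesce_counter == 2 && !n.external_enabled);

        drained_end(&n);
        assert(n.quiesce_counter == 1 && !n.external_enabled);

        drained_end(&n);
        assert(n.quiesce_counter == 0 && n.external_enabled);
        return 0;
    }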
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:24 +0200
Message-Id: <20180411163940.2523-4-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 03/19] block: Remove 'recursive' parameter from bdrv_drain_invoke()

All callers pass false for the 'recursive' parameter now. Remove it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/block/io.c b/block/io.c
index cad59db2f4..d2bd89c3bb 100644
--- a/block/io.c
+++ b/block/io.c
@@ -167,9 +167,8 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
 }
 
 /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
-static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
+static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
 {
-    BdrvChild *child, *tmp;
     BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
 
     if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
@@ -180,12 +179,6 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
     data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
     bdrv_coroutine_enter(bs, data.co);
     BDRV_POLL_WHILE(bs, !data.done);
-
-    if (recursive) {
-        QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
-            bdrv_drain_invoke(child->bs, begin, true);
-        }
-    }
 }
 
 static bool bdrv_drain_recurse(BlockDriverState *bs)
@@ -286,7 +279,7 @@ void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
     }
 
     bdrv_parent_drained_begin(bs, parent);
-    bdrv_drain_invoke(bs, true, false);
+    bdrv_drain_invoke(bs, true);
     bdrv_drain_recurse(bs);
 
     if (recursive) {
@@ -321,7 +314,7 @@ void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
     old_quiesce_counter = atomic_fetch_dec(&bs->quiesce_counter);
 
     /* Re-enable things in child-to-parent order */
-    bdrv_drain_invoke(bs, false, false);
+    bdrv_drain_invoke(bs, false);
     bdrv_parent_drained_end(bs, parent);
     if (old_quiesce_counter == 1) {
         aio_enable_external(bdrv_get_aio_context(bs));
-- 
2.13.6
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:25 +0200
Message-Id: <20180411163940.2523-5-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 04/19] block: Don't manually poll in bdrv_drain_all()

All involved nodes are already idle; we called bdrv_do_drained_begin()
on them.

The comment in the code suggested that this was not correct because the
completion of a request on one node could spawn a new request on a
different node (which might have been drained before, so we wouldn't
drain the new request). In reality, new requests to different nodes
aren't spawned out of nothing, but only in the context of a parent
request, and they aren't submitted to random nodes, but only to child
nodes. As long as we still poll for the completion of the parent
request (which we do), draining each root node separately is good
enough.

Remove the additional polling code from bdrv_drain_all_begin() and
replace it with an assertion that all nodes are already idle after we
drained them separately.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c | 41 +++++++++++------------------------------
 1 file changed, 12 insertions(+), 29 deletions(-)

diff --git a/block/io.c b/block/io.c
index d2bd89c3bb..ea6f9f023a 100644
--- a/block/io.c
+++ b/block/io.c
@@ -376,6 +376,16 @@ void bdrv_drain(BlockDriverState *bs)
     bdrv_drained_end(bs);
 }
 
+static void bdrv_drain_assert_idle(BlockDriverState *bs)
+{
+    BdrvChild *child, *next;
+
+    assert(atomic_read(&bs->in_flight) == 0);
+    QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
+        bdrv_drain_assert_idle(child->bs);
+    }
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -390,11 +400,8 @@ void bdrv_drain(BlockDriverState *bs)
  */
 void bdrv_drain_all_begin(void)
 {
-    /* Always run first iteration so any pending completion BHs run */
-    bool waited = true;
     BlockDriverState *bs;
     BdrvNextIterator it;
-    GSList *aio_ctxs = NULL, *ctx;
 
     /* BDRV_POLL_WHILE() for a node can only be called from its own I/O thread
      * or the main loop AioContext. We potentially use BDRV_POLL_WHILE() on
@@ -408,35 +415,11 @@ void bdrv_drain_all_begin(void)
         aio_context_acquire(aio_context);
         bdrv_do_drained_begin(bs, true, NULL);
         aio_context_release(aio_context);
-
-        if (!g_slist_find(aio_ctxs, aio_context)) {
-            aio_ctxs = g_slist_prepend(aio_ctxs, aio_context);
-        }
     }
 
-    /* Note that completion of an asynchronous I/O operation can trigger any
-     * number of other I/O operations on other devices---for example a
-     * coroutine can submit an I/O request to another device in response to
-     * request completion.  Therefore we must keep looping until there was no
-     * more activity rather than simply draining each device independently.
-     */
-    while (waited) {
-        waited = false;
-
-        for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
-            AioContext *aio_context = ctx->data;
-
-            aio_context_acquire(aio_context);
-            for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
-                if (aio_context == bdrv_get_aio_context(bs)) {
-                    waited |= bdrv_drain_recurse(bs);
-                }
-            }
-            aio_context_release(aio_context);
-        }
+    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+        bdrv_drain_assert_idle(bs);
     }
-
-    g_slist_free(aio_ctxs);
 }
 
 void bdrv_drain_all_end(void)
-- 
2.13.6
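The core claim of this patch, that child-node requests only exist inside a still-pending parent request, so idle roots imply an idle tree, can be illustrated with a standalone toy. This is not QEMU code; run_request() and assert_idle() are invented stand-ins for a parent request spawning work on child nodes and for bdrv_drain_assert_idle():

    #include <assert.h>
    #include <stddef.h>

    #define MAX_CHILDREN 2

    typedef struct Node {
        int in_flight;
        struct Node *child[MAX_CHILDREN];
    } Node;

    /* A parent request forwards work to its children and only completes
     * after the child requests it spawned have completed. */
    static void run_request(Node *n)
    {
        n->in_flight++;
        for (size_t i = 0; i < MAX_CHILDREN; i++) {
            if (n->child[i]) {
                run_request(n->child[i]); /* spawned by the parent request */
            }
        }
        n->in_flight--;                   /* the parent completes last */
    }

    static void assert_idle(Node *n)
    {
        assert(n->in_flight == 0);
        for (size_t i = 0; i < MAX_CHILDREN; i++) {
            if (n->child[i]) {
                assert_idle(n->child[i]);
            }
        }
    }

    int main(void)
    {
        Node leaf1 = {0}, leaf2 = {0};
        Node root = { 0, { &leaf1, &leaf2 } };

        run_request(&root);  /* "drain" = waiting for the root request */
        assert_idle(&root);  /* mirrors bdrv_drain_assert_idle() */
        return 0;
    }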
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:26 +0200
Message-Id: <20180411163940.2523-6-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 05/19] tests/test-bdrv-drain: bdrv_drain_all() works in coroutines now

Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(),
coroutine context is automatically left with a BH, preventing the
deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now
that the old polling code has been removed as dead code, we can
consider it compatible.

Enable the coroutine test cases for bdrv_drain_all().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-bdrv-drain.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index cd870609c5..6de525b509 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -233,6 +233,11 @@ static void test_drv_cb_drain_subtree(void)
     test_drv_cb_common(BDRV_SUBTREE_DRAIN, true);
 }
 
+static void test_drv_cb_co_drain_all(void)
+{
+    call_in_coroutine(test_drv_cb_drain_all);
+}
+
 static void test_drv_cb_co_drain(void)
 {
     call_in_coroutine(test_drv_cb_drain);
@@ -289,6 +294,11 @@ static void test_quiesce_drain_subtree(void)
     test_quiesce_common(BDRV_SUBTREE_DRAIN, true);
 }
 
+static void test_quiesce_co_drain_all(void)
+{
+    call_in_coroutine(test_quiesce_drain_all);
+}
+
 static void test_quiesce_co_drain(void)
 {
     call_in_coroutine(test_quiesce_drain);
@@ -795,7 +805,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/driver-cb/drain_subtree",
                     test_drv_cb_drain_subtree);
 
-    // XXX bdrv_drain_all() doesn't work in coroutine context
+    g_test_add_func("/bdrv-drain/driver-cb/co/drain_all",
+                    test_drv_cb_co_drain_all);
     g_test_add_func("/bdrv-drain/driver-cb/co/drain", test_drv_cb_co_drain);
     g_test_add_func("/bdrv-drain/driver-cb/co/drain_subtree",
                     test_drv_cb_co_drain_subtree);
@@ -806,7 +817,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/quiesce/drain_subtree",
                     test_quiesce_drain_subtree);
 
-    // XXX bdrv_drain_all() doesn't work in coroutine context
+    g_test_add_func("/bdrv-drain/quiesce/co/drain_all",
+                    test_quiesce_co_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/co/drain", test_quiesce_co_drain);
     g_test_add_func("/bdrv-drain/quiesce/co/drain_subtree",
                     test_quiesce_co_drain_subtree);
-- 
2.13.6
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:27 +0200
Message-Id: <20180411163940.2523-7-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 06/19] block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE()

Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.

This patch effectively reverts said commit (the context has changed a
bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in
the loop condition for drain.

The effect is probably hard to measure in any real-world use case
because actual I/O will dominate, but if I run only the initialisation
part of 'qemu-img convert' where it calls bdrv_block_status() for the
whole image to find out how much data there is to copy, this phase
actually needs only roughly half the time after this patch.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio-wait.h | 22 ++++++++--------------
 block/io.c               | 11 ++++++++++-
 2 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index 8c90a2e66e..783d3678dd 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -73,29 +73,23 @@ typedef struct {
  */
 #define AIO_WAIT_WHILE(wait, ctx, cond) ({                         \
     bool waited_ = false;                                          \
-    bool busy_ = true;                                             \
     AioWait *wait_ = (wait);                                       \
     AioContext *ctx_ = (ctx);                                      \
     if (in_aio_context_home_thread(ctx_)) {                        \
-        while ((cond) || busy_) {                                  \
-            busy_ = aio_poll(ctx_, (cond));                        \
-            waited_ |= !!(cond) | busy_;                           \
+        while ((cond)) {                                           \
+            aio_poll(ctx_, true);                                  \
+            waited_ = true;                                        \
         }                                                          \
     } else {                                                       \
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
         /* Increment wait_->num_waiters before evaluating cond. */ \
         atomic_inc(&wait_->num_waiters);                           \
-        while (busy_) {                                            \
-            if ((cond)) {                                          \
-                waited_ = busy_ = true;                            \
-                aio_context_release(ctx_);                         \
-                aio_poll(qemu_get_aio_context(), true);            \
-                aio_context_acquire(ctx_);                         \
-            } else {                                               \
-                busy_ = aio_poll(ctx_, false);                     \
-                waited_ |= busy_;                                  \
-            }                                                      \
+        while ((cond)) {                                           \
+            aio_context_release(ctx_);                             \
+            aio_poll(qemu_get_aio_context(), true);                \
+            aio_context_acquire(ctx_);                             \
+            waited_ = true;                                        \
         }                                                          \
         atomic_dec(&wait_->num_waiters);                           \
     }                                                              \
diff --git a/block/io.c b/block/io.c
index ea6f9f023a..6f580f49ff 100644
--- a/block/io.c
+++ b/block/io.c
@@ -181,13 +181,22 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
     BDRV_POLL_WHILE(bs, !data.done);
 }
 
+/* Returns true if BDRV_POLL_WHILE() should go into a blocking aio_poll() */
+static bool bdrv_drain_poll(BlockDriverState *bs)
+{
+    /* Execute pending BHs first and check everything else only after the BHs
+     * have executed. */
+    while (aio_poll(bs->aio_context, false));
+    return atomic_read(&bs->in_flight);
+}
+
 static bool bdrv_drain_recurse(BlockDriverState *bs)
 {
     BdrvChild *child, *tmp;
     bool waited;
 
     /* Wait for drained requests to finish */
-    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
+    waited = BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs));
 
     QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
         BlockDriverState *bs = child->bs;
-- 
2.13.6
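The new loop shape in AIO_WAIT_WHILE() is easier to see outside the macro. Below is a standalone sketch with an invented fake_aio_poll(), not the real API: the non-blocking BH flush moves into the condition (as bdrv_drain_poll() does in this patch), so the wait loop itself only blocks while the condition genuinely holds.

    #include <stdbool.h>
    #include <stdio.h>

    static int ready_bhs = 2;  /* runnable immediately, like pending BHs */
    static int in_flight = 1;  /* completes only via a blocking poll */

    /* Stand-in for aio_poll(): non-blocking runs ready BHs; blocking also
     * waits for one in-flight request to complete. */
    static bool fake_aio_poll(bool blocking)
    {
        if (ready_bhs > 0) {
            ready_bhs--;
            return true;
        }
        if (blocking && in_flight > 0) {
            in_flight--;
            return true;
        }
        return false;
    }

    /* The drain condition flushes ready BHs itself, then reports state. */
    static bool drain_poll_cond(void)
    {
        while (fake_aio_poll(false)) {
            /* run everything that is ready before deciding */
        }
        return in_flight > 0;
    }

    int main(void)
    {
        bool waited = false;

        /* New-style AIO_WAIT_WHILE() body: block only while cond holds. */
        while (drain_poll_cond()) {
            fake_aio_poll(true);
            waited = true;
        }
        printf("waited=%d ready_bhs=%d in_flight=%d\n",
               waited, ready_bhs, in_flight);
        return 0;
    }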
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:28 +0200
Message-Id: <20180411163940.2523-8-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 07/19] block: Really pause block jobs on drain

We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.

This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node
to be drained.

For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block.h        |  7 +++++++
 include/block/block_int.h    |  7 +++++++
 include/block/blockjob_int.h |  8 ++++++++
 block.c                      |  9 +++++++++
 block/io.c                   | 27 ++++++++++++++++++++++++---
 block/mirror.c               |  8 ++++++++
 blockjob.c                   | 21 +++++++++++++++++++++
 tests/test-bdrv-drain.c      | 18 ++++++++++--------
 8 files changed, 94 insertions(+), 11 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index cdec3639a3..23dee3c114 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -559,6 +559,13 @@ void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore);
 void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore);
 
 /**
+ * bdrv_drain_poll:
+ *
+ * Poll for pending requests in @bs. This is part of bdrv_drained_begin.
+ */
+bool bdrv_drain_poll(BlockDriverState *bs, bool top_level);
+
+/**
  * bdrv_drained_begin:
  *
  * Begin a quiesced section for exclusive access to the BDS, by disabling
diff --git a/include/block/block_int.h b/include/block/block_int.h
index c4dd1d4bb8..dc6985e3ae 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -575,6 +575,13 @@ struct BdrvChildRole {
     void (*drained_begin)(BdrvChild *child);
     void (*drained_end)(BdrvChild *child);
 
+    /*
+     * Returns whether the parent has pending requests for the child. This
+     * callback is polled after .drained_begin() has been called until all
+     * activity on the child has stopped.
+     */
+    bool (*drained_poll)(BdrvChild *child);
+
     /* Notifies the parent that the child has been activated/inactivated (e.g.
      * when migration is completing) and it can start/stop requesting
      * permissions and doing I/O on it. */
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index 642adce68b..3a2f851d3f 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -106,6 +106,14 @@ struct BlockJobDriver {
     void coroutine_fn (*resume)(BlockJob *job);
 
     /*
+     * Returns whether the job has pending requests for the child or will
+     * submit new requests before the next pause point. This callback is polled
+     * in the context of draining a job node after requesting that the job be
+     * paused, until all activity on the child has stopped.
+     */
+    bool (*drained_poll)(BlockJob *job);
+
+    /*
      * If the callback is not NULL, it will be invoked before the job is
      * resumed in a new AioContext.  This is the place to move any resources
      * besides job->blk to the new AioContext.
diff --git a/block.c b/block.c
index a2caadf0a0..462287bdfb 100644
--- a/block.c
+++ b/block.c
@@ -820,6 +820,12 @@ static void bdrv_child_cb_drained_begin(BdrvChild *child)
     bdrv_drained_begin(bs);
 }
 
+static bool bdrv_child_cb_drained_poll(BdrvChild *child)
+{
+    BlockDriverState *bs = child->opaque;
+    return bdrv_drain_poll(bs, false);
+}
+
 static void bdrv_child_cb_drained_end(BdrvChild *child)
 {
     BlockDriverState *bs = child->opaque;
@@ -904,6 +910,7 @@ const BdrvChildRole child_file = {
     .get_parent_desc = bdrv_child_get_parent_desc,
     .inherit_options = bdrv_inherited_options,
     .drained_begin   = bdrv_child_cb_drained_begin,
+    .drained_poll    = bdrv_child_cb_drained_poll,
     .drained_end     = bdrv_child_cb_drained_end,
     .attach          = bdrv_child_cb_attach,
     .detach          = bdrv_child_cb_detach,
@@ -928,6 +935,7 @@ const BdrvChildRole child_format = {
     .get_parent_desc = bdrv_child_get_parent_desc,
     .inherit_options = bdrv_inherited_fmt_options,
     .drained_begin   = bdrv_child_cb_drained_begin,
+    .drained_poll    = bdrv_child_cb_drained_poll,
     .drained_end     = bdrv_child_cb_drained_end,
     .attach          = bdrv_child_cb_attach,
     .detach          = bdrv_child_cb_detach,
@@ -1047,6 +1055,7 @@ const BdrvChildRole child_backing = {
     .detach          = bdrv_backing_detach,
     .inherit_options = bdrv_backing_options,
     .drained_begin   = bdrv_child_cb_drained_begin,
+    .drained_poll    = bdrv_child_cb_drained_poll,
     .drained_end     = bdrv_child_cb_drained_end,
     .inactivate      = bdrv_child_cb_inactivate,
     .update_filename = bdrv_backing_update_filename,
diff --git a/block/io.c b/block/io.c
index 6f580f49ff..0a778eeff4 100644
--- a/block/io.c
+++ b/block/io.c
@@ -69,6 +69,20 @@ void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
     }
 }
 
+static bool bdrv_parent_drained_poll(BlockDriverState *bs)
+{
+    BdrvChild *c, *next;
+    bool busy = false;
+
+    QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
+        if (c->role->drained_poll) {
+            busy |= c->role->drained_poll(c);
+        }
+    }
+
+    return busy;
+}
+
 static void bdrv_merge_limits(BlockLimits *dst, const BlockLimits *src)
 {
     dst->opt_transfer = MAX(dst->opt_transfer, src->opt_transfer);
@@ -182,11 +196,18 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
 }
 
 /* Returns true if BDRV_POLL_WHILE() should go into a blocking aio_poll() */
-static bool bdrv_drain_poll(BlockDriverState *bs)
+bool bdrv_drain_poll(BlockDriverState *bs, bool top_level)
 {
     /* Execute pending BHs first and check everything else only after the BHs
      * have executed. */
-    while (aio_poll(bs->aio_context, false));
+    if (top_level) {
+        while (aio_poll(bs->aio_context, false));
+    }
+
+    if (bdrv_parent_drained_poll(bs)) {
+        return true;
+    }
+
     return atomic_read(&bs->in_flight);
 }
 
@@ -196,7 +217,7 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
     bool waited;
 
     /* Wait for drained requests to finish */
-    waited = BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs));
+    waited = BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true));
 
     QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
         BlockDriverState *bs = child->bs;
diff --git a/block/mirror.c b/block/mirror.c
index 820f512c7b..939504bd80 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -975,6 +975,12 @@ static void mirror_pause(BlockJob *job)
     mirror_wait_for_all_io(s);
 }
 
+static bool mirror_drained_poll(BlockJob *job)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+    return !!s->in_flight;
+}
+
 static void mirror_attached_aio_context(BlockJob *job, AioContext *new_context)
 {
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
@@ -1004,6 +1010,7 @@ static const BlockJobDriver mirror_job_driver = {
     .start                  = mirror_run,
     .complete               = mirror_complete,
     .pause                  = mirror_pause,
+    .drained_poll           = mirror_drained_poll,
    .attached_aio_context   = mirror_attached_aio_context,
     .drain                  = mirror_drain,
 };
@@ -1015,6 +1022,7 @@ static const BlockJobDriver commit_active_job_driver = {
     .start                  = mirror_run,
     .complete               = mirror_complete,
     .pause                  = mirror_pause,
+    .drained_poll           = mirror_drained_poll,
     .attached_aio_context   = mirror_attached_aio_context,
     .drain                  = mirror_drain,
 };
diff --git a/blockjob.c b/blockjob.c
index 27f957e571..3d65196309 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -306,6 +306,26 @@ static void child_job_drained_begin(BdrvChild *c)
     block_job_pause(job);
 }
 
+static bool child_job_drained_poll(BdrvChild *c)
+{
+    BlockJob *job = c->opaque;
+
+    /* An inactive or completed job doesn't have any pending requests. Jobs
+     * with !job->busy are either already paused or have a pause point after
+     * being reentered, so no job driver code will run before they pause. */
+    if (!job->busy || job->completed || job->deferred_to_main_loop) {
+        return false;
+    }
+
+    /* Otherwise, assume that it isn't fully stopped yet, but allow the job to
+     * override this assumption. */
+    if (job->driver->drained_poll) {
+        return job->driver->drained_poll(job);
+    } else {
+        return true;
+    }
+}
+
 static void child_job_drained_end(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
@@ -315,6 +335,7 @@ static void child_job_drained_end(BdrvChild *c)
 static const BdrvChildRole child_job = {
     .get_parent_desc    = child_job_get_parent_desc,
     .drained_begin      = child_job_drained_begin,
+    .drained_poll       = child_job_drained_poll,
     .drained_end        = child_job_drained_end,
     .stay_at_node       = true,
 };
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 6de525b509..37f2d6ac8c 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -686,7 +686,11 @@ static void coroutine_fn test_job_start(void *opaque)
 
     block_job_event_ready(&s->common);
     while (!s->should_complete) {
-        block_job_sleep_ns(&s->common, 100000);
+        /* Avoid block_job_sleep_ns() because it marks the job as !busy. We
+         * want to emulate some actual activity (probably some I/O) here so
+         * that drain has to wait for this activity to stop. */
+        qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000);
+        block_job_pause_point(&s->common);
     }
 
     block_job_defer_to_main_loop(&s->common, test_job_completed, NULL);
@@ -728,7 +732,7 @@ static void test_blockjob_common(enum drain_type drain_type)
 
     g_assert_cmpint(job->pause_count, ==, 0);
     g_assert_false(job->paused);
-    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+    g_assert_true(job->busy); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin(drain_type, src);
 
@@ -738,15 +742,14 @@ static void test_blockjob_common(enum drain_type drain_type)
     } else {
         g_assert_cmpint(job->pause_count, ==, 1);
     }
-    /* XXX We don't wait until the job is actually paused. Is this okay? */
-    /* g_assert_true(job->paused); */
+    g_assert_true(job->paused);
     g_assert_false(job->busy); /* The job is paused */
 
     do_drain_end(drain_type, src);
 
     g_assert_cmpint(job->pause_count, ==, 0);
     g_assert_false(job->paused);
-    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+    g_assert_true(job->busy); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin(drain_type, target);
 
@@ -756,15 +759,14 @@ static void test_blockjob_common(enum drain_type drain_type)
     } else {
         g_assert_cmpint(job->pause_count, ==, 1);
     }
-    /* XXX We don't wait until the job is actually paused. Is this okay? */
-    /* g_assert_true(job->paused); */
+    g_assert_true(job->paused);
     g_assert_false(job->busy); /* The job is paused */
 
     do_drain_end(drain_type, target);
 
     g_assert_cmpint(job->pause_count, ==, 0);
     g_assert_false(job->paused);
-    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+    g_assert_true(job->busy); /* We're in qemu_co_sleep_ns() */
 
     ret = block_job_complete_sync(job, &error_abort);
     g_assert_cmpint(ret, ==, 0);
-- 
2.13.6
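The contract of the new drained_poll callback is: after drained_begin has requested the pause, keep polling until no parent reports activity. A standalone model with invented types (Parent, job_drained_poll), not the QEMU API:

    #include <assert.h>
    #include <stdbool.h>

    typedef struct Parent {
        int steps_left;                 /* pending activity, e.g. job I/O */
        bool (*drained_poll)(struct Parent *p);
    } Parent;

    static bool job_drained_poll(Parent *p)
    {
        if (p->steps_left > 0) {
            p->steps_left--;            /* the job winds down one step */
            return true;                /* still busy, keep polling */
        }
        return false;                   /* quiescent now */
    }

    static bool parents_drained_poll(Parent *parents[], int n)
    {
        bool busy = false;
        for (int i = 0; i < n; i++) {
            if (parents[i]->drained_poll) {
                busy |= parents[i]->drained_poll(parents[i]);
            }
        }
        return busy;
    }

    int main(void)
    {
        Parent job = { 3, job_drained_poll };
        Parent *parents[] = { &job };

        /* drained_begin would have requested the pause; now wait until the
         * parent really has stopped, instead of returning immediately. */
        while (parents_drained_poll(parents, 1)) {
            /* in QEMU this is a blocking aio_poll() iteration */
        }
        assert(job.steps_left == 0);
        return 0;
    }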
Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id B594FBDC5B; Wed, 11 Apr 2018 16:40:07 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:29 +0200 Message-Id: <20180411163940.2523-9-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.2]); Wed, 11 Apr 2018 16:40:09 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.2]); Wed, 11 Apr 2018 16:40:09 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 08/19] block: Remove bdrv_drain_recurse() X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" For bdrv_drain(), recursively waiting for child node requests is pointless because we didn't quiesce their parents, so new requests could come in anyway. Letting the function work only on a single node makes it more consistent. For subtree drains and drain_all, we already have the recursion in bdrv_do_drained_begin(), so the extra recursion doesn't add anything either. Remove the useless code. Signed-off-by: Kevin Wolf Reviewed-by: Stefan Hajnoczi --- block/io.c | 36 +++--------------------------------- 1 file changed, 3 insertions(+), 33 deletions(-) diff --git a/block/io.c b/block/io.c index 0a778eeff4..f24f39c278 100644 --- a/block/io.c +++ b/block/io.c @@ -211,38 +211,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool top_le= vel) return atomic_read(&bs->in_flight); } =20 -static bool bdrv_drain_recurse(BlockDriverState *bs) -{ - BdrvChild *child, *tmp; - bool waited; - - /* Wait for drained requests to finish */ - waited =3D BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true)); - - QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) { - BlockDriverState *bs =3D child->bs; - bool in_main_loop =3D - qemu_get_current_aio_context() =3D=3D qemu_get_aio_context(); - assert(bs->refcnt > 0); - if (in_main_loop) { - /* In case the recursive bdrv_drain_recurse processes a - * block_job_defer_to_main_loop BH and modifies the graph, - * let's hold a reference to bs until we are done. - * - * IOThread doesn't have such a BH, and it is not safe to call - * bdrv_unref without BQL, so skip doing it there. 
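Seen from the call site, the drain-begin path that remains once this function is gone reduces to a plain quiesce-then-poll sequence; a condensed sketch (not the literal code; coroutine re-dispatch and subtree recursion omitted):

    /* 1. Quiesce in parent-to-child order: parents first, then driver */
    bdrv_parent_drained_begin(bs, parent);
    bdrv_drain_invoke(bs, true);

    /* 2. A single polling loop waits for in-flight requests to settle */
    BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true));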
- */ - bdrv_ref(bs); - } - waited |=3D bdrv_drain_recurse(bs); - if (in_main_loop) { - bdrv_unref(bs); - } - } - - return waited; -} - static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, BdrvChild *parent); static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive, @@ -310,7 +278,9 @@ void bdrv_do_drained_begin(BlockDriverState *bs, bool r= ecursive, =20 bdrv_parent_drained_begin(bs, parent); bdrv_drain_invoke(bs, true); - bdrv_drain_recurse(bs); + + /* Wait for drained requests to finish */ + BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true)); =20 if (recursive) { bs->recursive_quiesce_counter++; --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465565770772.5017771773395; Wed, 11 Apr 2018 09:52:45 -0700 (PDT) Received: from localhost ([::1]:39696 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6IzD-00052H-Q9 for importer@patchew.org; Wed, 11 Apr 2018 12:52:43 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41146) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InB-0002jM-NA for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:19 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InA-0004XS-4o for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:17 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:46402 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6In4-0004Sc-T8; Wed, 11 Apr 2018 12:40:10 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 7FA498DC31; Wed, 11 Apr 2018 16:40:10 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4D8BEBDC57; Wed, 11 Apr 2018 16:40:09 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:30 +0200 Message-Id: <20180411163940.2523-10-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.2]); Wed, 11 Apr 2018 16:40:10 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.2]); Wed, 11 Apr 2018 16:40:10 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 09/19] test-bdrv-drain: Add test for node deletion X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Max Reitz This patch adds two bdrv-drain tests for what happens if some BDS goes away during the drainage. The basic idea is that you have a parent BDS with some child nodes. Then, you drain one of the children. Because of that, the party who actually owns the parent decides to (A) delete it, or (B) detach all its children from it -- both while the child is still being drained. A real-world case where this can happen is the mirror block job, which may exit if you drain one of its children. Signed-off-by: Max Reitz Signed-off-by: Kevin Wolf --- tests/test-bdrv-drain.c | 169 ++++++++++++++++++++++++++++++++++++++++++++= ++++ 1 file changed, 169 insertions(+) diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c index 37f2d6ac8c..05768213be 100644 --- a/tests/test-bdrv-drain.c +++ b/tests/test-bdrv-drain.c @@ -792,6 +792,172 @@ static void test_blockjob_drain_subtree(void) test_blockjob_common(BDRV_SUBTREE_DRAIN); } =20 + +typedef struct BDRVTestTopState { + BdrvChild *wait_child; +} BDRVTestTopState; + +static void bdrv_test_top_close(BlockDriverState *bs) +{ + BdrvChild *c, *next_c; + QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) { + bdrv_unref_child(bs, c); + } +} + +static int coroutine_fn bdrv_test_top_co_preadv(BlockDriverState *bs, + uint64_t offset, uint64_t = bytes, + QEMUIOVector *qiov, int fl= ags) +{ + BDRVTestTopState *tts =3D bs->opaque; + return bdrv_co_preadv(tts->wait_child, offset, bytes, qiov, flags); +} + +static BlockDriver bdrv_test_top_driver =3D { + .format_name =3D "test_top_driver", + .instance_size =3D sizeof(BDRVTestTopState), + + .bdrv_close =3D bdrv_test_top_close, + .bdrv_co_preadv =3D bdrv_test_top_co_preadv, + + .bdrv_child_perm =3D bdrv_format_default_perms, +}; + +typedef struct TestCoDeleteByDrainData { + BlockBackend *blk; + bool detach_instead_of_delete; + bool done; +} TestCoDeleteByDrainData; + +static void coroutine_fn test_co_delete_by_drain(void *opaque) +{ + TestCoDeleteByDrainData *dbdd =3D opaque; + BlockBackend *blk =3D dbdd->blk; + BlockDriverState *bs =3D blk_bs(blk); + BDRVTestTopState *tts =3D bs->opaque; + void *buffer =3D g_malloc(65536); + QEMUIOVector qiov; + struct iovec iov =3D { + .iov_base =3D buffer, + .iov_len =3D 65536, + }; + + qemu_iovec_init_external(&qiov, &iov, 1); + + /* Pretend some internal write operation from parent to child. + * Important: We have to read from the child, not from the parent! + * Draining works by first propagating it all up the tree to the + * root and then waiting for drainage from root to the leaves + * (protocol nodes). If we have a request waiting on the root, + * everything will be drained before we go back down the tree, but + * we do not want that. We want to be in the middle of draining + * when the following request returns.
*/ + bdrv_co_preadv(tts->wait_child, 0, 65536, &qiov, 0); + + g_assert_cmpint(bs->refcnt, =3D=3D, 1); + + if (!dbdd->detach_instead_of_delete) { + blk_unref(blk); + } else { + BdrvChild *c, *next_c; + QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) { + bdrv_unref_child(bs, c); + } + } + + dbdd->done =3D true; +} + +/** + * Test what happens when some BDS has some children, you drain one of + * them and this results in the BDS being deleted. + * + * If @detach_instead_of_delete is set, the BDS is not going to be + * deleted but will only detach all of its children. + */ +static void do_test_delete_by_drain(bool detach_instead_of_delete) +{ + BlockBackend *blk; + BlockDriverState *bs, *child_bs, *null_bs; + BDRVTestTopState *tts; + TestCoDeleteByDrainData dbdd; + Coroutine *co; + + bs =3D bdrv_new_open_driver(&bdrv_test_top_driver, "top", BDRV_O_RDWR, + &error_abort); + bs->total_sectors =3D 65536 >> BDRV_SECTOR_BITS; + tts =3D bs->opaque; + + null_bs =3D bdrv_open("null-co://", NULL, NULL, BDRV_O_RDWR | BDRV_O_P= ROTOCOL, + &error_abort); + bdrv_attach_child(bs, null_bs, "null-child", &child_file, &error_abort= ); + + /* This child will be the one to pass requests through to, and + * it will stall until a drain occurs */ + child_bs =3D bdrv_new_open_driver(&bdrv_test, "child", BDRV_O_RDWR, + &error_abort); + child_bs->total_sectors =3D 65536 >> BDRV_SECTOR_BITS; + /* Takes our reference to child_bs */ + tts->wait_child =3D bdrv_attach_child(bs, child_bs, "wait-child", &chi= ld_file, + &error_abort); + + /* This child is just there to be deleted + * (for detach_instead_of_delete =3D=3D true) */ + null_bs =3D bdrv_open("null-co://", NULL, NULL, BDRV_O_RDWR | BDRV_O_P= ROTOCOL, + &error_abort); + bdrv_attach_child(bs, null_bs, "null-child", &child_file, &error_abort= ); + + blk =3D blk_new(BLK_PERM_ALL, BLK_PERM_ALL); + blk_insert_bs(blk, bs, &error_abort); + + /* Referenced by blk now */ + bdrv_unref(bs); + + g_assert_cmpint(bs->refcnt, =3D=3D, 1); + g_assert_cmpint(child_bs->refcnt, =3D=3D, 1); + g_assert_cmpint(null_bs->refcnt, =3D=3D, 1); + + + dbdd =3D (TestCoDeleteByDrainData){ + .blk =3D blk, + .detach_instead_of_delete =3D detach_instead_of_delete, + .done =3D false, + }; + co =3D qemu_coroutine_create(test_co_delete_by_drain, &dbdd); + qemu_coroutine_enter(co); + + /* Drain the child while the read operation is still pending. + * This should result in the operation finishing and + * test_co_delete_by_drain() resuming. Thus, @bs will be deleted + * and the coroutine will exit while this drain operation is still + * in progress.
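The rule this test encodes generalises: a caller that drains a node it does not own must hold its own reference across the drain, because the drain itself can trigger graph changes that drop the node's last reference. The bdrv_ref()/bdrv_drain()/bdrv_unref() triple right below is exactly this pattern; in isolation (sketch):

    bdrv_ref(bs);      /* the drain may delete bs's owner, and with it bs */
    bdrv_drain(bs);    /* stalled request completes, graph changes run    */
    bdrv_unref(bs);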
*/ + bdrv_ref(child_bs); + bdrv_drain(child_bs); + bdrv_unref(child_bs); + + while (!dbdd.done) { + aio_poll(qemu_get_aio_context(), true); + } + + if (detach_instead_of_delete) { + /* Here, the reference has not passed over to the coroutine, + * so we have to delete the BB ourselves */ + blk_unref(blk); + } +} + + +static void test_delete_by_drain(void) +{ + do_test_delete_by_drain(false); +} + +static void test_detach_by_drain(void) +{ + do_test_delete_by_drain(true); +} + + int main(int argc, char **argv) { int ret; @@ -839,6 +1005,9 @@ int main(int argc, char **argv) g_test_add_func("/bdrv-drain/blockjob/drain_subtree", test_blockjob_drain_subtree); =20 + g_test_add_func("/bdrv-drain/deletion", test_delete_by_drain); + g_test_add_func("/bdrv-drain/detach", test_detach_by_drain); + ret =3D g_test_run(); qemu_event_destroy(&done_event); return ret; --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465362547965.5763453708888; Wed, 11 Apr 2018 09:49:22 -0700 (PDT) Received: from localhost ([::1]:39666 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6Ivv-0001ci-GN for importer@patchew.org; Wed, 11 Apr 2018 12:49:19 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41235) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InE-0002nC-VY for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:25 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InD-0004Zz-Lz for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:21 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:33580 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InA-0004X3-0u; Wed, 11 Apr 2018 12:40:16 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 9BACF4023141; Wed, 11 Apr 2018 16:40:15 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id BA034BDC5B; Wed, 11 Apr 2018 16:40:10 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:31 +0200 Message-Id: <20180411163940.2523-11-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.6]); Wed, 11 Apr 2018 16:40:15 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.6]); Wed, 11 Apr 2018 16:40:15 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: 
GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 10/19] block: Drain recursively with a single BDRV_POLL_WHILE() X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Anything can happen inside BDRV_POLL_WHILE(), including graph changes that may interfere with its callers (e.g. child list iteration in recursive callers of bdrv_do_drained_begin). Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the end of bdrv_do_drained_begin() to avoid such effects. The recursion happens now inside the loop condition. As the graph can only change between bdrv_drain_poll() calls, but not inside of it, doing the recursion here is safe. Signed-off-by: Kevin Wolf --- include/block/block.h | 2 +- block.c | 2 +- block/io.c | 58 +++++++++++++++++++++++++++++++++++++----------= ---- 3 files changed, 44 insertions(+), 18 deletions(-) diff --git a/include/block/block.h b/include/block/block.h index 23dee3c114..91bf3b4e36 100644 --- a/include/block/block.h +++ b/include/block/block.h @@ -563,7 +563,7 @@ void bdrv_parent_drained_end(BlockDriverState *bs, Bdrv= Child *ignore); * * Poll for pending requests in @bs. This is part of bdrv_drained_begin. */ -bool bdrv_drain_poll(BlockDriverState *bs, bool top_level); +bool bdrv_drain_poll(BlockDriverState *bs, bool top_level, bool recursive); =20 /** * bdrv_drained_begin: diff --git a/block.c b/block.c index 462287bdfb..9fe39ac8c1 100644 --- a/block.c +++ b/block.c @@ -823,7 +823,7 @@ static void bdrv_child_cb_drained_begin(BdrvChild *chil= d) static bool bdrv_child_cb_drained_poll(BdrvChild *child) { BlockDriverState *bs =3D child->opaque; - return bdrv_drain_poll(bs, false); + return bdrv_drain_poll(bs, false, false); } =20 static void bdrv_child_cb_drained_end(BdrvChild *child) diff --git a/block/io.c b/block/io.c index f24f39c278..1287630c58 100644 --- a/block/io.c +++ b/block/io.c @@ -161,6 +161,7 @@ typedef struct { bool done; bool begin; bool recursive; + bool poll; BdrvChild *parent; } BdrvCoDrainData; =20 @@ -196,8 +197,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bo= ol begin) } =20 /* Returns true if BDRV_POLL_WHILE() should go into a blocking aio_poll() = */ -bool bdrv_drain_poll(BlockDriverState *bs, bool top_level) +bool bdrv_drain_poll(BlockDriverState *bs, bool top_level, bool recursive) { + BdrvChild *child, *next; + /* Execute pending BHs first and check everything else only after the = BHs * have executed. 
*/ if (top_level) { @@ -208,11 +211,23 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool top_l= evel) return true; } =20 - return atomic_read(&bs->in_flight); + if (atomic_read(&bs->in_flight)) { + return true; + } + + if (recursive) { + QLIST_FOREACH_SAFE(child, &bs->children, next, next) { + if (bdrv_drain_poll(child->bs, false, recursive)) { + return true; + } + } + } + + return false; } =20 static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, - BdrvChild *parent); + BdrvChild *parent, bool poll); static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive, BdrvChild *parent); =20 @@ -224,7 +239,7 @@ static void bdrv_co_drain_bh_cb(void *opaque) =20 bdrv_dec_in_flight(bs); if (data->begin) { - bdrv_do_drained_begin(bs, data->recursive, data->parent); + bdrv_do_drained_begin(bs, data->recursive, data->parent, data->pol= l); } else { bdrv_do_drained_end(bs, data->recursive, data->parent); } @@ -235,7 +250,7 @@ static void bdrv_co_drain_bh_cb(void *opaque) =20 static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs, bool begin, bool recursive, - BdrvChild *parent) + BdrvChild *parent, bool po= ll) { BdrvCoDrainData data; =20 @@ -250,6 +265,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDr= iverState *bs, .begin =3D begin, .recursive =3D recursive, .parent =3D parent, + .poll =3D poll, }; bdrv_inc_in_flight(bs); aio_bh_schedule_oneshot(bdrv_get_aio_context(bs), @@ -262,12 +278,12 @@ static void coroutine_fn bdrv_co_yield_to_drain(Block= DriverState *bs, } =20 void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, - BdrvChild *parent) + BdrvChild *parent, bool poll) { BdrvChild *child, *next; =20 if (qemu_in_coroutine()) { - bdrv_co_yield_to_drain(bs, true, recursive, parent); + bdrv_co_yield_to_drain(bs, true, recursive, parent, poll); return; } =20 @@ -279,25 +295,35 @@ void bdrv_do_drained_begin(BlockDriverState *bs, bool= recursive, bdrv_parent_drained_begin(bs, parent); bdrv_drain_invoke(bs, true); =20 - /* Wait for drained requests to finish */ - BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true)); - if (recursive) { bs->recursive_quiesce_counter++; QLIST_FOREACH_SAFE(child, &bs->children, next, next) { - bdrv_do_drained_begin(child->bs, true, child); + bdrv_do_drained_begin(child->bs, true, child, false); } } + + /* + * Wait for drained requests to finish. + * + * Calling BDRV_POLL_WHILE() only once for the top-level node is okay:= The + * call is needed so things in this AioContext can make progress even + * though we don't return to the main AioContext loop - this automatic= ally + * includes other nodes in the same AioContext and therefore all child + * nodes. 
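Stripped of the BH handling, the predicate that the BDRV_POLL_WHILE() just below evaluates has this shape (simplified sketch of bdrv_drain_poll() as introduced in this patch):

    bool drain_poll_sketch(BlockDriverState *bs, bool recursive)
    {
        BdrvChild *child, *next;

        if (bdrv_parent_drained_poll(bs) || atomic_read(&bs->in_flight)) {
            return true;                    /* keep polling */
        }
        if (recursive) {
            QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
                if (drain_poll_sketch(child->bs, true)) {
                    return true;
                }
            }
        }
        return false;                       /* whole subtree is quiet */
    }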
+ */ + if (poll) { + BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs, true, recursive)); + } } =20 void bdrv_drained_begin(BlockDriverState *bs) { - bdrv_do_drained_begin(bs, false, NULL); + bdrv_do_drained_begin(bs, false, NULL, true); } =20 void bdrv_subtree_drained_begin(BlockDriverState *bs) { - bdrv_do_drained_begin(bs, true, NULL); + bdrv_do_drained_begin(bs, true, NULL, true); } =20 void bdrv_do_drained_end(BlockDriverState *bs, bool recursive, @@ -307,7 +333,7 @@ void bdrv_do_drained_end(BlockDriverState *bs, bool rec= ursive, int old_quiesce_counter; =20 if (qemu_in_coroutine()) { - bdrv_co_yield_to_drain(bs, false, recursive, parent); + bdrv_co_yield_to_drain(bs, false, recursive, parent, false); return; } assert(bs->quiesce_counter > 0); @@ -343,7 +369,7 @@ void bdrv_apply_subtree_drain(BdrvChild *child, BlockDr= iverState *new_parent) int i; =20 for (i =3D 0; i < new_parent->recursive_quiesce_counter; i++) { - bdrv_do_drained_begin(child->bs, true, child); + bdrv_do_drained_begin(child->bs, true, child, true); } } =20 @@ -413,7 +439,7 @@ void bdrv_drain_all_begin(void) AioContext *aio_context =3D bdrv_get_aio_context(bs); =20 aio_context_acquire(aio_context); - bdrv_do_drained_begin(bs, true, NULL); + bdrv_do_drained_begin(bs, true, NULL, true); aio_context_release(aio_context); } =20 --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465763924691.1651698519601; Wed, 11 Apr 2018 09:56:03 -0700 (PDT) Received: from localhost ([::1]:39735 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6J2R-0008GT-4e for importer@patchew.org; Wed, 11 Apr 2018 12:56:03 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41236) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InE-0002nE-Vj for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:25 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InE-0004aD-1O for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:21 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:50434 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InB-0004YC-Gv; Wed, 11 Apr 2018 12:40:17 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 13269406F890; Wed, 11 Apr 2018 16:40:17 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id D596DBDC58; Wed, 11 Apr 2018 16:40:15 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:32 +0200 Message-Id: <20180411163940.2523-12-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 
10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.7]); Wed, 11 Apr 2018 16:40:17 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.7]); Wed, 11 Apr 2018 16:40:17 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 11/19] test-bdrv-drain: Test node deletion in subtree recursion X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" If bdrv_do_drained_begin() polls during its subtree recursion, the graph can change and mess up the bs->children iteration. Test that this doesn't happen. Signed-off-by: Kevin Wolf --- tests/test-bdrv-drain.c | 38 +++++++++++++++++++++++++++++--------- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c index 05768213be..4af6571ca3 100644 --- a/tests/test-bdrv-drain.c +++ b/tests/test-bdrv-drain.c @@ -875,7 +875,8 @@ static void coroutine_fn test_co_delete_by_drain(void *= opaque) * If @detach_instead_of_delete is set, the BDS is not going to be * deleted but will only detach all of its children. */ -static void do_test_delete_by_drain(bool detach_instead_of_delete) +static void do_test_delete_by_drain(bool detach_instead_of_delete, + enum drain_type drain_type) { BlockBackend *blk; BlockDriverState *bs, *child_bs, *null_bs; @@ -931,9 +932,23 @@ static void do_test_delete_by_drain(bool detach_instea= d_of_delete) * test_co_delete_by_drain() resuming. Thus, @bs will be deleted * and the coroutine will exit while this drain operation is still * in progress. */ - bdrv_ref(child_bs); - bdrv_drain(child_bs); - bdrv_unref(child_bs); + switch (drain_type) { + case BDRV_DRAIN: + bdrv_ref(child_bs); + bdrv_drain(child_bs); + bdrv_unref(child_bs); + break; + case BDRV_SUBTREE_DRAIN: + /* Would have to ref/unref bs here for !detach_instead_of_delete, = but + * then the whole test becomes pointless because the graph changes + * don't occur during the drain any more. 
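The hazard this test guards against, in outline (sketch of the child recursion in bdrv_do_drained_begin() from the previous patch, which must not poll):

    QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
        bdrv_do_drained_begin(child->bs, true, child, false /* poll */);
        /* If this recursion polled, a completion callback could delete or
         * detach the next child and invalidate the iteration. */
    }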
*/ + assert(detach_instead_of_delete); + bdrv_subtree_drained_begin(bs); + bdrv_subtree_drained_end(bs); + break; + default: + g_assert_not_reached(); + } =20 while (!dbdd.done) { aio_poll(qemu_get_aio_context(), true); @@ -946,15 +961,19 @@ static void do_test_delete_by_drain(bool detach_inste= ad_of_delete) } } =20 - static void test_delete_by_drain(void) { - do_test_delete_by_drain(false); + do_test_delete_by_drain(false, BDRV_DRAIN); } =20 static void test_detach_by_drain(void) { - do_test_delete_by_drain(true); + do_test_delete_by_drain(true, BDRV_DRAIN); +} + +static void test_detach_by_drain_subtree(void) +{ + do_test_delete_by_drain(true, BDRV_SUBTREE_DRAIN); } =20 =20 @@ -1005,8 +1024,9 @@ int main(int argc, char **argv) g_test_add_func("/bdrv-drain/blockjob/drain_subtree", test_blockjob_drain_subtree); =20 - g_test_add_func("/bdrv-drain/deletion", test_delete_by_drain); - g_test_add_func("/bdrv-drain/detach", test_detach_by_drain); + g_test_add_func("/bdrv-drain/deletion/drain", test_delete_by_drain); + g_test_add_func("/bdrv-drain/detach/drain", test_detach_by_drain); + g_test_add_func("/bdrv-drain/detach/drain_subtree", test_detach_by_dra= in_subtree); =20 ret =3D g_test_run(); qemu_event_destroy(&done_event); --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465410540909.9590797415876; Wed, 11 Apr 2018 09:50:10 -0700 (PDT) Received: from localhost ([::1]:39670 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6Iwj-0002RN-Lh for importer@patchew.org; Wed, 11 Apr 2018 12:50:09 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41299) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InJ-0002tX-QJ for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:26 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InF-0004b0-L4 for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:25 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:33608 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InC-0004ZE-Tg; Wed, 11 Apr 2018 12:40:18 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 7F60A4023335; Wed, 11 Apr 2018 16:40:18 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4E3EABDC57; Wed, 11 Apr 2018 16:40:17 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:33 +0200 Message-Id: <20180411163940.2523-13-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by 
milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.6]); Wed, 11 Apr 2018 16:40:18 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.6]); Wed, 11 Apr 2018 16:40:18 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 12/19] block: Don't poll in parent drain callbacks X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" bdrv_do_drained_begin() is only safe if we have a single BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow parent callbacks to introduce a nested polling loop that could cause graph changes while we're traversing the graph. Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single node without waiting for its requests to complete. These requests will be waited for in the BDRV_POLL_WHILE() call down the call chain. Signed-off-by: Kevin Wolf --- include/block/block.h | 9 +++++++++ block.c | 2 +- block/io.c | 24 ++++++++++++++++-------- 3 files changed, 26 insertions(+), 9 deletions(-) diff --git a/include/block/block.h b/include/block/block.h index 91bf3b4e36..de2cba2c74 100644 --- a/include/block/block.h +++ b/include/block/block.h @@ -578,6 +578,15 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool top_le= vel, bool recursive); void bdrv_drained_begin(BlockDriverState *bs); =20 /** + * bdrv_do_drained_begin_quiesce: + * + * Quiesces a BDS like bdrv_drained_begin(), but does not wait for already + * running requests to complete. + */ +void bdrv_do_drained_begin_quiesce(BlockDriverState *bs, + BdrvChild *parent); + +/** + * Like bdrv_drained_begin, but recursively begins a quiesced section for + * exclusive access to all child nodes as well.
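The resulting pattern for parent callbacks, in miniature (sketch; the block.c hunk below applies exactly this):

    static void parent_cb_drained_begin_sketch(BdrvChild *child)
    {
        BlockDriverState *bs = child->opaque;

        /* Quiesce only, no BDRV_POLL_WHILE() here: the drain root polls
         * once for the whole subtree further down the call chain. */
        bdrv_do_drained_begin_quiesce(bs, NULL);
    }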
*/ diff --git a/block.c b/block.c index 9fe39ac8c1..330238de19 100644 --- a/block.c +++ b/block.c @@ -817,7 +817,7 @@ static char *bdrv_child_get_parent_desc(BdrvChild *c) static void bdrv_child_cb_drained_begin(BdrvChild *child) { BlockDriverState *bs =3D child->opaque; - bdrv_drained_begin(bs); + bdrv_do_drained_begin_quiesce(bs, NULL); } =20 static bool bdrv_child_cb_drained_poll(BdrvChild *child) diff --git a/block/io.c b/block/io.c index 1287630c58..f372b9ffb0 100644 --- a/block/io.c +++ b/block/io.c @@ -277,15 +277,10 @@ static void coroutine_fn bdrv_co_yield_to_drain(Block= DriverState *bs, assert(data.done); } =20 -void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, - BdrvChild *parent, bool poll) +void bdrv_do_drained_begin_quiesce(BlockDriverState *bs, + BdrvChild *parent) { - BdrvChild *child, *next; - - if (qemu_in_coroutine()) { - bdrv_co_yield_to_drain(bs, true, recursive, parent, poll); - return; - } + assert(!qemu_in_coroutine()); =20 /* Stop things in parent-to-child order */ if (atomic_fetch_inc(&bs->quiesce_counter) =3D=3D 0) { @@ -294,6 +289,19 @@ void bdrv_do_drained_begin(BlockDriverState *bs, bool = recursive, =20 bdrv_parent_drained_begin(bs, parent); bdrv_drain_invoke(bs, true); +} + +static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, + BdrvChild *parent, bool poll) +{ + BdrvChild *child, *next; + + if (qemu_in_coroutine()) { + bdrv_co_yield_to_drain(bs, true, recursive, parent, poll); + return; + } + + bdrv_do_drained_begin_quiesce(bs, parent); =20 if (recursive) { bs->recursive_quiesce_counter++; --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465559567268.08209487340605; Wed, 11 Apr 2018 09:52:39 -0700 (PDT) Received: from localhost ([::1]:39695 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6Iz8-0004yT-NE for importer@patchew.org; Wed, 11 Apr 2018 12:52:38 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41360) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InM-0002um-8o for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:30 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InK-0004dG-Tx for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:28 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:34048 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InE-0004aK-CY; Wed, 11 Apr 2018 12:40:20 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id EC3B68010F67; Wed, 11 Apr 2018 16:40:19 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id BA35CBDC5B; Wed, 11 Apr 2018 16:40:18 +0000 (UTC) From: 
Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:34 +0200 Message-Id: <20180411163940.2523-14-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.8]); Wed, 11 Apr 2018 16:40:19 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.8]); Wed, 11 Apr 2018 16:40:19 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 13/19] test-bdrv-drain: Graph change through parent callback X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Signed-off-by: Kevin Wolf --- tests/test-bdrv-drain.c | 130 ++++++++++++++++++++++++++++++++++++++++++++= ++++ 1 file changed, 130 insertions(+) diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c index 4af6571ca3..fdf3ce19ea 100644 --- a/tests/test-bdrv-drain.c +++ b/tests/test-bdrv-drain.c @@ -977,6 +977,135 @@ static void test_detach_by_drain_subtree(void) } =20 =20 +struct detach_by_parent_data { + BlockDriverState *parent_b; + BdrvChild *child_b; + BlockDriverState *c; + BdrvChild *child_c; +}; + +static void detach_by_parent_aio_cb(void *opaque, int ret) +{ + struct detach_by_parent_data *data =3D opaque; + + g_assert_cmpint(ret, =3D=3D, 0); + bdrv_unref_child(data->parent_b, data->child_b); + + bdrv_ref(data->c); + data->child_c =3D bdrv_attach_child(data->parent_b, data->c, "PB-C", + &child_file, &error_abort); +} + +/* + * Initial graph: + * + * PA PB + * \ / \ + * A B C + * + * PA has a pending write request whose callback changes the child nodes o= f PB: + * It removes B and adds C instead. The subtree of PB is drained, which wi= ll + * indirectly drain the write request, too. 
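The invariant that the assertions further down verify, stated compactly (sketch; the test itself uses g_assert_cmpint() on the same fields):

    /* Inside the drained section, after the callback's graph change: */
    assert(parent_b->quiesce_counter == 1);  /* parent is still drained  */
    assert(c->quiesce_counter == 1);         /* new child was quiesced   */
    assert(b->quiesce_counter == 0);         /* detached child released  */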
+ */ +static void test_detach_by_parent_cb(void) +{ + BlockBackend *blk; + BlockDriverState *parent_a, *parent_b, *a, *b, *c; + BdrvChild *child_a, *child_b; + BlockAIOCB *acb; + struct detach_by_parent_data data; + + QEMUIOVector qiov; + struct iovec iov =3D { + .iov_base =3D NULL, + .iov_len =3D 0, + }; + qemu_iovec_init_external(&qiov, &iov, 1); + + /* Create all involved nodes */ + parent_a =3D bdrv_new_open_driver(&bdrv_test, "parent-a", BDRV_O_RDWR, + &error_abort); + parent_b =3D bdrv_new_open_driver(&bdrv_test, "parent-b", 0, + &error_abort); + + a =3D bdrv_new_open_driver(&bdrv_test, "a", BDRV_O_RDWR, &error_abort); + b =3D bdrv_new_open_driver(&bdrv_test, "b", BDRV_O_RDWR, &error_abort); + c =3D bdrv_new_open_driver(&bdrv_test, "c", BDRV_O_RDWR, &error_abort); + + /* blk is a BB for parent-a */ + blk =3D blk_new(BLK_PERM_ALL, BLK_PERM_ALL); + blk_insert_bs(blk, parent_a, &error_abort); + bdrv_unref(parent_a); + + /* Set child relationships */ + bdrv_ref(b); + bdrv_ref(a); + child_b =3D bdrv_attach_child(parent_b, b, "PB-B", &child_file, &error= _abort); + child_a =3D bdrv_attach_child(parent_b, a, "PB-A", &child_backing, &er= ror_abort); + + bdrv_ref(a); + bdrv_attach_child(parent_a, a, "PA-A", &child_file, &error_abort); + + g_assert_cmpint(parent_a->refcnt, =3D=3D, 1); + g_assert_cmpint(parent_b->refcnt, =3D=3D, 1); + g_assert_cmpint(a->refcnt, =3D=3D, 3); + g_assert_cmpint(b->refcnt, =3D=3D, 2); + g_assert_cmpint(c->refcnt, =3D=3D, 1); + + g_assert(QLIST_FIRST(&parent_b->children) =3D=3D child_a); + g_assert(QLIST_NEXT(child_a, next) =3D=3D child_b); + g_assert(QLIST_NEXT(child_b, next) =3D=3D NULL); + + /* Start the evil write request */ + data =3D (struct detach_by_parent_data) { + .parent_b =3D parent_b, + .child_b =3D child_b, + .c =3D c, + }; + acb =3D blk_aio_preadv(blk, 0, &qiov, 0, detach_by_parent_aio_cb, &dat= a); + g_assert(acb !=3D NULL); + + /* Drain and check the expected result */ + bdrv_subtree_drained_begin(parent_b); + + g_assert(data.child_c !=3D NULL); + + g_assert_cmpint(parent_a->refcnt, =3D=3D, 1); + g_assert_cmpint(parent_b->refcnt, =3D=3D, 1); + g_assert_cmpint(a->refcnt, =3D=3D, 3); + g_assert_cmpint(b->refcnt, =3D=3D, 1); + g_assert_cmpint(c->refcnt, =3D=3D, 2); + + g_assert(QLIST_FIRST(&parent_b->children) =3D=3D data.child_c); + g_assert(QLIST_NEXT(data.child_c, next) =3D=3D child_a); + g_assert(QLIST_NEXT(child_a, next) =3D=3D NULL); + + g_assert_cmpint(parent_a->quiesce_counter, =3D=3D, 1); + g_assert_cmpint(parent_b->quiesce_counter, =3D=3D, 1); + g_assert_cmpint(a->quiesce_counter, =3D=3D, 1); + g_assert_cmpint(b->quiesce_counter, =3D=3D, 0); + g_assert_cmpint(c->quiesce_counter, =3D=3D, 1); + + bdrv_subtree_drained_end(parent_b); + + bdrv_unref(parent_b); + blk_unref(blk); + + /* XXX Once bdrv_close() unref's children instead of just detaching th= em, + * this won't be necessary any more. 
*/ + bdrv_unref(a); + bdrv_unref(a); + bdrv_unref(c); + + g_assert_cmpint(a->refcnt, =3D=3D, 1); + g_assert_cmpint(b->refcnt, =3D=3D, 1); + g_assert_cmpint(c->refcnt, =3D=3D, 1); + bdrv_unref(a); + bdrv_unref(b); + bdrv_unref(c); +} + + int main(int argc, char **argv) { int ret; @@ -1027,6 +1156,7 @@ int main(int argc, char **argv) g_test_add_func("/bdrv-drain/deletion/drain", test_delete_by_drain); g_test_add_func("/bdrv-drain/detach/drain", test_detach_by_drain); g_test_add_func("/bdrv-drain/detach/drain_subtree", test_detach_by_dra= in_subtree); + g_test_add_func("/bdrv-drain/detach/parent_cb", test_detach_by_parent_= cb); =20 ret =3D g_test_run(); qemu_event_destroy(&done_event); --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465641689436.0219724425866; Wed, 11 Apr 2018 09:54:01 -0700 (PDT) Received: from localhost ([::1]:39699 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6J0S-00064c-Ge for importer@patchew.org; Wed, 11 Apr 2018 12:54:00 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41356) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InL-0002uj-Tq for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:30 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InL-0004dX-2I for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:27 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:34730 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InF-0004aw-RE; Wed, 11 Apr 2018 12:40:21 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 643C0407021B; Wed, 11 Apr 2018 16:40:21 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id 326B8BDC58; Wed, 11 Apr 2018 16:40:20 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:35 +0200 Message-Id: <20180411163940.2523-15-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Wed, 11 Apr 2018 16:40:21 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Wed, 11 Apr 2018 16:40:21 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 14/19] block: Defer .bdrv_drain_begin callback to polling phase X-BeenThere: 
qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" We cannot allow aio_poll() in bdrv_drain_invoke(begin=3Dtrue) until we're done with propagating the drain through the graph and are doing the single final BDRV_POLL_WHILE(). Just schedule the coroutine with the callback and increase bs->in_flight to make sure that the polling phase will wait for it. Signed-off-by: Kevin Wolf --- block/io.c | 28 +++++++++++++++++++++++----- 1 file changed, 23 insertions(+), 5 deletions(-) diff --git a/block/io.c b/block/io.c index f372b9ffb0..fc26c1a2f8 100644 --- a/block/io.c +++ b/block/io.c @@ -178,22 +178,40 @@ static void coroutine_fn bdrv_drain_invoke_entry(void= *opaque) =20 /* Set data->done before reading bs->wakeup. */ atomic_mb_set(&data->done, true); - bdrv_wakeup(bs); + bdrv_dec_in_flight(bs); + + if (data->begin) { + g_free(data); + } } =20 /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin) { - BdrvCoDrainData data =3D { .bs =3D bs, .done =3D false, .begin =3D beg= in}; + BdrvCoDrainData *data; =20 if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) || (!begin && !bs->drv->bdrv_co_drain_end)) { return; } =20 - data.co =3D qemu_coroutine_create(bdrv_drain_invoke_entry, &data); - bdrv_coroutine_enter(bs, data.co); - BDRV_POLL_WHILE(bs, !data.done); + data =3D g_new(BdrvCoDrainData, 1); + *data =3D (BdrvCoDrainData) { + .bs =3D bs, + .done =3D false, + .begin =3D begin + }; + + /* Make sure the driver callback completes during the polling phase for + * drain_begin. 
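The generic shape of this "defer to the polling phase" trick, as a self-contained sketch (do_driver_callback() is hypothetical; the real code runs the driver's .bdrv_co_drain_begin through bdrv_drain_invoke_entry()):

    static void coroutine_fn deferred_entry(void *opaque)
    {
        BlockDriverState *bs = opaque;

        do_driver_callback(bs);       /* hypothetical; may take time */
        bdrv_dec_in_flight(bs);       /* lets BDRV_POLL_WHILE() finish */
    }

    static void defer_to_polling_phase(BlockDriverState *bs)
    {
        Coroutine *co = qemu_coroutine_create(deferred_entry, bs);

        /* The extra in-flight reference guarantees that the final
         * BDRV_POLL_WHILE() of the drain section waits for
         * deferred_entry() instead of polling here and now. */
        bdrv_inc_in_flight(bs);
        aio_co_schedule(bdrv_get_aio_context(bs), co);
    }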
*/ + bdrv_inc_in_flight(bs); + data->co =3D qemu_coroutine_create(bdrv_drain_invoke_entry, data); + aio_co_schedule(bdrv_get_aio_context(bs), data->co); + + if (!begin) { + BDRV_POLL_WHILE(bs, !data->done); + g_free(data); + } } =20 /* Returns true if BDRV_POLL_WHILE() should go into a blocking aio_poll() = */ --=20 2.13.6 From nobody Mon May 6 07:07:25 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1523465458850610.2245951840827; Wed, 11 Apr 2018 09:50:58 -0700 (PDT) Received: from localhost ([::1]:39677 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6IxV-0003PE-Tu for importer@patchew.org; Wed, 11 Apr 2018 12:50:57 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41378) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f6InN-0002uo-6n for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:30 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f6InL-0004dy-KJ for qemu-devel@nongnu.org; Wed, 11 Apr 2018 12:40:29 -0400 Received: from mx3-rdu2.redhat.com ([66.187.233.73]:34054 helo=mx1.redhat.com) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f6InH-0004bh-8z; Wed, 11 Apr 2018 12:40:23 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id D3D968010F67; Wed, 11 Apr 2018 16:40:22 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-117-181.ams2.redhat.com [10.36.117.181]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9EC85AB3FD; Wed, 11 Apr 2018 16:40:21 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Date: Wed, 11 Apr 2018 18:39:36 +0200 Message-Id: <20180411163940.2523-16-kwolf@redhat.com> In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com> References: <20180411163940.2523-1-kwolf@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.8]); Wed, 11 Apr 2018 16:40:22 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.8]); Wed, 11 Apr 2018 16:40:22 +0000 (UTC) for IP:'10.11.54.5' DOMAIN:'int-mx05.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'kwolf@redhat.com' RCPT:'' X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.187.233.73 Subject: [Qemu-devel] [PATCH 15/19] test-bdrv-drain: Test that bdrv_drain_invoke() doesn't poll X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 
Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This adds a test case that goes wrong if bdrv_drain_invoke() calls aio_poll(). Signed-off-by: Kevin Wolf --- tests/test-bdrv-drain.c | 102 +++++++++++++++++++++++++++++++++++++++++---= ---- 1 file changed, 88 insertions(+), 14 deletions(-) diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c index fdf3ce19ea..79d845ecbb 100644 --- a/tests/test-bdrv-drain.c +++ b/tests/test-bdrv-drain.c @@ -34,12 +34,16 @@ static QemuEvent done_event; typedef struct BDRVTestState { int drain_count; AioContext *bh_indirection_ctx; + bool sleep_in_drain_begin; } BDRVTestState; =20 static void coroutine_fn bdrv_test_co_drain_begin(BlockDriverState *bs) { BDRVTestState *s =3D bs->opaque; s->drain_count++; + if (s->sleep_in_drain_begin) { + qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000); + } } =20 static void coroutine_fn bdrv_test_co_drain_end(BlockDriverState *bs) @@ -80,6 +84,22 @@ static int coroutine_fn bdrv_test_co_preadv(BlockDriverS= tate *bs, return 0; } =20 +static void bdrv_test_child_perm(BlockDriverState *bs, BdrvChild *c, + const BdrvChildRole *role, + BlockReopenQueue *reopen_queue, + uint64_t perm, uint64_t shared, + uint64_t *nperm, uint64_t *nshared) +{ + /* bdrv_format_default_perms() accepts only these two, so disguise + * detach_by_driver_cb_role as one of them. */ + if (role !=3D &child_file && role !=3D &child_backing) { + role =3D &child_file; + } + + bdrv_format_default_perms(bs, c, role, reopen_queue, perm, shared, + nperm, nshared); +} + static BlockDriver bdrv_test =3D { .format_name =3D "test", .instance_size =3D sizeof(BDRVTestState), @@ -90,7 +110,7 @@ static BlockDriver bdrv_test =3D { .bdrv_co_drain_begin =3D bdrv_test_co_drain_begin, .bdrv_co_drain_end =3D bdrv_test_co_drain_end, =20 - .bdrv_child_perm =3D bdrv_format_default_perms, + .bdrv_child_perm =3D bdrv_test_child_perm, }; =20 static void aio_ret_cb(void *opaque, int ret) @@ -982,13 +1002,14 @@ struct detach_by_parent_data { BdrvChild *child_b; BlockDriverState *c; BdrvChild *child_c; + bool by_parent_cb; }; +static struct detach_by_parent_data detach_by_parent_data; =20 -static void detach_by_parent_aio_cb(void *opaque, int ret) +static void detach_indirect_bh(void *opaque) { struct detach_by_parent_data *data =3D opaque; =20 - g_assert_cmpint(ret, =3D=3D, 0); bdrv_unref_child(data->parent_b, data->child_b); =20 bdrv_ref(data->c); @@ -996,6 +1017,25 @@ static void detach_by_parent_aio_cb(void *opaque, int= ret) &child_file, &error_abort); } =20 +static void detach_by_parent_aio_cb(void *opaque, int ret) +{ + struct detach_by_parent_data *data =3D &detach_by_parent_data; + + g_assert_cmpint(ret, =3D=3D, 0); + if (data->by_parent_cb) { + detach_indirect_bh(data); + } +} + +static void detach_by_driver_cb_drained_begin(BdrvChild *child) +{ + aio_bh_schedule_oneshot(qemu_get_current_aio_context(), + detach_indirect_bh, &detach_by_parent_data); + child_file.drained_begin(child); +} + +static BdrvChildRole detach_by_driver_cb_role; + /* * Initial graph: * @@ -1003,17 +1043,25 @@ static void detach_by_parent_aio_cb(void *opaque, i= nt ret) * \ / \ * A B C * - * PA has a pending write request whose callback changes the child nodes o= f PB: - * It removes B and adds C instead. The subtree of PB is drained, which wi= ll - * indirectly drain the write request, too. 
 tests/test-bdrv-drain.c | 102 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 88 insertions(+), 14 deletions(-)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index fdf3ce19ea..79d845ecbb 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -34,12 +34,16 @@ static QemuEvent done_event;
 typedef struct BDRVTestState {
     int drain_count;
     AioContext *bh_indirection_ctx;
+    bool sleep_in_drain_begin;
 } BDRVTestState;
 
 static void coroutine_fn bdrv_test_co_drain_begin(BlockDriverState *bs)
 {
     BDRVTestState *s = bs->opaque;
     s->drain_count++;
+    if (s->sleep_in_drain_begin) {
+        qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000);
+    }
 }
 
 static void coroutine_fn bdrv_test_co_drain_end(BlockDriverState *bs)
@@ -80,6 +84,22 @@ static int coroutine_fn bdrv_test_co_preadv(BlockDriverState *bs,
     return 0;
 }
 
+static void bdrv_test_child_perm(BlockDriverState *bs, BdrvChild *c,
+                                 const BdrvChildRole *role,
+                                 BlockReopenQueue *reopen_queue,
+                                 uint64_t perm, uint64_t shared,
+                                 uint64_t *nperm, uint64_t *nshared)
+{
+    /* bdrv_format_default_perms() accepts only these two, so disguise
+     * detach_by_driver_cb_role as one of them. */
+    if (role != &child_file && role != &child_backing) {
+        role = &child_file;
+    }
+
+    bdrv_format_default_perms(bs, c, role, reopen_queue, perm, shared,
+                              nperm, nshared);
+}
+
 static BlockDriver bdrv_test = {
     .format_name            = "test",
     .instance_size          = sizeof(BDRVTestState),
@@ -90,7 +110,7 @@ static BlockDriver bdrv_test = {
     .bdrv_co_drain_begin    = bdrv_test_co_drain_begin,
     .bdrv_co_drain_end      = bdrv_test_co_drain_end,
 
-    .bdrv_child_perm        = bdrv_format_default_perms,
+    .bdrv_child_perm        = bdrv_test_child_perm,
 };
 
 static void aio_ret_cb(void *opaque, int ret)
@@ -982,13 +1002,14 @@ struct detach_by_parent_data {
     BdrvChild *child_b;
     BlockDriverState *c;
     BdrvChild *child_c;
+    bool by_parent_cb;
 };
+static struct detach_by_parent_data detach_by_parent_data;
 
-static void detach_by_parent_aio_cb(void *opaque, int ret)
+static void detach_indirect_bh(void *opaque)
 {
     struct detach_by_parent_data *data = opaque;
 
-    g_assert_cmpint(ret, ==, 0);
     bdrv_unref_child(data->parent_b, data->child_b);
 
     bdrv_ref(data->c);
@@ -996,6 +1017,25 @@ static void detach_by_parent_aio_cb(void *opaque, int ret)
                                       &child_file, &error_abort);
 }
 
+static void detach_by_parent_aio_cb(void *opaque, int ret)
+{
+    struct detach_by_parent_data *data = &detach_by_parent_data;
+
+    g_assert_cmpint(ret, ==, 0);
+    if (data->by_parent_cb) {
+        detach_indirect_bh(data);
+    }
+}
+
+static void detach_by_driver_cb_drained_begin(BdrvChild *child)
+{
+    aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
+                            detach_indirect_bh, &detach_by_parent_data);
+    child_file.drained_begin(child);
+}
+
+static BdrvChildRole detach_by_driver_cb_role;
+
 /*
  * Initial graph:
  *
@@ -1003,17 +1043,25 @@ static void detach_by_parent_aio_cb(void *opaque, int ret)
  * PA     PB
  *    \ /   \
  *     A     B     C
  *
- * PA has a pending write request whose callback changes the child nodes of PB:
- * It removes B and adds C instead. The subtree of PB is drained, which will
- * indirectly drain the write request, too.
+ * by_parent_cb == true:  Test that parent callbacks don't poll
+ *
+ *     PA has a pending write request whose callback changes the child nodes of
+ *     PB: It removes B and adds C instead. The subtree of PB is drained, which
+ *     will indirectly drain the write request, too.
+ *
+ * by_parent_cb == false: Test that bdrv_drain_invoke() doesn't poll
+ *
+ *     PA's BdrvChildRole has a .drained_begin callback that schedules a BH
+ *     that does the same graph change. If bdrv_drain_invoke() calls it, the
+ *     state is messed up, but if it is only polled in the single
+ *     BDRV_POLL_WHILE() at the end of the drain, this should work fine.
  */
-static void test_detach_by_parent_cb(void)
+static void test_detach_indirect(bool by_parent_cb)
 {
     BlockBackend *blk;
     BlockDriverState *parent_a, *parent_b, *a, *b, *c;
     BdrvChild *child_a, *child_b;
     BlockAIOCB *acb;
-    struct detach_by_parent_data data;
 
     QEMUIOVector qiov;
     struct iovec iov = {
@@ -1022,6 +1070,12 @@ static void test_detach_by_parent_cb(void)
     };
     qemu_iovec_init_external(&qiov, &iov, 1);
 
+    if (!by_parent_cb) {
+        detach_by_driver_cb_role = child_file;
+        detach_by_driver_cb_role.drained_begin =
+            detach_by_driver_cb_drained_begin;
+    }
+
     /* Create all involved nodes */
     parent_a = bdrv_new_open_driver(&bdrv_test, "parent-a", BDRV_O_RDWR,
                                     &error_abort);
@@ -1037,6 +1091,13 @@ static void test_detach_by_parent_cb(void)
     blk_insert_bs(blk, parent_a, &error_abort);
     bdrv_unref(parent_a);
 
+    /* If we want to get bdrv_drain_invoke() to call aio_poll(), the driver
+     * callback must not return immediately. */
+    if (!by_parent_cb) {
+        BDRVTestState *s = parent_a->opaque;
+        s->sleep_in_drain_begin = true;
+    }
+
     /* Set child relationships */
     bdrv_ref(b);
     bdrv_ref(a);
@@ -1044,7 +1105,9 @@ static void test_detach_by_parent_cb(void)
     child_a = bdrv_attach_child(parent_b, a, "PB-A", &child_backing, &error_abort);
 
     bdrv_ref(a);
-    bdrv_attach_child(parent_a, a, "PA-A", &child_file, &error_abort);
+    bdrv_attach_child(parent_a, a, "PA-A",
+                      by_parent_cb ? &child_file : &detach_by_driver_cb_role,
+                      &error_abort);
 
     g_assert_cmpint(parent_a->refcnt, ==, 1);
     g_assert_cmpint(parent_b->refcnt, ==, 1);
@@ -1057,18 +1120,19 @@ static void test_detach_by_parent_cb(void)
     g_assert(QLIST_NEXT(child_b, next) == NULL);
 
     /* Start the evil write request */
-    data = (struct detach_by_parent_data) {
+    detach_by_parent_data = (struct detach_by_parent_data) {
         .parent_b = parent_b,
         .child_b = child_b,
         .c = c,
+        .by_parent_cb = by_parent_cb,
     };
-    acb = blk_aio_preadv(blk, 0, &qiov, 0, detach_by_parent_aio_cb, &data);
+    acb = blk_aio_preadv(blk, 0, &qiov, 0, detach_by_parent_aio_cb, NULL);
     g_assert(acb != NULL);
 
     /* Drain and check the expected result */
     bdrv_subtree_drained_begin(parent_b);
 
-    g_assert(data.child_c != NULL);
+    g_assert(detach_by_parent_data.child_c != NULL);
 
     g_assert_cmpint(parent_a->refcnt, ==, 1);
     g_assert_cmpint(parent_b->refcnt, ==, 1);
@@ -1076,8 +1140,8 @@ static void test_detach_by_parent_cb(void)
     g_assert_cmpint(b->refcnt, ==, 1);
     g_assert_cmpint(c->refcnt, ==, 2);
 
-    g_assert(QLIST_FIRST(&parent_b->children) == data.child_c);
-    g_assert(QLIST_NEXT(data.child_c, next) == child_a);
+    g_assert(QLIST_FIRST(&parent_b->children) == detach_by_parent_data.child_c);
+    g_assert(QLIST_NEXT(detach_by_parent_data.child_c, next) == child_a);
     g_assert(QLIST_NEXT(child_a, next) == NULL);
 
     g_assert_cmpint(parent_a->quiesce_counter, ==, 1);
@@ -1105,6 +1169,15 @@ static void test_detach_by_parent_cb(void)
     bdrv_unref(c);
 }
 
+static void test_detach_by_parent_cb(void)
+{
+    test_detach_indirect(true);
+}
+
+static void test_detach_by_driver_cb(void)
+{
+    test_detach_indirect(false);
+}
 
 int main(int argc, char **argv)
 {
@@ -1157,6 +1230,7 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/detach/drain", test_detach_by_drain);
     g_test_add_func("/bdrv-drain/detach/drain_subtree", test_detach_by_drain_subtree);
     g_test_add_func("/bdrv-drain/detach/parent_cb", test_detach_by_parent_cb);
+    g_test_add_func("/bdrv-drain/detach/driver_cb", test_detach_by_driver_cb);
 
     ret = g_test_run();
     qemu_event_destroy(&done_event);
-- 
2.13.6

From nobody Mon May 6 07:07:25 2024
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:37 +0200
Message-Id: <20180411163940.2523-17-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 16/19] block: Allow AIO_WAIT_WHILE with NULL ctx
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop thread and use the AioWait notification mechanism.

Just randomly picking the AioContext of any non-mainloop thread would
work, but instead of bothering to find such a context in the caller, we
can just as well accept NULL for ctx.

Signed-off-by: Kevin Wolf
---
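As a usage sketch, this is roughly what the NULL-ctx case looks like from a
caller. The condition helper all_requests_done() is made up for the example;
AIO_WAIT_WHILE(), AioWait, qemu_get_aio_context() and
qemu_get_current_aio_context() are the real APIs.

    #include "qemu/osdep.h"
    #include "block/aio-wait.h"

    static AioWait example_wait;

    static bool all_requests_done(void);    /* hypothetical condition */

    static void example_wait_for_all(void)
    {
        /* With ctx == NULL, AIO_WAIT_WHILE() polls the main loop context
         * and relies on aio_wait_kick() from other threads to re-evaluate
         * the condition, so the caller must run in the main loop thread. */
        assert(qemu_get_current_aio_context() == qemu_get_aio_context());
        AIO_WAIT_WHILE(&example_wait, NULL, !all_requests_done());
    }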
 include/block/aio-wait.h | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index 783d3678dd..c85a62f798 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -57,7 +57,8 @@ typedef struct {
 /**
  * AIO_WAIT_WHILE:
  * @wait: the aio wait object
- * @ctx: the aio context
+ * @ctx: the aio context, or NULL if multiple aio contexts (for which the
+ *       caller does not hold a lock) are involved in the polling condition.
  * @cond: wait while this conditional expression is true
  *
  * Wait while a condition is true. Use this to implement synchronous
@@ -75,7 +76,7 @@ typedef struct {
     bool waited_ = false;                                          \
     AioWait *wait_ = (wait);                                       \
     AioContext *ctx_ = (ctx);                                      \
-    if (in_aio_context_home_thread(ctx_)) {                        \
+    if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
         while ((cond)) {                                           \
             aio_poll(ctx_, true);                                  \
             waited_ = true;                                        \
@@ -86,9 +87,13 @@ typedef struct {
         /* Increment wait_->num_waiters before evaluating cond. */ \
         atomic_inc(&wait_->num_waiters);                           \
         while ((cond)) {                                           \
-            aio_context_release(ctx_);                             \
+            if (ctx_) {                                            \
+                aio_context_release(ctx_);                         \
+            }                                                      \
             aio_poll(qemu_get_aio_context(), true);                \
-            aio_context_acquire(ctx_);                             \
+            if (ctx_) {                                            \
+                aio_context_acquire(ctx_);                         \
+            }                                                      \
             waited_ = true;                                        \
         }                                                          \
         atomic_dec(&wait_->num_waiters);                           \
-- 
2.13.6

From nobody Mon May 6 07:07:25 2024
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:38 +0200
Message-Id: <20180411163940.2523-18-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 17/19] block: Move bdrv_drain_all_begin() out of
 coroutine context
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Before we can introduce a single polling loop for all nodes in
bdrv_drain_all_begin(), we must make sure to run it outside of
coroutine context like we already do for bdrv_do_drained_begin().

Signed-off-by: Kevin Wolf
---
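For illustration, the defer-to-BH pattern used here, reduced to its core.
The example_* names and example_do_work_that_polls() are made up;
aio_bh_schedule_oneshot(), qemu_coroutine_self(), qemu_coroutine_yield()
and aio_co_wake() are the real APIs.

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "block/aio.h"

    typedef struct ExampleBHData {
        Coroutine *co;
        bool done;
    } ExampleBHData;

    /* Runs as a BH in the main loop, outside coroutine context, so it is
     * allowed to poll. */
    static void example_bh_cb(void *opaque)
    {
        ExampleBHData *data = opaque;

        example_do_work_that_polls();       /* made-up placeholder */
        data->done = true;
        aio_co_wake(data->co);              /* reenter the coroutine */
    }

    static void coroutine_fn example_entry(void)
    {
        ExampleBHData data = { .co = qemu_coroutine_self() };

        aio_bh_schedule_oneshot(qemu_get_aio_context(), example_bh_cb, &data);
        qemu_coroutine_yield();             /* resumed by aio_co_wake() */
        assert(data.done);
    }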
 block/io.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index fc26c1a2f8..db6810b35f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -255,11 +255,16 @@ static void bdrv_co_drain_bh_cb(void *opaque)
     Coroutine *co = data->co;
     BlockDriverState *bs = data->bs;
 
-    bdrv_dec_in_flight(bs);
-    if (data->begin) {
-        bdrv_do_drained_begin(bs, data->recursive, data->parent, data->poll);
+    if (bs) {
+        bdrv_dec_in_flight(bs);
+        if (data->begin) {
+            bdrv_do_drained_begin(bs, data->recursive, data->parent, data->poll);
+        } else {
+            bdrv_do_drained_end(bs, data->recursive, data->parent);
+        }
     } else {
-        bdrv_do_drained_end(bs, data->recursive, data->parent);
+        assert(data->begin);
+        bdrv_drain_all_begin();
     }
 
     data->done = true;
@@ -285,7 +290,9 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
         .parent = parent,
         .poll = poll,
     };
-    bdrv_inc_in_flight(bs);
+    if (bs) {
+        bdrv_inc_in_flight(bs);
+    }
     aio_bh_schedule_oneshot(bdrv_get_aio_context(bs),
                             bdrv_co_drain_bh_cb, &data);
 
@@ -455,6 +462,11 @@ void bdrv_drain_all_begin(void)
     BlockDriverState *bs;
     BdrvNextIterator it;
 
+    if (qemu_in_coroutine()) {
+        bdrv_co_yield_to_drain(NULL, true, false, NULL, true);
+        return;
+    }
+
     /* BDRV_POLL_WHILE() for a node can only be called from its own I/O thread
      * or the main loop AioContext. We potentially use BDRV_POLL_WHILE() on
      * nodes in several different AioContexts, so make sure we're in the main
-- 
2.13.6

From nobody Mon May 6 07:07:25 2024
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:39 +0200
Message-Id: <20180411163940.2523-19-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 18/19] block: Allow graph changes in
 bdrv_drain_all_begin/end sections
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different.

If the graph changes so that root nodes are added or removed, we would
have to compensate for this. bdrv_next() returns each root node only
once even if it's the root node for multiple BlockBackends or for a
monitor-owned block driver tree, which would only complicate things.

The much easier and more obviously correct way is to fundamentally
change the way the functions work: Iterate over all BlockDriverStates,
no matter who owns them, and drain them individually. Compensation is
only necessary when a new BDS is created inside a drain_all section.
Removal of a BDS doesn't require any action because it's gone
afterwards anyway.

This change means that some nodes get a higher bs->quiesce_counter now
because each node propagates its individual drain to all of its
parents. In the old subtree drain, propagation back to the parent that
made the recursive drain request was avoided. While this isn't
perfectly beautiful, accepting the higher counts seems preferable to
adding drain code to multiple other places that modify the graph.

The test case is changed to account for the higher counts where
necessary.

Signed-off-by: Kevin Wolf
---
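Distilled for illustration, this is the per-node iteration protocol both
loops now follow. Everything here appears in the diff below; only the
placeholder comment is new.

    BlockDriverState *bs = NULL;

    /* bdrv_next_all_states(NULL) yields the first BDS and then walks all
     * of them, regardless of whether a BlockBackend or the monitor owns
     * the node. The per-node AioContext lock is held around each body. */
    while ((bs = bdrv_next_all_states(bs))) {
        AioContext *aio_context = bdrv_get_aio_context(bs);

        aio_context_acquire(aio_context);
        /* per-node work, e.g. bdrv_do_drained_begin(bs, false, NULL, false) */
        aio_context_release(aio_context);
    }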
 include/block/block.h     |  1 +
 include/block/block_int.h |  1 +
 block.c                   | 20 +++++++++++++++-
 block/io.c                | 58 ++++++++++++++++++++++++++++++++++-----------
 tests/test-bdrv-drain.c   | 37 +++++++++++++++++-------------
 5 files changed, 87 insertions(+), 30 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index de2cba2c74..0b59d519c5 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -412,6 +412,7 @@ BlockDriverState *bdrv_lookup_bs(const char *device,
                                  Error **errp);
 bool bdrv_chain_contains(BlockDriverState *top, BlockDriverState *base);
 BlockDriverState *bdrv_next_node(BlockDriverState *bs);
+BlockDriverState *bdrv_next_all_states(BlockDriverState *bs);
 
 typedef struct BdrvNextIterator {
     enum {
diff --git a/include/block/block_int.h b/include/block/block_int.h
index dc6985e3ae..2c86a7b53f 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -804,6 +804,7 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
     int64_t offset, unsigned int bytes, QEMUIOVector *qiov,
     BdrvRequestFlags flags);
 
+extern unsigned int bdrv_drain_all_count;
 void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_parent);
 void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent);
 
diff --git a/block.c b/block.c
index 330238de19..c1b339c4a4 100644
--- a/block.c
+++ b/block.c
@@ -332,6 +332,10 @@ BlockDriverState *bdrv_new(void)
 
     qemu_co_queue_init(&bs->flush_queue);
 
+    for (i = 0; i < bdrv_drain_all_count; i++) {
+        bdrv_drained_begin(bs);
+    }
+
     QTAILQ_INSERT_TAIL(&all_bdrv_states, bs, bs_list);
 
     return bs;
@@ -1160,7 +1164,7 @@ static int bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv,
                             int open_flags, Error **errp)
 {
     Error *local_err = NULL;
-    int ret;
+    int i, ret;
 
     bdrv_assign_node_name(bs, node_name, &local_err);
     if (local_err) {
@@ -1208,6 +1212,12 @@ static int bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv,
     assert(bdrv_min_mem_align(bs) != 0);
     assert(is_power_of_2(bs->bl.request_alignment));
 
+    for (i = 0; i < bs->quiesce_counter; i++) {
+        if (drv->bdrv_co_drain_begin) {
+            drv->bdrv_co_drain_begin(bs);
+        }
+    }
+
     return 0;
 open_failed:
     bs->drv = NULL;
@@ -4034,6 +4044,14 @@ BlockDriverState *bdrv_next_node(BlockDriverState *bs)
     return QTAILQ_NEXT(bs, node_list);
 }
 
+BlockDriverState *bdrv_next_all_states(BlockDriverState *bs)
+{
+    if (!bs) {
+        return QTAILQ_FIRST(&all_bdrv_states);
+    }
+    return QTAILQ_NEXT(bs, bs_list);
+}
+
 const char *bdrv_get_node_name(const BlockDriverState *bs)
 {
     return bs->node_name;
diff --git a/block/io.c b/block/io.c
index db6810b35f..af65c3ec2f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -38,6 +38,8 @@
 /* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
 #define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
 
+static AioWait drain_all_aio_wait;
+
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags);
 
@@ -445,6 +447,29 @@ static void bdrv_drain_assert_idle(BlockDriverState *bs)
     }
 }
 
+unsigned int bdrv_drain_all_count = 0;
+
+static bool bdrv_drain_all_poll(void)
+{
+    BlockDriverState *bs = NULL;
+    bool result = false;
+
+    /* Execute pending BHs first (may modify the graph) and check everything
+     * else only after the BHs have executed. */
+    while (aio_poll(qemu_get_aio_context(), false));
+
+    /* bdrv_drain_poll() with top_level = false can't make changes to the
+     * graph, so iterating bdrv_next_all_states() is safe. */
+    while ((bs = bdrv_next_all_states(bs))) {
+        AioContext *aio_context = bdrv_get_aio_context(bs);
+        aio_context_acquire(aio_context);
+        result |= bdrv_drain_poll(bs, false, false);
+        aio_context_release(aio_context);
+    }
+
+    return result;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -459,45 +484,51 @@ static void bdrv_drain_assert_idle(BlockDriverState *bs)
  */
 void bdrv_drain_all_begin(void)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    BlockDriverState *bs = NULL;
 
     if (qemu_in_coroutine()) {
         bdrv_co_yield_to_drain(NULL, true, false, NULL, true);
         return;
     }
 
-    /* BDRV_POLL_WHILE() for a node can only be called from its own I/O thread
-     * or the main loop AioContext. We potentially use BDRV_POLL_WHILE() on
-     * nodes in several different AioContexts, so make sure we're in the main
-     * context. */
+    /* AIO_WAIT_WHILE() with a NULL context can only be called from the main
+     * loop AioContext, so make sure we're in the main context. */
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bdrv_drain_all_count < INT_MAX);
+    bdrv_drain_all_count++;
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    /* Quiesce all nodes, without polling in-flight requests yet. The graph
+     * cannot change during this loop. */
+    while ((bs = bdrv_next_all_states(bs))) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        bdrv_do_drained_begin(bs, true, NULL, true);
+        bdrv_do_drained_begin(bs, false, NULL, false);
         aio_context_release(aio_context);
     }
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    /* Now poll the in-flight requests */
+    AIO_WAIT_WHILE(&drain_all_aio_wait, NULL, bdrv_drain_all_poll());
+
+    while ((bs = bdrv_next_all_states(bs))) {
         bdrv_drain_assert_idle(bs);
     }
 }
 
 void bdrv_drain_all_end(void)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    BlockDriverState *bs = NULL;
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    while ((bs = bdrv_next_all_states(bs))) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
-        bdrv_do_drained_end(bs, true, NULL);
+        bdrv_do_drained_end(bs, false, NULL);
         aio_context_release(aio_context);
     }
+
+    assert(bdrv_drain_all_count > 0);
+    bdrv_drain_all_count--;
 }
 
 void bdrv_drain_all(void)
@@ -620,6 +651,7 @@ void bdrv_inc_in_flight(BlockDriverState *bs)
 void bdrv_wakeup(BlockDriverState *bs)
 {
     aio_wait_kick(bdrv_get_aio_wait(bs));
+    aio_wait_kick(&drain_all_aio_wait);
 }
 
 void bdrv_dec_in_flight(BlockDriverState *bs)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 79d845ecbb..97ca0743c6 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -174,7 +174,8 @@ static void do_drain_end(enum drain_type drain_type, BlockDriverState *bs)
     }
 }
 
-static void test_drv_cb_common(enum drain_type drain_type, bool recursive)
+static void test_drv_cb_common(enum drain_type drain_type, int top_drain_count,
+                               int backing_drain_count)
 {
     BlockBackend *blk;
     BlockDriverState *bs, *backing;
@@ -205,8 +206,8 @@ static void test_drv_cb_common(enum drain_type drain_type, bool recursive)
 
     do_drain_begin(drain_type, bs);
 
-    g_assert_cmpint(s->drain_count, ==, 1);
-    g_assert_cmpint(backing_s->drain_count, ==, !!recursive);
+    g_assert_cmpint(s->drain_count, ==, top_drain_count);
+    g_assert_cmpint(backing_s->drain_count, ==, backing_drain_count);
 
     do_drain_end(drain_type, bs);
 
@@ -225,8 +226,8 @@ static void test_drv_cb_common(enum drain_type drain_type, bool recursive)
     do_drain_begin(drain_type, bs);
 
     g_assert_cmpint(aio_ret, ==, 0);
-    g_assert_cmpint(s->drain_count, ==, 1);
-    g_assert_cmpint(backing_s->drain_count, ==, !!recursive);
+    g_assert_cmpint(s->drain_count, ==, top_drain_count);
+    g_assert_cmpint(backing_s->drain_count, ==, backing_drain_count);
 
     do_drain_end(drain_type, bs);
 
@@ -240,17 +241,17 @@ static void test_drv_cb_common(enum drain_type drain_type, bool recursive)
 
 static void test_drv_cb_drain_all(void)
 {
-    test_drv_cb_common(BDRV_DRAIN_ALL, true);
+    test_drv_cb_common(BDRV_DRAIN_ALL, 2, 1);
 }
 
 static void test_drv_cb_drain(void)
 {
-    test_drv_cb_common(BDRV_DRAIN, false);
+    test_drv_cb_common(BDRV_DRAIN, 1, 0);
 }
 
 static void test_drv_cb_drain_subtree(void)
 {
-    test_drv_cb_common(BDRV_SUBTREE_DRAIN, true);
+    test_drv_cb_common(BDRV_SUBTREE_DRAIN, 1, 1);
 }
 
 static void test_drv_cb_co_drain_all(void)
@@ -268,7 +269,9 @@ static void test_drv_cb_co_drain_subtree(void)
     call_in_coroutine(test_drv_cb_drain_subtree);
 }
 
-static void test_quiesce_common(enum drain_type drain_type, bool recursive)
+static void test_quiesce_common(enum drain_type drain_type,
+                                int top_quiesce_count,
+                                int backing_quiesce_count)
 {
     BlockBackend *blk;
     BlockDriverState *bs, *backing;
@@ -286,8 +289,8 @@ static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 
     do_drain_begin(drain_type, bs);
 
-    g_assert_cmpint(bs->quiesce_counter, ==, 1);
-    g_assert_cmpint(backing->quiesce_counter, ==, !!recursive);
+    g_assert_cmpint(bs->quiesce_counter, ==, top_quiesce_count);
+    g_assert_cmpint(backing->quiesce_counter, ==, backing_quiesce_count);
 
     do_drain_end(drain_type, bs);
 
@@ -301,17 +304,17 @@ static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 
 static void test_quiesce_drain_all(void)
 {
-    test_quiesce_common(BDRV_DRAIN_ALL, true);
+    test_quiesce_common(BDRV_DRAIN_ALL, 2, 1);
 }
 
 static void test_quiesce_drain(void)
 {
-    test_quiesce_common(BDRV_DRAIN, false);
+    test_quiesce_common(BDRV_DRAIN, 1, 0);
 }
 
 static void test_quiesce_drain_subtree(void)
 {
-    test_quiesce_common(BDRV_SUBTREE_DRAIN, true);
+    test_quiesce_common(BDRV_SUBTREE_DRAIN, 1, 1);
 }
 
 static void test_quiesce_co_drain_all(void)
@@ -348,6 +351,8 @@ static void test_nested(void)
 
     for (outer = 0; outer < DRAIN_TYPE_MAX; outer++) {
         for (inner = 0; inner < DRAIN_TYPE_MAX; inner++) {
+            int top_quiesce = 2 + (outer == BDRV_DRAIN_ALL) +
+                                  (inner == BDRV_DRAIN_ALL);
             int backing_quiesce = (outer != BDRV_DRAIN) +
                                   (inner != BDRV_DRAIN);
 
@@ -359,9 +364,9 @@ static void test_nested(void)
             do_drain_begin(outer, bs);
             do_drain_begin(inner, bs);
 
-            g_assert_cmpint(bs->quiesce_counter, ==, 2);
+            g_assert_cmpint(bs->quiesce_counter, ==, top_quiesce);
             g_assert_cmpint(backing->quiesce_counter, ==, backing_quiesce);
-            g_assert_cmpint(s->drain_count, ==, 2);
+            g_assert_cmpint(s->drain_count, ==, top_quiesce);
             g_assert_cmpint(backing_s->drain_count, ==, backing_quiesce);
 
             do_drain_end(inner, bs);
-- 
2.13.6

From nobody Mon May 6 07:07:25 2024
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, stefanha@redhat.com, pbonzini@redhat.com
Date: Wed, 11 Apr 2018 18:39:40 +0200
Message-Id: <20180411163940.2523-20-kwolf@redhat.com>
In-Reply-To: <20180411163940.2523-1-kwolf@redhat.com>
References: <20180411163940.2523-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 19/19] test-bdrv-drain: Test graph changes in
 drain_all section
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This tests both adding and removing a node between bdrv_drain_all_begin()
and bdrv_drain_all_end(), and enables the existing detach test for
drain_all.

Signed-off-by: Kevin Wolf
---
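In miniature, the invariant under test here; this just condenses the steps
of test_graph_change_drain_all() from the diff below into their essence.

    bdrv_drain_all_begin();

    /* A node created inside a drain_all section must come into existence
     * already quiesced: bdrv_new() now applies bdrv_drain_all_count
     * drained sections to it. */
    bs = bdrv_new_open_driver(&bdrv_test, "new-node", BDRV_O_RDWR,
                              &error_abort);
    g_assert_cmpint(bs->quiesce_counter, ==, 1);

    bdrv_drain_all_end();
    g_assert_cmpint(bs->quiesce_counter, ==, 0);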
 tests/test-bdrv-drain.c | 75 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 73 insertions(+), 2 deletions(-)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 97ca0743c6..462842a761 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -457,7 +457,7 @@ static void test_multiparent(void)
     blk_unref(blk_b);
 }
 
-static void test_graph_change(void)
+static void test_graph_change_drain_subtree(void)
 {
     BlockBackend *blk_a, *blk_b;
     BlockDriverState *bs_a, *bs_b, *backing;
@@ -536,6 +536,63 @@ static void test_graph_change_drain_subtree(void)
     blk_unref(blk_b);
 }
 
+static void test_graph_change_drain_all(void)
+{
+    BlockBackend *blk_a, *blk_b;
+    BlockDriverState *bs_a, *bs_b;
+    BDRVTestState *a_s, *b_s;
+
+    /* Create node A with a BlockBackend */
+    blk_a = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_a = bdrv_new_open_driver(&bdrv_test, "test-node-a", BDRV_O_RDWR,
+                                &error_abort);
+    a_s = bs_a->opaque;
+    blk_insert_bs(blk_a, bs_a, &error_abort);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 0);
+    g_assert_cmpint(a_s->drain_count, ==, 0);
+
+    /* Call bdrv_drain_all_begin() */
+    bdrv_drain_all_begin();
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 1);
+    g_assert_cmpint(a_s->drain_count, ==, 1);
+
+    /* Create node B with a BlockBackend */
+    blk_b = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_b = bdrv_new_open_driver(&bdrv_test, "test-node-b", BDRV_O_RDWR,
+                                &error_abort);
+    b_s = bs_b->opaque;
+    blk_insert_bs(blk_b, bs_b, &error_abort);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 1);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 1);
+    g_assert_cmpint(a_s->drain_count, ==, 1);
+    g_assert_cmpint(b_s->drain_count, ==, 1);
+
+    /* Unref and finally delete node A */
+    blk_unref(blk_a);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 1);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 1);
+    g_assert_cmpint(a_s->drain_count, ==, 1);
+    g_assert_cmpint(b_s->drain_count, ==, 1);
+
+    bdrv_unref(bs_a);
+
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 1);
+    g_assert_cmpint(b_s->drain_count, ==, 1);
+
+    /* End the drained section */
+    bdrv_drain_all_end();
+
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
+    g_assert_cmpint(b_s->drain_count, ==, 0);
+
+    bdrv_unref(bs_b);
+    blk_unref(blk_b);
+}
+
 struct test_iothread_data {
     BlockDriverState *bs;
     enum drain_type drain_type;
@@ -971,6 +1028,10 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
         bdrv_subtree_drained_begin(bs);
         bdrv_subtree_drained_end(bs);
         break;
+    case BDRV_DRAIN_ALL:
+        bdrv_drain_all_begin();
+        bdrv_drain_all_end();
+        break;
     default:
         g_assert_not_reached();
     }
@@ -991,6 +1052,11 @@ static void test_delete_by_drain(void)
     do_test_delete_by_drain(false, BDRV_DRAIN);
 }
 
+static void test_detach_by_drain_all(void)
+{
+    do_test_delete_by_drain(true, BDRV_DRAIN_ALL);
+}
+
 static void test_detach_by_drain(void)
 {
     do_test_delete_by_drain(true, BDRV_DRAIN);
@@ -1219,7 +1285,11 @@ int main(int argc, char **argv)
 
     g_test_add_func("/bdrv-drain/nested", test_nested);
     g_test_add_func("/bdrv-drain/multiparent", test_multiparent);
-    g_test_add_func("/bdrv-drain/graph-change", test_graph_change);
+
+    g_test_add_func("/bdrv-drain/graph-change/drain_subtree",
+                    test_graph_change_drain_subtree);
+    g_test_add_func("/bdrv-drain/graph-change/drain_all",
+                    test_graph_change_drain_all);
 
     g_test_add_func("/bdrv-drain/iothread/drain_all", test_iothread_drain_all);
     g_test_add_func("/bdrv-drain/iothread/drain", test_iothread_drain);
@@ -1232,6 +1302,7 @@ int main(int argc, char **argv)
                     test_blockjob_drain_subtree);
 
     g_test_add_func("/bdrv-drain/deletion/drain", test_delete_by_drain);
+    g_test_add_func("/bdrv-drain/detach/drain_all", test_detach_by_drain_all);
     g_test_add_func("/bdrv-drain/detach/drain", test_detach_by_drain);
     g_test_add_func("/bdrv-drain/detach/drain_subtree", test_detach_by_drain_subtree);
     g_test_add_func("/bdrv-drain/detach/parent_cb", test_detach_by_parent_cb);
-- 
2.13.6