From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Subject: [Qemu-devel] [PULL 01/35] test-bdrv-drain: bdrv_drain() works with cross-AioContext events
Date: Mon, 18 Jun 2018 18:44:30 +0200
Message-Id: <20180618164504.24488-2-kwolf@redhat.com>
In-Reply-To: <20180618164504.24488-1-kwolf@redhat.com>
References: <20180618164504.24488-1-kwolf@redhat.com>

As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext we're
polling (which it always is for bdrv_drain()) so that aio_poll() is woken
up in the end.

Add a test case that shows that it works. Remove the comment in
bdrv_drain() that claims otherwise.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
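
One step the commit message takes for granted is how the test request is
made to depend on an event in another AioContext in the first place. The
new bdrv_test_co_preadv() below does it with a bottom-half indirection;
here is a minimal sketch of that pattern using the same helpers the patch
itself uses (ctx_a/ctx_b stand for the two iothread contexts; an
illustration of the idea, not standalone code):

    /* Runs in ctx_b: hand the coroutine back to its home context. */
    static void co_reenter_bh(void *opaque)
    {
        aio_co_wake(opaque);
    }

    /* ...inside the coroutine servicing the read request (ctx_a)... */
    aio_bh_schedule_oneshot(ctx_b, co_reenter_bh, qemu_coroutine_self());
    qemu_coroutine_yield();   /* resumed only after the BH in ctx_b ran */

Because aio_co_wake() reschedules the coroutine in its own AioContext, the
request finally completes (and bdrv_dec_in_flight() runs) in ctx_a, the
context that bdrv_drain() is polling, so the drain loop is woken up even
though completion took a detour through ctx_b.
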
 block/io.c              |   4 --
 tests/test-bdrv-drain.c | 187 +++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 186 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index b7beaeeb9f..5f286bcedb 100644
--- a/block/io.c
+++ b/block/io.c
@@ -370,10 +370,6 @@ void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent)
  *
  * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
  * AioContext.
- *
- * Only this BlockDriverState's AioContext is run, so in-flight requests must
- * not depend on events in other AioContexts. In that case, use
- * bdrv_drain_all() instead.
  */
 void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
 {
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index a11c4cfbf2..fb68539d17 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -27,9 +27,13 @@
 #include "block/blockjob_int.h"
 #include "sysemu/block-backend.h"
 #include "qapi/error.h"
+#include "iothread.h"
+
+static QemuEvent done_event;
 
 typedef struct BDRVTestState {
     int drain_count;
+    AioContext *bh_indirection_ctx;
 } BDRVTestState;
 
 static void coroutine_fn bdrv_test_co_drain_begin(BlockDriverState *bs)
@@ -50,16 +54,29 @@ static void bdrv_test_close(BlockDriverState *bs)
     g_assert_cmpint(s->drain_count, >, 0);
 }
 
+static void co_reenter_bh(void *opaque)
+{
+    aio_co_wake(opaque);
+}
+
 static int coroutine_fn bdrv_test_co_preadv(BlockDriverState *bs,
                                             uint64_t offset, uint64_t bytes,
                                             QEMUIOVector *qiov, int flags)
 {
+    BDRVTestState *s = bs->opaque;
+
     /* We want this request to stay until the polling loop in drain waits for
      * it to complete. We need to sleep a while as bdrv_drain_invoke() comes
      * first and polls its result, too, but it shouldn't accidentally complete
      * this request yet. */
     qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 100000);
 
+    if (s->bh_indirection_ctx) {
+        aio_bh_schedule_oneshot(s->bh_indirection_ctx, co_reenter_bh,
+                                qemu_coroutine_self());
+        qemu_coroutine_yield();
+    }
+
     return 0;
 }
 
@@ -490,6 +507,164 @@ static void test_graph_change(void)
     blk_unref(blk_b);
 }
 
+struct test_iothread_data {
+    BlockDriverState *bs;
+    enum drain_type drain_type;
+    int *aio_ret;
+};
+
+static void test_iothread_drain_entry(void *opaque)
+{
+    struct test_iothread_data *data = opaque;
+
+    aio_context_acquire(bdrv_get_aio_context(data->bs));
+    do_drain_begin(data->drain_type, data->bs);
+    g_assert_cmpint(*data->aio_ret, ==, 0);
+    do_drain_end(data->drain_type, data->bs);
+    aio_context_release(bdrv_get_aio_context(data->bs));
+
+    qemu_event_set(&done_event);
+}
+
+static void test_iothread_aio_cb(void *opaque, int ret)
+{
+    int *aio_ret = opaque;
+    *aio_ret = ret;
+    qemu_event_set(&done_event);
+}
+
+/*
+ * Starts an AIO request on a BDS that runs in the AioContext of iothread 1.
+ * The request involves a BH on iothread 2 before it can complete.
+ *
+ * @drain_thread = 0 means that do_drain_begin/end are called from the main
+ * thread, @drain_thread = 1 means that they are called from iothread 1. Drain
+ * for this BDS cannot be called from iothread 2 because only the main thread
+ * may do cross-AioContext polling.
+ */
+static void test_iothread_common(enum drain_type drain_type, int drain_thread)
+{
+    BlockBackend *blk;
+    BlockDriverState *bs;
+    BDRVTestState *s;
+    BlockAIOCB *acb;
+    int aio_ret;
+    struct test_iothread_data data;
+
+    IOThread *a = iothread_new();
+    IOThread *b = iothread_new();
+    AioContext *ctx_a = iothread_get_aio_context(a);
+    AioContext *ctx_b = iothread_get_aio_context(b);
+
+    QEMUIOVector qiov;
+    struct iovec iov = {
+        .iov_base = NULL,
+        .iov_len = 0,
+    };
+    qemu_iovec_init_external(&qiov, &iov, 1);
+
+    /* bdrv_drain_all() may only be called from the main loop thread */
+    if (drain_type == BDRV_DRAIN_ALL && drain_thread != 0) {
+        goto out;
+    }
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs = bdrv_new_open_driver(&bdrv_test, "test-node", BDRV_O_RDWR,
+                              &error_abort);
+    s = bs->opaque;
+    blk_insert_bs(blk, bs, &error_abort);
+
+    blk_set_aio_context(blk, ctx_a);
+    aio_context_acquire(ctx_a);
+
+    s->bh_indirection_ctx = ctx_b;
+
+    aio_ret = -EINPROGRESS;
+    if (drain_thread == 0) {
+        acb = blk_aio_preadv(blk, 0, &qiov, 0, test_iothread_aio_cb, &aio_ret);
+    } else {
+        acb = blk_aio_preadv(blk, 0, &qiov, 0, aio_ret_cb, &aio_ret);
+    }
+    g_assert(acb != NULL);
+    g_assert_cmpint(aio_ret, ==, -EINPROGRESS);
+
+    aio_context_release(ctx_a);
+
+    data = (struct test_iothread_data) {
+        .bs = bs,
+        .drain_type = drain_type,
+        .aio_ret = &aio_ret,
+    };
+
+    switch (drain_thread) {
+    case 0:
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_acquire(ctx_a);
+        }
+
+        /* The request is running on the IOThread a. Draining its block device
+         * will make sure that it has completed as far as the BDS is concerned,
+         * but the drain in this thread can continue immediately after
+         * bdrv_dec_in_flight() and aio_ret might be assigned only slightly
+         * later. */
+        qemu_event_reset(&done_event);
+        do_drain_begin(drain_type, bs);
+        g_assert_cmpint(bs->in_flight, ==, 0);
+
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_release(ctx_a);
+        }
+        qemu_event_wait(&done_event);
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_acquire(ctx_a);
+        }
+
+        g_assert_cmpint(aio_ret, ==, 0);
+        do_drain_end(drain_type, bs);
+
+        if (drain_type != BDRV_DRAIN_ALL) {
+            aio_context_release(ctx_a);
+        }
+        break;
+    case 1:
+        qemu_event_reset(&done_event);
+        aio_bh_schedule_oneshot(ctx_a, test_iothread_drain_entry, &data);
+        qemu_event_wait(&done_event);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    aio_context_acquire(ctx_a);
+    blk_set_aio_context(blk, qemu_get_aio_context());
+    aio_context_release(ctx_a);
+
+    bdrv_unref(bs);
+    blk_unref(blk);
+
+out:
+    iothread_join(a);
+    iothread_join(b);
+}
+
+static void test_iothread_drain_all(void)
+{
+    test_iothread_common(BDRV_DRAIN_ALL, 0);
+    test_iothread_common(BDRV_DRAIN_ALL, 1);
+}
+
+static void test_iothread_drain(void)
+{
+    test_iothread_common(BDRV_DRAIN, 0);
+    test_iothread_common(BDRV_DRAIN, 1);
+}
+
+static void test_iothread_drain_subtree(void)
+{
+    test_iothread_common(BDRV_SUBTREE_DRAIN, 0);
+    test_iothread_common(BDRV_SUBTREE_DRAIN, 1);
+}
+
 
 typedef struct TestBlockJob {
     BlockJob common;
@@ -618,10 +793,13 @@ static void test_blockjob_drain_subtree(void)
 
 int main(int argc, char **argv)
 {
+    int ret;
+
     bdrv_init();
     qemu_init_main_loop(&error_abort);
 
     g_test_init(&argc, &argv, NULL);
+    qemu_event_init(&done_event, false);
 
     g_test_add_func("/bdrv-drain/driver-cb/drain_all", test_drv_cb_drain_all);
     g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
@@ -648,10 +826,17 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/multiparent", test_multiparent);
     g_test_add_func("/bdrv-drain/graph-change", test_graph_change);
 
+    g_test_add_func("/bdrv-drain/iothread/drain_all", test_iothread_drain_all);
+    g_test_add_func("/bdrv-drain/iothread/drain", test_iothread_drain);
+    g_test_add_func("/bdrv-drain/iothread/drain_subtree",
+                    test_iothread_drain_subtree);
+
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
     g_test_add_func("/bdrv-drain/blockjob/drain_subtree",
                     test_blockjob_drain_subtree);
 
-    return g_test_run();
+    ret = g_test_run();
+    qemu_event_destroy(&done_event);
+    return ret;
 }
-- 
2.13.6
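
To exercise only the new cases after building, something along these lines
should work (paths assume the Makefile-based build in use at the time of
this series; adjust for an out-of-tree build directory):

    make tests/test-bdrv-drain
    ./tests/test-bdrv-drain -p /bdrv-drain/iothread/drain_all
    ./tests/test-bdrv-drain -p /bdrv-drain/iothread/drain
    ./tests/test-bdrv-drain -p /bdrv-drain/iothread/drain_subtree

-p is GLib's standard g_test option for selecting tests by path; running
the binary with no arguments, or make check-unit, runs the whole suite
including the pre-existing drain tests.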