From: Max Reitz <mreitz@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-devel@nongnu.org, Max Reitz,
    Stefan Hajnoczi, John Snow
Subject: [Qemu-devel] [PATCH v3 03/16] tests: Add bdrv-drain test for node deletion
Date: Wed, 28 Feb 2018 19:04:54 +0100
Message-Id: <20180228180507.3964-4-mreitz@redhat.com>
In-Reply-To: <20180228180507.3964-1-mreitz@redhat.com>
References: <20180228180507.3964-1-mreitz@redhat.com>

This patch adds two bdrv-drain tests for what happens if some BDS goes
away during the drainage.

The basic idea is that you have a parent BDS with some child nodes.
Then, you drain one of the children.  Because of that, the party who
actually owns the parent decides to (A) delete it, or (B) detach all
its children from it -- both while the child is still being drained.

A real-world case where this can happen is the mirror block job, which
may exit if you drain one of its children.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
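A quick sketch for reviewers new to the drain API: the window these
tests poke at is the drained section opened by bdrv_drained_begin()
and closed by bdrv_drained_end() (both from include/block/block.h);
child_bs below stands in for any node, error handling omitted:

    /* Quiesce child_bs: new requests are held back and all pending
     * requests are awaited.  A parent's reaction -- e.g. a mirror
     * job exiting -- may delete or detach nodes inside this window,
     * which is exactly the situation the tests construct. */
    bdrv_drained_begin(child_bs);

    /* ... the node graph may change here ... */

    bdrv_drained_end(child_bs);

bdrv_drain(bs), as used by the test, is effectively this same bracket
with nothing in between.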
 tests/test-bdrv-drain.c | 165 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 165 insertions(+)
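The detach path below walks bs->children with QLIST_FOREACH_SAFE, which
matters because bdrv_unref_child() unlinks the current entry.  A
self-contained sketch of that pattern (hypothetical Node type, not part
of the patch; assumes QEMU's "qemu/queue.h" and GLib are available):

    #include <glib.h>
    #include "qemu/queue.h"

    typedef struct Node {
        QLIST_ENTRY(Node) siblings;   /* linkage within all_nodes */
    } Node;

    static QLIST_HEAD(, Node) all_nodes = QLIST_HEAD_INITIALIZER(all_nodes);

    static void detach_all(void)
    {
        Node *n, *next_n;
        /* The _SAFE variant caches the successor before the loop body
         * runs, so unlinking and freeing the current element is fine. */
        QLIST_FOREACH_SAFE(n, &all_nodes, siblings, next_n) {
            QLIST_REMOVE(n, siblings);
            g_free(n);
        }
    }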
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index d760e2b243..c8c9178f25 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -610,6 +610,169 @@ static void test_blockjob_drain_subtree(void)
     test_blockjob_common(BDRV_SUBTREE_DRAIN);
 }
 
+
+typedef struct BDRVTestTopState {
+    BdrvChild *wait_child;
+} BDRVTestTopState;
+
+static void bdrv_test_top_close(BlockDriverState *bs)
+{
+    BdrvChild *c, *next_c;
+    QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) {
+        bdrv_unref_child(bs, c);
+    }
+}
+
+static int coroutine_fn bdrv_test_top_co_preadv(BlockDriverState *bs,
+                                                uint64_t offset, uint64_t bytes,
+                                                QEMUIOVector *qiov, int flags)
+{
+    BDRVTestTopState *tts = bs->opaque;
+    return bdrv_co_preadv(tts->wait_child, offset, bytes, qiov, flags);
+}
+
+static BlockDriver bdrv_test_top_driver = {
+    .format_name            = "test_top_driver",
+    .instance_size          = sizeof(BDRVTestTopState),
+
+    .bdrv_close             = bdrv_test_top_close,
+    .bdrv_co_preadv         = bdrv_test_top_co_preadv,
+
+    .bdrv_child_perm        = bdrv_format_default_perms,
+};
+
+typedef struct TestCoDeleteByDrainData {
+    BlockBackend *blk;
+    bool detach_instead_of_delete;
+    bool done;
+} TestCoDeleteByDrainData;
+
+static void coroutine_fn test_co_delete_by_drain(void *opaque)
+{
+    TestCoDeleteByDrainData *dbdd = opaque;
+    BlockBackend *blk = dbdd->blk;
+    BlockDriverState *bs = blk_bs(blk);
+    BDRVTestTopState *tts = bs->opaque;
+    void *buffer = g_malloc(65536);
+    QEMUIOVector qiov;
+    struct iovec iov = {
+        .iov_base = buffer,
+        .iov_len = 65536,
+    };
+
+    qemu_iovec_init_external(&qiov, &iov, 1);
+
+    /* Pretend some internal write operation from parent to child.
+     * Important: We have to read from the child, not from the parent!
+     * Draining works by first propagating it all up the tree to the
+     * root and then waiting for drainage from root to the leaves
+     * (protocol nodes). If we have a request waiting on the root,
+     * everything will be drained before we go back down the tree, but
+     * we do not want that. We want to be in the middle of draining
+     * when the following request returns. */
+    bdrv_co_preadv(tts->wait_child, 0, 65536, &qiov, 0);
+    /* The drain is running concurrently, so it must have its own
+     * reference to @bs */
+    g_assert_cmpint(bs->refcnt, ==, 2);
+
+    if (!dbdd->detach_instead_of_delete) {
+        blk_unref(blk);
+    } else {
+        BdrvChild *c, *next_c;
+        QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) {
+            bdrv_unref_child(bs, c);
+        }
+    }
+
+    dbdd->done = true;
+}
+
+/**
+ * Test what happens when some BDS has some children, you drain one of
+ * them and this results in the BDS being deleted.
+ *
+ * If @detach_instead_of_delete is set, the BDS is not going to be
+ * deleted but will only detach all of its children.
+ */
+static void do_test_delete_by_drain(bool detach_instead_of_delete)
+{
+    BlockBackend *blk;
+    BlockDriverState *bs, *child_bs, *null_bs;
+    BDRVTestTopState *tts;
+    TestCoDeleteByDrainData dbdd;
+    Coroutine *co;
+
+    bs = bdrv_new_open_driver(&bdrv_test_top_driver, "top", BDRV_O_RDWR,
+                              &error_abort);
+    bs->total_sectors = 65536 >> BDRV_SECTOR_BITS;
+    tts = bs->opaque;
+
+    null_bs = bdrv_open("null-co://", NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
+                        &error_abort);
+    bdrv_attach_child(bs, null_bs, "null-child", &child_file, &error_abort);
+
+    /* This child will be the one to pass requests through to, and
+     * it will stall until a drain occurs */
+    child_bs = bdrv_new_open_driver(&bdrv_test, "child", BDRV_O_RDWR,
+                                    &error_abort);
+    child_bs->total_sectors = 65536 >> BDRV_SECTOR_BITS;
+    /* Takes our reference to child_bs */
+    tts->wait_child = bdrv_attach_child(bs, child_bs, "wait-child", &child_file,
+                                        &error_abort);
+
+    /* This child is just there to be deleted
+     * (for detach_instead_of_delete == true) */
+    null_bs = bdrv_open("null-co://", NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
+                        &error_abort);
+    bdrv_attach_child(bs, null_bs, "null-child", &child_file, &error_abort);
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    blk_insert_bs(blk, bs, &error_abort);
+
+    /* Referenced by blk now */
+    bdrv_unref(bs);
+
+    g_assert_cmpint(bs->refcnt, ==, 1);
+    g_assert_cmpint(child_bs->refcnt, ==, 1);
+    g_assert_cmpint(null_bs->refcnt, ==, 1);
+
+
+    dbdd = (TestCoDeleteByDrainData){
+        .blk = blk,
+        .detach_instead_of_delete = detach_instead_of_delete,
+        .done = false,
+    };
+    co = qemu_coroutine_create(test_co_delete_by_drain, &dbdd);
+    qemu_coroutine_enter(co);
+
+    /* Drain the child while the read operation is still pending.
+     * This should result in the operation finishing and
+     * test_co_delete_by_drain() resuming. Thus, @bs will be deleted
+     * and the coroutine will exit while this drain operation is still
+     * in progress. */
+    bdrv_ref(child_bs);
+    bdrv_drain(child_bs);
+    bdrv_unref(child_bs);
+
+    while (!dbdd.done) {
+        aio_poll(qemu_get_aio_context(), true);
+    }
+
+    if (detach_instead_of_delete) {
+        /* Here, the reference has not passed over to the coroutine,
+         * so we have to delete the BB ourselves */
+        blk_unref(blk);
+    }
+}
+
+
+static void test_delete_by_drain(void)
+{
+    do_test_delete_by_drain(false);
+    do_test_delete_by_drain(true);
+}
+
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -647,5 +810,7 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/blockjob/drain_subtree",
                     test_blockjob_drain_subtree);
 
+    g_test_add_func("/bdrv-drain/deletion", test_delete_by_drain);
+
     return g_test_run();
 }
-- 
2.14.3
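(The new case registers as /bdrv-drain/deletion; assuming an in-tree
build, GLib's test harness lets you run it in isolation:

    tests/test-bdrv-drain -p /bdrv-drain/deletion

while `make check-unit` runs the whole unit test suite.)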