From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com
Date: Mon, 26 Nov 2018 15:16:38 +0100
Message-Id: <20181126141639.13208-2-kwolf@redhat.com>
In-Reply-To: <20181126141639.13208-1-kwolf@redhat.com>
References: <20181126141639.13208-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH for-3.1 v2 1/2] block: Don't inactivate children before parents

bdrv_child_cb_inactivate() asserts that parents are already inactive
when children get inactivated. This precondition is necessary because
parents could still issue requests in their inactivation code.

When block nodes are created individually with -blockdev, all of them
are monitor owned and will be returned by bdrv_next() in an undefined
order (in practice, in the order of their creation, which is usually
children before parents), which obviously fails the assertion:

qemu: block.c:899: bdrv_child_cb_inactivate: Assertion `bs->open_flags & BDRV_O_INACTIVE' failed.
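To make the ordering problem concrete, here is a minimal standalone
model of the pre-patch walk (a sketch only; the Node type and all
names below are invented for illustration, this is not QEMU code):

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct Node {
    bool inactive;
    struct Node *parent;    /* one parent is enough to show the problem */
} Node;

static void inactivate(Node *n)
{
    /* The precondition that bdrv_child_cb_inactivate() asserts: all
     * parents must already be inactive when a child goes inactive */
    assert(n->parent == NULL || n->parent->inactive);
    n->inactive = true;
}

int main(void)
{
    Node parent = { 0 };
    Node child = { .parent = &parent };

    /* bdrv_next() returns monitor-owned nodes roughly in creation
     * order, i.e. the child before the parent ... */
    Node *creation_order[] = { &child, &parent };

    for (int i = 0; i < 2; i++) {
        inactivate(creation_order[i]);  /* ... so this aborts at i == 0 */
    }
    return 0;
}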
This patch fixes the ordering by skipping nodes with still active
parents in bdrv_inactivate_recurse() because we know that they will be
covered by recursion when the last active parent becomes inactive.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/block.c b/block.c
index 5ba3435f8f..e9181c3be7 100644
--- a/block.c
+++ b/block.c
@@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
     }
 }
 
+static bool bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
+{
+    BdrvChild *parent;
+
+    QLIST_FOREACH(parent, &bs->parents, next_parent) {
+        if (parent->role->parent_is_bds) {
+            BlockDriverState *parent_bs = parent->opaque;
+            if (!only_active || !(parent_bs->open_flags & BDRV_O_INACTIVE)) {
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+
 static int bdrv_inactivate_recurse(BlockDriverState *bs,
                                    bool setting_flag)
 {
@@ -4622,6 +4638,14 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
         return -ENOMEDIUM;
     }
 
+    /* Make sure that we don't inactivate a child before its parent.
+     * It will be covered by recursion from the yet active parent. */
+    if (bdrv_has_bds_parent(bs, true)) {
+        return 0;
+    }
+
+    assert(!(bs->open_flags & BDRV_O_INACTIVE));
+
     if (!setting_flag && bs->drv->bdrv_inactivate) {
         ret = bs->drv->bdrv_inactivate(bs);
         if (ret < 0) {
@@ -4629,7 +4653,7 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
         }
     }
 
-    if (setting_flag && !(bs->open_flags & BDRV_O_INACTIVE)) {
+    if (setting_flag) {
         uint64_t perm, shared_perm;
 
         QLIST_FOREACH(parent, &bs->parents, next_parent) {
@@ -4682,6 +4706,12 @@ int bdrv_inactivate_all(void)
      * is allowed. */
     for (pass = 0; pass < 2; pass++) {
         for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+            /* Nodes with BDS parents are covered by recursion from the last
+             * parent that gets inactivated. Don't inactivate them a second
+             * time if that has already happened. */
+            if (bdrv_has_bds_parent(bs, false)) {
+                continue;
+            }
             ret = bdrv_inactivate_recurse(bs, pass);
             if (ret < 0) {
                 bdrv_next_cleanup(&it);
-- 
2.19.1
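For reviewers who want to see how the two bdrv_has_bds_parent() call
sites interact in isolation, here is a minimal standalone sketch (the
graph and every name below are invented for illustration, this is not
QEMU code). "file" has two parents, so the recursion-level skip
actually fires:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_EDGES 4

typedef struct Node {
    const char *name;
    bool inactive;
    struct Node *parents[MAX_EDGES];
    struct Node *children[MAX_EDGES];
    int nparents, nchildren;
} Node;

static void link_nodes(Node *parent, Node *child)
{
    parent->children[parent->nchildren++] = child;
    child->parents[child->nparents++] = parent;
}

/* Toy counterpart of bdrv_has_bds_parent(bs, only_active) */
static bool has_parent(Node *n, bool only_active)
{
    for (int i = 0; i < n->nparents; i++) {
        if (!only_active || !n->parents[i]->inactive) {
            return true;
        }
    }
    return false;
}

static void inactivate_recurse(Node *n)
{
    /* The fix: a child with any still-active parent is skipped here;
     * the last parent to go inactive recurses into it again. */
    if (has_parent(n, true)) {
        return;
    }

    assert(!n->inactive);       /* each node is visited exactly once */
    printf("inactivate %s\n", n->name);
    n->inactive = true;

    for (int i = 0; i < n->nchildren; i++) {
        inactivate_recurse(n->children[i]);
    }
}

int main(void)
{
    /* Children are created before parents, as with -blockdev */
    Node file = { .name = "file" };
    Node overlay1 = { .name = "overlay1" };
    Node overlay2 = { .name = "overlay2" };
    link_nodes(&overlay1, &file);
    link_nodes(&overlay2, &file);

    Node *creation_order[] = { &file, &overlay1, &overlay2 };

    for (int i = 0; i < 3; i++) {
        /* Top-level loop: nodes with parents are reached by recursion */
        if (has_parent(creation_order[i], false)) {
            continue;
        }
        inactivate_recurse(creation_order[i]);
    }
    return 0;
}

Run, this prints overlay1, overlay2, file: the top-level loop only
starts recursion at root nodes (only_active=false), while the
recursion defers a child until its last parent has gone inactive
(only_active=true), so each node is inactivated exactly once, parents
first.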