From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, slp@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, pbonzini@redhat.com, jsnow@redhat.com
Subject: [Qemu-devel] [PATCH v2 14/17] block: Remove aio_poll() in bdrv_drain_poll variants
Date: Thu, 13 Sep 2018 14:52:14 +0200
Message-Id: <20180913125217.23173-15-kwolf@redhat.com>
In-Reply-To: <20180913125217.23173-1-kwolf@redhat.com>
References: <20180913125217.23173-1-kwolf@redhat.com>

bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock
a second time and run into a deadlock with a nested AIO_WAIT_WHILE()
call.

However, it turns out that the aio_poll() call isn't actually needed
any more. It was introduced in commit 91af091f923, which is effectively
reverted by this patch.
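To make the failure mode concrete, here is a minimal standalone analogue
of the deadlock using plain pthreads rather than QEMU's real types: the
recursive mutex stands in for the AioContext lock, the wait loop stands
in for AIO_WAIT_WHILE() (which drops the lock exactly once around
aio_poll()), and every name in it (wait_while_not_done, worker) is
invented for the sketch.

/* deadlock-sketch.c - standalone analogue of the bug, NOT QEMU code.
 * Build: gcc -pthread deadlock-sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock;
static bool done;

/* Analogue of AIO_WAIT_WHILE(): releases ONE level of the (recursive)
 * lock while polling for the condition, then re-acquires it. */
static bool wait_while_not_done(void)
{
    for (int i = 0; i < 100 && !done; i++) {
        pthread_mutex_unlock(&lock);    /* drops one level only */
        usleep(10 * 1000);
        pthread_mutex_lock(&lock);
    }
    return done;
}

/* Analogue of the callback/BH that must run for draining to finish. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);          /* blocks while main holds any level */
    done = true;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_t t;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);          /* outer acquisition (the caller's) */
    pthread_create(&t, NULL, worker, NULL);

    pthread_mutex_lock(&lock);          /* nested acquisition: the extra level
                                         * a callback run by the buggy
                                         * aio_poll() loop could take */
    if (wait_while_not_done()) {
        printf("no deadlock: worker ran\n");    /* reached with ONE level held */
        pthread_join(t, NULL);
    } else {
        printf("deadlock: worker never acquired the lock\n");
    }
    return 0;
}

Because unlocking a recursive mutex only drops one level, the single
release inside the wait loop is not enough once the lock is held twice,
which is exactly the situation described above.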
The cases it was supposed to fix are now covered by bdrv_drain_poll(),
which waits for block jobs to reach a quiescent state.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/io.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/block/io.c b/block/io.c
index 914ba78f1a..8b81ff3913 100644
--- a/block/io.c
+++ b/block/io.c
@@ -268,10 +268,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
 static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
                                       BdrvChild *ignore_parent)
 {
-    /* Execute pending BHs first and check everything else only after the BHs
-     * have executed. */
-    while (aio_poll(bs->aio_context, false));
-
     return bdrv_drain_poll(bs, recursive, ignore_parent, false);
 }
 
@@ -511,10 +507,6 @@ static bool bdrv_drain_all_poll(void)
     BlockDriverState *bs = NULL;
     bool result = false;
 
-    /* Execute pending BHs first (may modify the graph) and check everything
-     * else only after the BHs have executed. */
-    while (aio_poll(qemu_get_aio_context(), false));
-
     /* bdrv_drain_poll() can't make changes to the graph and we are holding the
      * main AioContext lock, so iterating bdrv_next_all_states() is safe. */
     while ((bs = bdrv_next_all_states(bs))) {
-- 
2.13.6
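For context, after this patch the drain loop relies entirely on
AIO_WAIT_WHILE() to run pending BHs, with the AioContext lock dropped
around aio_poll(). A rough sketch of how a caller uses a drained
section in this era of the code base (simplified; the function name
drain_and_modify is hypothetical and error handling is omitted):

#include "qemu/osdep.h"
#include "block/block.h"

/* Sketch of a drained section. bdrv_drained_begin() polls until
 * bdrv_drain_poll() reports quiescence, running any pending BHs
 * itself along the way, so no extra aio_poll() loop is needed. */
static void drain_and_modify(BlockDriverState *bs)
{
    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    bdrv_drained_begin(bs);

    /* ... safely manipulate the graph, no requests in flight ... */

    bdrv_drained_end(bs);
    aio_context_release(ctx);
}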