From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Date: Mon, 18 Jun 2018 18:44:35 +0200
Message-Id: <20180618164504.24488-7-kwolf@redhat.com>
In-Reply-To: <20180618164504.24488-1-kwolf@redhat.com>
References: <20180618164504.24488-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 06/35] block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE()
Cc: kwolf@redhat.com, qemu-devel@nongnu.org

Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE() in
order to make sure that all pending BHs are executed on drain. This was the
wrong place to make the fix, as it is useless overhead for all other users
of the macro and unnecessarily complicates the mechanism.
This patch effectively reverts said commit (the context has changed a bit
and the code has moved to AIO_WAIT_WHILE()) and instead polls in the loop
condition for drain.

The effect is probably hard to measure in any real-world use case because
actual I/O will dominate, but if I run only the initialisation part of
'qemu-img convert' where it calls bdrv_block_status() for the whole image
to find out how much data there is to copy, this phase actually needs only
roughly half the time after this patch.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi
---
 include/block/aio-wait.h | 22 ++++++++--------------
 block/io.c               | 11 ++++++++++-
 2 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index 8c90a2e66e..783d3678dd 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -73,29 +73,23 @@ typedef struct {
  */
 #define AIO_WAIT_WHILE(wait, ctx, cond) ({                         \
     bool waited_ = false;                                          \
-    bool busy_ = true;                                             \
     AioWait *wait_ = (wait);                                       \
     AioContext *ctx_ = (ctx);                                      \
     if (in_aio_context_home_thread(ctx_)) {                        \
-        while ((cond) || busy_) {                                  \
-            busy_ = aio_poll(ctx_, (cond));                        \
-            waited_ |= !!(cond) | busy_;                           \
+        while ((cond)) {                                           \
+            aio_poll(ctx_, true);                                  \
+            waited_ = true;                                        \
         }                                                          \
     } else {                                                       \
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
         /* Increment wait_->num_waiters before evaluating cond. */ \
         atomic_inc(&wait_->num_waiters);                           \
-        while (busy_) {                                            \
-            if ((cond)) {                                          \
-                waited_ = busy_ = true;                            \
-                aio_context_release(ctx_);                         \
-                aio_poll(qemu_get_aio_context(), true);            \
-                aio_context_acquire(ctx_);                         \
-            } else {                                               \
-                busy_ = aio_poll(ctx_, false);                     \
-                waited_ |= busy_;                                  \
-            }                                                      \
+        while ((cond)) {                                           \
+            aio_context_release(ctx_);                             \
+            aio_poll(qemu_get_aio_context(), true);                \
+            aio_context_acquire(ctx_);                             \
+            waited_ = true;                                        \
         }                                                          \
         atomic_dec(&wait_->num_waiters);                           \
     }                                                              \
diff --git a/block/io.c b/block/io.c
index 983307cf03..bc7a2d78b8 100644
--- a/block/io.c
+++ b/block/io.c
@@ -182,13 +182,22 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
     BDRV_POLL_WHILE(bs, !data.done);
 }
 
+/* Returns true if BDRV_POLL_WHILE() should go into a blocking aio_poll() */
+static bool bdrv_drain_poll(BlockDriverState *bs)
+{
+    /* Execute pending BHs first and check everything else only after the BHs
+     * have executed. */
+    while (aio_poll(bs->aio_context, false));
+    return atomic_read(&bs->in_flight);
+}
+
 static bool bdrv_drain_recurse(BlockDriverState *bs)
 {
     BdrvChild *child, *tmp;
     bool waited;
 
     /* Wait for drained requests to finish */
-    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
+    waited = BDRV_POLL_WHILE(bs, bdrv_drain_poll(bs));
 
     QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
         BlockDriverState *bs = child->bs;
-- 
2.13.6
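
For readers following the logic, the standalone sketch below models the
polling pattern used by the patch: a drain-style condition first flushes all
pending work with non-blocking polls and only then reports whether anything
is still in flight, so the wait loop blocks only when it really has to. This
is an illustration only, not part of the patch; aio_poll_stub(), pending_bhs
and in_flight are made-up stand-ins for the real AioContext machinery and the
BlockDriverState in-flight counter.

    /* Standalone sketch, not part of the patch. */
    #include <stdbool.h>
    #include <stdio.h>

    static int pending_bhs = 3;   /* pretend three BHs are queued */
    static int in_flight = 2;     /* pretend two requests are in flight */

    /* Non-blocking poll runs one pending BH and reports whether progress
     * was made; a blocking poll also lets an in-flight request complete. */
    static bool aio_poll_stub(bool blocking)
    {
        if (pending_bhs > 0) {
            pending_bhs--;
            return true;
        }
        if (blocking && in_flight > 0) {
            in_flight--;
            return true;
        }
        return false;
    }

    /* Models the bdrv_drain_poll() idea: execute pending BHs first, then
     * report whether the caller still has to wait. */
    static bool drain_poll(void)
    {
        while (aio_poll_stub(false)) {
            /* keep flushing BHs without blocking */
        }
        return in_flight > 0;
    }

    int main(void)
    {
        bool waited = false;

        /* Simplified AIO_WAIT_WHILE(): block only while the condition holds. */
        while (drain_poll()) {
            aio_poll_stub(true);
            waited = true;
        }

        printf("waited=%d, pending_bhs=%d, in_flight=%d\n",
               waited, pending_bhs, in_flight);
        return 0;
    }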