From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, slp@redhat.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, pbonzini@redhat.com, jsnow@redhat.com
Date: Thu, 20 Sep 2018 18:19:42 +0200
Message-Id: <20180920161958.27453-4-kwolf@redhat.com>
In-Reply-To: <20180920161958.27453-1-kwolf@redhat.com>
References: <20180920161958.27453-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH v3 03/19] aio-wait: Increase num_waiters even in home thread

Even if AIO_WAIT_WHILE() is called in the home context of the
AioContext, we still want to allow the condition to change depending on
other threads as long as they kick the AioWait. Specifically, block jobs
can be running in an I/O thread and should then be able to kick a drain
in the main loop context.
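
For illustration only (not part of the patch), here is a minimal sketch of
the pattern this enables: a job-like object whose completion is signalled
from an IOThread while the main loop waits for it. The FakeJob type and the
two helpers are made-up names, and the sketch assumes the aio-wait API as
of this series (an AioWait embedding a num_waiters counter,
aio_wait_kick(AioWait *) and AIO_WAIT_WHILE(wait, ctx, cond)); it would
only build inside the QEMU tree.

#include "qemu/osdep.h"
#include "qemu/atomic.h"
#include "block/aio-wait.h"

/* Hypothetical state shared between an IOThread and the main loop. */
typedef struct {
    AioWait wait;
    bool done;      /* set by the IOThread, polled by the main loop */
} FakeJob;

/* Runs in the IOThread: publish completion, then kick the waiter.
 * aio_wait_kick() only wakes up waiters when wait->num_waiters is
 * non-zero, so the waiter must have incremented the counter before
 * evaluating its condition. */
static void fake_job_complete(FakeJob *job)
{
    atomic_mb_set(&job->done, true);
    aio_wait_kick(&job->wait);
}

/* Runs in the main loop, i.e. the home thread of qemu_get_aio_context(),
 * so AIO_WAIT_WHILE() takes the home-thread branch. */
static void wait_for_fake_job(FakeJob *job)
{
    AIO_WAIT_WHILE(&job->wait, qemu_get_aio_context(),
                   !atomic_mb_read(&job->done));
}

Without the atomic_inc() before the home-thread branch, aio_wait_kick() in
the IOThread would see num_waiters == 0 and not wake up the main loop's
aio_poll(), so the wait could hang even though job->done has already
become true.
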
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
---
 include/block/aio-wait.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index c85a62f798..600fad183e 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -76,6 +76,8 @@ typedef struct {
     bool waited_ = false;                                          \
     AioWait *wait_ = (wait);                                       \
     AioContext *ctx_ = (ctx);                                      \
+    /* Increment wait_->num_waiters before evaluating cond. */     \
+    atomic_inc(&wait_->num_waiters);                               \
     if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
         while ((cond)) {                                           \
             aio_poll(ctx_, true);                                  \
@@ -84,8 +86,6 @@ typedef struct {
     } else {                                                       \
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
-        /* Increment wait_->num_waiters before evaluating cond. */ \
-        atomic_inc(&wait_->num_waiters);                           \
         while ((cond)) {                                           \
             if (ctx_) {                                            \
                 aio_context_release(ctx_);                         \
@@ -96,8 +96,8 @@ typedef struct {
             }                                                      \
             waited_ = true;                                        \
         }                                                          \
-        atomic_dec(&wait_->num_waiters);                           \
     }                                                              \
+    atomic_dec(&wait_->num_waiters);                               \
     waited_; })
 
 /**
-- 
2.13.6