From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: jcody@redhat.com, qemu-block@nongnu.org, stefanha@redhat.com
Date: Wed, 22 Feb 2017 19:07:24 +0100
Message-Id: <20170222180725.28611-3-pbonzini@redhat.com>
In-Reply-To: <20170222180725.28611-1-pbonzini@redhat.com>
References: <20170222180725.28611-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 2/3] nfs: do not use aio_context_acquire/release

Now that all bottom halves and callbacks take care of taking the
AioContext lock, we can migrate some users away from it and to a
specific QemuMutex or CoMutex.

Protect libnfs calls with a QemuMutex.  Callbacks are invoked using
bottom halves, so we don't even have to drop it around callback
invocations.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/nfs.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/nfs.c b/block/nfs.c
index 08b43dd..3ee3275 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -54,6 +54,7 @@ typedef struct NFSClient {
     int events;
     bool has_zero_init;
     AioContext *aio_context;
+    QemuMutex mutex;
     blkcnt_t st_blocks;
     bool cache_used;
     NFSServer *server;
@@ -191,6 +192,7 @@ static void nfs_parse_filename(const char *filename, QDict *options,
 static void nfs_process_read(void *arg);
 static void nfs_process_write(void *arg);
 
+/* Called with QemuMutex held. */
 static void nfs_set_events(NFSClient *client)
 {
     int ev = nfs_which_events(client->context);
@@ -209,20 +211,20 @@ static void nfs_process_read(void *arg)
 {
     NFSClient *client = arg;
 
-    aio_context_acquire(client->aio_context);
+    qemu_mutex_lock(&client->mutex);
     nfs_service(client->context, POLLIN);
     nfs_set_events(client);
-    aio_context_release(client->aio_context);
+    qemu_mutex_unlock(&client->mutex);
 }
 
 static void nfs_process_write(void *arg)
 {
     NFSClient *client = arg;
 
-    aio_context_acquire(client->aio_context);
+    qemu_mutex_lock(&client->mutex);
     nfs_service(client->context, POLLOUT);
     nfs_set_events(client);
-    aio_context_release(client->aio_context);
+    qemu_mutex_unlock(&client->mutex);
 }
 
 static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task)
@@ -242,6 +244,7 @@ static void nfs_co_generic_bh_cb(void *opaque)
     aio_co_wake(task->co);
 }
 
+/* Called (via nfs_service) with QemuMutex held. */
 static void
 nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
                   void *private_data)
@@ -273,14 +276,17 @@ static int coroutine_fn nfs_co_readv(BlockDriverState *bs,
     nfs_co_init_task(bs, &task);
     task.iov = iov;
 
+    qemu_mutex_lock(&client->mutex);
     if (nfs_pread_async(client->context, client->fh,
                         sector_num * BDRV_SECTOR_SIZE,
                         nb_sectors * BDRV_SECTOR_SIZE,
                         nfs_co_generic_cb, &task) != 0) {
+        qemu_mutex_unlock(&client->mutex);
         return -ENOMEM;
     }
 
     nfs_set_events(client);
+    qemu_mutex_unlock(&client->mutex);
     while (!task.complete) {
         qemu_coroutine_yield();
     }
@@ -314,15 +320,18 @@ static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
 
     qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
 
+    qemu_mutex_lock(&client->mutex);
     if (nfs_pwrite_async(client->context, client->fh,
                          sector_num * BDRV_SECTOR_SIZE,
                          nb_sectors * BDRV_SECTOR_SIZE,
                          buf, nfs_co_generic_cb, &task) != 0) {
+        qemu_mutex_unlock(&client->mutex);
         g_free(buf);
         return -ENOMEM;
     }
 
     nfs_set_events(client);
+    qemu_mutex_unlock(&client->mutex);
     while (!task.complete) {
         qemu_coroutine_yield();
     }
@@ -343,12 +352,15 @@ static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
 
     nfs_co_init_task(bs, &task);
 
+    qemu_mutex_lock(&client->mutex);
     if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
                         &task) != 0) {
+        qemu_mutex_unlock(&client->mutex);
         return -ENOMEM;
     }
 
     nfs_set_events(client);
+    qemu_mutex_unlock(&client->mutex);
     while (!task.complete) {
         qemu_coroutine_yield();
     }
@@ -434,6 +446,7 @@ static void nfs_file_close(BlockDriverState *bs)
 {
     NFSClient *client = bs->opaque;
     nfs_client_close(client);
+    qemu_mutex_destroy(&client->mutex);
 }
 
 static NFSServer *nfs_config(QDict *options, Error **errp)
@@ -641,6 +654,7 @@ static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
     if (ret < 0) {
         return ret;
     }
+    qemu_mutex_init(&client->mutex);
     bs->total_sectors = ret;
     ret = 0;
     return ret;
@@ -696,6 +710,7 @@ static int nfs_has_zero_init(BlockDriverState *bs)
     return client->has_zero_init;
 }
 
+/* Called (via nfs_service) with QemuMutex held. */
 static void
 nfs_get_allocated_file_size_cb(int ret, struct nfs_context *nfs, void *data,
                                void *private_data)
-- 
2.9.3
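
To make the locking discipline above concrete, here is a minimal,
self-contained C sketch (not QEMU code) of the same pattern.  It uses
pthread_mutex_t in place of QemuMutex, and fake_ctx, fake_pread_async,
fake_service, generic_cb, process_events and do_read are hypothetical
stand-ins for the libnfs context, the async submission, nfs_service,
nfs_co_generic_cb, nfs_process_read/write and nfs_co_readv.

/*
 * Sketch of the pattern in the patch: the mutex is held only around
 * calls into the non-thread-safe library; the completion callback runs
 * with the mutex held but only flags completion, so the lock never has
 * to be dropped around it.
 */
#include <pthread.h>
#include <stdio.h>

typedef void completion_fn(int ret, void *opaque);

typedef struct {                 /* stand-in for struct nfs_context */
    int pending;                 /* polls left before the I/O "completes" */
    completion_fn *cb;
    void *opaque;
} fake_ctx;

typedef struct {                 /* stand-in for NFSClient */
    fake_ctx ctx;
    pthread_mutex_t mutex;       /* plays the role of the new NFSClient mutex */
} client;

typedef struct {                 /* stand-in for NFSRPC */
    int complete;
    int ret;
} task;

/* Submit an async read; must be called with the mutex held. */
static int fake_pread_async(fake_ctx *ctx, completion_fn *cb, void *opaque)
{
    if (ctx->cb) {
        return -1;               /* only one outstanding request in this toy */
    }
    ctx->pending = 2;
    ctx->cb = cb;
    ctx->opaque = opaque;
    return 0;
}

/* Drive the library; may invoke the completion callback (mutex held). */
static void fake_service(fake_ctx *ctx)
{
    if (ctx->cb && --ctx->pending == 0) {
        completion_fn *cb = ctx->cb;
        ctx->cb = NULL;
        cb(512, ctx->opaque);    /* e.g. "512 bytes read" */
    }
}

/* Like nfs_co_generic_cb: runs with the mutex held and only records the
 * result; the waiter resumes later, outside the critical section. */
static void generic_cb(int ret, void *opaque)
{
    task *t = opaque;
    t->ret = ret;
    t->complete = 1;
}

/* Like nfs_process_read/nfs_process_write: lock, service, unlock. */
static void process_events(client *c)
{
    pthread_mutex_lock(&c->mutex);
    fake_service(&c->ctx);
    pthread_mutex_unlock(&c->mutex);
}

/* Like nfs_co_readv: lock around submission, unlock on the error path
 * too, then wait for the deferred completion without holding the lock. */
static int do_read(client *c)
{
    task t = { 0, 0 };

    pthread_mutex_lock(&c->mutex);
    if (fake_pread_async(&c->ctx, generic_cb, &t) != 0) {
        pthread_mutex_unlock(&c->mutex);
        return -1;
    }
    pthread_mutex_unlock(&c->mutex);

    while (!t.complete) {        /* stand-in for the qemu_coroutine_yield() loop */
        process_events(c);
    }
    return t.ret;
}

int main(void)
{
    client c = { .ctx = { 0 } };

    pthread_mutex_init(&c.mutex, NULL);
    printf("read returned %d\n", do_read(&c));
    pthread_mutex_destroy(&c.mutex);
    return 0;
}

The property the sketch mirrors is that the user-visible work (the
coroutine in the patch, the waiting loop here) runs without the mutex,
which is why holding it across the completion callback is harmless.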