From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-block@nongnu.org, qemu-devel@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, mreitz@redhat.com, den@openvz.org, pbonzini@redhat.com
Date: Fri, 4 Aug 2017 18:14:39 +0300
Message-Id: <20170804151440.320927-17-vsementsov@virtuozzo.com>
In-Reply-To: <20170804151440.320927-1-vsementsov@virtuozzo.com>
References: <20170804151440.320927-1-vsementsov@virtuozzo.com>
Subject: [Qemu-devel] [PATCH 16/17] block/nbd-client: drop reply field from NBDClientSession

Drop 'reply' from NBDClientSession. Its usage is not very transparent:

1. It is used to deliver an error to the receiving coroutine, which must
   save or handle it somehow and then zero it out in NBDClientSession.
2. It is used to inform receiving coroutines that nbd_read_reply_entry
   has exited for some reason (error or disconnect).

To simplify this scheme:

- drop NBDClientSession.reply
- introduce NBDClientSession.requests[...].ret for (1)
- introduce NBDClientSession.eio_to_all for (2)

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  3 ++-
 block/nbd-client.c | 25 ++++++++++---------------
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 0f84ccc073..0b0aa67342 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -31,8 +31,9 @@ typedef struct NBDClientSession {
         Coroutine *co;
         NBDRequest *request;
         QEMUIOVector *qiov;
+        int ret;
     } requests[MAX_NBD_REQUESTS];
-    NBDReply reply;
+    bool eio_to_all;
 } NBDClientSession;
 
 NBDClientSession *nbd_get_client_session(BlockDriverState *bs);
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 61780c5df9..7c151b3dd3 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -72,10 +72,10 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     uint64_t i;
     int ret;
     Error *local_err = NULL;
+    NBDReply reply;
 
     for (;;) {
-        assert(s->reply.handle == 0);
-        ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        ret = nbd_receive_reply(s->ioc, &reply, &local_err);
         if (ret < 0) {
             error_report_err(local_err);
         }
@@ -87,16 +87,14 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * handler acts as a synchronization point and ensures that only
          * one coroutine is called until the reply finishes.
          */
-        i = HANDLE_TO_INDEX(s, s->reply.handle);
+        i = HANDLE_TO_INDEX(s, reply.handle);
         if (i >= MAX_NBD_REQUESTS ||
             !s->requests[i].co ||
-            s->reply.handle != s->requests[i].request->handle)
+            reply.handle != s->requests[i].request->handle)
         {
             break;
         }
 
-        if (s->reply.error == 0 &&
-            s->requests[i].request->type == NBD_CMD_READ)
-        {
+        if (reply.error == 0 && s->requests[i].request->type == NBD_CMD_READ) {
             assert(s->requests[i].qiov != NULL);
             ret = nbd_rwv(s->ioc, s->requests[i].qiov->iov,
                           s->requests[i].qiov->niov,
@@ -106,6 +104,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             }
         }
 
+        s->requests[i].ret = -reply.error;
+
         /* We're woken up by the receiving coroutine itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
@@ -121,7 +121,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         qemu_coroutine_yield();
     }
 
-    s->reply.handle = 0;
+    s->eio_to_all = true;
     nbd_recv_coroutines_wake_all(s);
     s->read_reply_co = NULL;
 }
@@ -180,19 +180,14 @@ static int nbd_co_request(BlockDriverState *bs,
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    if (!s->ioc || s->reply.handle == 0) {
+    if (!s->ioc || s->eio_to_all) {
         rc = -EIO;
        goto out;
     }
 
-    assert(s->reply.handle == request->handle);
-
-    rc = -s->reply.error;
+    rc = s->requests[i].ret;
 
 out:
-    /* Tell the read handler to read another header.  */
-    s->reply.handle = 0;
-
     s->requests[i].co = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
-- 
2.11.1