[Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry

Stefan Hajnoczi posted 1 patch 6 years, 8 months ago
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/20170822125113.5025-1-stefanha@redhat.com
Test FreeBSD passed
Test checkpatch passed
Test docker passed
Test s390x passed
[Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Stefan Hajnoczi 6 years, 8 months ago
The following scenario leads to an assertion failure in
qio_channel_yield():

1. Request coroutine calls qio_channel_yield() successfully when sending
   would block on the socket.  It is now yielded.
2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
   nbd_receive_reply() failed.
3. Request coroutine is entered and returns from qio_channel_yield().
   Note that the socket fd handler has not fired yet so
   ioc->write_coroutine is still set.
4. Request coroutine attempts to send the request body with nbd_rwv()
   but the socket would still block.  qio_channel_yield() is called
   again and assert(!ioc->write_coroutine) is hit.

The problem is that nbd_read_reply_entry() does not distinguish between
request coroutines that are waiting to receive a reply and those that
are not.

This patch adds a per-request bool receiving flag so
nbd_read_reply_entry() can avoid spurious aio_co_wake() calls.

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
This should fix the issue that Dave is seeing but I'm concerned that
there are more problems in nbd-client.c.  We don't have good
abstractions for writing coroutine socket I/O code.  Something like Go's
channels would avoid manual low-level coroutine calls.  There is
currently no way to cancel qio_channel_yield() so requests doing I/O may
remain in-flight indefinitely and nbd-client.c doesn't join them...
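
For readers less familiar with the coroutine plumbing, below is a minimal,
self-contained model of the scenario above and of the receiving-flag guard
the patch adds.  It is plain C that compiles on its own; every name in it is
invented for illustration and none of it is the real QEMU API.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int id; } Coroutine;      /* opaque stand-in */

    typedef struct {
        Coroutine *write_coroutine;            /* models ioc->write_coroutine */
    } Channel;

    typedef struct {
        Coroutine *coroutine;                  /* owner of this request slot */
        bool receiving;                        /* parked waiting for a reply? */
    } Request;

    /* Models qio_channel_yield(ioc, G_IO_OUT): only one coroutine may wait
     * for the socket to become writable; a second caller hits the assertion
     * (step 4 of the scenario). */
    static void yield_on_write(Channel *ioc, Coroutine *self)
    {
        assert(!ioc->write_coroutine);
        ioc->write_coroutine = self;           /* cleared later by the fd handler */
    }

    /* Models nbd_recv_coroutines_enter_all() after the patch: only requests
     * that are actually waiting for a reply are woken; a request still
     * blocked in the send path is left alone. */
    static void wake_receiving_only(Request *reqs, int n)
    {
        for (int i = 0; i < n; i++) {
            if (reqs[i].coroutine && reqs[i].receiving) {
                printf("wake request %d (waiting for reply)\n", i);
            } else if (reqs[i].coroutine) {
                printf("skip request %d (still sending)\n", i);
            }
        }
    }

    int main(void)
    {
        Coroutine sender = { 0 }, receiver = { 1 };
        Channel ioc = { 0 };
        Request reqs[2] = {
            { &sender,   false },              /* blocked sending the request body */
            { &receiver, true  },              /* yielded waiting for its reply */
        };

        yield_on_write(&ioc, &sender);         /* sender parks; write slot is taken */
        wake_receiving_only(reqs, 2);          /* the pre-patch code would also wake
                                                * the sender, which would call
                                                * yield_on_write() again and trip
                                                * the assertion */
        return 0;
    }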

 block/nbd-client.h |  7 ++++++-
 block/nbd-client.c | 35 ++++++++++++++++++++++-------------
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 1935ffbcaa..b435754b82 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -17,6 +17,11 @@
 
 #define MAX_NBD_REQUESTS    16
 
+typedef struct {
+    Coroutine *coroutine;
+    bool receiving;         /* waiting for read_reply_co? */
+} NBDClientRequest;
+
 typedef struct NBDClientSession {
     QIOChannelSocket *sioc; /* The master data channel */
     QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
@@ -27,7 +32,7 @@ typedef struct NBDClientSession {
     Coroutine *read_reply_co;
     int in_flight;
 
-    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
+    NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
     bool quit;
 } NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 422ecb4307..c2834f6b47 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
     int i;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i]) {
-            aio_co_wake(s->recv_coroutine[i]);
+        NBDClientRequest *req = &s->requests[i];
+
+        if (req->coroutine && req->receiving) {
+            aio_co_wake(req->coroutine);
         }
     }
 }
@@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * one coroutine is called until the reply finishes.
          */
         i = HANDLE_TO_INDEX(s, s->reply.handle);
-        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+        if (i >= MAX_NBD_REQUESTS ||
+            !s->requests[i].coroutine ||
+            !s->requests[i].receiving) {
             break;
         }
 
-        /* We're woken up by the recv_coroutine itself.  Note that there
+        /* We're woken up again by the request itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
          *
-         * - if recv_coroutine[i] runs on the same AioContext, it is only
+         * - if the request runs on the same AioContext, it is only
          *   entered after we yield
          *
-         * - if recv_coroutine[i] runs on a different AioContext, reentering
+         * - if the request runs on a different AioContext, reentering
          *   read_reply_co happens through a bottom half, which can only
          *   run after we yield.
          */
-        aio_co_wake(s->recv_coroutine[i]);
+        aio_co_wake(s->requests[i].coroutine);
         qemu_coroutine_yield();
     }
 
-    if (ret < 0) {
-        s->quit = true;
-    }
+    s->quit = true;
     nbd_recv_coroutines_enter_all(s);
     s->read_reply_co = NULL;
 }
@@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
     s->in_flight++;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i] == NULL) {
-            s->recv_coroutine[i] = qemu_coroutine_self();
+        if (s->requests[i].coroutine == NULL) {
             break;
         }
     }
 
     g_assert(qemu_in_coroutine());
     assert(i < MAX_NBD_REQUESTS);
+
+    s->requests[i].coroutine = qemu_coroutine_self();
+    s->requests[i].receiving = false;
+
     request->handle = INDEX_TO_HANDLE(s, i);
 
     if (s->quit) {
@@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
                                  NBDReply *reply,
                                  QEMUIOVector *qiov)
 {
+    int i = HANDLE_TO_INDEX(s, request->handle);
     int ret;
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
+    s->requests[i].receiving = true;
     qemu_coroutine_yield();
+    s->requests[i].receiving = false;
     *reply = s->reply;
     if (reply->handle != request->handle || !s->ioc || s->quit) {
         reply->error = EIO;
@@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
                           NULL);
             if (ret != request->len) {
                 reply->error = EIO;
+                s->quit = true;
             }
         }
 
@@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
     NBDClientSession *s = nbd_get_client_session(bs);
     int i = HANDLE_TO_INDEX(s, request->handle);
 
-    s->recv_coroutine[i] = NULL;
+    s->requests[i].coroutine = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
     if (s->read_reply_co) {
-- 
2.13.5


Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Paolo Bonzini 6 years, 8 months ago
On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

The idea was that shutdown(2) would force them to reenter...

Paolo

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Stefan Hajnoczi 6 years, 8 months ago
On Tue, Aug 22, 2017 at 03:23:32PM +0200, Paolo Bonzini wrote:
> On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> > This should fix the issue that Dave is seeing but I'm concerned that
> > there are more problems in nbd-client.c.  We don't have good
> > abstractions for writing coroutine socket I/O code.  Something like Go's
> > channels would avoid manual low-level coroutine calls.  There is
> > currently no way to cancel qio_channel_yield() so requests doing I/O may
> > remain in-flight indefinitely and nbd-client.c doesn't join them...
> 
> The idea was that shutdown(2) would force them to reenter...

That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
terminate before we call nbd_client_detach_aio_context():

    qio_channel_shutdown(client->ioc,
                         QIO_CHANNEL_SHUTDOWN_BOTH,
                         NULL);
    BDRV_POLL_WHILE(bs, client->read_reply_co);

    nbd_client_detach_aio_context(bs);

I'm not sure we have any guarantee that request coroutines will have
terminated.

Once nbd_client_detach_aio_context() is called
ioc->read_coroutine/write_coroutine are set to NULL.  At that point any
remaining coroutine doing I/O on ioc will be in trouble.
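
A lightly annotated sketch of that close path (the calls are the ones quoted
above; the comments are illustrative and only restate the concern, not a
claim about guaranteed behaviour):

    qio_channel_shutdown(client->ioc,
                         QIO_CHANNEL_SHUTDOWN_BOTH,
                         NULL);                      /* kicks coroutines blocked in
                                                      * socket I/O on ioc */
    BDRV_POLL_WHILE(bs, client->read_reply_co);      /* waits only until read_reply_co
                                                      * finishes; request coroutines
                                                      * parked in qio_channel_yield()
                                                      * have no equivalent join point */
    nbd_client_detach_aio_context(bs);               /* clears ioc->read_coroutine and
                                                      * ioc->write_coroutine; anything
                                                      * still doing I/O on ioc is
                                                      * stranded */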

Stefan

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Paolo Bonzini 6 years, 8 months ago
On 23/08/2017 16:45, Stefan Hajnoczi wrote:
> That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
> terminate before we call nbd_client_detach_aio_context():
> 
>     qio_channel_shutdown(client->ioc,
>                          QIO_CHANNEL_SHUTDOWN_BOTH,
>                          NULL);
>     BDRV_POLL_WHILE(bs, client->read_reply_co);
> 
>     nbd_client_detach_aio_context(bs);
> 
> I'm not sure we have any guarantee that request coroutines will have
> terminated.

Ok, I see my confusion; it's only because of the "receiving" flag, which
actually means "waiting for reply".  Your patch is okay.

Paolo

> Once nbd_client_detach_aio_context() is called
> ioc->read_coroutine/write_coroutine are set to NULL.  At that point any
> remaining coroutine doing I/O on ioc will be in trouble.


Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Dr. David Alan Gilbert 6 years, 8 months ago
* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
> 
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
> 
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
> 
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
> 
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

With that patch, that assert does seem to go away, leaving just the
other failure we're seeing.

Dave

> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
> 
>  block/nbd-client.h |  7 ++++++-
>  block/nbd-client.c | 35 ++++++++++++++++++++++-------------
>  2 files changed, 28 insertions(+), 14 deletions(-)
> 
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>  
>  #define MAX_NBD_REQUESTS    16
>  
> +typedef struct {
> +    Coroutine *coroutine;
> +    bool receiving;         /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
>  typedef struct NBDClientSession {
>      QIOChannelSocket *sioc; /* The master data channel */
>      QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
>      Coroutine *read_reply_co;
>      int in_flight;
>  
> -    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> +    NBDClientRequest requests[MAX_NBD_REQUESTS];
>      NBDReply reply;
>      bool quit;
>  } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
>      int i;
>  
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i]) {
> -            aio_co_wake(s->recv_coroutine[i]);
> +        NBDClientRequest *req = &s->requests[i];
> +
> +        if (req->coroutine && req->receiving) {
> +            aio_co_wake(req->coroutine);
>          }
>      }
>  }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>           * one coroutine is called until the reply finishes.
>           */
>          i = HANDLE_TO_INDEX(s, s->reply.handle);
> -        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> +        if (i >= MAX_NBD_REQUESTS ||
> +            !s->requests[i].coroutine ||
> +            !s->requests[i].receiving) {
>              break;
>          }
>  
> -        /* We're woken up by the recv_coroutine itself.  Note that there
> +        /* We're woken up again by the request itself.  Note that there
>           * is no race between yielding and reentering read_reply_co.  This
>           * is because:
>           *
> -         * - if recv_coroutine[i] runs on the same AioContext, it is only
> +         * - if the request runs on the same AioContext, it is only
>           *   entered after we yield
>           *
> -         * - if recv_coroutine[i] runs on a different AioContext, reentering
> +         * - if the request runs on a different AioContext, reentering
>           *   read_reply_co happens through a bottom half, which can only
>           *   run after we yield.
>           */
> -        aio_co_wake(s->recv_coroutine[i]);
> +        aio_co_wake(s->requests[i].coroutine);
>          qemu_coroutine_yield();
>      }
>  
> -    if (ret < 0) {
> -        s->quit = true;
> -    }
> +    s->quit = true;
>      nbd_recv_coroutines_enter_all(s);
>      s->read_reply_co = NULL;
>  }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
>      s->in_flight++;
>  
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i] == NULL) {
> -            s->recv_coroutine[i] = qemu_coroutine_self();
> +        if (s->requests[i].coroutine == NULL) {
>              break;
>          }
>      }
>  
>      g_assert(qemu_in_coroutine());
>      assert(i < MAX_NBD_REQUESTS);
> +
> +    s->requests[i].coroutine = qemu_coroutine_self();
> +    s->requests[i].receiving = false;
> +
>      request->handle = INDEX_TO_HANDLE(s, i);
>  
>      if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                                   NBDReply *reply,
>                                   QEMUIOVector *qiov)
>  {
> +    int i = HANDLE_TO_INDEX(s, request->handle);
>      int ret;
>  
>      /* Wait until we're woken up by nbd_read_reply_entry.  */
> +    s->requests[i].receiving = true;
>      qemu_coroutine_yield();
> +    s->requests[i].receiving = false;
>      *reply = s->reply;
>      if (reply->handle != request->handle || !s->ioc || s->quit) {
>          reply->error = EIO;
> @@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                            NULL);
>              if (ret != request->len) {
>                  reply->error = EIO;
> +                s->quit = true;
>              }
>          }
>  
> @@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
>      NBDClientSession *s = nbd_get_client_session(bs);
>      int i = HANDLE_TO_INDEX(s, request->handle);
>  
> -    s->recv_coroutine[i] = NULL;
> +    s->requests[i].coroutine = NULL;
>  
>      /* Kick the read_reply_co to get the next reply.  */
>      if (s->read_reply_co) {
> -- 
> 2.13.5
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Eric Blake 6 years, 8 months ago
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
> 
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
> 
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
> 
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
> 
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

Re: [Qemu-devel] [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Stefan Hajnoczi 6 years, 8 months ago
On Wed, Aug 23, 2017 at 3:20 PM, Eric Blake <eblake@redhat.com> wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>>    would block on the socket.  It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>>    nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>>    Note that the socket fd handler has not fired yet so
>>    ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>>    but the socket would still block.  qio_channel_yield() is called
>>    again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c.  We don't have good
>> abstractions for writing coroutine socket I/O code.  Something like Go's
>> channels would avoid manual low-level coroutine calls.  There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?

Fam's series fixes non-shared storage migration.

This patch addresses the failure case when the server closes the
connection prematurely.

Stefan

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Eric Blake 6 years, 8 months ago
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
> 
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
> 
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
> 
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
> 
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

Vladimir has some cleanups that rewrite the NBD coroutines to be more
legible, but they are invasive enough to be 2.11 material.  I think that
as a stop-gap for getting 2.10 out the door we may be better off
including this patch - but I would still like some positive review from
more than just me.  There's not much time left before I need to send the
-rc4 NBD pull request, though.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Paolo Bonzini 6 years, 8 months ago
On 23/08/2017 16:51, Eric Blake wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>>    would block on the socket.  It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>>    nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>>    Note that the socket fd handler has not fired yet so
>>    ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>>    but the socket would still block.  qio_channel_yield() is called
>>    again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c.  We don't have good
>> abstractions for writing coroutine socket I/O code.  Something like Go's
>> channels would avoid manual low-level coroutine calls.  There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
> 
> Vladimir has some cleanups that rewrite the NBD coroutines to be more
> legible, but it is invasive enough to be 2.11 material.  I think that
> for a stop-gap of getting 2.10 out the door, we may be better off
> including this patch - but I would still like some positive review from
> more than just me.  There's not much time left before I need to send the
> -rc4 NBD pull request, though.
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Vladimir Sementsov-Ogievskiy 6 years, 8 months ago
22.08.2017 15:51, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>     would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>     nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>     Note that the socket fd handler has not fired yet so
>     ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>     but the socket would still block.  qio_channel_yield() is called
>     again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
>   block/nbd-client.h |  7 ++++++-
>   block/nbd-client.c | 35 ++++++++++++++++++++++-------------
>   2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>   
>   #define MAX_NBD_REQUESTS    16
>   
> +typedef struct {
> +    Coroutine *coroutine;
> +    bool receiving;         /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
>   typedef struct NBDClientSession {
>       QIOChannelSocket *sioc; /* The master data channel */
>       QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
>       Coroutine *read_reply_co;
>       int in_flight;
>   
> -    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> +    NBDClientRequest requests[MAX_NBD_REQUESTS];
>       NBDReply reply;
>       bool quit;
>   } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
>       int i;
>   
>       for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i]) {
> -            aio_co_wake(s->recv_coroutine[i]);
> +        NBDClientRequest *req = &s->requests[i];
> +
> +        if (req->coroutine && req->receiving) {
> +            aio_co_wake(req->coroutine);
>           }
>       }
>   }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>            * one coroutine is called until the reply finishes.
>            */
>           i = HANDLE_TO_INDEX(s, s->reply.handle);
> -        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> +        if (i >= MAX_NBD_REQUESTS ||
> +            !s->requests[i].coroutine ||
> +            !s->requests[i].receiving) {
>               break;
>           }
>   
> -        /* We're woken up by the recv_coroutine itself.  Note that there
> +        /* We're woken up again by the request itself.  Note that there
>            * is no race between yielding and reentering read_reply_co.  This
>            * is because:
>            *
> -         * - if recv_coroutine[i] runs on the same AioContext, it is only
> +         * - if the request runs on the same AioContext, it is only
>            *   entered after we yield
>            *
> -         * - if recv_coroutine[i] runs on a different AioContext, reentering
> +         * - if the request runs on a different AioContext, reentering
>            *   read_reply_co happens through a bottom half, which can only
>            *   run after we yield.
>            */
> -        aio_co_wake(s->recv_coroutine[i]);
> +        aio_co_wake(s->requests[i].coroutine);
>           qemu_coroutine_yield();
>       }
>   
> -    if (ret < 0) {
> -        s->quit = true;
> -    }
> +    s->quit = true;

Good.  This fixes the case where the "if (i >= MAX...)" check breaks out of
the loop and we get here without ret indicating an error.

>       nbd_recv_coroutines_enter_all(s);
>       s->read_reply_co = NULL;
>   }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
>       s->in_flight++;
>   
>       for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i] == NULL) {
> -            s->recv_coroutine[i] = qemu_coroutine_self();
> +        if (s->requests[i].coroutine == NULL) {
>               break;
>           }
>       }
>   
>       g_assert(qemu_in_coroutine());
>       assert(i < MAX_NBD_REQUESTS);
> +
> +    s->requests[i].coroutine = qemu_coroutine_self();
> +    s->requests[i].receiving = false;
> +
>       request->handle = INDEX_TO_HANDLE(s, i);
>   
>       if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                                    NBDReply *reply,
>                                    QEMUIOVector *qiov)
>   {
> +    int i = HANDLE_TO_INDEX(s, request->handle);
>       int ret;
>   
>       /* Wait until we're woken up by nbd_read_reply_entry.  */
> +    s->requests[i].receiving = true;
>       qemu_coroutine_yield();
> +    s->requests[i].receiving = false;
>       *reply = s->reply;
>       if (reply->handle != request->handle || !s->ioc || s->quit) {
>           reply->error = EIO;
> @@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                             NULL);
>               if (ret != request->len) {
>                   reply->error = EIO;
> +                s->quit = true;

As I understand it, some fixes around s->quit are merged into this patch
that are actually unrelated to the described problem.
Anyway, setting quit here should not be a problem (I set the corresponding
eio_to_all variable in my series on each error and check it after each
possible yield).

>               }
>           }
>   
> @@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
>       NBDClientSession *s = nbd_get_client_session(bs);
>       int i = HANDLE_TO_INDEX(s, request->handle);
>   
> -    s->recv_coroutine[i] = NULL;
> +    s->requests[i].coroutine = NULL;
>   
>       /* Kick the read_reply_co to get the next reply.  */
>       if (s->read_reply_co) {


Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>


-- 
Best regards,
Vladimir


Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Posted by Eric Blake 6 years, 8 months ago
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
> 
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
> 
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
> 
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
> 
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

Using the steps in
https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg03853.html,
I've verified that this avoids the hang that is otherwise present, so
I'm adding:

Tested-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org