The user "John Doe" reported a deadlock when attempting to use
qemu-storage-daemon to serve both a base file over NBD and a qcow2
file with that NBD export as its backing file from the same process,
even though the same setup worked fine with two separate q-s-d
processes.
The bulk of the NBD server code properly uses coroutines to make
progress in an event-driven manner, but the code for spawning a new
coroutine when the listening socket reports an incoming client was
hard-coded to use the global GMainContext; in other words, the
callback that triggers nbd_client_new to let the server start the
negotiation sequence with the client requires the main loop to be
making progress. However, the code for bdrv_open of a qcow2 image
with an NBD backing file uses an AIO_WAIT_WHILE nested event loop to
ensure that the entire qcow2 backing chain is either fully loaded or
rejected, without any side effects from the main loop causing unwanted
changes to the disk being loaded (in short, an AioContext represents
the set of actions that are known to be safe while handling block
layer I/O, while excluding any other pending actions in the global
main loop with potentially larger risk of unwanted side effects).
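As a reminder of the pattern in play, the nested loop boils down to
something like this (an illustrative sketch, not code from the tree;
backing_chain_loaded() is a hypothetical predicate):

```
/* Only handlers registered with the block layer's AioContext make
 * progress inside this nested loop; GSource callbacks attached to the
 * global GMainContext, such as the listener's accept handler, do not. */
AIO_WAIT_WHILE(bdrv_get_aio_context(bs), !backing_chain_loaded(bs));
```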
This creates a classic case of deadlock: the server can't progress to
the point of accept(2)ing the client to write to the NBD socket
because the main loop is being starved until the AIO_WAIT_WHILE
completes the bdrv_open, but the AIO_WAIT_WHILE can't progress because
it is blocked on the client coroutine stuck in a read() of the
expected magic number from the server side of the socket.
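Schematically, the cycle is (sketch only):

```
bdrv_open(qcow2 with NBD backing)
  -> AIO_WAIT_WHILE(...)             nested loop; starves GMainContext
       -> client coroutine read()s   waiting for the server's magic number
            -> server's accept callback never fires, because it is a
               GSource on the starved GMainContext
```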
This patch adds a new API to allow clients to opt in to listening via
an AioContext rather than a GMainContext. This will allow NBD to fix
the deadlock by performing all actions during bdrv_open in the main
loop AioContext.
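For illustration, a caller such as the NBD server could then bind the
listener to the main loop AioContext along these lines (a hedged
sketch; nbd_accept and nbd_server_start_aio are hypothetical names, and
error handling is elided):

```
static void nbd_accept(QIONetListener *listener, QIOChannelSocket *sioc,
                       gpointer opaque)
{
    /* start the negotiation sequence with the new client ... */
}

void nbd_server_start_aio(QIONetListener *listener)
{
    /* A NULL context defaults to qemu_get_aio_context(), per the new
     * API, so accept notifications keep flowing even while an
     * AIO_WAIT_WHILE nested loop starves the GMainContext. */
    qio_net_listener_set_client_aio_func(listener, nbd_accept, NULL, NULL);
}
```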
An upcoming patch will then add a unit test (kept separate to make it
easier to rearrange the series to demonstrate the deadlock without
this patch).
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: Retitle and add new API rather than changing semantics of
existing qio_net_listener_set_client_func; use qio accessor rather
than direct access to the sioc fd and a lower-level aio call
---
include/io/net-listener.h | 16 ++++++++++++++++
io/net-listener.c | 36 +++++++++++++++++++++++++++++++++---
2 files changed, 49 insertions(+), 3 deletions(-)
diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index 7188721cb34..e93efd5d96a 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -151,6 +151,22 @@ void qio_net_listener_set_client_func(QIONetListener *listener,
gpointer data,
GDestroyNotify notify);
 
+/**
+ * qio_net_listener_set_client_aio_func:
+ * @listener: the network listener object
+ * @func: the callback function
+ * @data: opaque data to pass to @func
+ * @context: AioContext that @func will be bound to; if #NULL, this
+ * will use qemu_get_aio_context().
+ *
+ * Similar to qio_net_listener_set_client_func_full(), except that the polling
+ * will be done by an AioContext rather than a GMainContext.
+ */
+void qio_net_listener_set_client_aio_func(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ void *data,
+ AioContext *context);
+
/**
* qio_net_listener_wait_client:
* @listener: the network listener object
diff --git a/io/net-listener.c b/io/net-listener.c
index ebc61f81ed6..53f2e7091d7 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -72,6 +72,17 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
}
 
 
+static void qio_net_listener_aio_func(void *opaque)
+{
+ QIONetListenerSource *data = opaque;
+
+ assert(data->io_source == NULL);
+ assert(data->listener->aio_context != NULL);
+ qio_net_listener_channel_func(QIO_CHANNEL(data->sioc), G_IO_IN,
+ data->listener);
+}
+
+
int qio_net_listener_open_sync(QIONetListener *listener,
SocketAddress *addr,
int num,
@@ -144,8 +155,12 @@ qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
qio_net_listener_channel_func,
listener, NULL, listener->context);
} else {
- /* The user passed an AioContext. Not supported yet. */
- g_assert_not_reached();
+ /* The user passed an AioContext. */
+ assert(listener->context == NULL);
+ qio_channel_set_aio_fd_handler(
+ QIO_CHANNEL(listener->source[i]->sioc),
+ listener->aio_context, qio_net_listener_aio_func,
+ NULL, NULL, listener->source[i]);
}
}
}
@@ -170,7 +185,10 @@ qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
listener->source[i]->io_source = NULL;
}
} else {
- g_assert_not_reached();
+ assert(listener->context == NULL);
+ qio_channel_set_aio_fd_handler(
+ QIO_CHANNEL(listener->source[i]->sioc),
+ NULL, NULL, NULL, NULL, NULL);
}
}
object_unref(OBJECT(listener));
@@ -244,6 +262,18 @@ void qio_net_listener_set_client_func(QIONetListener *listener,
notify, NULL, NULL);
}
 
+void qio_net_listener_set_client_aio_func(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ void *data,
+ AioContext *context)
+{
+ if (!context) {
+ context = qemu_get_aio_context();
+ }
+ qio_net_listener_set_client_func_internal(listener, func, data,
+ NULL, NULL, context);
+}
+
struct QIONetListenerClientWaitData {
QIOChannelSocket *sioc;
GMainLoop *loop;
--
2.51.1