Similar to AIO_WAIT_WHILE(), bdrv_drain_poll_top_level() needs to
release the AioContext lock of the node to be drained before calling
aio_poll(). Otherwise, callbacks invoked by aio_poll() could take the
lock a second time and run into a deadlock with a nested
AIO_WAIT_WHILE() call.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/io.c | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/block/io.c b/block/io.c
index 7100344c7b..832d2536bf 100644
--- a/block/io.c
+++ b/block/io.c
@@ -268,9 +268,32 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
BdrvChild *ignore_parent)
{
+ AioContext *ctx = bdrv_get_aio_context(bs);
+
+ /*
+ * We cannot easily release the lock unconditionally here because many
+ * callers of the drain functions (like qemu initialisation, tools, etc.) don't
+ * even hold the main context lock.
+ *
+ * This means that we fix potential deadlocks for the case where we are in
+ * the main context and polling a BDS in a different AioContext, but
+ * draining a BDS in the main context from a different I/O thread would
+ * still have this problem. Fortunately, this isn't supposed to happen
+ * anyway.
+ */
+ if (ctx != qemu_get_aio_context()) {
+ aio_context_release(ctx);
+ } else {
+ assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+ }
+
/* Execute pending BHs first and check everything else only after the BHs
* have executed. */
- while (aio_poll(bs->aio_context, false));
+ while (aio_poll(ctx, false));
+
+ if (ctx != qemu_get_aio_context()) {
+ aio_context_acquire(ctx);
+ }
return bdrv_drain_poll(bs, recursive, ignore_parent, false);
}
--
2.13.6
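
The deadlock pattern the patch avoids can also be reproduced outside of
QEMU. Below is a minimal, self-contained pthread sketch (not QEMU code;
every name in it is invented for illustration): the main thread enters
with a "context" lock held and waits for work that a second thread can
only finish after taking the same lock, so the lock has to be dropped
around the wait, analogous to what the patch does around aio_poll().

/*
 * Standalone pthread analogue (not QEMU code) of waiting for work while
 * still holding a lock that the thread doing the work needs.
 * Build with: gcc -pthread example.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER; /* "AioContext lock" */
static bool work_done;                                       /* protected by ctx_lock */

/* The other side: like a callback or a nested AIO_WAIT_WHILE(), it must
 * take the context lock before it can finish the work we are waiting for. */
static void *iothread_fn(void *arg)
{
    pthread_mutex_lock(&ctx_lock);
    work_done = true;
    pthread_mutex_unlock(&ctx_lock);
    return NULL;
}

int main(void)
{
    pthread_t iothread;

    pthread_mutex_lock(&ctx_lock);      /* drain callers enter with the lock held */
    pthread_create(&iothread, NULL, iothread_fn, NULL);

    /* Mirror of the patch: release before polling. (In QEMU the AioContext
     * lock is recursive, so re-taking it in the same thread is not the
     * deadlock; holding it across the wait is.) */
    pthread_mutex_unlock(&ctx_lock);

    /* Loop until there is nothing left to wait for, standing in for
     * "while (aio_poll(ctx, false));" */
    for (;;) {
        bool done;
        pthread_mutex_lock(&ctx_lock);
        done = work_done;
        pthread_mutex_unlock(&ctx_lock);
        if (done) {
            break;
        }
        nanosleep(&(struct timespec){ .tv_nsec = 1000000 }, NULL);
    }

    /* Re-acquire afterwards, as the caller expects. */
    pthread_mutex_lock(&ctx_lock);
    pthread_join(iothread, NULL);
    pthread_mutex_unlock(&ctx_lock);

    printf("finished without deadlocking\n");
    return 0;
}
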
On Fri, 08/17 19:02, Kevin Wolf wrote:
> Similar to AIO_WAIT_WHILE(), bdrv_drain_poll_top_level() needs to
> release the AioContext lock of the node to be drained before calling
> aio_poll(). Otherwise, callbacks invoked by aio_poll() could take the
> lock a second time and run into a deadlock with a nested
> AIO_WAIT_WHILE() call.
>
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
> block/io.c | 25 ++++++++++++++++++++++++-
> 1 file changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/block/io.c b/block/io.c
> index 7100344c7b..832d2536bf 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -268,9 +268,32 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
> static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
> BdrvChild *ignore_parent)
> {
> + AioContext *ctx = bdrv_get_aio_context(bs);
> +
> + /*
> + * We cannot easily release the lock unconditionally here because many
> + * callers of the drain functions (like qemu initialisation, tools, etc.) don't
> + * even hold the main context lock.
> + *
> + * This means that we fix potential deadlocks for the case where we are in
> + * the main context and polling a BDS in a different AioContext, but
> + * draining a BDS in the main context from a different I/O thread would
> + * still have this problem. Fortunately, this isn't supposed to happen
> + * anyway.
> + */
> + if (ctx != qemu_get_aio_context()) {
> + aio_context_release(ctx);
> + } else {
> + assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> + }
> +
> /* Execute pending BHs first and check everything else only after the BHs
> * have executed. */
> - while (aio_poll(bs->aio_context, false));
> + while (aio_poll(ctx, false));
> +
> + if (ctx != qemu_get_aio_context()) {
> + aio_context_acquire(ctx);
> + }
>
> return bdrv_drain_poll(bs, recursive, ignore_parent, false);
> }
> --
> 2.13.6
>
The same question as patch 3: why not just use AIO_WAIT_WHILE() here? It takes
care to not release any lock if both running and polling in the main context
(taking the in_aio_context_home_thread() branch).
Fam
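
For reference, the branch Fam mentions looks roughly like the sketch
below. This is a simplified paraphrase of AIO_WAIT_WHILE()'s locking
behaviour only (the waiter bookkeeping and the macro's return value are
left out; see include/block/aio-wait.h for the real macro):

/* Simplified sketch of AIO_WAIT_WHILE()'s locking behaviour; not the
 * actual macro. */
if (in_aio_context_home_thread(ctx)) {
    /* Running in ctx's own thread: poll it directly, no lock juggling. */
    while (cond) {
        aio_poll(ctx, true);
    }
} else {
    /* Main loop waiting on another context: drop its lock around each
     * poll so that its callbacks (and nested waits) can make progress. */
    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
    while (cond) {
        aio_context_release(ctx);
        aio_poll(qemu_get_aio_context(), true);
        aio_context_acquire(ctx);
    }
}
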
On 24.08.2018 at 09:24, Fam Zheng wrote:
> On Fri, 08/17 19:02, Kevin Wolf wrote:
> > Similar to AIO_WAIT_WHILE(), bdrv_drain_poll_top_level() needs to
> > release the AioContext lock of the node to be drained before calling
> > aio_poll(). Otherwise, callbacks invoked by aio_poll() could take the
> > lock a second time and run into a deadlock with a nested
> > AIO_WAIT_WHILE() call.
> >
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> > block/io.c | 25 ++++++++++++++++++++++++-
> > 1 file changed, 24 insertions(+), 1 deletion(-)
> >
> > diff --git a/block/io.c b/block/io.c
> > index 7100344c7b..832d2536bf 100644
> > --- a/block/io.c
> > +++ b/block/io.c
> > @@ -268,9 +268,32 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
> > static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
> > BdrvChild *ignore_parent)
> > {
> > + AioContext *ctx = bdrv_get_aio_context(bs);
> > +
> > + /*
> > + * We cannot easily release the lock unconditionally here because many
> > + * callers of the drain functions (like qemu initialisation, tools, etc.) don't
> > + * even hold the main context lock.
> > + *
> > + * This means that we fix potential deadlocks for the case where we are in
> > + * the main context and polling a BDS in a different AioContext, but
> > + * draining a BDS in the main context from a different I/O thread would
> > + * still have this problem. Fortunately, this isn't supposed to happen
> > + * anyway.
> > + */
> > + if (ctx != qemu_get_aio_context()) {
> > + aio_context_release(ctx);
> > + } else {
> > + assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> > + }
> > +
> > /* Execute pending BHs first and check everything else only after the BHs
> > * have executed. */
> > - while (aio_poll(bs->aio_context, false));
> > + while (aio_poll(ctx, false));
> > +
> > + if (ctx != qemu_get_aio_context()) {
> > + aio_context_acquire(ctx);
> > + }
> >
> > return bdrv_drain_poll(bs, recursive, ignore_parent, false);
> > }
>
> The same question as patch 3: why not just use AIO_WAIT_WHILE() here? It takes
> care to not release any lock if both running and polling in the main context
> (taking the in_aio_context_home_thread() branch).
I don't think AIO_WAIT_WHILE() can be non-blocking, though?
There is also no real condition here to check. It's just polling as long
as there is activity so that the pending BH callbacks get run. I'm not sure
how I could possibly write this as an AIO_WAIT_WHILE() condition.
After all, this one just doesn't feel like the right use case for
AIO_WAIT_WHILE().
Kevin
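
The mismatch Kevin points out becomes clearer when the two loop shapes
are put side by side (simplified, illustrative only):

/* What bdrv_drain_poll_top_level() needs: non-blocking polls that stop as
 * soon as aio_poll() reports no further progress. */
while (aio_poll(ctx, false)) {
    /* keep running pending BHs and ready handlers */
}

/* What AIO_WAIT_WHILE() provides (simplified): blocking polls that repeat
 * for as long as some caller-supplied condition still holds. */
while (cond) {
    aio_poll(ctx, true);    /* may block until the next event arrives */
}
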