When the AioContext changes, we need to associate a LinuxAioState with
the new AioContext. Use the bdrv_attach_aio_context callback and call
the new aio_setup_linux_aio(), which will allocate a new LinuxAioState if
needed and return an error on failure. If setup fails for any reason,
fall back to threaded AIO with an error message, as the device is already
in use by the guest.
Signed-off-by: Nishanth Aravamudan <naravamudan@digitalocean.com>
---
Note this patch didn't exist in v2, but is a result of feedback to that
posting.
block/file-posix.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/block/file-posix.c b/block/file-posix.c
index 6a1714d4a8..ce24950acf 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1730,6 +1730,21 @@ static BlockAIOCB *raw_aio_flush(BlockDriverState *bs,
return paio_submit(bs, s->fd, 0, NULL, 0, cb, opaque, QEMU_AIO_FLUSH);
}
+static void raw_aio_attach_aio_context(BlockDriverState *bs,
+ AioContext *new_context)
+{
+#ifdef CONFIG_LINUX_AIO
+ BDRVRawState *s = bs->opaque;
+ if (s->use_linux_aio) {
+ if (aio_setup_linux_aio(new_context) != 0) {
+ error_report("Unable to use native AIO, falling back to "
+ "thread pool");
+ s->use_linux_aio = false;
+ }
+ }
+#endif
+}
+
static void raw_close(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
@@ -2608,6 +2623,7 @@ BlockDriver bdrv_file = {
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_io_plug = raw_aio_plug,
.bdrv_io_unplug = raw_aio_unplug,
+ .bdrv_attach_aio_context = raw_aio_attach_aio_context,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
--
2.17.1
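
For readers following the fallback logic: once raw_aio_attach_aio_context()
clears s->use_linux_aio, subsequent I/O goes through the thread pool instead
of native AIO, which is why the device can keep running after a setup
failure. The sketch below shows roughly how that decision looks at
submission time in block/file-posix.c of this era; the helper names
(laio_co_submit(), paio_submit_co()) and signatures are recalled from the
tree around this version and simplified, so treat it as an illustration
rather than the exact code.

/* Simplified sketch of the submission path (illustrative, not the exact code) */
static int coroutine_fn example_raw_co_prw(BlockDriverState *bs, uint64_t offset,
                                           uint64_t bytes, QEMUIOVector *qiov,
                                           int type)
{
    BDRVRawState *s = bs->opaque;

    if (s->use_linux_aio) {
        /* Only reached if aio_setup_linux_aio() succeeded for the
         * AioContext this BDS is currently attached to. */
        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
        return laio_co_submit(bs, aio, s->fd, offset, qiov, type);
    }

    /* Thread-pool fallback, also taken when raw_aio_attach_aio_context()
     * cleared use_linux_aio after a setup failure. */
    return paio_submit_co(bs, s->fd, offset, qiov, bytes, type);
}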
On Thu, 06/21 15:21, Nishanth Aravamudan wrote:
> When the AioContext changes, we need to associate a LinuxAioState with
> the new AioContext. Use the bdrv_attach_aio_context callback and call
> the new aio_setup_linux_aio(), which will allocate a new LinuxAioState if
> needed and return an error on failure. If setup fails for any reason,
> fall back to threaded AIO with an error message, as the device is already
> in use by the guest.
>
> Signed-off-by: Nishanth Aravamudan <naravamudan@digitalocean.com>
> ---
> Note this patch didn't exist in v2, but is a result of feedback to that
> posting.

This should be squashed into patch 1, no?

Fam
On 22.06.2018 at 04:25, Fam Zheng wrote:
> On Thu, 06/21 15:21, Nishanth Aravamudan wrote:
> > When the AioContext changes, we need to associate a LinuxAioState with
> > the new AioContext. Use the bdrv_attach_aio_context callback and call
> > the new aio_setup_linux_aio(), which will allocate a new LinuxAioState if
> > needed and return an error on failure. If setup fails for any reason,
> > fall back to threaded AIO with an error message, as the device is already
> > in use by the guest.
> >
> > Signed-off-by: Nishanth Aravamudan <naravamudan@digitalocean.com>
> > ---
> > Note this patch didn't exist in v2, but is a result of feedback to that
> > posting.
>
> This should be squashed into patch 1, no?

Yes, without it, patch 1 is incorrect. Specifically, at least the
assertion in aio_get_linux_aio() won't hold true without it.

Kevin
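
(Patch 1 is not shown in this mail; the getter Kevin refers to is
presumably along the lines of the sketch below, where ctx->linux_aio is
only populated by aio_setup_linux_aio(). The exact code in patch 1 may
differ.)

LinuxAioState *aio_get_linux_aio(AioContext *ctx)
{
    /* Would fire if nothing ran aio_setup_linux_aio() for this context
     * first, i.e. without raw_aio_attach_aio_context() from this patch. */
    assert(ctx->linux_aio);
    return ctx->linux_aio;
}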
On 22.06.2018 [11:02:06 +0200], Kevin Wolf wrote:
> On 22.06.2018 at 04:25, Fam Zheng wrote:
> > On Thu, 06/21 15:21, Nishanth Aravamudan wrote:
> > > When the AioContext changes, we need to associate a LinuxAioState with
> > > the new AioContext. Use the bdrv_attach_aio_context callback and call
> > > the new aio_setup_linux_aio(), which will allocate a new LinuxAioState if
> > > needed and return an error on failure. If setup fails for any reason,
> > > fall back to threaded AIO with an error message, as the device is already
> > > in use by the guest.
> > >
> > > Signed-off-by: Nishanth Aravamudan <naravamudan@digitalocean.com>
> > > ---
> > > Note this patch didn't exist in v2, but is a result of feedback to that
> > > posting.
> >
> > This should be squashed into patch 1, no?
>
> Yes, without it, patch 1 is incorrect. Specifically, at least the
> assertion in aio_get_linux_aio() won't hold true without it.

Yes, you are right! Sorry about that. I'll send a v4 today.

-Nish