[PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued

Jens Axboe posted 2 patches 6 days, 21 hours ago
Maintainers: Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>
[PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Jens Axboe 6 days, 21 hours ago
When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
block I/O coroutine inline on the vCPU thread because
qemu_get_current_aio_context() returns the main AioContext when BQL is
held. The coroutine calls luring_co_submit() which queues an SQE via
fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
in gsource_prepare() on the main loop thread.

Since the coroutine ran inline (not via aio_co_schedule()), no BH is
scheduled and aio_notify() is never called. The main loop remains asleep
in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
the next timer fires.

Fix this by calling aio_notify() after queuing the SQE. This wakes the
main loop via the eventfd so it can run gsource_prepare() and submit the
pending SQE promptly.
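
For readers unfamiliar with the mechanism: aio_notify() wakes the main loop
by writing to an eventfd that is part of the ppoll() fd set, so the sleeping
thread returns immediately instead of waiting out its timeout. A minimal
standalone sketch of that pattern (illustrative only, not QEMU code):

/* waker.c: one thread sleeps in ppoll() on an eventfd with a 499ms timeout,
 * another thread wakes it early by writing to the eventfd.
 * Build with: gcc -pthread -o waker waker.c */
#define _GNU_SOURCE
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <time.h>
#include <unistd.h>

static int efd;

static void *waker(void *arg)
{
    uint64_t one = 1;

    (void)arg;
    usleep(100 * 1000);                        /* pretend to queue an SQE... */
    if (write(efd, &one, sizeof(one)) < 0) {   /* ...then notify */
        perror("write");
    }
    return NULL;
}

int main(void)
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 499 * 1000 * 1000 };
    struct pollfd pfd;
    pthread_t t;

    efd = eventfd(0, 0);
    pfd.fd = efd;
    pfd.events = POLLIN;

    pthread_create(&t, NULL, waker, NULL);
    /* Without the write() in waker() this would sleep the full 499ms. */
    printf("ppoll returned %d\n", ppoll(&pfd, 1, &ts, NULL));
    pthread_join(t, NULL);
    return 0;
}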

This is a generic fix that benefits all devices using aio=io_uring.
Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
main loop after queuing block I/O.

This is usually a bit hard to detect, as it also relies on the ppoll
loop not waking up for other activity, and micro benchmarks tend not to
see it because they don't have any real processing time. With a
synthetic test case that has a few usleep() to simulate processing of
read data, it's very noticeable. The below example reads 128MB with
O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
each batch submit, and a 1ms delay after processing each completion.
Running it on /dev/sda yields:

time sudo ./iotest /dev/sda

________________________________________________________
Executed in   25.76 secs      fish           external
   usr time    6.19 millis  783.00 micros    5.41 millis
   sys time   12.43 millis  642.00 micros   11.79 millis

while on a virtio-blk or NVMe device we get:

time sudo ./iotest /dev/vdb

________________________________________________________
Executed in    1.25 secs      fish           external
   usr time    1.40 millis    0.30 millis    1.10 millis
   sys time   17.61 millis    1.43 millis   16.18 millis

time sudo ./iotest /dev/nvme0n1

________________________________________________________
Executed in    1.26 secs      fish           external
   usr time    6.11 millis    0.52 millis    5.59 millis
   sys time   13.94 millis    1.50 millis   12.43 millis

where the latter are consistent. If we run the same test but keep the
socket for the ssh connection active by having activity there, then
the sda test looks as follows:

time sudo ./iotest /dev/sda

________________________________________________________
Executed in    1.23 secs      fish           external
   usr time    2.70 millis   39.00 micros    2.66 millis
   sys time    4.97 millis  977.00 micros    3.99 millis

as now the ppoll loop is woken all the time anyway.

After this fix, on an idle system:

time sudo ./iotest /dev/sda

________________________________________________________
Executed in    1.30 secs      fish           external
   usr time    2.14 millis    0.14 millis    2.00 millis
   sys time   16.93 millis    1.16 millis   15.76 millis
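
For reference, a minimal liburing reader along the lines described above
might look like the sketch below. This is only an approximation written for
illustration; it is not the actual iotest binary used for the numbers above.

/* iotest-sketch.c: read 128MB from a block device with O_DIRECT in 128KB
 * chunks, submitting batches of 16, with ~1ms of simulated processing
 * before each batch submit and after each completion.
 * Build with: gcc -O2 -o iotest-sketch iotest-sketch.c -luring */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (128 * 1024)
#define TOTAL (128ULL * 1024 * 1024)
#define BATCH 16

int main(int argc, char **argv)
{
    struct io_uring ring;
    void *bufs[BATCH];
    off_t off = 0;
    int fd, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <device>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    io_uring_queue_init(BATCH, &ring, 0);
    for (i = 0; i < BATCH; i++) {
        if (posix_memalign(&bufs[i], 4096, CHUNK)) {
            return 1;
        }
    }

    while (off < (off_t)TOTAL) {
        struct io_uring_cqe *cqe;
        int nr = 0;

        for (i = 0; i < BATCH && off < (off_t)TOTAL; i++, off += CHUNK) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, bufs[i], CHUNK, off);
            nr++;
        }
        usleep(1000);                   /* ~1ms delay before batch submit */
        io_uring_submit(&ring);
        for (i = 0; i < nr; i++) {
            io_uring_wait_cqe(&ring, &cqe);
            io_uring_cqe_seen(&ring, cqe);
            usleep(1000);               /* ~1ms of per-completion work */
        }
    }
    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}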

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 util/fdmon-io_uring.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index d0b56127c670..96392876b490 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
 
     trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
                                  cqe_handler);
+
+    /*
+     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
+     * runs a coroutine inline (holding BQL), it queues SQEs here but the
+     * actual io_uring_submit() only happens in gsource_prepare().  Without
+     * this notify, ppoll() can sleep up to 499ms before submitting.
+     */
+    aio_notify(ctx);
 }
 
 static void fdmon_special_cqe_handler(CqeHandler *cqe_handler)
-- 
2.51.0
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Kevin Wolf 6 days, 19 hours ago
Am 13.02.2026 um 15:26 hat Jens Axboe geschrieben:
> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> block I/O coroutine inline on the vCPU thread because
> qemu_get_current_aio_context() returns the main AioContext when BQL is
> held. The coroutine calls luring_co_submit() which queues an SQE via
> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> in gsource_prepare() on the main loop thread.

Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
in the recent changes (or I guess worker threads in theory, but I don't
think there are any that actually make use of aio_add_sqe()).

> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> scheduled and aio_notify() is never called. The main loop remains asleep
> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> the next timer fires.
> 
> Fix this by calling aio_notify() after queuing the SQE. This wakes the
> main loop via the eventfd so it can run gsource_prepare() and submit the
> pending SQE promptly.
> 
> This is a generic fix that benefits all devices using aio=io_uring.
> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> main loop after queuing block I/O.
> 
> [...]
> 
> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> index d0b56127c670..96392876b490 100644
> --- a/util/fdmon-io_uring.c
> +++ b/util/fdmon-io_uring.c
> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
>  
>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
>                                   cqe_handler);
> +
> +    /*
> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> +     */
> +    aio_notify(ctx);
>  }

Makes sense to me.

At first I wondered if we should use defer_call() for the aio_notify()
to batch the submission, but of course holding the BQL will already take
care of that. And in iothreads where there is no BQL, the aio_notify()
shouldn't make a difference anyway because we're already in the right
thread.

I suppose the other variation could be to have another io_uring_enter()
call here (but then probably really through defer_call()) to avoid
waiting for another CPU to submit the request in its main loop. But I
don't really have an intuition if that would make things better or worse
in the common case.
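
(For illustration of the batching idea only: made-up helper names, not
QEMU's defer_call() implementation. Notifications requested inside a
begin/end section are coalesced into a single one at the end.)

#include <stdbool.h>
#include <stdio.h>

static int batch_depth;
static bool notify_pending;

static void do_notify(void)
{
    puts("notify (e.g. eventfd write or io_uring submission)");
}

static void batched_notify(void)
{
    if (batch_depth > 0) {
        notify_pending = true;      /* defer until the section ends */
    } else {
        do_notify();
    }
}

static void batch_begin(void)
{
    batch_depth++;
}

static void batch_end(void)
{
    if (--batch_depth == 0 && notify_pending) {
        notify_pending = false;
        do_notify();                /* one notification for the whole batch */
    }
}

int main(void)
{
    batch_begin();
    batched_notify();               /* queued... */
    batched_notify();               /* ...and coalesced with the first */
    batch_end();                    /* single notify happens here */
    return 0;
}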

Fiona, does this fix your case, too?

Kevin
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Stefan Hajnoczi 1 day, 19 hours ago
On Fri, Feb 13, 2026 at 05:04:31PM +0100, Kevin Wolf wrote:
> Am 13.02.2026 um 15:26 hat Jens Axboe geschrieben:
> > When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> > block I/O coroutine inline on the vCPU thread because
> > qemu_get_current_aio_context() returns the main AioContext when BQL is
> > held. The coroutine calls luring_co_submit() which queues an SQE via
> > fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> > in gsource_prepare() on the main loop thread.
> 
> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> in the recent changes (or I guess worker threads in theory, but I don't
> think there are any that actually make use of aio_add_sqe()).

Worker threads don't have an AioContext, so they cannot call
aio_add_sqe().

Stefan
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Fiona Ebner 2 days, 1 hour ago
Am 13.02.26 um 5:05 PM schrieb Kevin Wolf:
> Am 13.02.2026 um 15:26 hat Jens Axboe geschrieben:
>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
>> block I/O coroutine inline on the vCPU thread because
>> qemu_get_current_aio_context() returns the main AioContext when BQL is
>> held. The coroutine calls luring_co_submit() which queues an SQE via
>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
>> in gsource_prepare() on the main loop thread.
> 
> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> in the recent changes (or I guess worker threads in theory, but I don't
> think there are any that actually make use of aio_add_sqe()).
> 
>> [...]
> 
> Makes sense to me.
> 
> At first I wondered if we should use defer_call() for the aio_notify()
> to batch the submission, but of course holding the BQL will already take
> care of that. And in iothreads where there is no BQL, the aio_notify()
> shouldn't make a difference anyway because we're already in the right
> thread.
> 
> I suppose the other variation could be to have another io_uring_enter()
> call here (but then probably really through defer_call()) to avoid
> waiting for another CPU to submit the request in its main loop. But I
> don't really have an intuition if that would make things better or worse
> in the common case.
> 
> Fiona, does this fix your case, too?

Yes, it does fix my issue [0] and the second patch gives another small
improvement :)

Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
would feel nicer to me to not have fdmon-io_uring.c call "back up". I
guess it also depends on whether we expect another future fdmon
implementation with .add_sqe() to also benefit from it.

[0]:
https://lore.kernel.org/qemu-devel/9901305b-fbdf-4893-8e80-3bc0d1d645b0@proxmox.com/

Best Regards,
Fiona
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Stefan Hajnoczi 1 day, 19 hours ago
On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
> Am 13.02.26 um 5:05 PM schrieb Kevin Wolf:
> > [...]
> 
> Yes, it does fix my issue [0] and the second patch gives another small
> improvement :)
> 
> Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
> itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
> would feel nicer to me to not have fdmon-io_uring.c call "back up". I
> guess it also depends on whether we expect another future fdmon
> implementation with .add_sqe() to also benefit from it.

Calling aio_notify() from aio-posix.c:aio_add_sqe() sounds better to me
because fdmon-io_uring.c has to be careful about calling aio_*() APIs to
avoid loops.

Stefan

> 
> [0]:
> https://lore.kernel.org/qemu-devel/9901305b-fbdf-4893-8e80-3bc0d1d645b0@proxmox.com/
> 
> Best Regards,
> Fiona
> 
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Jens Axboe 1 day, 19 hours ago
On 2/18/26 9:11 AM, Stefan Hajnoczi wrote:
> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
>> Am 13.02.26 um 5:05 PM schrieb Kevin Wolf:
>>> [...]
>>
>> Yes, it does fix my issue [0] and the second patch gives another small
>> improvement :)
>>
>> Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
>> itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
>> would feel nicer to me to not have fdmon-io_uring.c call "back up". I
>> guess it also depends on whether we expect another future fdmon
>> implementation with .add_sqe() to also benefit from it.
> 
> Calling aio_notify() from aio-posix.c:aio_add_sqe() sounds better to me
> because fdmon-io_uring.c has to be careful about calling aio_*() APIs to
> avoid loops.

Would anyone care to make that edit? I'm on a plane and gone for a bit,
so won't get back to this for the next week. But I would love to see a
fix go in, as this issue has been plaguing me with test timeouts for
quite a while on the CI front. And seems like I'm not alone, if the
patches fix Fiona's issues as well.

-- 
Jens Axboe
[PATCH v2] aio-posix: notify main loop when SQEs are queued
Posted by Jens Axboe 1 day, 18 hours ago
On 2/18/26 9:19 AM, Jens Axboe wrote:
> On 2/18/26 9:11 AM, Stefan Hajnoczi wrote:
>> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
>>> [...]
>>>
>>> Yes, it does fix my issue [0] and the second patch gives another small
>>> improvement :)
>>>
>>> Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
>>> itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
>>> would feel nicer to me to not have fdmon-io_uring.c call "back up". I
>>> guess it also depends on whether we expect another future fdmon
>>> implementation with .add_sqe() to also benefit from it.
>>
>> Calling aio_notify() from aio-posix.c:aio_add_sqe() sounds better to me
>> because fdmon-io_uring.c has to be careful about calling aio_*() APIs to
>> avoid loops.
> 
> Would anyone care to make that edit? I'm on a plane and gone for a bit,
> so won't get back to this for the next week. But I would love to see a
> fix go in, as this issue has been plaguing me with test timeouts for
> quite a while on the CI front. And seems like I'm not alone, if the
> patches fix Fiona's issues as well.

Still on a plane but tested this one and it works for me too. Does seem
like a better approach, rather than stuff it in the fdmon part.

Feel free to run with this one and also to update the commit message if
you want. Thanks!


commit a8a94e7a05964d470b8fba50c9d4769489c21752
Author: Jens Axboe <axboe@kernel.dk>
Date:   Fri Feb 13 06:52:14 2026 -0700

    aio-posix: notify main loop when SQEs are queued
    
    When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
    block I/O coroutine inline on the vCPU thread because
    qemu_get_current_aio_context() returns the main AioContext when BQL is
    held. The coroutine calls luring_co_submit() which queues an SQE via
    fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
    in gsource_prepare() on the main loop thread.
    
    Since the coroutine ran inline (not via aio_co_schedule()), no BH is
    scheduled and aio_notify() is never called. The main loop remains asleep
    in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
    the next timer fires.
    
    Fix this by calling aio_notify() after queuing the SQE. This wakes the
    main loop via the eventfd so it can run gsource_prepare() and submit the
    pending SQE promptly.
    
    This is a generic fix that benefits all devices using aio=io_uring.
    Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
    MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
    main loop after queuing block I/O.
    
    This is usually a bit hard to detect, as it also relies on the ppoll
    loop not waking up for other activity, and micro benchmarks tend not to
    see it because they don't have any real processing time. With a
    synthetic test case that has a few usleep() to simulate processing of
    read data, it's very noticeable. The below example reads 128MB with
    O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
    each batch submit, and a 1ms delay after processing each completion.
    Running it on /dev/sda yields:
    
    time sudo ./iotest /dev/sda
    
    ________________________________________________________
    Executed in   25.76 secs      fish           external
       usr time    6.19 millis  783.00 micros    5.41 millis
       sys time   12.43 millis  642.00 micros   11.79 millis
    
    while on a virtio-blk or NVMe device we get:
    
    time sudo ./iotest /dev/vdb
    
    ________________________________________________________
    Executed in    1.25 secs      fish           external
       usr time    1.40 millis    0.30 millis    1.10 millis
       sys time   17.61 millis    1.43 millis   16.18 millis
    
    time sudo ./iotest /dev/nvme0n1
    
    ________________________________________________________
    Executed in    1.26 secs      fish           external
       usr time    6.11 millis    0.52 millis    5.59 millis
       sys time   13.94 millis    1.50 millis   12.43 millis
    
    where the latter are consistent. If we run the same test but keep the
    socket for the ssh connection active by having activity there, then
    the sda test looks as follows:
    
    time sudo ./iotest /dev/sda
    
    ________________________________________________________
    Executed in    1.23 secs      fish           external
       usr time    2.70 millis   39.00 micros    2.66 millis
       sys time    4.97 millis  977.00 micros    3.99 millis
    
    as now the ppoll loop is woken all the time anyway.
    
    After this fix, on an idle system:
    
    time sudo ./iotest /dev/sda
    
    ________________________________________________________
    Executed in    1.30 secs      fish           external
       usr time    2.14 millis    0.14 millis    2.00 millis
       sys time   16.93 millis    1.16 millis   15.76 millis
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

diff --git a/util/aio-posix.c b/util/aio-posix.c
index e24b955fd91a..8c7b3795c82d 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -813,5 +813,13 @@ void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
 {
     AioContext *ctx = qemu_get_current_aio_context();
     ctx->fdmon_ops->add_sqe(ctx, prep_sqe, opaque, cqe_handler);
+
+    /*
+     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
+     * runs a coroutine inline (holding BQL), it queues SQEs here but the
+     * actual io_uring_submit() only happens in gsource_prepare().  Without
+     * this notify, ppoll() can sleep up to 499ms before submitting.
+     */
+    aio_notify(ctx);
 }
 #endif /* CONFIG_LINUX_IO_URING */

-- 
Jens Axboe
Re: [PATCH v2] aio-posix: notify main loop when SQEs are queued
Posted by Kevin Wolf 19 hours ago
Am 18.02.2026 um 17:41 hat Jens Axboe geschrieben:
> On 2/18/26 9:19 AM, Jens Axboe wrote:
> > [...]
> 
> Still on a plane but tested this one and it works for me too. Does seem
> like a better approach, rather than stuff it in the fdmon part.
> 
> Feel free to run with this one and also to update the commit message if
> you want. Thanks!
> 
> 
> commit a8a94e7a05964d470b8fba50c9d4769489c21752
> Author: Jens Axboe <axboe@kernel.dk>
> Date:   Fri Feb 13 06:52:14 2026 -0700
> 
>     aio-posix: notify main loop when SQEs are queued
>     
>     [...]
> 
> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index e24b955fd91a..8c7b3795c82d 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -813,5 +813,13 @@ void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
>  {
>      AioContext *ctx = qemu_get_current_aio_context();
>      ctx->fdmon_ops->add_sqe(ctx, prep_sqe, opaque, cqe_handler);
> +
> +    /*
> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the

I think the comment could even be more generic here. This is not
specific to coroutines, but the scenario is just that a vCPU thread
holding the BQL performs I/O.
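
For example, something along these lines (just one possible wording):

    /*
     * Wake ctx's event loop if it is sleeping in ppoll().  I/O submitted
     * from a thread that does not run ctx's event loop (e.g. a vCPU thread
     * holding the BQL) only queues SQEs here; the actual io_uring_submit()
     * happens later in gsource_prepare() on the event loop thread.  Without
     * this notify, ppoll() can sleep up to 499ms before submitting.
     */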

> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> +     */
> +    aio_notify(ctx);
>  }
>  #endif /* CONFIG_LINUX_IO_URING */

With or without a changed comment to that effect:

Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Re: [PATCH v2] aio-posix: notify main loop when SQEs are queued
Posted by Stefan Hajnoczi 1 day, 14 hours ago
On Wed, Feb 18, 2026 at 09:41:49AM -0700, Jens Axboe wrote:
> On 2/18/26 9:19 AM, Jens Axboe wrote:
> > On 2/18/26 9:11 AM, Stefan Hajnoczi wrote:
> >> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
> >>> Am 13.02.26 um 5:05 PM schrieb Kevin Wolf:
> >>>> Am 13.02.2026 um 15:26 hat Jens Axboe geschrieben:
> >>>>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> >>>>> block I/O coroutine inline on the vCPU thread because
> >>>>> qemu_get_current_aio_context() returns the main AioContext when BQL is
> >>>>> held. The coroutine calls luring_co_submit() which queues an SQE via
> >>>>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> >>>>> in gsource_prepare() on the main loop thread.
> >>>>
> >>>> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> >>>> in the recent changes (or I guess worker threads in theory, but I don't
> >>>> think there any that actually make use of aio_add_sqe()).
> >>>>
> >>>>> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> >>>>> scheduled and aio_notify() is never called. The main loop remains asleep
> >>>>> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> >>>>> the next timer fires.
> >>>>>
> >>>>> Fix this by calling aio_notify() after queuing the SQE. This wakes the
> >>>>> main loop via the eventfd so it can run gsource_prepare() and submit the
> >>>>> pending SQE promptly.
> >>>>>
> >>>>> This is a generic fix that benefits all devices using aio=io_uring.
> >>>>> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> >>>>> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> >>>>> main loop after queuing block I/O.
> >>>>>
> >>>>> This is usually a bit hard to detect, as it also relies on the ppoll
> >>>>> loop not waking up for other activity, and micro benchmarks tend not to
> >>>>> see it because they don't have any real processing time. With a
> >>>>> synthetic test case that has a few usleep() to simulate processing of
> >>>>> read data, it's very noticeable. The below example reads 128MB with
> >>>>> O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
> >>>>> each batch submit, and a 1ms delay after processing each completion.
> >>>>> Running it on /dev/sda yields:
> >>>>>
> >>>>> time sudo ./iotest /dev/sda
> >>>>>
> >>>>> ________________________________________________________
> >>>>> Executed in   25.76 secs      fish           external
> >>>>>    usr time    6.19 millis  783.00 micros    5.41 millis
> >>>>>    sys time   12.43 millis  642.00 micros   11.79 millis
> >>>>>
> >>>>> while on a virtio-blk or NVMe device we get:
> >>>>>
> >>>>> time sudo ./iotest /dev/vdb
> >>>>>
> >>>>> ________________________________________________________
> >>>>> Executed in    1.25 secs      fish           external
> >>>>>    usr time    1.40 millis    0.30 millis    1.10 millis
> >>>>>    sys time   17.61 millis    1.43 millis   16.18 millis
> >>>>>
> >>>>> time sudo ./iotest /dev/nvme0n1
> >>>>>
> >>>>> ________________________________________________________
> >>>>> Executed in    1.26 secs      fish           external
> >>>>>    usr time    6.11 millis    0.52 millis    5.59 millis
> >>>>>    sys time   13.94 millis    1.50 millis   12.43 millis
> >>>>>
> >>>>> where the latter are consistent. If we run the same test but keep the
> >>>>> socket for the ssh connection active by having activity there, then
> >>>>> the sda test looks as follows:
> >>>>>
> >>>>> time sudo ./iotest /dev/sda
> >>>>>
> >>>>> ________________________________________________________
> >>>>> Executed in    1.23 secs      fish           external
> >>>>>    usr time    2.70 millis   39.00 micros    2.66 millis
> >>>>>    sys time    4.97 millis  977.00 micros    3.99 millis
> >>>>>
> >>>>> as now the ppoll loop is woken all the time anyway.
> >>>>>
> >>>>> After this fix, on an idle system:
> >>>>>
> >>>>> time sudo ./iotest /dev/sda
> >>>>>
> >>>>> ________________________________________________________
> >>>>> Executed in    1.30 secs      fish           external
> >>>>>    usr time    2.14 millis    0.14 millis    2.00 millis
> >>>>>    sys time   16.93 millis    1.16 millis   15.76 millis
> >>>>>
> >>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> >>>>> ---
> >>>>>  util/fdmon-io_uring.c | 8 ++++++++
> >>>>>  1 file changed, 8 insertions(+)
> >>>>>
> >>>>> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> >>>>> index d0b56127c670..96392876b490 100644
> >>>>> --- a/util/fdmon-io_uring.c
> >>>>> +++ b/util/fdmon-io_uring.c
> >>>>> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
> >>>>>  
> >>>>>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
> >>>>>                                   cqe_handler);
> >>>>> +
> >>>>> +    /*
> >>>>> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> >>>>> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> >>>>> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> >>>>> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> >>>>> +     */
> >>>>> +    aio_notify(ctx);
> >>>>>  }
> >>>>
> >>>> Makes sense to me.
> >>>>
> >>>> At first I wondered if we should use defer_call() for the aio_notify()
> >>>> to batch the submission, but of course holding the BQL will already take
> >>>> care of that. And in iothreads where there is no BQL, the aio_notify()
> >>>> shouldn't make a difference anyway because we're already in the right
> >>>> thread.
> >>>>
> >>>> I suppose the other variation could be to have another io_uring_enter()
> >>>> call here (but then probably really through defer_call()) to avoid
> >>>> waiting for another CPU to submit the request in its main loop. But I
> >>>> don't really have an intuition if that would make things better or worse
> >>>> in the common case.
> >>>>
> >>>> Fiona, does this fix your case, too?
> >>>
> >>> Yes, it does fix my issue [0] and the second patch gives another small
> >>> improvement :)
> >>>
> >>> Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
> >>> itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
> >>> would feel nicer to me to not have fdmon-io_uring.c call "back up". I
> >>> guess it also depends on whether we expect another future fdmon
> >>> implementation with .add_sqe() to also benefit from it.
> >>
> >> Calling aio_notify() from aio-posix.c:aio_add_sqe() sounds better to me
> >> because fdmon-io_uring.c has to be careful about calling aio_*() APIs to
> >> avoid loops.
> > 
> > Would anyone care to make that edit? I'm on a plane and gone for a bit,
> > so won't get back to this for the next week. But I would love to see a
> > fix go in, as this issue has been plaguing me with test timeouts for
> > quite a while on the CI front. And it seems like I'm not alone, if the
> > patches fix Fiona's issues as well.
> 
> Still on a plane but tested this one and it works for me too. Does seem
> like a better approach, rather than stuffing it in the fdmon part.
> 
> Feel free to run with this one and also to update the commit message if
> you want. Thanks!
> 
> 
> commit a8a94e7a05964d470b8fba50c9d4769489c21752
> Author: Jens Axboe <axboe@kernel.dk>
> Date:   Fri Feb 13 06:52:14 2026 -0700
> 
>     aio-posix: notify main loop when SQEs are queued
>     
>     When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
>     block I/O coroutine inline on the vCPU thread because
>     qemu_get_current_aio_context() returns the main AioContext when BQL is
>     held. The coroutine calls luring_co_submit() which queues an SQE via
>     fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
>     in gsource_prepare() on the main loop thread.
>     
>     Since the coroutine ran inline (not via aio_co_schedule()), no BH is
>     scheduled and aio_notify() is never called. The main loop remains asleep
>     in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
>     the next timer fires.
>     
>     Fix this by calling aio_notify() after queuing the SQE. This wakes the
>     main loop via the eventfd so it can run gsource_prepare() and submit the
>     pending SQE promptly.
>     
>     This is a generic fix that benefits all devices using aio=io_uring.
>     Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
>     MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
>     main loop after queuing block I/O.
>     
>     This is usually a bit hard to detect, as it also relies on the ppoll
>     loop not waking up for other activity, and micro benchmarks tend not to
>     see it because they don't have any real processing time. With a
>     synthetic test case that has a few usleep() to simulate processing of
>     read data, it's very noticeable. The below example reads 128MB with
>     O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
>     each batch submit, and a 1ms delay after processing each completion.
>     Running it on /dev/sda yields:
>     
>     time sudo ./iotest /dev/sda
>     
>     ________________________________________________________
>     Executed in   25.76 secs      fish           external
>        usr time    6.19 millis  783.00 micros    5.41 millis
>        sys time   12.43 millis  642.00 micros   11.79 millis
>     
>     while on a virtio-blk or NVMe device we get:
>     
>     time sudo ./iotest /dev/vdb
>     
>     ________________________________________________________
>     Executed in    1.25 secs      fish           external
>        usr time    1.40 millis    0.30 millis    1.10 millis
>        sys time   17.61 millis    1.43 millis   16.18 millis
>     
>     time sudo ./iotest /dev/nvme0n1
>     
>     ________________________________________________________
>     Executed in    1.26 secs      fish           external
>        usr time    6.11 millis    0.52 millis    5.59 millis
>        sys time   13.94 millis    1.50 millis   12.43 millis
>     
>     where the latter are consistent. If we run the same test but keep the
>     socket for the ssh connection active by having activity there, then
>     the sda test looks as follows:
>     
>     time sudo ./iotest /dev/sda
>     
>     ________________________________________________________
>     Executed in    1.23 secs      fish           external
>        usr time    2.70 millis   39.00 micros    2.66 millis
>        sys time    4.97 millis  977.00 micros    3.99 millis
>     
>     as now the ppoll loop is woken all the time anyway.
>     
>     After this fix, on an idle system:
>     
>     time sudo ./iotest /dev/sda
>     
>     ________________________________________________________
>     Executed in    1.30 secs      fish           external
>        usr time    2.14 millis    0.14 millis    2.00 millis
>        sys time   16.93 millis    1.16 millis   15.76 millis
>     
>     Signed-off-by: Jens Axboe <axboe@kernel.dk>

Thanks, applied to my block tree together with Patch 2 from v1:
https://gitlab.com/stefanha/qemu/commits/block
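
For readers following along without the v2 diff at hand, the applied
change roughly takes the shape sketched below. This is not the literal
patch: the aio_add_sqe() parameter list and the CqeHandler type are
inferred from the trace arguments in the quoted hunk, and the
fdmon_ops->add_sqe indirection follows the .add_sqe() callback
mentioned earlier in the thread.

    /* Sketch only, not the applied v2 diff: an aio-posix.c wrapper that
     * queues the SQE via the fdmon callback and then wakes the event
     * loop, instead of notifying from inside fdmon-io_uring.c. */
    void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
                     void *opaque, CqeHandler *cqe_handler)
    {
        AioContext *ctx = qemu_get_current_aio_context();

        ctx->fdmon_ops->add_sqe(ctx, prep_sqe, opaque, cqe_handler);

        /* Wake ppoll() so gsource_prepare() submits the pending SQE even
         * when this ran inline on a vCPU thread holding the BQL. */
        aio_notify(ctx);
    }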

Stefan
Re: [PATCH v2] aio-posix: notify main loop when SQEs are queued
Posted by Jens Axboe 21 hours ago
On 2/18/26 1:57 PM, Stefan Hajnoczi wrote:
> Thanks, applied to my block tree together with Patch 2 from v1:
> https://gitlab.com/stefanha/qemu/commits/block

Great, thank you!

-- 
Jens Axboe
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Stefan Hajnoczi 1 day, 19 hours ago
On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
> On 13.02.26 at 5:05 PM, Kevin Wolf wrote:
> > On 13.02.2026 at 15:26, Jens Axboe wrote:
> >> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> >> block I/O coroutine inline on the vCPU thread because
> >> qemu_get_current_aio_context() returns the main AioContext when BQL is
> >> held. The coroutine calls luring_co_submit() which queues an SQE via
> >> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> >> in gsource_prepare() on the main loop thread.
> > 
> > Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> > in the recent changes (or I guess worker threads in theory, but I don't
> > think there are any that actually make use of aio_add_sqe()).
> > 
> >> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> >> scheduled and aio_notify() is never called. The main loop remains asleep
> >> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> >> the next timer fires.
> >>
> >> Fix this by calling aio_notify() after queuing the SQE. This wakes the
> >> main loop via the eventfd so it can run gsource_prepare() and submit the
> >> pending SQE promptly.
> >>
> >> This is a generic fix that benefits all devices using aio=io_uring.
> >> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> >> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> >> main loop after queuing block I/O.
> >>
> >> This is usually a bit hard to detect, as it also relies on the ppoll
> >> loop not waking up for other activity, and micro benchmarks tend not to
> >> see it because they don't have any real processing time. With a
> >> synthetic test case that has a few usleep() to simulate processing of
> >> read data, it's very noticeable. The below example reads 128MB with
> >> O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
> >> each batch submit, and a 1ms delay after processing each completion.
> >> Running it on /dev/sda yields:
> >>
> >> time sudo ./iotest /dev/sda
> >>
> >> ________________________________________________________
> >> Executed in   25.76 secs      fish           external
> >>    usr time    6.19 millis  783.00 micros    5.41 millis
> >>    sys time   12.43 millis  642.00 micros   11.79 millis
> >>
> >> while on a virtio-blk or NVMe device we get:
> >>
> >> time sudo ./iotest /dev/vdb
> >>
> >> ________________________________________________________
> >> Executed in    1.25 secs      fish           external
> >>    usr time    1.40 millis    0.30 millis    1.10 millis
> >>    sys time   17.61 millis    1.43 millis   16.18 millis
> >>
> >> time sudo ./iotest /dev/nvme0n1
> >>
> >> ________________________________________________________
> >> Executed in    1.26 secs      fish           external
> >>    usr time    6.11 millis    0.52 millis    5.59 millis
> >>    sys time   13.94 millis    1.50 millis   12.43 millis
> >>
> >> where the latter are consistent. If we run the same test but keep the
> >> socket for the ssh connection active by having activity there, then
> >> the sda test looks as follows:
> >>
> >> time sudo ./iotest /dev/sda
> >>
> >> ________________________________________________________
> >> Executed in    1.23 secs      fish           external
> >>    usr time    2.70 millis   39.00 micros    2.66 millis
> >>    sys time    4.97 millis  977.00 micros    3.99 millis
> >>
> >> as now the ppoll loop is woken all the time anyway.
> >>
> >> After this fix, on an idle system:
> >>
> >> time sudo ./iotest /dev/sda
> >>
> >> ________________________________________________________
> >> Executed in    1.30 secs      fish           external
> >>    usr time    2.14 millis    0.14 millis    2.00 millis
> >>    sys time   16.93 millis    1.16 millis   15.76 millis
> >>
> >> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> >> ---
> >>  util/fdmon-io_uring.c | 8 ++++++++
> >>  1 file changed, 8 insertions(+)
> >>
> >> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> >> index d0b56127c670..96392876b490 100644
> >> --- a/util/fdmon-io_uring.c
> >> +++ b/util/fdmon-io_uring.c
> >> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
> >>  
> >>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
> >>                                   cqe_handler);
> >> +
> >> +    /*
> >> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> >> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> >> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> >> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> >> +     */
> >> +    aio_notify(ctx);
> >>  }
> > 
> > Makes sense to me.
> > 
> > At first I wondered if we should use defer_call() for the aio_notify()
> > to batch the submission, but of course holding the BQL will already take
> > care of that. And in iothreads where there is no BQL, the aio_notify()
> > shouldn't make a difference anyway because we're already in the right
> > thread.
> > 
> > I suppose the other variation could be to have another io_uring_enter()
> > call here (but then probably really through defer_call()) to avoid
> > waiting for another CPU to submit the request in its main loop. But I
> > don't really have an intuition if that would make things better or worse
> > in the common case.

It's possible to call io_uring_enter(). QEMU currently doesn't use
IORING_SETUP_SINGLE_ISSUER, so it's okay for multiple threads to call
io_uring_enter() on the same io_uring fd.

I experimented with IORING_SETUP_SINGLE_ISSUER (as well as
IORING_SETUP_COOP_TASKRUN and IORING_SETUP_TASKRUN_FLAG) in the past and
didn't measure a performance improvement:
https://lore.kernel.org/qemu-devel/20250724204702.576637-1-stefanha@redhat.com/
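
For context, requesting those flags with liburing looks roughly like the
sketch below; the -EINVAL fallback is an assumption about how unsupported
flags would be probed, not QEMU's actual setup code.

    #include <string.h>
    #include <liburing.h>

    static int setup_ring(struct io_uring *ring, unsigned entries)
    {
        struct io_uring_params p;
        int ret;

        memset(&p, 0, sizeof(p));
        /* The flags tried in the experiment linked above. */
        p.flags = IORING_SETUP_SINGLE_ISSUER |
                  IORING_SETUP_COOP_TASKRUN |
                  IORING_SETUP_TASKRUN_FLAG;

        ret = io_uring_queue_init_params(entries, ring, &p);
        if (ret == -EINVAL) {
            /* Older kernels reject unknown flags; fall back to a plain ring. */
            memset(&p, 0, sizeof(p));
            ret = io_uring_queue_init_params(entries, ring, &p);
        }
        return ret;
    }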

Jens, any advice regarding these flags?

Stefan
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Jens Axboe 1 day, 19 hours ago
On 2/18/26 9:06 AM, Stefan Hajnoczi wrote:
> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
>> On 13.02.26 at 5:05 PM, Kevin Wolf wrote:
>>> On 13.02.2026 at 15:26, Jens Axboe wrote:
>>>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
>>>> block I/O coroutine inline on the vCPU thread because
>>>> qemu_get_current_aio_context() returns the main AioContext when BQL is
>>>> held. The coroutine calls luring_co_submit() which queues an SQE via
>>>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
>>>> in gsource_prepare() on the main loop thread.
>>>
>>> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
>>> in the recent changes (or I guess worker threads in theory, but I don't
>>> think there are any that actually make use of aio_add_sqe()).
>>>
>>>> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
>>>> scheduled and aio_notify() is never called. The main loop remains asleep
>>>> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
>>>> the next timer fires.
>>>>
>>>> Fix this by calling aio_notify() after queuing the SQE. This wakes the
>>>> main loop via the eventfd so it can run gsource_prepare() and submit the
>>>> pending SQE promptly.
>>>>
>>>> This is a generic fix that benefits all devices using aio=io_uring.
>>>> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
>>>> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
>>>> main loop after queuing block I/O.
>>>>
>>>> This is usually a bit hard to detect, as it also relies on the ppoll
>>>> loop not waking up for other activity, and micro benchmarks tend not to
>>>> see it because they don't have any real processing time. With a
>>>> synthetic test case that has a few usleep() to simulate processing of
>>>> read data, it's very noticeable. The below example reads 128MB with
>>>> O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
>>>> each batch submit, and a 1ms delay after processing each completion.
>>>> Running it on /dev/sda yields:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in   25.76 secs      fish           external
>>>>    usr time    6.19 millis  783.00 micros    5.41 millis
>>>>    sys time   12.43 millis  642.00 micros   11.79 millis
>>>>
>>>> while on a virtio-blk or NVMe device we get:
>>>>
>>>> time sudo ./iotest /dev/vdb
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.25 secs      fish           external
>>>>    usr time    1.40 millis    0.30 millis    1.10 millis
>>>>    sys time   17.61 millis    1.43 millis   16.18 millis
>>>>
>>>> time sudo ./iotest /dev/nvme0n1
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.26 secs      fish           external
>>>>    usr time    6.11 millis    0.52 millis    5.59 millis
>>>>    sys time   13.94 millis    1.50 millis   12.43 millis
>>>>
>>>> where the latter are consistent. If we run the same test but keep the
>>>> socket for the ssh connection active by having activity there, then
>>>> the sda test looks as follows:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.23 secs      fish           external
>>>>    usr time    2.70 millis   39.00 micros    2.66 millis
>>>>    sys time    4.97 millis  977.00 micros    3.99 millis
>>>>
>>>> as now the ppoll loop is woken all the time anyway.
>>>>
>>>> After this fix, on an idle system:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.30 secs      fish           external
>>>>    usr time    2.14 millis    0.14 millis    2.00 millis
>>>>    sys time   16.93 millis    1.16 millis   15.76 millis
>>>>
>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>>> ---
>>>>  util/fdmon-io_uring.c | 8 ++++++++
>>>>  1 file changed, 8 insertions(+)
>>>>
>>>> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
>>>> index d0b56127c670..96392876b490 100644
>>>> --- a/util/fdmon-io_uring.c
>>>> +++ b/util/fdmon-io_uring.c
>>>> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
>>>>  
>>>>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
>>>>                                   cqe_handler);
>>>> +
>>>> +    /*
>>>> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
>>>> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
>>>> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
>>>> +     * this notify, ppoll() can sleep up to 499ms before submitting.
>>>> +     */
>>>> +    aio_notify(ctx);
>>>>  }
>>>
>>> Makes sense to me.
>>>
>>> At first I wondered if we should use defer_call() for the aio_notify()
>>> to batch the submission, but of course holding the BQL will already take
>>> care of that. And in iothreads where there is no BQL, the aio_notify()
>>> shouldn't make a difference anyway because we're already in the right
>>> thread.
>>>
>>> I suppose the other variation could be to have another io_uring_enter()
>>> call here (but then probably really through defer_call()) to avoid
>>> waiting for another CPU to submit the request in its main loop. But I
>>> don't really have an intuition if that would make things better or worse
>>> in the common case.
> 
> It's possible to call io_uring_enter(). QEMU currently doesn't use
> IORING_SETUP_SINGLE_ISSUER, so it's okay for multiple threads to call
> io_uring_enter() on the same io_uring fd.

I would not recommend that, see below.

> I experimented with IORING_SETUP_SINGLE_ISSUER (as well as
> IORING_SETUP_COOP_TASKRUN and IORING_SETUP_TASKRUN_FLAG) in the past and
> didn't measure a performance improvement:
> https://lore.kernel.org/qemu-devel/20250724204702.576637-1-stefanha@redhat.com/
> 
> Jens, any advice regarding these flags?

None other than "yes, you should use them" - it's an expanding area of
"let's make that faster", so if you tested something older, that may be
why, as we didn't have much of it earlier. We're toying with getting rid
of the uring_lock for SINGLE_ISSUER, for example.

Hence I think having multiple threads do enter is a design mistake, and
one that might snowball down the line and make it harder to step back
and make SINGLE_ISSUER work for you. Certain features also end up being
gated behind DEFER_TASKRUN, which requires SINGLE_ISSUER as well.

tldr - don't have multiple threads do enter on the same ring, ever, if
it can be avoided. It's a design mistake.
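
To make that gating concrete, here is a minimal illustration (assuming a
kernel recent enough to know these flags; the helper is made up for the
example and is not QEMU code):

    #include <assert.h>
    #include <string.h>
    #include <liburing.h>

    static void defer_taskrun_example(void)
    {
        struct io_uring ring;
        struct io_uring_params p;

        /* DEFER_TASKRUN on its own is rejected by io_uring_setup(). */
        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_DEFER_TASKRUN;
        assert(io_uring_queue_init_params(8, &ring, &p) == -EINVAL);

        /* It has to be paired with SINGLE_ISSUER, which in turn means
         * only the submitter thread may drive the ring. */
        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_DEFER_TASKRUN | IORING_SETUP_SINGLE_ISSUER;
        if (io_uring_queue_init_params(8, &ring, &p) == 0) {
            io_uring_queue_exit(&ring);
        }
    }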

-- 
Jens Axboe
Re: [PATCH 1/2] fdmon-io_uring: notify main loop when SQEs are queued
Posted by Stefan Hajnoczi 1 day, 15 hours ago
On Wed, Feb 18, 2026 at 09:17:57AM -0700, Jens Axboe wrote:
> On 2/18/26 9:06 AM, Stefan Hajnoczi wrote:
> > On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
> >> On 13.02.26 at 5:05 PM, Kevin Wolf wrote:
> >>> On 13.02.2026 at 15:26, Jens Axboe wrote:
> >>>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> >>>> block I/O coroutine inline on the vCPU thread because
> >>>> qemu_get_current_aio_context() returns the main AioContext when BQL is
> >>>> held. The coroutine calls luring_co_submit() which queues an SQE via
> >>>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> >>>> in gsource_prepare() on the main loop thread.
> >>>
> >>> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> >>> in the recent changes (or I guess worker threads in theory, but I don't
> >>> think there are any that actually make use of aio_add_sqe()).
> >>>
> >>>> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> >>>> scheduled and aio_notify() is never called. The main loop remains asleep
> >>>> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> >>>> the next timer fires.
> >>>>
> >>>> Fix this by calling aio_notify() after queuing the SQE. This wakes the
> >>>> main loop via the eventfd so it can run gsource_prepare() and submit the
> >>>> pending SQE promptly.
> >>>>
> >>>> This is a generic fix that benefits all devices using aio=io_uring.
> >>>> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> >>>> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> >>>> main loop after queuing block I/O.
> >>>>
> >>>> This is usually a bit hard to detect, as it also relies on the ppoll
> >>>> loop not waking up for other activity, and micro benchmarks tend not to
> >>>> see it because they don't have any real processing time. With a
> >>>> synthetic test case that has a few usleep() to simulate processing of
> >>>> read data, it's very noticeable. The below example reads 128MB with
> >>>> O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
> >>>> each batch submit, and a 1ms delay after processing each completion.
> >>>> Running it on /dev/sda yields:
> >>>>
> >>>> time sudo ./iotest /dev/sda
> >>>>
> >>>> ________________________________________________________
> >>>> Executed in   25.76 secs      fish           external
> >>>>    usr time    6.19 millis  783.00 micros    5.41 millis
> >>>>    sys time   12.43 millis  642.00 micros   11.79 millis
> >>>>
> >>>> while on a virtio-blk or NVMe device we get:
> >>>>
> >>>> time sudo ./iotest /dev/vdb
> >>>>
> >>>> ________________________________________________________
> >>>> Executed in    1.25 secs      fish           external
> >>>>    usr time    1.40 millis    0.30 millis    1.10 millis
> >>>>    sys time   17.61 millis    1.43 millis   16.18 millis
> >>>>
> >>>> time sudo ./iotest /dev/nvme0n1
> >>>>
> >>>> ________________________________________________________
> >>>> Executed in    1.26 secs      fish           external
> >>>>    usr time    6.11 millis    0.52 millis    5.59 millis
> >>>>    sys time   13.94 millis    1.50 millis   12.43 millis
> >>>>
> >>>> where the latter are consistent. If we run the same test but keep the
> >>>> socket for the ssh connection active by having activity there, then
> >>>> the sda test looks as follows:
> >>>>
> >>>> time sudo ./iotest /dev/sda
> >>>>
> >>>> ________________________________________________________
> >>>> Executed in    1.23 secs      fish           external
> >>>>    usr time    2.70 millis   39.00 micros    2.66 millis
> >>>>    sys time    4.97 millis  977.00 micros    3.99 millis
> >>>>
> >>>> as now the ppoll loop is woken all the time anyway.
> >>>>
> >>>> After this fix, on an idle system:
> >>>>
> >>>> time sudo ./iotest /dev/sda
> >>>>
> >>>> ________________________________________________________
> >>>> Executed in    1.30 secs      fish           external
> >>>>    usr time    2.14 millis    0.14 millis    2.00 millis
> >>>>    sys time   16.93 millis    1.16 millis   15.76 millis
> >>>>
> >>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> >>>> ---
> >>>>  util/fdmon-io_uring.c | 8 ++++++++
> >>>>  1 file changed, 8 insertions(+)
> >>>>
> >>>> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> >>>> index d0b56127c670..96392876b490 100644
> >>>> --- a/util/fdmon-io_uring.c
> >>>> +++ b/util/fdmon-io_uring.c
> >>>> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
> >>>>  
> >>>>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
> >>>>                                   cqe_handler);
> >>>> +
> >>>> +    /*
> >>>> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> >>>> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> >>>> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> >>>> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> >>>> +     */
> >>>> +    aio_notify(ctx);
> >>>>  }
> >>>
> >>> Makes sense to me.
> >>>
> >>> At first I wondered if we should use defer_call() for the aio_notify()
> >>> to batch the submission, but of course holding the BQL will already take
> >>> care of that. And in iothreads where there is no BQL, the aio_notify()
> >>> shouldn't make a difference anyway because we're already in the right
> >>> thread.
> >>>
> >>> I suppose the other variation could be to have another io_uring_enter()
> >>> call here (but then probably really through defer_call()) to avoid
> >>> waiting for another CPU to submit the request in its main loop. But I
> >>> don't really have an intuition if that would make things better or worse
> >>> in the common case.
> > 
> > It's possible to call io_uring_enter(). QEMU currently doesn't use
> > IORING_SETUP_SINGLE_ISSUER, so it's okay for multiple threads to call
> > io_uring_enter() on the same io_uring fd.
> 
> I would not recommend that, see below.
> 
> > I experimented with IORING_SETUP_SINGLE_ISSUER (as well as
> > IORING_SETUP_COOP_TASKRUN and IORING_SETUP_TASKRUN_FLAG) in the past and
> > didn't measure a performance improvement:
> > https://lore.kernel.org/qemu-devel/20250724204702.576637-1-stefanha@redhat.com/
> > 
> > Jens, any advice regarding these flags?
> 
> None other than "yes, you should use them" - it's an expanding area of
> "let's make that faster", so if you tested something older, that may be
> why, as we didn't have much of it earlier. We're toying with getting rid
> of the uring_lock for SINGLE_ISSUER, for example.
> 
> Hence I think having multiple threads do enter is a design mistake, and
> one that might snowball down the line and make it harder to step back
> and make SINGLE_ISSUER work for you. Certain features also end up being
> gated behind DEFER_TASKRUN, which requires SINGLE_ISSUER as well.
> 
> tldr - don't have multiple threads do enter on the same ring, ever, if
> it can be avoided. It's a design mistake.

That's useful information, thanks. I will resurrect the patches to add
modern io_uring_setup() flags and we'll document the assumption that
only one thread invokes io_uring_enter().
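
A minimal sketch of how that assumption could be documented and enforced
follows; the wrapper struct and owner field are hypothetical, not the
planned QEMU change.

    #include <assert.h>
    #include <pthread.h>
    #include <liburing.h>

    /* Hypothetical wrapper: remember which thread created the ring and
     * assert that the same thread performs every submission. */
    struct owned_ring {
        struct io_uring ring;
        pthread_t owner;
    };

    static int owned_ring_init(struct owned_ring *r, unsigned entries)
    {
        r->owner = pthread_self();
        return io_uring_queue_init(entries, &r->ring, 0);
    }

    static int owned_ring_submit(struct owned_ring *r)
    {
        /* Only the owner may call io_uring_enter()/io_uring_submit(). */
        assert(pthread_equal(pthread_self(), r->owner));
        return io_uring_submit(&r->ring);
    }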

Stefan