Move qemu_cond_signal() inside the critical section protected by pool->lock.
Signaling while holding the lock imposes more predictable scheduling behavior.
Signed-off-by: Stepan Popov <Stepan.Popov@kaspersky.com>
---
 util/thread-pool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 8f8cb38d5c..8e55aefd07 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -265,8 +265,8 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
         spawn_thread(pool);
     }
     QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
-    qemu_mutex_unlock(&pool->lock);
     qemu_cond_signal(&pool->request_cond);
+    qemu_mutex_unlock(&pool->lock);
     return &req->common;
 }
--
2.34.1
On 3/30/26 15:01, Stepan Popov wrote:
> Move qemu_cond_signal() inside the critical section protected by pool->lock.
> Signaling while holding the lock imposes more predictable scheduling behavior.
>
> Signed-off-by: Stepan Popov <Stepan.Popov@kaspersky.com>
> ---
>  util/thread-pool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/util/thread-pool.c b/util/thread-pool.c
> index 8f8cb38d5c..8e55aefd07 100644
> --- a/util/thread-pool.c
> +++ b/util/thread-pool.c
> @@ -265,8 +265,8 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
>          spawn_thread(pool);
>      }
>      QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
> -    qemu_mutex_unlock(&pool->lock);
>      qemu_cond_signal(&pool->request_cond);
> +    qemu_mutex_unlock(&pool->lock);
>      return &req->common;
>  }

In this code, even very small changes have had a big impact on performance in the past, so without some measurement I am wary of taking this change. What "more predictable scheduling behavior" means is unclear, too.

Also, if thread_pool_submit_aio() is preempted between qemu_cond_signal() and qemu_mutex_unlock(), then not only can the waiting thread not start the work, but the mutex is held and no one else can submit other requests.

Paolo
On Mon, Mar 30, 2026 at 04:01:04PM +0300, Stepan Popov wrote:
> Move qemu_cond_signal() inside the critical section protected by pool->lock.
> Signaling while holding the lock imposes more predictable scheduling behavior.
>
> Signed-off-by: Stepan Popov <Stepan.Popov@kaspersky.com>
> ---
>  util/thread-pool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/util/thread-pool.c b/util/thread-pool.c
> index 8f8cb38d5c..8e55aefd07 100644
> --- a/util/thread-pool.c
> +++ b/util/thread-pool.c
> @@ -265,8 +265,8 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
>          spawn_thread(pool);
>      }
>      QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
> -    qemu_mutex_unlock(&pool->lock);
>      qemu_cond_signal(&pool->request_cond);
> +    qemu_mutex_unlock(&pool->lock);
>      return &req->common;
>  }

Doesn't this ordering mean that, when the signal wakes up the waiting thread, that thread will get temporarily re-blocked waiting for 'lock' to be released by the original thread too?

With regards,
Daniel

-- 
|: https://berrange.com      ~~ https://hachyderm.io/@berrange :|
|: https://libvirt.org       ~~ https://entangle-photo.org     :|
|: https://pixelfed.art/berrange ~~ https://fstop138.berrange.com :|
When pthread_cond_signal() wakes the waiter, the kernel requeues the waiter directly onto the mutex's PI futex and transfers lock ownership atomically, so the waiter returns from the syscall already holding the mutex: no extra pthread_mutex_lock() call, no re-blocking.

So with a PI mutex, moving the signal under the lock is fine; the waiter won't get stuck waiting for the lock again.

Glibc has supported PI mutexes with FUTEX_REQUEUE_PI. Here are the glibc commits:

x86: https://sourceware.org/git/?p=glibc.git;a=commit;h=42e69bcf1137fccfd7a95645a9d316c6490b9ff9
non-x86: https://sourceware.org/git/?p=glibc.git;a=commit;h=8313cb997d2da2465c8560d3164358a68ea1e9ad

Looking forward to your reply,
Stepan

-----Original Message-----
From: Daniel P. Berrangé <berrange@redhat.com>
Sent: Monday, March 30, 2026 4:43 PM
To: Stepan Popov <Stepan.Popov@kaspersky.com>
Cc: qemu-devel@nongnu.org; qemu-trivial@nongnu.org; Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH] thread-pool: signal condition variable while holding lock

On Mon, Mar 30, 2026 at 04:01:04PM +0300, Stepan Popov wrote:
> Move qemu_cond_signal() inside the critical section protected by pool->lock.
> Signaling while holding the lock imposes more predictable scheduling behavior.
>
> Signed-off-by: Stepan Popov <Stepan.Popov@kaspersky.com>
> ---
>  util/thread-pool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/util/thread-pool.c b/util/thread-pool.c
> index 8f8cb38d5c..8e55aefd07 100644
> --- a/util/thread-pool.c
> +++ b/util/thread-pool.c
> @@ -265,8 +265,8 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
>          spawn_thread(pool);
>      }
>      QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
> -    qemu_mutex_unlock(&pool->lock);
>      qemu_cond_signal(&pool->request_cond);
> +    qemu_mutex_unlock(&pool->lock);
>      return &req->common;
>  }

Doesn't this ordering mean that, when the signal wakes up the waiting thread, that thread will get temporarily re-blocked waiting for 'lock' to be released by the original thread too?

With regards,
Daniel

-- 
|: https://berrange.com      ~~ https://hachyderm.io/@berrange :|
|: https://libvirt.org       ~~ https://entangle-photo.org     :|
|: https://pixelfed.art/berrange ~~ https://fstop138.berrange.com :|