When we ran some benchmarks (such as pipe tests), we found that the
wakeup path was still entered even when the wait queue was empty,
although it exits later without waking anyone.

This consumes unnecessary resources. Could we check at the start of
the wakeup whether there are any waiters in the queue at all? I think
this check should be good for performance and safe.

Looking forward to your reply, thank you.
Signed-off-by: Wang You <wangyoua@uniontech.com>
---
kernel/sched/wait.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 133b74730738..17011780aa21 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -205,6 +205,9 @@ void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode,
 	if (unlikely(!wq_head))
 		return;
 
+	if (unlikely(!wq_has_sleeper(wq_head)))
+		return;
+
 	__wake_up_common_lock(wq_head, mode, 1, WF_SYNC, key);
 }
 EXPORT_SYMBOL_GPL(__wake_up_sync_key);
--
2.20.1
On 09/06/23 13:38, Wang You wrote:
> When we ran some benchmarks (such as pipe tests), we found that the
> wakeup path was still entered even when the wait queue was empty,
> although it exits later without waking anyone.
>
> This consumes unnecessary resources. Could we check at the start of
> the wakeup whether there are any waiters in the queue at all? I think
> this check should be good for performance and safe.
>
> Looking forward to your reply, thank you.
>
> Signed-off-by: Wang You <wangyoua@uniontech.com>
> ---
>  kernel/sched/wait.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
> index 133b74730738..17011780aa21 100644
> --- a/kernel/sched/wait.c
> +++ b/kernel/sched/wait.c
> @@ -205,6 +205,9 @@ void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode,
>  	if (unlikely(!wq_head))
>  		return;
>
> +	if (unlikely(!wq_has_sleeper(wq_head)))
> +		return;
> +

Hm so I suppose that if we end up in wake_up*() then the update of the
wait condition has been done (so this shouldn't lead to missed wakeups),
but that still means an extra smp_mb() before grabbing the wq_head->lock.

I'd suggest benchmarking the change, this is going to cause unwanted
overhead when dealing with busy queues, and I'm not sure it's saving much
vs grabbing wq_head->lock before noticing the list is empty.

>  	__wake_up_common_lock(wq_head, mode, 1, WF_SYNC, key);
> }
> EXPORT_SYMBOL_GPL(__wake_up_sync_key);
> --
> 2.20.1
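For context on the reviewer's point about the barrier: the extra smp_mb()
comes from wq_has_sleeper() itself. A rough sketch of the two helpers,
paraphrased from include/linux/wait.h (exact comments and details vary by
kernel version, so check the tree you build against):

	/* Sketch, paraphrased from include/linux/wait.h -- not verbatim. */

	static inline int waitqueue_active(struct wait_queue_head *wq_head)
	{
		/* Unlocked, unordered check: racy unless paired with a barrier. */
		return !list_empty(&wq_head->head);
	}

	static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
	{
		/*
		 * Full barrier, paired with one on the waiting side (e.g. via
		 * set_current_state() in prepare_to_wait()), so that either the
		 * waker sees the waiter on the list or the waiter sees the
		 * updated wake-up condition.
		 */
		smp_mb();
		return waitqueue_active(wq_head);
	}

So the proposed early return trades an unconditional full barrier on every
wake-up for skipping the wq_head->lock acquisition when the queue is empty.
For the benchmarking suggestion, something like "perf bench sched pipe" is a
common way to exercise this wake-up path with both empty and busy queues.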