When CPU 1 enters the nohz_full state, the kworker on CPU 0 executes
sched_tick_remote(), takes CPU 1's rq lock, and may trigger the warning
WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3). Printing the warning
message takes the console_sem semaphore. Meanwhile, a printing task on
CPU 1's rq fails to acquire console_sem, joins its wait queue, and goes
UNINTERRUPTIBLE until the semaphore is released. When the task on CPU 0
releases console_sem, it wakes the waiter; try_to_wake_up() then tries
to acquire CPU 1's rq lock again, which CPU 0 still holds, resulting in
a deadlock.
The triggering scenario is as follows:

CPU0                                            CPU1
sched_tick_remote()
  WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3)
    report_bug()                                con_write()
      printk()                                    do_con_write()
                                                    console_lock()
                                                      down(&console_sem)
                                                        list_add_tail(&waiter.list,
                                                                      &sem->wait_list);
      console_unlock()
        up(&console_sem)
          wake_up_q(&wake_q)
            try_to_wake_up()
              __task_rq_lock()
                _raw_spin_lock()    /* CPU1's rq lock, already held by CPU0 */
This patch fixes the issue by deferring all printk console printing
while the rq lock is held.
Fixes: d84b31313ef8 ("sched/isolation: Offload residual 1Hz scheduler tick")
Signed-off-by: Wang Tao <wangtao554@huawei.com>
---
kernel/sched/core.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index be00629f0ba4..8b2d5b5bfb93 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5723,8 +5723,10 @@ static void sched_tick_remote(struct work_struct *work)
* Make sure the next tick runs within a
* reasonable amount of time.
*/
+ printk_deferred_enter();
u64 delta = rq_clock_task(rq) - curr->se.exec_start;
WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
+ printk_deferred_exit();
}
curr->sched_class->task_tick(rq, curr, 0);
--
2.34.1
On Thu, Sep 11, 2025 at 12:42:49PM +0000, Wang Tao wrote:
> When CPU 1 enters the nohz_full state, and the kworker on CPU 0 executes
> the function sched_tick_remote, holding the lock on CPU1's rq
> and triggering the warning WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3).
> [...]
> This patch fixes the issue by deffering all printk console printing
> during the lock holding period.
>
> Fixes: d84b31313ef8 ("sched/isolation: Offload residual 1Hz scheduler tick")
> Signed-off-by: Wang Tao <wangtao554@huawei.com>

I fundamentally hate that deferred thing and consider it a printk bug.

But really, if you trip that WARN, fix it and the problem goes away.
On Thu, Sep 11, 2025 at 03:53:58PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 11, 2025 at 12:42:49PM +0000, Wang Tao wrote:
> > [...]
>
> I fundamentally hate that deferred thing and consider it a printk bug.
>
> But really, if you trip that WARN, fix it and the problem goes away.

And probably it triggers a lot of false positives. An overloaded housekeeping
CPU can easily be off for 2 seconds. We should make it 30 seconds.

Thanks.

--
Frederic Weisbecker
SUSE Labs
On Thu, Sep 11, 2025 at 05:02:45PM +0200, Frederic Weisbecker wrote:
> On Thu, Sep 11, 2025 at 03:53:58PM +0200, Peter Zijlstra wrote:
> > [...]
> > I fundamentally hate that deferred thing and consider it a printk bug.
> >
> > But really, if you trip that WARN, fix it and the problem goes away.
>
> And probably it triggers a lot of false positives. An overloaded housekeeping
> CPU can easily be off for 2 seconds. We should make it 30 seconds.

It does trigger pretty easily. We've done some work to try to make it better
(spreading HK work around, for example) but you can still hit it. Especially
if there are virtualization layers involved...

Increasing that time a bit would be great :)

Cheers,
Phil
On Thu, Sep 11, 2025 at 11:14:06AM -0400, Phil Auld wrote:
> It does trigger pretty easily. We've done some work to try to make it better
> (spreading HK work around, for example) but you can still hit it. Especially,
> if there are virtualization layers involved...
>
> Increasing that time a bit would be great :)

Interested in sending the patch? :-)

Thanks.

--
Frederic Weisbecker
SUSE Labs
Increase the sched_tick_remote WARN_ON timeout to remove false
positives due to temporarily busy HK CPUs. The suggested value of
30 seconds should still catch genuinely stuck remote tick
processing without triggering too easily.
Signed-off-by: Phil Auld <pauld@redhat.com>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index be00629f0ba4..ef90d358252d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5724,7 +5724,7 @@ static void sched_tick_remote(struct work_struct *work)
* reasonable amount of time.
*/
u64 delta = rq_clock_task(rq) - curr->se.exec_start;
- WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
+ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 30);
}
curr->sched_class->task_tick(rq, curr, 0);
--
2.51.0
Hi,

On Thu, Sep 11, 2025 at 12:13:00PM -0400, Phil Auld wrote:
> Increase the sched_tick_remote WARN_ON timeout to remove false
> positives due to temporarily busy HK cpus. The suggestion
> was 30 seconds to catch really stuck remote tick processing
> but not trigger it too easily.
>
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Frederic Weisbecker <frederic@kernel.org>

Frederic ack'd this. Any other thoughts or opinions on this one-character
patch?

Cheers,
Phil
Increasing the timeout reduces the probability of deadlocks. However, in
sched_tick_remote() there are also WARN_ON_ONCE(rq->curr != rq->donor)
and the assert_clock_updated() check inside rq_clock_task(). Regardless
of why these warnings trigger, once they do, printk is called, which
still leaves a potential deadlock. Is there a better way to address
these problems?

On 2025/9/12 0:13, Phil Auld wrote:
> Increase the sched_tick_remote WARN_ON timeout to remove false
> positives due to temporarily busy HK cpus. The suggestion
> was 30 seconds to catch really stuck remote tick processing
> but not trigger it too easily.
> [...]
On Tue, Sep 16, 2025 at 04:44:39PM +0800, wangtao (EQ) wrote:
> Increasing the timeout reduces the probability of deadlocks. However,
> in sched_tick_remote() there are also WARN_ON_ONCE(rq->curr != rq->donor)
> and the assert_clock_updated() check inside rq_clock_task(). [...]
> Is there a better way to address these problems?

I'm not specifically trying to solve the printk deadlock problem. My patch
is to make this particular warning go away by reducing the false positives.
That's tangential to your original posting.

You can use the new printk mechanism with an atomic console to get around
the printk bug, I think. I think you could also use a serial console
instead of a framebuffer-based console.

Cheers,
Phil
On Thu, Sep 11, 2025 at 12:13:00PM -0400, Phil Auld wrote:
> Increase the sched_tick_remote WARN_ON timeout to remove false
> positives due to temporarily busy HK cpus. The suggestion
> was 30 seconds to catch really stuck remote tick processing
> but not trigger it too easily.
>
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Frederic Weisbecker <frederic@kernel.org>

Acked-by: Frederic Weisbecker <frederic@kernel.org>

--
Frederic Weisbecker
SUSE Labs
Do we have plans to merge this patch into the mainline?

Thanks,
Tao

On 2025/9/12 0:29, Frederic Weisbecker wrote:
> On Thu, Sep 11, 2025 at 12:13:00PM -0400, Phil Auld wrote:
>> Increase the sched_tick_remote WARN_ON timeout to remove false
>> positives due to temporarily busy HK cpus. [...]
> Acked-by: Frederic Weisbecker <frederic@kernel.org>