-----Original Message-----
From: Frederic Weisbecker [mailto:frederic@kernel.org]
Sent: July 8, 2025 20:41
To: wangxiongfeng (C) <wangxiongfeng2@huawei.com>
Cc: anna-maria@linutronix.de; tglx@linutronix.de; linux-kernel@vger.kernel.org; Xiexiuqi <xiexiuqi@huawei.com>; Wangshaobo (bobo) <bobo.shaobowang@huawei.com>
Subject: Re: [PATCH] hrtimers: Update new CPU's next event in hrtimers_cpu_dying()
On Tue, Jul 08, 2025 at 06:17:27PM +0800, Xiongfeng Wang wrote:
> When testing softirq based hrtimers on an ARM32 board, with both high
> resolution mode and nohz inactive, softirq based hrtimers failed to
> trigger after being migrated away from an offlined CPU. The flow is as
> follows.
>
> CPU0                                    CPU1
>                                         softirq based hrtimers are queued
>                                         offline CPU1
>                                         move hrtimers to CPU0 in hrtimers_cpu_dying()
>                                         send IPI to CPU0 to retrigger next event
> 'softirq_expires_next' is KTIME_MAX
> call retrigger_next_event()
> highres and nohz are inactive, just return
> 'softirq_expires_next' is not updated
> hrtimer softirq is never triggered
>
> Some softirq based hrtimers are queued on CPU1. Then CPU1 is taken
> offline. hrtimers_cpu_dying() moves the hrtimers from CPU1 to CPU0 and
> then sends an IPI to CPU0 to make it call retrigger_next_event(). But
> high resolution mode and nohz are both inactive, so
> retrigger_next_event() returns early. 'softirq_expires_next' is never
> updated and remains KTIME_MAX, so the hrtimer softirq is never raised.
>
> To fix this issue, call hrtimer_update_next_event() in
> hrtimers_cpu_dying() to update 'softirq_expires_next' for the new CPU.
> This also updates the hardirq hrtimers' next event, but that should
> have no bad effect.
>
> Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")
> Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
> ---
> kernel/time/hrtimer.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 30899a8cc52c..ff97eb36c116 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -2298,8 +2298,11 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
>  	/*
>  	 * The migration might have changed the first expiring softirq
>  	 * timer on this CPU. Update it.
> +	 * We also need to update 'softirq_expires_next' here, because it will
> +	 * not be updated in retrigger_next_event() if high resolution mode
> +	 * and nohz are both inactive.
>  	 */
> -	__hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
> +	hrtimer_update_next_event(new_base);
>  
>  	/* Tell the other CPU to retrigger the next event */
>  	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
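
For reference, an abbreviated sketch of the pre-patch retrigger_next_event() logic that the report above runs into (based on kernel/time/hrtimer.c; unrelated comments and details are elided, so treat it as a sketch rather than the exact source):

static void retrigger_next_event(void *arg)
{
	struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);

	/*
	 * In periodic low resolution mode with nohz inactive, the function
	 * bails out here, before softirq_expires_next is recomputed. After
	 * the migration in hrtimers_cpu_dying(), softirq_expires_next thus
	 * stays at KTIME_MAX and the hrtimer softirq is never raised.
	 */
	if (!hrtimer_hres_active(base) && !tick_nohz_active)
		return;

	raw_spin_lock(&base->lock);
	hrtimer_update_base(base);
	if (hrtimer_hres_active(base))
		hrtimer_force_reprogram(base, 0);
	else
		hrtimer_update_next_event(base);
	raw_spin_unlock(&base->lock);
}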
It seems that a similar problem can happen while enqueueing a timer from an offline CPU (see the call to smp_call_function_single_async()).
How about this (untested) instead? retrigger_next_event() is not a fast path, so we don't care about a rare extra cost:
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 30899a8cc52c..e8c479329282 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -787,10 +787,10 @@ static void retrigger_next_event(void *arg)
 	 * of the next expiring timer is enough. The return from the SMP
 	 * function call will take care of the reprogramming in case the
 	 * CPU was in a NOHZ idle sleep.
+	 *
+	 * In periodic low resolution mode, the next softirq expiration
+	 * must also be updated.
 	 */
-	if (!hrtimer_hres_active(base) && !tick_nohz_active)
-		return;
-
Could you explain in more detail why this check was added in the first place? Was it there for correctness/safety reasons or for performance?
- Wang ShaoBo
 	raw_spin_lock(&base->lock);
 	hrtimer_update_base(base);
 	if (hrtimer_hres_active(base))
@@ -2295,11 +2295,6 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
 					     &new_base->clock_base[i]);
 	}
 
-	/*
-	 * The migration might have changed the first expiring softirq
-	 * timer on this CPU. Update it.
-	 */
-	__hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
 
 	/* Tell the other CPU to retrigger the next event */
 	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
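
With this change, retrigger_next_event() would unconditionally take the base lock and recompute the next expiry, which would cover both the hrtimers_cpu_dying() migration and the enqueue-from-an-offline-CPU case, at the cost of a rarely taken slow path. Roughly, under the same assumptions as the sketch above:

static void retrigger_next_event(void *arg)
{
	struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);

	/*
	 * Always recompute: in high resolution mode the clock event device
	 * is reprogrammed, otherwise the next expiry (including
	 * softirq_expires_next) is refreshed so the tick path can raise
	 * the hrtimer softirq when it is due.
	 */
	raw_spin_lock(&base->lock);
	hrtimer_update_base(base);
	if (hrtimer_hres_active(base))
		hrtimer_force_reprogram(base, 0);
	else
		hrtimer_update_next_event(base);
	raw_spin_unlock(&base->lock);
}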