[PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Marcelo Tosatti 1 month, 1 week ago

Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
("tick/nohz: Conditionally restart tick on idle exit") allows
a nohz_full CPU to enter idle and return from it with the 
scheduler tick disabled (since the tick might be undesired noise).

The idle=poll case still unconditionally restarts the tick when entering
idle.

To reduce the noise for that case as well, stop the tick when entering
idle in the idle=poll path.

Change tick_nohz_full_kick_cpu() to set the NEED_RESCHED bit, to handle
the case where a new timer is added from an interrupt. This breaks out
of cpu_idle_poll() and rearms the timer if necessary.
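
For reference, the relevant loop in cpu_idle_poll() is essentially the
following (simplified, instrumentation and context tracking omitted), which
is why setting TIF_NEED_RESCHED on the idle task is enough to break out of
the poll:

	static int cpu_idle_poll(void)
	{
		raw_local_irq_enable();
		/* Spin until TIF_NEED_RESCHED is set on this (idle) task. */
		while (!tif_need_resched() &&
		       (cpu_idle_force_poll || tick_check_broadcast_expired()))
			cpu_relax();
		raw_local_irq_disable();

		return 1;
	}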

---

v2: Handle the case where a new timer is added from an interrupt (Frederic Weisbecker)

 include/linux/sched.h    |    2 ++
 kernel/sched/core.c      |   10 ++++++++++
 kernel/sched/idle.c      |    2 +-
 kernel/time/tick-sched.c |    1 +
 4 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index cbb7340c5866..1f6938dc20cd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2428,4 +2428,6 @@ extern void migrate_enable(void);
 
 DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
 
+void set_tif_resched_if_polling(int cpu);
+
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f1ebf67b48e2..f0b84600084b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -988,6 +988,11 @@ static bool set_nr_if_polling(struct task_struct *p)
 	return true;
 }
 
+void set_tif_resched_if_polling(int cpu)
+{
+	set_nr_if_polling(cpu_rq(cpu)->idle);
+}
+
 #else
 static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
 {
@@ -999,6 +1004,11 @@ static inline bool set_nr_if_polling(struct task_struct *p)
 {
 	return false;
 }
+
+void set_tif_resched_if_polling(int cpu)
+{
+	set_tsk_need_resched(cpu_rq(cpu)->idle);
+}
 #endif
 
 static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index c39b089d4f09..428c2d1cbd1b 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -324,7 +324,7 @@ static void do_idle(void)
 		 * idle as we know that the IPI is going to arrive right away.
 		 */
 		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
-			tick_nohz_idle_restart_tick();
+			tick_nohz_idle_stop_tick();
 			cpu_idle_poll();
 		} else {
 			cpuidle_idle_call();
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index c527b421c865..efc3653999dc 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -408,6 +408,7 @@ void tick_nohz_full_kick_cpu(int cpu)
 	if (!tick_nohz_full_cpu(cpu))
 		return;
 
+	set_tif_resched_if_polling(cpu);
 	irq_work_queue_on(&per_cpu(nohz_full_kick_work, cpu), cpu);
 }
Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Thomas Gleixner 1 month, 1 week ago
On Wed, Oct 29 2025 at 15:00, Marcelo Tosatti wrote:
> Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> ("tick/nohz: Conditionally restart tick on idle exit") allows
> a nohz_full CPU to enter idle and return from it with the 
> scheduler tick disabled (since the tick might be undesired noise).
>
> The idle=poll case still unconditionally restarts the tick when entering
> idle.
>
> To reduce the noise for that case as well, stop the tick when entering
> idle, for the idle=poll case.
>
> Change tick_nohz_full_kick_cpu to set NEED_RESCHED bit, to handle the
> case where a new timer is added from an interrupt. This breaks out of
> cpu_idle_poll and rearms the timer if necessary.
>
> ---

ERROR: Missing Signed-off-by: line by nominal patch author 'Marcelo Tosatti <mtosatti@redhat.com>'

You didn't start doing kernel development three days ago, right?
Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Frederic Weisbecker 1 month, 1 week ago
(Adding more people in Cc)

On Wed, Oct 29, 2025 at 03:00:56PM -0300, Marcelo Tosatti wrote:
> 
> Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> ("tick/nohz: Conditionally restart tick on idle exit") allows
> a nohz_full CPU to enter idle and return from it with the 
> scheduler tick disabled (since the tick might be undesired noise).
> 
> The idle=poll case still unconditionally restarts the tick when entering
> idle.
> 
> To reduce the noise for that case as well, stop the tick when entering
> idle, for the idle=poll case.
> 
> Change tick_nohz_full_kick_cpu to set NEED_RESCHED bit, to handle the
> case where a new timer is added from an interrupt. This breaks out of
> cpu_idle_poll and rearms the timer if necessary.
> 
> ---
> 
> v2: Handle the case where a new timer is added from an interrupt (Frederic Weisbecker)
> 
>  include/linux/sched.h    |    2 ++
>  kernel/sched/core.c      |   10 ++++++++++
>  kernel/sched/idle.c      |    2 +-
>  kernel/time/tick-sched.c |    1 +
>  4 files changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index cbb7340c5866..1f6938dc20cd 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2428,4 +2428,6 @@ extern void migrate_enable(void);
>  
>  DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
>  
> +void set_tif_resched_if_polling(int cpu);
> +
>  #endif
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f1ebf67b48e2..f0b84600084b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -988,6 +988,11 @@ static bool set_nr_if_polling(struct task_struct *p)
>  	return true;
>  }
>  
> +void set_tif_resched_if_polling(int cpu)
> +{
> +	set_nr_if_polling(cpu_rq(cpu)->idle);
> +}
> +
>  #else
>  static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
>  {
> @@ -999,6 +1004,11 @@ static inline bool set_nr_if_polling(struct task_struct *p)
>  {
>  	return false;
>  }
> +
> +void set_tif_resched_if_polling(int cpu)
> +{
> +	set_tsk_need_resched(cpu_rq(cpu)->idle);
> +}
>  #endif
>  
>  static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index c39b089d4f09..428c2d1cbd1b 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -324,7 +324,7 @@ static void do_idle(void)
>  		 * idle as we know that the IPI is going to arrive right away.
>  		 */
>  		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
> -			tick_nohz_idle_restart_tick();
> +			tick_nohz_idle_stop_tick();

Shouldn't we simply remove the tick_nohz_idle_restart_tick() line? The nohz_full
CPU should have entered here with the tick disabled already.

Also non-nohz_full systems shouldn't care.
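
I.e. something like this (untested, just to illustrate):

--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -324,7 +324,6 @@ static void do_idle(void)
 		 * idle as we know that the IPI is going to arrive right away.
 		 */
 		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
-			tick_nohz_idle_restart_tick();
 			cpu_idle_poll();
 		} else {
 			cpuidle_idle_call();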

>  			cpu_idle_poll();
>  		} else {
>  			cpuidle_idle_call();
> diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> index c527b421c865..efc3653999dc 100644
> --- a/kernel/time/tick-sched.c
> +++ b/kernel/time/tick-sched.c
> @@ -408,6 +408,7 @@ void tick_nohz_full_kick_cpu(int cpu)
>  	if (!tick_nohz_full_cpu(cpu))
>  		return;
>  
> +	set_tif_resched_if_polling(cpu);

Perhaps stuff that within wake_up_full_nohz_cpu() and call
set_nr_if_polling() directly. Also this needs a big comment.
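
Something along these lines perhaps (comment wording only a suggestion):

	/*
	 * The target nohz_full CPU may be polling in idle (idle=poll) with
	 * the tick stopped. In that case the kick IRQ alone is not enough
	 * for it to notice a newly added timer: set TIF_NEED_RESCHED so it
	 * breaks out of cpu_idle_poll() and the idle loop re-evaluates and
	 * rearms the next tick if necessary.
	 */
	set_tif_resched_if_polling(cpu);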

Thanks.

>  	irq_work_queue_on(&per_cpu(nohz_full_kick_work, cpu), cpu);
>  }
>  
> 

-- 
Frederic Weisbecker
SUSE Labs
Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Marcelo Tosatti 1 month, 1 week ago
On Thu, Oct 30, 2025 at 05:06:49PM +0100, Frederic Weisbecker wrote:
> (Adding more people in Cc)
> 
> On Wed, Oct 29, 2025 at 03:00:56PM -0300, Marcelo Tosatti wrote:
> > 
> > Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> > ("tick/nohz: Conditionally restart tick on idle exit") allows
> > a nohz_full CPU to enter idle and return from it with the 
> > scheduler tick disabled (since the tick might be undesired noise).
> > 
> > The idle=poll case still unconditionally restarts the tick when entering
> > idle.
> > 
> > To reduce the noise for that case as well, stop the tick when entering
> > idle, for the idle=poll case.
> > 
> > Change tick_nohz_full_kick_cpu to set NEED_RESCHED bit, to handle the
> > case where a new timer is added from an interrupt. This breaks out of
> > cpu_idle_poll and rearms the timer if necessary.
> > 
> > ---
> > 
> > v2: Handle the case where a new timer is added from an interrupt (Frederic Weisbecker)
> > 
> >  include/linux/sched.h    |    2 ++
> >  kernel/sched/core.c      |   10 ++++++++++
> >  kernel/sched/idle.c      |    2 +-
> >  kernel/time/tick-sched.c |    1 +
> >  4 files changed, 14 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index cbb7340c5866..1f6938dc20cd 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2428,4 +2428,6 @@ extern void migrate_enable(void);
> >  
> >  DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
> >  
> > +void set_tif_resched_if_polling(int cpu);
> > +
> >  #endif
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index f1ebf67b48e2..f0b84600084b 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -988,6 +988,11 @@ static bool set_nr_if_polling(struct task_struct *p)
> >  	return true;
> >  }
> >  
> > +void set_tif_resched_if_polling(int cpu)
> > +{
> > +	set_nr_if_polling(cpu_rq(cpu)->idle);
> > +}
> > +
> >  #else
> >  static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
> >  {
> > @@ -999,6 +1004,11 @@ static inline bool set_nr_if_polling(struct task_struct *p)
> >  {
> >  	return false;
> >  }
> > +
> > +void set_tif_resched_if_polling(int cpu)
> > +{
> > +	set_tsk_need_resched(cpu_rq(cpu)->idle);
> > +}
> >  #endif
> >  
> >  static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index c39b089d4f09..428c2d1cbd1b 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -324,7 +324,7 @@ static void do_idle(void)
> >  		 * idle as we know that the IPI is going to arrive right away.
> >  		 */
> >  		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
> > -			tick_nohz_idle_restart_tick();
> > +			tick_nohz_idle_stop_tick();
> 
> Shouldn't we simply remove the tick_nohz_idle_restart_tick() line? The nohz_full
> CPU should have entered here with the tick disabled already.
> 
> Also non-nohz_full systems shouldn't care.

With tick_nohz_idle_restart_tick removed:

<idle>-0 [001] d.h2. 51.356672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51360062500 softexpires=51360062500 mode=ABS
<idle>-0 [001] d.h2. 51.357671: hrtimer_cancel: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h1. 51.357671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51360063398
<idle>-0 [001] d.h1. 51.357671: hrtimer_expire_exit: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h2. 51.357671: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51361062500 softexpires=51361062500 mode=ABS
<idle>-0 [001] d.h2. 51.358671: hrtimer_cancel: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h1. 51.358671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51361063420
<idle>-0 [001] d.h1. 51.358672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h2. 51.358672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51362062500 softexpires=51362062500 mode=ABS
<idle>-0 [001] d.h2. 51.359671: hrtimer_cancel: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h1. 51.359671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51362063447
<idle>-0 [001] d.h1. 51.359672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
<idle>-0 [001] d.h2. 51.359672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51363062500 softexpires=51363062500 mode=ABS

CPU 1 is idle and isolated.

> >  			cpu_idle_poll();
> >  		} else {
> >  			cpuidle_idle_call();
> > diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> > index c527b421c865..efc3653999dc 100644
> > --- a/kernel/time/tick-sched.c
> > +++ b/kernel/time/tick-sched.c
> > @@ -408,6 +408,7 @@ void tick_nohz_full_kick_cpu(int cpu)
> >  	if (!tick_nohz_full_cpu(cpu))
> >  		return;
> >  
> > +	set_tif_resched_if_polling(cpu);
> 
> Perhaps stuff that within wake_up_full_nohz_cpu() and call
> set_nr_if_polling() directly.

Can't call set_nr_if_polling() directly, since when TIF_POLLING_NRFLAG is
undefined it is only a stub:

static inline bool set_nr_if_polling(struct task_struct *p)
{
        return false;
}

So the wakeup won't occur. Or am I missing something?


> Also this needs a big comment.

Sure!

Thanks.
Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Frederic Weisbecker 1 day, 15 hours ago
On Mon, Nov 03, 2025 at 08:30:06AM -0300, Marcelo Tosatti wrote:
> On Thu, Oct 30, 2025 at 05:06:49PM +0100, Frederic Weisbecker wrote:
> > (Adding more people in Cc)
> > 
> > On Wed, Oct 29, 2025 at 03:00:56PM -0300, Marcelo Tosatti wrote:
> > > 
> > > Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> > > ("tick/nohz: Conditionally restart tick on idle exit") allows
> > > a nohz_full CPU to enter idle and return from it with the 
> > > scheduler tick disabled (since the tick might be undesired noise).
> > > 
> > > The idle=poll case still unconditionally restarts the tick when entering
> > > idle.
> > > 
> > > To reduce the noise for that case as well, stop the tick when entering
> > > idle, for the idle=poll case.
> > > 
> > > Change tick_nohz_full_kick_cpu to set NEED_RESCHED bit, to handle the
> > > case where a new timer is added from an interrupt. This breaks out of
> > > cpu_idle_poll and rearms the timer if necessary.
> > > 
> > > ---
> > > 
> > > v2: Handle the case where a new timer is added from an interrupt (Frederic Weisbecker)
> > > 
> > >  include/linux/sched.h    |    2 ++
> > >  kernel/sched/core.c      |   10 ++++++++++
> > >  kernel/sched/idle.c      |    2 +-
> > >  kernel/time/tick-sched.c |    1 +
> > >  4 files changed, 14 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index cbb7340c5866..1f6938dc20cd 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -2428,4 +2428,6 @@ extern void migrate_enable(void);
> > >  
> > >  DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
> > >  
> > > +void set_tif_resched_if_polling(int cpu);
> > > +
> > >  #endif
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index f1ebf67b48e2..f0b84600084b 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -988,6 +988,11 @@ static bool set_nr_if_polling(struct task_struct *p)
> > >  	return true;
> > >  }
> > >  
> > > +void set_tif_resched_if_polling(int cpu)
> > > +{
> > > +	set_nr_if_polling(cpu_rq(cpu)->idle);
> > > +}
> > > +
> > >  #else
> > >  static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
> > >  {
> > > @@ -999,6 +1004,11 @@ static inline bool set_nr_if_polling(struct task_struct *p)
> > >  {
> > >  	return false;
> > >  }
> > > +
> > > +void set_tif_resched_if_polling(int cpu)
> > > +{
> > > +	set_tsk_need_resched(cpu_rq(cpu)->idle);
> > > +}
> > >  #endif
> > >  
> > >  static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
> > > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > > index c39b089d4f09..428c2d1cbd1b 100644
> > > --- a/kernel/sched/idle.c
> > > +++ b/kernel/sched/idle.c
> > > @@ -324,7 +324,7 @@ static void do_idle(void)
> > >  		 * idle as we know that the IPI is going to arrive right away.
> > >  		 */
> > >  		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
> > > -			tick_nohz_idle_restart_tick();
> > > +			tick_nohz_idle_stop_tick();
> > 
> > Shouldn't we simply remove the tick_nohz_idle_restart_tick() line? The nohz_full
> > CPU should have entered here with the tick disabled already.
> > 
> > Also non-nohz_full systems shouldn't care.
> 
> With tick_nohz_idle_restart_tick removed:
> 
> <idle>-0 [001] d.h2. 51.356672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51360062500 softexpires=51360062500 mode=ABS
> <idle>-0 [001] d.h2. 51.357671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.357671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51360063398
> <idle>-0 [001] d.h1. 51.357671: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.357671: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51361062500 softexpires=51361062500 mode=ABS
> <idle>-0 [001] d.h2. 51.358671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.358671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51361063420
> <idle>-0 [001] d.h1. 51.358672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.358672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51362062500 softexpires=51362062500 mode=ABS
> <idle>-0 [001] d.h2. 51.359671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.359671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51362063447
> <idle>-0 [001] d.h1. 51.359672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.359672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51363062500 softexpires=51363062500 mode=ABS
> 
> CPU 1 is idle and isolated.

Surprising, somehow the CPU's tick never interrupted a non-idle section. I guess
it's possible after boot. Or the CPU had tick dependencies before. I was about
to propose stopping the tick right before exiting to userspace, but since you're
using idle=poll, I guess userspace must be reached as fast as possible and
therefore you prefer to stop the tick before the next wake-up rather than after?

Also, instead of polling in the kernel, why not poll in userspace for events?
This sounds like a saner isolation design. Entering/exiting the kernel is
always a risk of something going wrong.

> 
> > >  			cpu_idle_poll();
> > >  		} else {
> > >  			cpuidle_idle_call();
> > > diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> > > index c527b421c865..efc3653999dc 100644
> > > --- a/kernel/time/tick-sched.c
> > > +++ b/kernel/time/tick-sched.c
> > > @@ -408,6 +408,7 @@ void tick_nohz_full_kick_cpu(int cpu)
> > >  	if (!tick_nohz_full_cpu(cpu))
> > >  		return;
> > >  
> > > +	set_tif_resched_if_polling(cpu);
> > 
> > Perhaps stuff that within wake_up_full_nohz_cpu() and call
> > set_nr_if_polling() directly.
> 
> Can't call set_nr_if_polling() directly since if TIF_POLLING_NRFLAG is
> undefined:
> 
> static inline bool set_nr_if_polling(struct task_struct *p)
> {
>         return false;
> }
> 
> So the wakeup won't occur. Or am i missing something?

Ok but can you at least move that to wake_up_full_nohz_cpu()?
tick_nohz_full_kick_cpu() is more general and doesn't only concern
new timers.
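
Roughly like this (untested sketch, just to illustrate the placement):

static bool wake_up_full_nohz_cpu(int cpu)
{
	/*
	 * We just need the target to call irq_exit() and re-evaluate
	 * the next tick. The nohz full kick at least implies that.
	 */
	if (cpu_is_offline(cpu))
		return true;	/* Don't try to wake offline CPUs. */

	if (tick_nohz_full_cpu(cpu)) {
		if (cpu != smp_processor_id() || tick_nohz_tick_stopped()) {
			/*
			 * If the target is polling in idle (idle=poll) with
			 * the tick stopped, the kick IRQ alone won't make it
			 * reprogram the tick for the new timer: set
			 * TIF_NEED_RESCHED so it breaks out of cpu_idle_poll()
			 * and rearms the tick if necessary.
			 */
			set_tif_resched_if_polling(cpu);
			tick_nohz_full_kick_cpu(cpu);
		}
		return true;
	}

	return false;
}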

-- 
Frederic Weisbecker
SUSE Labs
Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry
Posted by Marcelo Tosatti 1 month, 1 week ago
On Wed, Oct 29, 2025 at 03:00:56PM -0300, Marcelo Tosatti wrote:
> 
> Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> ("tick/nohz: Conditionally restart tick on idle exit") allows
> a nohz_full CPU to enter idle and return from it with the 
> scheduler tick disabled (since the tick might be undesired noise).
> 
> The idle=poll case still unconditionally restarts the tick when entering
> idle.
> 
> To reduce the noise for that case as well, stop the tick when entering
> idle, for the idle=poll case.
> 
> Change tick_nohz_full_kick_cpu to set NEED_RESCHED bit, to handle the
> case where a new timer is added from an interrupt. This breaks out of
> cpu_idle_poll and rearms the timer if necessary.

Frederic,

As a reminder, this is the original patch and discussion:

https://patchew.org/linux/ZIEqlkIASx2F2DRF@tpad/