[PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs

Rafael J. Wysocki posted 3 patches 1 month, 3 weeks ago
From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

When the menu governor runs on a nohz_full CPU and there are no user
space timers in the workload on that CPU, it ends up selecting idle
states with target residency values above TICK_NSEC all the time due to
a tick_nohz_tick_stopped() check designed for a different use case.
Namely, on nohz_full CPUs the fact that the tick has been stopped does
not actually mean anything in particular, whereas in the other case it
indicates that previously the CPU was expected to be idle sufficiently
long for the tick to be stopped, so it is not unreasonable to expect
it to be idle beyond the tick period length again.
  
In some cases, this behavior causes latency in the workload to grow
undesirably.  It may also cause the workload to consume more energy
than necessary if the CPU does not spend enough time in the selected
deep idle states.

Address this by amending the tick_nohz_tick_stopped() check in question
with a tick_nohz_full_cpu() one to avoid using the time till the next
timer event as the predicted_ns value all the time on nohz_full CPUs.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpuidle/governors/menu.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -293,8 +293,18 @@
 	 * in a shallow idle state for a long time as a result of it.  In that
 	 * case, say we might mispredict and use the known time till the closest
 	 * timer event for the idle state selection.
+	 *
+	 * However, on nohz_full CPUs the tick does not run as a rule and the
+	 * time till the closest timer event may always be effectively infinite,
+	 * so using it as a replacement for the predicted idle duration would
+	 * effectively always cause the prediction results to be discarded and
+	 * deep idle states to be selected all the time.  That might introduce
+	 * unwanted latency into the workload and cause more energy than
+	 * necessary to be consumed if the discarded prediction results are
+	 * actually accurate, so skip nohz_full CPUs here.
 	 */
-	if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
+	if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
+	    predicted_ns < TICK_NSEC)
 		predicted_ns = data->next_timer_ns;
 
 	/*
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Frederic Weisbecker 3 weeks, 2 days ago
On Wed, Aug 13, 2025 at 12:29:51PM +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> When the menu governor runs on a nohz_full CPU and there are no user
> space timers in the workload on that CPU, it ends up selecting idle
> states with target residency values above TICK_NSEC all the time due to
> a tick_nohz_tick_stopped() check designed for a different use case.
>
> Namely, on nohz_full CPUs the fact that the tick has been stopped does
> not actually mean anything in particular, whereas in the other case it
> indicates that previously the CPU was expected to be idle sufficiently
> long for the tick to be stopped, so it is not unreasonable to expect
> it to be idle beyond the tick period length again.

I understand what you mean but it may be hard to figure out for
reviewers. Can we rephrase it to something like:

When nohz_full is not running, the fact that the tick is stopped
indicates the CPU has been idle for sufficiently long so that
nohz has deferred it to the next timer callback. So it is
not unreasonable to expect the CPU to be idle beyond the tick
period length again.

However, when nohz_full is running, the CPU may enter idle with the
tick already stopped. But this doesn't say anything about the CPU's
future idleness.

>   
> In some cases, this behavior causes latency in the workload to grow
> undesirably.  It may also cause the workload to consume more energy
> than necessary if the CPU does not spend enough time in the selected
> deep idle states.
> 
> Address this by amending the tick_nohz_tick_stopped() check in question
> with a tick_nohz_full_cpu() one to avoid using the time till the next
> timer event as the predicted_ns value all the time on nohz_full CPUs.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> --- a/drivers/cpuidle/governors/menu.c
> +++ b/drivers/cpuidle/governors/menu.c
> @@ -293,8 +293,18 @@
>  	 * in a shallow idle state for a long time as a result of it.  In that
>  	 * case, say we might mispredict and use the known time till the closest
>  	 * timer event for the idle state selection.
> +	 *
> +	 * However, on nohz_full CPUs the tick does not run as a rule and the
> +	 * time till the closest timer event may always be effectively infinite,
> +	 * so using it as a replacement for the predicted idle duration would
> +	 * effectively always cause the prediction results to be discarded and
> +	 * deep idle states to be selected all the time.  That might introduce
> +	 * unwanted latency into the workload and cause more energy than
> +	 * necessary to be consumed if the discarded prediction results are
> +	 * actually accurate, so skip nohz_full CPUs here.
>  	 */
> -	if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
> +	if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
> +	    predicted_ns < TICK_NSEC)
>  		predicted_ns = data->next_timer_ns;

So, when !tick_nohz_full_cpu(dev->cpu), what is the purpose of this tick stopped
special case?

Is it because the next dynamic tick is a better prediction than the typical
interval once the tick is stopped?

Does that mean we might become more "pessimistic" concerning the predicted idle
time for nohz_full CPUs?

I guess too shallow C-states are still better than too deep but there should be
a word about that introduced side effect (if any).

Thanks!

>  	/*
> 
> 
> 

-- 
Frederic Weisbecker
SUSE Labs
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Rafael J. Wysocki 3 weeks, 2 days ago
On Thu, Sep 11, 2025 at 4:17 PM Frederic Weisbecker <frederic@kernel.org> wrote:
>
> On Wed, Aug 13, 2025 at 12:29:51PM +0200, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >
> > When the menu governor runs on a nohz_full CPU and there are no user
> > space timers in the workload on that CPU, it ends up selecting idle
> > states with target residency values above TICK_NSEC all the time due to
> > a tick_nohz_tick_stopped() check designed for a different use case.
> >
> > Namely, on nohz_full CPUs the fact that the tick has been stopped does
> > not actually mean anything in particular, whereas in the other case it
> > indicates that previously the CPU was expected to be idle sufficiently
> > long for the tick to be stopped, so it is not unreasonable to expect
> > it to be idle beyond the tick period length again.
>
> I understand what you mean but it may be hard to figure out for
> reviewers. Can we rephrase it to something like:
>
> When nohz_full is not running, the fact that the tick is stopped
> indicates the CPU has been idle for sufficiently long so that
> nohz has deferred it to the next timer callback. So it is
> not unreasonable to expect the CPU to be idle beyond the tick
> period length again.
>
> However, when nohz_full is running, the CPU may enter idle with the
> tick already stopped. But this doesn't say anything about the CPU's
> future idleness.

Sure, thanks for the hint.

> >
> > In some cases, this behavior causes latency in the workload to grow
> > undesirably.  It may also cause the workload to consume more energy
> > than necessary if the CPU does not spend enough time in the selected
> > deep idle states.
> >
> > Address this by amending the tick_nohz_tick_stopped() check in question
> > with a tick_nohz_full_cpu() one to avoid using the time till the next
> > timer event as the predicted_ns value all the time on nohz_full CPUs.
> >
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
> >  1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > --- a/drivers/cpuidle/governors/menu.c
> > +++ b/drivers/cpuidle/governors/menu.c
> > @@ -293,8 +293,18 @@
> >        * in a shallow idle state for a long time as a result of it.  In that
> >        * case, say we might mispredict and use the known time till the closest
> >        * timer event for the idle state selection.
> > +      *
> > +      * However, on nohz_full CPUs the tick does not run as a rule and the
> > +      * time till the closest timer event may always be effectively infinite,
> > +      * so using it as a replacement for the predicted idle duration would
> > +      * effectively always cause the prediction results to be discarded and
> > +      * deep idle states to be selected all the time.  That might introduce
> > +      * unwanted latency into the workload and cause more energy than
> > +      * necessary to be consumed if the discarded prediction results are
> > +      * actually accurate, so skip nohz_full CPUs here.
> >        */
> > -     if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
> > +     if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
> > +         predicted_ns < TICK_NSEC)
> >               predicted_ns = data->next_timer_ns;
>
> So, when !tick_nohz_full_cpu(dev->cpu), what is the purpose of this tick stopped
> special case?
>
> Is it because the next dynamic tick is a better prediction than the typical
> interval once the tick is stopped?

When !tick_nohz_full_cpu(dev->cpu), the tick is a safety net against
getting stuck in a shallow idle state for too long.  In that case, if
the tick is stopped, the safety net is not there and it is better to
use a deep state.

However, data->next_timer_ns is a lower limit for the idle state
target residency because this is when the next timer is going to
trigger.

> Does that mean we might become more "pessimistic" concerning the predicted idle
> time for nohz_full CPUs?

Yes, and not just we might, but we do unless the idle periods in the
workload are "long".

> I guess too shallow C-states are still better than too deep but there should be
> a word about that introduced side effect (if any).

Yeah, I agree.

That said, on a nohz_full CPU there is no safety net against getting
stuck in a shallow idle state because the tick is not present.  That's
why currently the governors don't allow shallow states to be used on
nohz_full CPUs.

The lack of a safety net is generally not a problem when the CPU has
been isolated to run something doing real work all the time, with
possible idle periods in the workload, but there are people who
isolate CPUs for energy-saving reasons and don't run anything on them
on purpose.  For those folks, the current behavior to select deep idle
states every time is actually desirable.

So there are two use cases that cannot be addressed at once and I'm
thinking about adding a control knob to allow the user to decide which
way to go.
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Frederic Weisbecker 2 weeks, 2 days ago
On Thu, Sep 11, 2025 at 07:07:42PM +0200, Rafael J. Wysocki wrote:
> On Thu, Sep 11, 2025 at 4:17 PM Frederic Weisbecker <frederic@kernel.org> wrote:
> > So, when !tick_nohz_full_cpu(dev->cpu), what is the purpose of this tick stopped
> > special case?
> >
> > Is it because the next dynamic tick is a better prediction than the typical
> > interval once the tick is stopped?
> 
> When !tick_nohz_full_cpu(dev->cpu), the tick is a safety net against
> getting stuck in a shallow idle state for too long.  In that case, if
> the tick is stopped, the safety net is not there and it is better to
> use a deep state.

Right.

> However, data->next_timer_ns is a lower limit for the idle state
> target residency because this is when the next timer is going to
> trigger.

Ok.

> 
> > Does that mean we might become more "pessimistic" concerning the predicted idle
> > time for nohz_full CPUs?
> 
> Yes, and not just we might, but we do unless the idle periods in the
> workload are "long".

Ok.

> 
> > I guess too shallow C-states are still better than too deep but there should be
> > a word about that introduced side effect (if any).
> 
> Yeah, I agree.
> 
> That said, on a nohz_full CPU there is no safety net against getting
> stuck in a shallow idle state because the tick is not present.  That's
> why currently the governors don't allow shallow states to be used on
> nohz_full CPUs.
> 
> The lack of a safety net is generally not a problem when the CPU has
> been isolated to run something doing real work all the time, with
> possible idle periods in the workload, but there are people who
> isolate CPUs for energy-saving reasons and don't run anything on them
> on purpose.  For those folks, the current behavior to select deep idle
> states every time is actually desirable.

So far I haven't heard from anybody using nohz_full for powersavings. If
you have I'd be curious about it. Whether a task runs tickless or not, it
still runs and the CPU isn't sleeping. Also CPU 0 stays periodic on nohz_full,
which alone is a problem for powersaving but also prevents a whole package
from entering low power mode on NUMA.

Let's say it's not optimized toward powersaving...

> So there are two use cases that cannot be addressed at once and I'm
> thinking about adding a control knob to allow the user to decide which
> way to go.

I'm tempted to say we should focus on having not too deep states,
at the expense of having too shallow. Of course I'm not entirely
comfortable with the idea because nohz_full CPUs may be idle for a while
on some workloads. And everyone deserves a rest at some point after
a long day.

I guess force restarting the tick upon idle entry would probably be
bad for tiny idle round-trips?

As for such a knob, I'm not sure anybody would use it.

Thanks.

-- 
Frederic Weisbecker
SUSE Labs
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Rafael J. Wysocki 1 week, 4 days ago
On Thu, Sep 18, 2025 at 5:07 PM Frederic Weisbecker <frederic@kernel.org> wrote:
>
> On Thu, Sep 11, 2025 at 07:07:42PM +0200, Rafael J. Wysocki wrote:
> > On Thu, Sep 11, 2025 at 4:17 PM Frederic Weisbecker <frederic@kernel.org> wrote:
> > > So, when !tick_nohz_full_cpu(dev->cpu), what is the purpose of this tick stopped
> > > special case?
> > >
> > > Is it because the next dynamic tick is a better prediction than the typical
> > > interval once the tick is stopped?
> >
> > When !tick_nohz_full_cpu(dev->cpu), the tick is a safety net against
> > getting stuck in a shallow idle state for too long.  In that case, if
> > the tick is stopped, the safety net is not there and it is better to
> > use a deep state.
>
> Right.
>
> > However, data->next_timer_ns is a lower limit for the idle state
> > target residency because this is when the next timer is going to
> > trigger.
>
> Ok.
>
> >
> > > Does that mean we might become more "pessimistic" concerning the predicted idle
> > > time for nohz_full CPUs?
> >
> > Yes, and not just we might, but we do unless the idle periods in the
> > workload are "long".
>
> Ok.
>
> >
> > > I guess too shallow C-states are still better than too deep but there should be
> > > a word about that introduced side effect (if any).
> >
> > Yeah, I agree.
> >
> > That said, on a nohz_full CPU there is no safety net against getting
> > stuck in a shallow idle state because the tick is not present.  That's
> > why currently the governors don't allow shallow states to be used on
> > nohz_full CPUs.
> >
> > The lack of a safety net is generally not a problem when the CPU has
> > been isolated to run something doing real work all the time, with
> > possible idle periods in the workload, but there are people who
> > isolate CPUs for energy-saving reasons and don't run anything on them
> > on purpose.  For those folks, the current behavior to select deep idle
> > states every time is actually desirable.
>
> So far I haven't heard from anybody using nohz_full for powersavings. If
> you have I'd be curious about it.

There is a project called LPMD that does this:

https://github.com/intel/intel-lpmd

> Whether a task runs tickless or not, it
> still runs and the CPU isn't sleeping. Also CPU 0 stays periodic on nohz_full,
> which alone is a problem for powersaving but also prevents a whole package
> from entering low power mode on NUMA.

That's not a problem for the above because it uses "isolation" for
taking some specific CPUs out of use (CPU0 is never one of them
AFAICS).

Also, it does depend on idle governors always putting those CPUs into
deep idle states.

> Let's say it's not optimized toward powersaving...

Oh well ...

> > So there are two use cases that cannot be addressed at once and I'm
> > thinking about adding a control knob to allow the user to decide which
> > way to go.
>
> I'm tempted to say we should focus on having not too deep states,
> at the expense of having too shallow. Of course I'm not entirely
> comfortable with the idea because nohz_full CPUs may be idle for a while
> on some workloads. And everyone deserves a rest at some point after
> a long day.

Right.

> I guess force restarting the tick upon idle entry would probably be
> bad for tiny idle round-trips?

It wouldn't be exactly cheap in terms of latency I think.

> As for such a knob, I'm not sure anybody would use it.

Fair enough.
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Christian Loehle 1 month, 3 weeks ago
On 8/13/25 11:29, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> When the menu governor runs on a nohz_full CPU and there are no user
> space timers in the workload on that CPU, it ends up selecting idle
> states with target residency values above TICK_NSEC all the time due to
> a tick_nohz_tick_stopped() check designed for a different use case.
> Namely, on nohz_full CPUs the fact that the tick has been stopped does
> not actually mean anything in particular, whereas in the other case it
> indicates that previously the CPU was expected to be idle sufficiently
> long for the tick to be stopped, so it is not unreasonable to expect
> it to be idle beyond the tick period length again.
>   
> In some cases, this behavior causes latency in the workload to grow
> undesirably.  It may also cause the workload to consume more energy
> than necessary if the CPU does not spend enough time in the selected
> deep idle states.
> 
> Address this by amending the tick_nohz_tick_stopped() check in question
> with a tick_nohz_full_cpu() one to avoid using the time till the next
> timer event as the predicted_ns value all the time on nohz_full CPUs.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> --- a/drivers/cpuidle/governors/menu.c
> +++ b/drivers/cpuidle/governors/menu.c
> @@ -293,8 +293,18 @@
>  	 * in a shallow idle state for a long time as a result of it.  In that
>  	 * case, say we might mispredict and use the known time till the closest
>  	 * timer event for the idle state selection.
> +	 *
> +	 * However, on nohz_full CPUs the tick does not run as a rule and the
> +	 * time till the closest timer event may always be effectively infinite,
> +	 * so using it as a replacement for the predicted idle duration would
> +	 * effectively always cause the prediction results to be discarded and
> +	 * deep idle states to be selected all the time.  That might introduce
> +	 * unwanted latency into the workload and cause more energy than
> +	 * necessary to be consumed if the discarded prediction results are
> +	 * actually accurate, so skip nohz_full CPUs here.
>  	 */
> -	if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
> +	if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
> +	    predicted_ns < TICK_NSEC)
>  		predicted_ns = data->next_timer_ns;
>  
>  	/*
> 
> 
> 

OTOH the behaviour with $SUBJECT possibly means that we use predicted_ns from
get_typical_interval() (which may suggest picking a shallow state based on
previous wakeup patterns) only then to never wake up again?
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Rafael J. Wysocki 1 month, 2 weeks ago
On Thu, Aug 14, 2025 at 4:09 PM Christian Loehle
<christian.loehle@arm.com> wrote:
>
> On 8/13/25 11:29, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >
> > When the menu governor runs on a nohz_full CPU and there are no user
> > space timers in the workload on that CPU, it ends up selecting idle
> > states with target residency values above TICK_NSEC all the time due to
> > a tick_nohz_tick_stopped() check designed for a different use case.
> > Namely, on nohz_full CPUs the fact that the tick has been stopped does
> > not actually mean anything in particular, whereas in the other case it
> > indicates that previously the CPU was expected to be idle sufficiently
> > long for the tick to be stopped, so it is not unreasonable to expect
> > it to be idle beyond the tick period length again.
> >
> > In some cases, this behavior causes latency in the workload to grow
> > undesirably.  It may also cause the workload to consume more energy
> > than necessary if the CPU does not spend enough time in the selected
> > deep idle states.
> >
> > Address this by amending the tick_nohz_tick_stopped() check in question
> > with a tick_nohz_full_cpu() one to avoid using the time till the next
> > timer event as the predicted_ns value all the time on nohz_full CPUs.
> >
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
> >  1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > --- a/drivers/cpuidle/governors/menu.c
> > +++ b/drivers/cpuidle/governors/menu.c
> > @@ -293,8 +293,18 @@
> >        * in a shallow idle state for a long time as a result of it.  In that
> >        * case, say we might mispredict and use the known time till the closest
> >        * timer event for the idle state selection.
> > +      *
> > +      * However, on nohz_full CPUs the tick does not run as a rule and the
> > +      * time till the closest timer event may always be effectively infinite,
> > +      * so using it as a replacement for the predicted idle duration would
> > +      * effectively always cause the prediction results to be discarded and
> > +      * deep idle states to be selected all the time.  That might introduce
> > +      * unwanted latency into the workload and cause more energy than
> > +      * necessary to be consumed if the discarded prediction results are
> > +      * actually accurate, so skip nohz_full CPUs here.
> >        */
> > -     if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
> > +     if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
> > +         predicted_ns < TICK_NSEC)
> >               predicted_ns = data->next_timer_ns;
> >
> >       /*
> >
> >
> >
>
> OTOH the behaviour with $SUBJECT possibly means that we use predicted_ns from
> get_typical_interval() (which may suggest picking a shallow state based on
> previous wakeup patterns) only then to never wake up again?

Yes, there is this risk, but the current behavior is more damaging IMV
because it (potentially) hurts both energy efficiency and performance.

It is also arguably easier for the user to remedy getting stuck in a
shallow idle state than to change the governor's behavior (PM QoS is a bit
too blunt for this).

Moreover, configuring CPUs as nohz_full and leaving them in long idle
may not be the most efficient use of them.
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Christian Loehle 1 month, 2 weeks ago
On 8/18/25 18:41, Rafael J. Wysocki wrote:
> On Thu, Aug 14, 2025 at 4:09 PM Christian Loehle
> <christian.loehle@arm.com> wrote:
>>
>> On 8/13/25 11:29, Rafael J. Wysocki wrote:
>>> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
>>>
>>> When the menu governor runs on a nohz_full CPU and there are no user
>>> space timers in the workload on that CPU, it ends up selecting idle
>>> states with target residency values above TICK_NSEC all the time due to
>>> a tick_nohz_tick_stopped() check designed for a different use case.
>>> Namely, on nohz_full CPUs the fact that the tick has been stopped does
>>> not actually mean anything in particular, whereas in the other case it
>>> indicates that previously the CPU was expected to be idle sufficiently
>>> long for the tick to be stopped, so it is not unreasonable to expect
>>> it to be idle beyond the tick period length again.
>>>
>>> In some cases, this behavior causes latency in the workload to grow
>>> undesirably.  It may also cause the workload to consume more energy
>>> than necessary if the CPU does not spend enough time in the selected
>>> deep idle states.
>>>
>>> Address this by amending the tick_nohz_tick_stopped() check in question
>>> with a tick_nohz_full_cpu() one to avoid using the time till the next
>>> timer event as the predicted_ns value all the time on nohz_full CPUs.
>>>
>>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
>>> ---
>>>  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
>>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> --- a/drivers/cpuidle/governors/menu.c
>>> +++ b/drivers/cpuidle/governors/menu.c
>>> @@ -293,8 +293,18 @@
>>>        * in a shallow idle state for a long time as a result of it.  In that
>>>        * case, say we might mispredict and use the known time till the closest
>>>        * timer event for the idle state selection.
>>> +      *
>>> +      * However, on nohz_full CPUs the tick does not run as a rule and the
>>> +      * time till the closest timer event may always be effectively infinite,
>>> +      * so using it as a replacement for the predicted idle duration would
>>> +      * effectively always cause the prediction results to be discarded and
>>> +      * deep idle states to be selected all the time.  That might introduce
>>> +      * unwanted latency into the workload and cause more energy than
>>> +      * necessary to be consumed if the discarded prediction results are
>>> +      * actually accurate, so skip nohz_full CPUs here.
>>>        */
>>> -     if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
>>> +     if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
>>> +         predicted_ns < TICK_NSEC)
>>>               predicted_ns = data->next_timer_ns;
>>>
>>>       /*
>>>
>>>
>>>
>>
>> OTOH the behaviour with $SUBJECT possibly means that we use predicted_ns from
>> get_typical_interval() (which may suggest picking a shallow state based on
>> previous wakeup patterns) only then to never wake up again?
> 
> Yes, there is this risk, but the current behavior is more damaging IMV
> because it (potentially) hurts both energy efficiency and performance.
> 
> It is also arguably easier for the user to remedy getting stuck in a
> shallow idle state than to change the governor's behavior (PM QoS is a bit
> too blunt for this).
> 
> Moreover, configuring CPUs as nohz_full and leaving them in long idle
> may not be the most efficient use of them.

True; on the other hand, the setup cost for nohz_full is so high that you'd
expect additionally disabling idle states depending on the workload isn't too
much to ask for...
Anyway feel free to go ahead.
Re: [PATCH v1 3/3] cpuidle: governors: menu: Special-case nohz_full CPUs
Posted by Rafael J. Wysocki 1 month, 2 weeks ago
On Tue, Aug 19, 2025 at 11:10 AM Christian Loehle
<christian.loehle@arm.com> wrote:
>
> On 8/18/25 18:41, Rafael J. Wysocki wrote:
> > On Thu, Aug 14, 2025 at 4:09 PM Christian Loehle
> > <christian.loehle@arm.com> wrote:
> >>
> >> On 8/13/25 11:29, Rafael J. Wysocki wrote:
> >>> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >>>
> >>> When the menu governor runs on a nohz_full CPU and there are no user
> >>> space timers in the workload on that CPU, it ends up selecting idle
> >>> states with target residency values above TICK_NSEC all the time due to
> >>> a tick_nohz_tick_stopped() check designed for a different use case.
> >>> Namely, on nohz_full CPUs the fact that the tick has been stopped does
> >>> not actually mean anything in particular, whereas in the other case it
> >>> indicates that previously the CPU was expected to be idle sufficiently
> >>> long for the tick to be stopped, so it is not unreasonable to expect
> >>> it to be idle beyond the tick period length again.
> >>>
> >>> In some cases, this behavior causes latency in the workload to grow
> >>> undesirably.  It may also cause the workload to consume more energy
> >>> than necessary if the CPU does not spend enough time in the selected
> >>> deep idle states.
> >>>
> >>> Address this by amending the tick_nohz_tick_stopped() check in question
> >>> with a tick_nohz_full_cpu() one to avoid using the time till the next
> >>> timer event as the predicted_ns value all the time on nohz_full CPUs.
> >>>
> >>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >>> ---
> >>>  drivers/cpuidle/governors/menu.c |   12 +++++++++++-
> >>>  1 file changed, 11 insertions(+), 1 deletion(-)
> >>>
> >>> --- a/drivers/cpuidle/governors/menu.c
> >>> +++ b/drivers/cpuidle/governors/menu.c
> >>> @@ -293,8 +293,18 @@
> >>>        * in a shallow idle state for a long time as a result of it.  In that
> >>>        * case, say we might mispredict and use the known time till the closest
> >>>        * timer event for the idle state selection.
> >>> +      *
> >>> +      * However, on nohz_full CPUs the tick does not run as a rule and the
> >>> +      * time till the closest timer event may always be effectively infinite,
> >>> +      * so using it as a replacement for the predicted idle duration would
> >>> +      * effectively always cause the prediction results to be discarded and
> >>> +      * deep idle states to be selected all the time.  That might introduce
> >>> +      * unwanted latency into the workload and cause more energy than
> >>> +      * necessary to be consumed if the discarded prediction results are
> >>> +      * actually accurate, so skip nohz_full CPUs here.
> >>>        */
> >>> -     if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
> >>> +     if (tick_nohz_tick_stopped() && !tick_nohz_full_cpu(dev->cpu) &&
> >>> +         predicted_ns < TICK_NSEC)
> >>>               predicted_ns = data->next_timer_ns;
> >>>
> >>>       /*
> >>>
> >>>
> >>>
> >>
> >> OTOH the behaviour with $SUBJECT possibly means that we use predicted_ns from
> >> get_typical_interval() (which may suggest picking a shallow state based on
> >> previous wakeup patterns) only then to never wake up again?
> >
> > Yes, there is this risk, but the current behavior is more damaging IMV
> > because it (potentially) hurts both energy efficiency and performance.
> >
> > It is also arguably easier for the user to remedy getting stuck in a
> > shallow idle state than to change the governor's behavior (PM QoS is a bit
> > too blunt for this).
> >
> > Moreover, configuring CPUs as nohz_full and leaving them in long idle
> > may not be the most efficient use of them.
>
> True; on the other hand, the setup cost for nohz_full is so high that you'd
> expect additionally disabling idle states depending on the workload isn't too
> much to ask for...

Apparently, there are cases in which there is enough idle time to ask
for a deep idle state often enough, but as a rule the idle periods are
relatively short.  In those cases, one would need to change the QoS
limit back and forth in anticipation of the "busier" and "calmer"
periods in the workload, which would be kind of equivalent to
implementing an idle governor in user space.

> Anyway feel free to go ahead.

Thank you!