[PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Vincent Guittot 1 week ago
The delayed dequeue feature keeps a sleeping sched_entity enqueued until its
lag has elapsed. As a result, it also stays visible in the statistics that
are used to balance the system, in particular the field h_nr_running.

This series fixes those metrics by renaming h_nr_running into h_nr_queued,
which tracks all queued tasks, and by creating a new h_nr_runnable that
restores the original semantics of h_nr_running, i.e. tracking the number
of fair tasks that want to run.
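
As a reminder of the mechanism, the DELAY_DEQUEUE path in dequeue_entity()
looks roughly like this (a simplified sketch, not the exact upstream code):

	if ((flags & DEQUEUE_SLEEP) && !entity_eligible(cfs_rq, se)) {
		/*
		 * The entity still owes negative lag: keep it enqueued,
		 * and thus visible in the stats, until its lag elapses.
		 */
		se->sched_delayed = 1;
		return false;
	}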

h_nr_runnable is used in several places to make load-balancing decisions:
  - PELT runnable_avg (sketched below)
  - deciding if a group is overloaded or has spare capacity
  - NUMA stats
  - reduced capacity management
  - load balance between groups
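
For the PELT case, the update in pelt.c would feed the new field as the
runnable contribution, roughly as below (a sketch; the exact call site in
the series may differ):

	int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
	{
		if (___update_load_sum(now, &cfs_rq->avg,
				       scale_load_down(cfs_rq->load.weight),
				       cfs_rq->h_nr_runnable, /* was based on h_nr_running */
				       cfs_rq->curr != NULL)) {
			___update_load_avg(&cfs_rq->avg, 1);
			return 1;
		}

		return 0;
	}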

While fixing h_nr_running, some fields have been renamed to follow the
same pattern. We now have:
  - cfs.h_nr_runnable : runnable tasks in the hierarchy
  - cfs.h_nr_queued : enqueued tasks in the hierarchy, either runnable or
      delayed dequeue
  - cfs.h_nr_idle : enqueued sched idle tasks in the hierarchy

cfs.nr_running has been renamed to cfs.nr_queued because it includes
delayed-dequeue entities.
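
The intended relationship between the counters can be summarized by the
following invariant (illustrative only, not actual kernel code; nr_delayed
is a hypothetical count of delayed-dequeue tasks below this cfs_rq):

	static inline void check_h_nr_counters(struct cfs_rq *cfs_rq,
					       unsigned int nr_delayed)
	{
		/* queued = runnable + delayed dequeue */
		SCHED_WARN_ON(cfs_rq->h_nr_queued !=
			      cfs_rq->h_nr_runnable + nr_delayed);
		/* sched idle tasks are a subset of the queued ones */
		SCHED_WARN_ON(cfs_rq->h_nr_idle > cfs_rq->h_nr_queued);
	}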

The unused cfs.idle_nr_running field has been removed.

Load balancing compares the number of running tasks when selecting the
busiest group or runqueue, and tries to migrate a runnable task rather
than a sleeping delayed-dequeue one.
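
Such a filter in can_migrate_task() could look roughly like the sketch
below (hedged: the actual patch may gate this differently, e.g. still
allowing the migration when only load is being moved):

	/*
	 * A delayed-dequeue task is not actually runnable, so migrating
	 * it cannot reduce the number of running tasks on the busiest
	 * runqueue.
	 */
	if (p->se.sched_delayed && env->migration_type != migrate_load)
		return 0;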

Note that this series doesn't fix the problem of delayed-dequeue tasks
that can't migrate at wakeup.

Some additional cleanups have been added:
  - move variable declarations to the beginning of pick_next_entity()
  - sched_can_stop_tick() should use cfs.h_nr_queued instead of
    cfs.nr_queued (previously cfs.nr_running) to know how many tasks
    are running in the whole hierarchy rather than how many entities
    sit at the root level (sketched below)
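
Roughly, the intent in sched_can_stop_tick() is the following (a sketch,
surrounding context elided):

	/*
	 * cfs.nr_queued only counts entities at the root level, so a
	 * single group entity containing several tasks would let the
	 * tick stop; use the hierarchical task count instead.
	 */
	if (rq->cfs.h_nr_queued > 1)
		return false;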

Changes since v1:
- reorder the patches
- rename fields to:
  - h_nr_queued for all queued tasks, both runnable and delayed dequeue
  - h_nr_runnable for all runnable tasks
  - h_nr_idle for all tasks with the sched idle policy
- clean up how h_nr_runnable is updated in enqueue_task_fair() and
  dequeue_entities()

Peter Zijlstra (1):
  sched/eevdf: More PELT vs DELAYED_DEQUEUE

Vincent Guittot (9):
  sched/fair: Rename h_nr_running into h_nr_queued
  sched/fair: Add new cfs_rq.h_nr_runnable
  sched/fair: Remove unused cfs_rq.h_nr_delayed
  sched/fair: Rename cfs_rq.idle_h_nr_running into h_nr_idle
  sched/fair: Remove unused cfs_rq.idle_nr_running
  sched/fair: Rename cfs_rq.nr_running into nr_queued
  sched/fair: Do not try to migrate delayed dequeue task
  sched/fair: Fix sched_can_stop_tick() for fair tasks
  sched/fair: Fix variable declaration position

 kernel/sched/core.c  |   4 +-
 kernel/sched/debug.c |  15 ++-
 kernel/sched/fair.c  | 236 +++++++++++++++++++++++++------------------
 kernel/sched/pelt.c  |   4 +-
 kernel/sched/sched.h |  12 +--
 5 files changed, 152 insertions(+), 119 deletions(-)

-- 
2.43.0
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Mike Galbraith 5 days, 14 hours ago
Greetings,

On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:
> The delayed dequeue feature keeps a sleeping sched_entity enqueued until its
> lag has elapsed. As a result, it also stays visible in the statistics that
> are used to balance the system, in particular the field h_nr_running.

[...]

> h_nr_runnable is used in several places to make load-balancing decisions:
>   - PELT runnable_avg
>   - deciding if a group is overloaded or has spare capacity
>   - NUMA stats
>   - reduced capacity management
>   - load balance between groups

I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
runnable seems to have an off-by-one issue, causing it to wander ever
further south.

patches 1-3 applied.
  .h_nr_runnable                 : -3046
  .runnable_avg                  : 450189777126

full set applied.
  .h_nr_runnable                 : -5707
  .runnable_avg                  : 4391793519526

	-Mike
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Vincent Guittot 4 days, 18 hours ago
On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
>
> Greetings,
>
> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

>
> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
> runnable seems to have an off-by-one issue, causing it to wander ever
> further south.
>
> patches 1-3 applied.
>   .h_nr_runnable                 : -3046
>   .runnable_avg                  : 450189777126

Yeah, I messed up something around finish_delayed_dequeue_entity().
I'm going to prepare a v3.

>
> full set applied.
>   .h_nr_runnable                 : -5707
>   .runnable_avg                  : 4391793519526
>
>         -Mike
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Mike Galbraith 4 days, 9 hours ago
On Mon, 2024-12-02 at 10:17 +0100, Vincent Guittot wrote:
> On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
> >
> > I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
> > runnable seems to have an off-by-one issue, causing it to wander ever
> > further south.
> >
> > patches 1-3 applied.
> >   .h_nr_runnable                 : -3046
> >   .runnable_avg                  : 450189777126
>
> Yeah, I messed up something around finish_delayed_dequeue_entity().
> I'm going to prepare a v3.

v3 is all better with my light config.  I'll plug it into an rt tree
with an enterprise config and give it some exercise.

	-Mike
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by K Prateek Nayak 4 days, 17 hours ago
Hello Vincent, Mike,

On 12/2/2024 2:47 PM, Vincent Guittot wrote:
> On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
>>
>> Greetings,
>>
>> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

>>
>> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
>> runnable seems to have an off-by-one issue, causing it to wander ever
>> further south.
>>
>> patches 1-3 applied.
>>    .h_nr_runnable                 : -3046
>>    .runnable_avg                  : 450189777126
> 
> Yeah, I messed up something around finish_delayed_dequeue_entity().
> I'm going to prepare a v3.

I was looking into this and I have the below diff so far, which seems to
solve the post-boot negative values of h_nr_runnable on my setup; it is
only lightly tested so far:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 87552870958c..423981e65aba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5464,6 +5464,10 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
  static void set_delayed(struct sched_entity *se)
  {
  	se->sched_delayed = 1;
+
+	if (!entity_is_task(se))
+		return;
+
  	for_each_sched_entity(se) {
  		struct cfs_rq *cfs_rq = cfs_rq_of(se);
  
@@ -5476,6 +5480,10 @@ static void set_delayed(struct sched_entity *se)
  static void clear_delayed(struct sched_entity *se)
  {
  	se->sched_delayed = 0;
+
+	if (!entity_is_task(se))
+		return;
+
  	for_each_sched_entity(se) {
  		struct cfs_rq *cfs_rq = cfs_rq_of(se);
  
@@ -6977,7 +6985,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
  	struct cfs_rq *cfs_rq;
  	struct sched_entity *se = &p->se;
  	int h_nr_idle = task_has_idle_policy(p);
-	int h_nr_runnable = 0;
+	int h_nr_runnable = 1;
  	int task_new = !(flags & ENQUEUE_WAKEUP);
  	int rq_h_nr_queued = rq->cfs.h_nr_queued;
  	u64 slice = 0;
@@ -7124,8 +7132,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
  		p = task_of(se);
  		h_nr_queued = 1;
  		h_nr_idle = task_has_idle_policy(p);
-		if (!task_sleep && !task_delayed)
-			h_nr_runnable = !se->sched_delayed;
+		h_nr_runnable = !se->sched_delayed;
  	} else {
  		cfs_rq = group_cfs_rq(se);
  		slice = cfs_rq_min_slice(cfs_rq);
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index ab911d1335ba..f4ef5aaa4674 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -457,6 +457,7 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
  
  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
  {
+	SCHED_WARN_ON(rq->cfs.h_nr_runnable);
  	update_idle_core(rq);
  	scx_update_idle(rq, true);
  	schedstat_inc(rq->sched_goidle);
--

I'm not sure the change in dequeue_entities() is completely necessary,
but I added it after seeing the (DEQUEUE_SLEEP | DEQUEUE_DELAYED) in
throttle_cfs_rq(); there, however, the entity cannot possibly be a task,
so perhaps that part is unnecessary ¯\_(ツ)_/¯

Still testing! Will keep an eye out for v3.

> 
>>
>> full set applied.
>>    .h_nr_runnable                 : -5707
>>    .runnable_avg                  : 4391793519526
>>
>>          -Mike

-- 
Thanks and Regards,
Prateek

Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Vincent Guittot 4 days, 15 hours ago
On Mon, 2 Dec 2024 at 10:58, K Prateek Nayak <kprateek.nayak@amd.com> wrote:
>
> Hello Vincent, Mike,
>
> On 12/2/2024 2:47 PM, Vincent Guittot wrote:
> > On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
> >>
> >> Greetings,
> >>
> >> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

> >>
> >> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
> >> runnable seems to have an off-by-one issue, causing it to wander ever
> >> further south.
> >>
> >> patches 1-3 applied.
> >>    .h_nr_runnable                 : -3046
> >>    .runnable_avg                  : 450189777126
> >
> > Yeah, I messed up something around finish_delayed_dequeue_entity().
> > I'm going to prepare a v3.
>
> I was looking into this and I have the below diff so far that seems to
> solve the post boot negative values of h_nr_runnable on my setup; it is
> only lightly tested so far:
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 87552870958c..423981e65aba 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5464,6 +5464,10 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
>   static void set_delayed(struct sched_entity *se)
>   {
>         se->sched_delayed = 1;
> +
> +       if (!entity_is_task(se))
> +               return;
> +
>         for_each_sched_entity(se) {
>                 struct cfs_rq *cfs_rq = cfs_rq_of(se);
>
> @@ -5476,6 +5480,10 @@ static void set_delayed(struct sched_entity *se)
>   static void clear_delayed(struct sched_entity *se)
>   {
>         se->sched_delayed = 0;
> +
> +       if (!entity_is_task(se))
> +               return;
> +
>         for_each_sched_entity(se) {
>                 struct cfs_rq *cfs_rq = cfs_rq_of(se);
>
> @@ -6977,7 +6985,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>         struct cfs_rq *cfs_rq;
>         struct sched_entity *se = &p->se;
>         int h_nr_idle = task_has_idle_policy(p);
> -       int h_nr_runnable = 0;
> +       int h_nr_runnable = 1;

I missed inverting the default value when moving from h_nr_delayed to h_nr_runnable.

>         int task_new = !(flags & ENQUEUE_WAKEUP);
>         int rq_h_nr_queued = rq->cfs.h_nr_queued;
>         u64 slice = 0;
> @@ -7124,8 +7132,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
>                 p = task_of(se);
>                 h_nr_queued = 1;
>                 h_nr_idle = task_has_idle_policy(p);
> -               if (!task_sleep && !task_delayed)
> -                       h_nr_runnable = !se->sched_delayed;
> +               h_nr_runnable = !se->sched_delayed;
>         } else {
>                 cfs_rq = group_cfs_rq(se);
>                 slice = cfs_rq_min_slice(cfs_rq);
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index ab911d1335ba..f4ef5aaa4674 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -457,6 +457,7 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
>
>   static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
>   {
> +       SCHED_WARN_ON(rq->cfs.h_nr_runnable);
>         update_idle_core(rq);
>         scx_update_idle(rq, true);
>         schedstat_inc(rq->sched_goidle);
> --
>
> I'm not sure if the change in dequeue_entities() is completely necessary
> but I added it after seeing the (DEQUEUE_SLEEP | DEQUEUE_DELAYED) in
> throttle_cfs_rq() but there the entity cannot possibly be a task so
> perhaps that part is unnecessary ¯\_(ツ)_/¯
>
> Still testing! Will keep an eye out for v3.
>
> >
> >>
> >> full set applied.
> >>    .h_nr_runnable                 : -5707
> >>    .runnable_avg                  : 4391793519526
> >>
> >>          -Mike
>
> --
> Thanks and Regards,
> Prateek
>
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Vincent Guittot 4 days, 13 hours ago
On Mon, 2 Dec 2024 at 12:42, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Mon, 2 Dec 2024 at 10:58, K Prateek Nayak <kprateek.nayak@amd.com> wrote:
> >
> > Hello Vincent, Mike,
> >
> > On 12/2/2024 2:47 PM, Vincent Guittot wrote:
> > > On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
> > >>
> > >> Greetings,
> > >>
> > >> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

> > >>
> > >> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
> > >> runnable seems to have an off-by-one issue, causing it to wander ever
> > >> further south.
> > >>
> > >> patches 1-3 applied.
> > >>    .h_nr_runnable                 : -3046
> > >>    .runnable_avg                  : 450189777126
> > >
> > > Yeah, I messed up something around finish_delayed_dequeue_entity().
> > > I'm going to prepare a v3.
> >
> > I was looking into this and I have the below diff so far that seems to
> > solve the post boot negative values of h_nr_runnable on my setup; it is
> > only lightly tested so far:
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 87552870958c..423981e65aba 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5464,6 +5464,10 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
> >   static void set_delayed(struct sched_entity *se)
> >   {
> >         se->sched_delayed = 1;
> > +
> > +       if (!entity_is_task(se))
> > +               return;
> > +
> >         for_each_sched_entity(se) {
> >                 struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >
> > @@ -5476,6 +5480,10 @@ static void set_delayed(struct sched_entity *se)
> >   static void clear_delayed(struct sched_entity *se)
> >   {
> >         se->sched_delayed = 0;
> > +
> > +       if (!entity_is_task(se))
> > +               return;
> > +
> >         for_each_sched_entity(se) {
> >                 struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >
> > @@ -6977,7 +6985,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >         struct cfs_rq *cfs_rq;
> >         struct sched_entity *se = &p->se;
> >         int h_nr_idle = task_has_idle_policy(p);
> > -       int h_nr_runnable = 0;
> > +       int h_nr_runnable = 1;
>
> I missed inverting the default value when moving from h_nr_delayed to h_nr_runnable.
>
> >         int task_new = !(flags & ENQUEUE_WAKEUP);
> >         int rq_h_nr_queued = rq->cfs.h_nr_queued;
> >         u64 slice = 0;
> > @@ -7124,8 +7132,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
> >                 p = task_of(se);
> >                 h_nr_queued = 1;
> >                 h_nr_idle = task_has_idle_policy(p);
> > -               if (!task_sleep && !task_delayed)
> > -                       h_nr_runnable = !se->sched_delayed;
> > +               h_nr_runnable = !se->sched_delayed;

And I screwed up h_nr_runnable here as well.

> >         } else {
> >                 cfs_rq = group_cfs_rq(se);
> >                 slice = cfs_rq_min_slice(cfs_rq);
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index ab911d1335ba..f4ef5aaa4674 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -457,6 +457,7 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
> >
> >   static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
> >   {
> > +       SCHED_WARN_ON(rq->cfs.h_nr_runnable);
> >         update_idle_core(rq);
> >         scx_update_idle(rq, true);
> >         schedstat_inc(rq->sched_goidle);
> > --
> >
> > I'm not sure if the change in dequeue_entities() is completely necessary
> > but I added it after seeing the (DEQUEUE_SLEEP | DEQUEUE_DELAYED) in
> > throttle_cfs_rq() but there the entity cannot possibly be a task so
> > perhaps that part is unnecessary ¯\_(ツ)_/¯
> >
> > Still testing! Will keep an eye out for v3.
> >
> > >
> > >>
> > >> full set applied.
> > >>    .h_nr_runnable                 : -5707
> > >>    .runnable_avg                  : 4391793519526
> > >>
> > >>          -Mike
> >
> > --
> > Thanks and Regards,
> > Prateek
> >
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Luis Machado 4 days, 18 hours ago
On 12/2/24 09:17, Vincent Guittot wrote:
> On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
>>
>> Greetings,
>>
>> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

>>
>> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
>> runnable seems to have an off-by-one issue, causing it to wander ever
>> further south.
>>
>> patches 1-3 applied.
>>   .h_nr_runnable                 : -3046
>>   .runnable_avg                  : 450189777126
> 
> Yeah, I messed up something around finish_delayed_dequeue_entity().
> I'm going to prepare a v3.

Maybe something similar to what I ran into here?

https://lore.kernel.org/lkml/6df12fde-1e0d-445f-8f8a-736d11f9ee41@arm.com/

>>
>> full set applied.
>>   .h_nr_runnable                 : -5707
>>   .runnable_avg                  : 4391793519526
>>
>>         -Mike
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Vincent Guittot 4 days, 17 hours ago
On Mon, 2 Dec 2024 at 10:23, Luis Machado <luis.machado@arm.com> wrote:
>
> On 12/2/24 09:17, Vincent Guittot wrote:
> > On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
> >>
> >> Greetings,
> >>
> >> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

> >>
> >> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
> >> runnable seems to have an off-by-one issue, causing it to wander ever
> >> further south.
> >>
> >> patches 1-3 applied.
> >>   .h_nr_runnable                 : -3046
> >>   .runnable_avg                  : 450189777126
> >
> > Yeah, I messed up something around finish_delayed_dequeue_entity().
> > I'm going to prepare a v3.
>
> Maybe something similar to what I ran into here?
>
> https://lore.kernel.org/lkml/6df12fde-1e0d-445f-8f8a-736d11f9ee41@arm.com/

I'm going to have a look.

>
> >>
> >> full set applied.
> >>   .h_nr_runnable                 : -5707
> >>   .runnable_avg                  : 4391793519526
> >>
> >>         -Mike
>
Re: [PATCH 0/10 v2] sched/fair: Fix statistics with delayed dequeue
Posted by Dietmar Eggemann 4 days, 13 hours ago
On 02/12/2024 10:59, Vincent Guittot wrote:
> On Mon, 2 Dec 2024 at 10:23, Luis Machado <luis.machado@arm.com> wrote:
>>
>> On 12/2/24 09:17, Vincent Guittot wrote:
>>> On Sun, 1 Dec 2024 at 14:30, Mike Galbraith <efault@gmx.de> wrote:
>>>>
>>>> Greetings,
>>>>
>>>> On Fri, 2024-11-29 at 17:17 +0100, Vincent Guittot wrote:

[...]

>>>>> h_nr_runnable is used in several places to make load-balancing decisions:
>>>>>   - PELT runnable_avg
>>>>>   - deciding if a group is overloaded or has spare capacity
>>>>>   - NUMA stats
>>>>>   - reduced capacity management
>>>>>   - load balance between groups
>>>>
>>>> I took the series for a spin in tip v6.12-10334-gb1b238fba309, but
>>>> runnable seems to have an off-by-one issue, causing it to wander ever
>>>> further south.
>>>>
>>>> patches 1-3 applied.
>>>>   .h_nr_runnable                 : -3046
>>>>   .runnable_avg                  : 450189777126
>>>
>>> Yeah, I messed up something around finish_delayed_dequeue_entity().
>>> I'm going to prepare a v3.
>>
>> Maybe something similar to what I ran into here?
>>
>> https://lore.kernel.org/lkml/6df12fde-1e0d-445f-8f8a-736d11f9ee41@arm.com/
> 
> I'm going to have a look

Looks like this is not an issue anymore since commit 98442f0ccd82
("sched: Fix delayed_dequeue vs switched_from_fair()") removed the
finish_delayed_dequeue_entity() call from switched_from_fair() in the
meantime.

[...]