[PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Peter Zijlstra 1 year, 4 months ago
Note that tasks that are kept on the runqueue to burn off negative
lag, are not in fact runnable anymore, they'll get dequeued the moment
they get picked.

As such, don't count this time towards runnable.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c  |    2 ++
 kernel/sched/sched.h |    6 ++++++
 2 files changed, 8 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 			if (cfs_rq->next == se)
 				cfs_rq->next = NULL;
 			se->sched_delayed = 1;
+			update_load_avg(cfs_rq, se, 0);
 			return false;
 		}
 	}
@@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
 	}
 
 	se->sched_delayed = 0;
+	update_load_avg(cfs_rq, se, 0);
 }
 
 /*
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -816,6 +816,9 @@ static inline void se_update_runnable(st
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	if (entity_is_task(se))
 		return !!se->on_rq;
 	else
@@ -830,6 +833,9 @@ static inline void se_update_runnable(st
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	return !!se->on_rq;
 }
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Valentin Schneider 1 year, 4 months ago
On 27/07/24 12:27, Peter Zijlstra wrote:
> Note that tasks that are kept on the runqueue to burn off negative
> lag, are not in fact runnable anymore, they'll get dequeued the moment
> they get picked.
>
> As such, don't count this time towards runnable.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/sched/fair.c  |    2 ++
>  kernel/sched/sched.h |    6 ++++++
>  2 files changed, 8 insertions(+)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
>                       if (cfs_rq->next == se)
>                               cfs_rq->next = NULL;
>                       se->sched_delayed = 1;
> +			update_load_avg(cfs_rq, se, 0);

Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
see the time delta as spent being runnable.
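
To make that concrete, here is a minimal userspace sketch (toy names, no
decay, nothing like the real PELT math) of how the runnable state seen at
update time decides whether the elapsed window gets credited:

#include <stdio.h>

struct toy_se {
	int sched_delayed;
	unsigned long long last_update;
	unsigned long long runnable_sum;
};

static int toy_se_runnable(struct toy_se *se)
{
	return se->sched_delayed ? 0 : 1;
}

/* Stand-in for update_load_avg()/accumulate_sum(): credit the elapsed
 * delta only if the entity counts as runnable *right now*. */
static void toy_update_load_avg(struct toy_se *se, unsigned long long now)
{
	unsigned long long delta = now - se->last_update;

	if (toy_se_runnable(se))
		se->runnable_sum += delta;
	se->last_update = now;
}

int main(void)
{
	struct toy_se a = { 0 }, b = { 0 };

	/* Both entities were actually runnable from t=0 to t=1000. */

	/* Ordering as posted: mark delayed, then update. */
	a.sched_delayed = 1;
	toy_update_load_avg(&a, 1000);

	/* Suggested ordering: update first, then mark delayed. */
	toy_update_load_avg(&b, 1000);
	b.sched_delayed = 1;

	printf("delayed-first: runnable_sum = %llu\n", a.runnable_sum); /* 0    */
	printf("update-first:  runnable_sum = %llu\n", b.runnable_sum); /* 1000 */
	return 0;
}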

>                       return false;
>               }
>       }
> @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
>       }
>
>       se->sched_delayed = 0;
> +	update_load_avg(cfs_rq, se, 0);

Ditto on the ordering

>  }
>
>  /*
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -816,6 +816,9 @@ static inline void se_update_runnable(st
>
>  static inline long se_runnable(struct sched_entity *se)
>  {
> +	if (se->sched_delayed)
> +		return false;
> +

Per __update_load_avg_se(), delayed-dequeue entities are still ->on_rq, so
their load signal will increase. Do we want a similar helper for the @load
input of ___update_load_sum()?
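
Something like the below is what I had in mind, purely as an illustration
(se_loadable() is a made-up name, not a proposal):

static inline long se_loadable(struct sched_entity *se)
{
	if (se->sched_delayed)
		return false;

	return !!se->on_rq;
}

with __update_load_avg_se() passing se_loadable(se) instead of !!se->on_rq
as the @load argument -- assuming I'm reading ___update_load_sum()'s
arguments right.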


>       if (entity_is_task(se))
>               return !!se->on_rq;
>       else
> @@ -830,6 +833,9 @@ static inline void se_update_runnable(st
>
>  static inline long se_runnable(struct sched_entity *se)
>  {
> +	if (se->sched_delayed)
> +		return false;
> +
>       return !!se->on_rq;
>  }
>
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Peter Zijlstra 1 year, 4 months ago
On Tue, Aug 13, 2024 at 02:43:56PM +0200, Valentin Schneider wrote:
> On 27/07/24 12:27, Peter Zijlstra wrote:
> > Note that tasks that are kept on the runqueue to burn off negative
> > lag, are not in fact runnable anymore, they'll get dequeued the moment
> > they get picked.
> >
> > As such, don't count this time towards runnable.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  kernel/sched/fair.c  |    2 ++
> >  kernel/sched/sched.h |    6 ++++++
> >  2 files changed, 8 insertions(+)
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> >                       if (cfs_rq->next == se)
> >                               cfs_rq->next = NULL;
> >                       se->sched_delayed = 1;
> > +			update_load_avg(cfs_rq, se, 0);
> 
> Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
> see the time delta as spent being runnable.
> 
> >                       return false;
> >               }
> >       }
> > @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
> >       }
> >
> >       se->sched_delayed = 0;
> > +	update_load_avg(cfs_rq, se, 0);
> 
> Ditto on the ordering

Bah, so I remember thinking about it and then I obviously go and do it
the exact wrong way around eh? Let me double check this tomorrow morning
with the brain slightly more awake :/

> >  }
> >
> >  /*
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -816,6 +816,9 @@ static inline void se_update_runnable(st
> >
> >  static inline long se_runnable(struct sched_entity *se)
> >  {
> > +	if (se->sched_delayed)
> > +		return false;
> > +
> 
> Per __update_load_avg_se(), delayed-dequeue entities are still ->on_rq, so
> their load signal will increase. Do we want a similar helper for the @load
> input of ___update_load_sum()?

So the whole reason to keep them enqueued is so that they can continue
to compete for vruntime, and vruntime is load based. So it would be very
weird to remove them from load.
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Vincent Guittot 1 year, 4 months ago
On Wed, 14 Aug 2024 at 00:18, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Aug 13, 2024 at 02:43:56PM +0200, Valentin Schneider wrote:
> > On 27/07/24 12:27, Peter Zijlstra wrote:
> > > Note that tasks that are kept on the runqueue to burn off negative
> > > lag, are not in fact runnable anymore, they'll get dequeued the moment
> > > they get picked.
> > >
> > > As such, don't count this time towards runnable.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > >  kernel/sched/fair.c  |    2 ++
> > >  kernel/sched/sched.h |    6 ++++++
> > >  2 files changed, 8 insertions(+)
> > >
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> > >                       if (cfs_rq->next == se)
> > >                               cfs_rq->next = NULL;
> > >                       se->sched_delayed = 1;
> > > +                   update_load_avg(cfs_rq, se, 0);
> >
> > Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
> > see the time delta as spent being runnable.
> >
> > >                       return false;
> > >               }
> > >       }
> > > @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
> > >       }
> > >
> > >       se->sched_delayed = 0;
> > > +   update_load_avg(cfs_rq, se, 0);
> >
> > Ditto on the ordering
>
> Bah, so I remember thinking about it and then I obviously go and do it
> the exact wrong way around eh? Let me double check this tomorrow morning
> with the brain slightly more awake :/
>
> > >  }
> > >
> > >  /*
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -816,6 +816,9 @@ static inline void se_update_runnable(st
> > >
> > >  static inline long se_runnable(struct sched_entity *se)
> > >  {
> > > +   if (se->sched_delayed)
> > > +           return false;
> > > +
> >
> > Per __update_load_avg_se(), delayed-dequeue entities are still ->on_rq, so
> > their load signal will increase. Do we want a similar helper for the @load
> > input of ___update_load_sum()?
>
> So the whole reason to keep then enqueued is so that they can continue
> to compete for vruntime, and vruntime is load based. So it would be very
> weird to remove them from load.

We only use the weight to update vruntime, not the load. The load is
used to balance tasks between CPUs, and if we keep a "delayed" dequeued
task in the load, we will artificially inflate the load_avg on this rq.

Shouldn't we track the sum of the weights of delayed-dequeue tasks
separately, and apply it only on vruntime updates?
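
Purely as a sketch of the idea (the field and the helpers are made up,
and the places where ->sched_delayed flips would need to call them):

/* Hypothetical: cfs_rq->delayed_weight would aggregate the weight of
 * delayed-dequeue entities; callers would invoke these helpers wherever
 * ->sched_delayed is set or cleared. */
static inline void delayed_weight_add(struct cfs_rq *cfs_rq,
				      struct sched_entity *se)
{
	cfs_rq->delayed_weight += se->load.weight;
}

static inline void delayed_weight_sub(struct cfs_rq *cfs_rq,
				      struct sched_entity *se)
{
	cfs_rq->delayed_weight -= se->load.weight;
}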
>
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Peter Zijlstra 1 year, 3 months ago
On Wed, Aug 14, 2024 at 02:59:00PM +0200, Vincent Guittot wrote:

> > So the whole reason to keep then enqueued is so that they can continue
> > to compete for vruntime, and vruntime is load based. So it would be very
> > weird to remove them from load.
> 
> We only use the weight to update vruntime, not the load. The load is
> used to balance tasks between cpus and if we keep a "delayed" dequeued
> task in the load, we will artificially inflate the load_avg on this rq

So far load has been a direct sum of all weight. Additionally, we delay
until a task gets picked again; migrating tasks to other CPUs will
expedite this condition.

Anyway, at the moment I don't have strong evidence either way, and the
above argument seems to suggest not changing things for now.

We can always re-evaluate.
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Vincent Guittot 1 year, 3 months ago
On Sun, 18 Aug 2024 at 01:06, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Aug 14, 2024 at 02:59:00PM +0200, Vincent Guittot wrote:
>
> > > So the whole reason to keep then enqueued is so that they can continue
> > > to compete for vruntime, and vruntime is load based. So it would be very
> > > weird to remove them from load.
> >
> > We only use the weight to update vruntime, not the load. The load is
> > used to balance tasks between cpus and if we keep a "delayed" dequeued
> > task in the load, we will artificially inflate the load_avg on this rq
>
> So far load has been a direct sum of all weight. Additionally, we delay

It has been the sum of all runnable tasks, but delayed tasks are not
runnable anymore. The task stays "enqueued" only to help clear its
lag.

> until a task gets picked again, migrating tasks to other CPUs will
> expedite this condition.
>
> Anyway, at the moment I don't have strong evidence either which way, and
> the above argument seem to suggest not changing things for now.
>
> We can always re-evaluate.
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Peter Zijlstra 1 year, 4 months ago
On Wed, Aug 14, 2024 at 12:18:07AM +0200, Peter Zijlstra wrote:
> On Tue, Aug 13, 2024 at 02:43:56PM +0200, Valentin Schneider wrote:
> > On 27/07/24 12:27, Peter Zijlstra wrote:
> > > Note that tasks that are kept on the runqueue to burn off negative
> > > lag, are not in fact runnable anymore, they'll get dequeued the moment
> > > they get picked.
> > >
> > > As such, don't count this time towards runnable.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > >  kernel/sched/fair.c  |    2 ++
> > >  kernel/sched/sched.h |    6 ++++++
> > >  2 files changed, 8 insertions(+)
> > >
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> > >                       if (cfs_rq->next == se)
> > >                               cfs_rq->next = NULL;
> > >                       se->sched_delayed = 1;
> > > +			update_load_avg(cfs_rq, se, 0);
> > 
> > Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
> > see the time delta as spent being runnable.
> > 
> > >                       return false;
> > >               }
> > >       }
> > > @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
> > >       }
> > >
> > >       se->sched_delayed = 0;
> > > +	update_load_avg(cfs_rq, se, 0);
> > 
> > Ditto on the ordering
> 
> Bah, so I remember thinking about it and then I obviously go and do it
> the exact wrong way around eh? Let me double check this tomorrow morning
> with the brain slightly more awake :/

OK, so I went over it again and I ended up with the below diff -- which,
assuming I didn't make a giant mess of things *again*, I should go fold
back into various other patches ...

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1b15dbfb1ce5..fa8907f2c716 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5461,14 +5461,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
 	bool sleep = flags & DEQUEUE_SLEEP;
 
+	update_curr(cfs_rq);
+
 	if (flags & DEQUEUE_DELAYED) {
-		/*
-		 * DEQUEUE_DELAYED is typically called from pick_next_entity()
-		 * at which point we've already done update_curr() and do not
-		 * want to do so again.
-		 */
 		SCHED_WARN_ON(!se->sched_delayed);
-		se->sched_delayed = 0;
 	} else {
 		bool delay = sleep;
 		/*
@@ -5479,14 +5475,13 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 			delay = false;
 
 		SCHED_WARN_ON(delay && se->sched_delayed);
-		update_curr(cfs_rq);
 
 		if (sched_feat(DELAY_DEQUEUE) && delay &&
 		    !entity_eligible(cfs_rq, se)) {
 			if (cfs_rq->next == se)
 				cfs_rq->next = NULL;
-			se->sched_delayed = 1;
 			update_load_avg(cfs_rq, se, 0);
+			se->sched_delayed = 1;
 			return false;
 		}
 	}
@@ -5536,6 +5531,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
 		update_min_vruntime(cfs_rq);
 
+	if (flags & DEQUEUE_DELAYED) {
+		se->sched_delayed = 0;
+		if (sched_feat(DELAY_ZERO) && se->vlag > 0)
+			se->vlag = 0;
+	}
+
 	if (cfs_rq->nr_running == 0)
 		update_idle_cfs_rq_clock_pelt(cfs_rq);
 
@@ -5611,11 +5612,6 @@ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
 	struct sched_entity *se = pick_eevdf(cfs_rq);
 	if (se->sched_delayed) {
 		dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
-		SCHED_WARN_ON(se->sched_delayed);
-		SCHED_WARN_ON(se->on_rq);
-		if (sched_feat(DELAY_ZERO) && se->vlag > 0)
-			se->vlag = 0;
-
 		return NULL;
 	}
 	return se;
@@ -6906,7 +6902,7 @@ requeue_delayed_entity(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 	/*
-	 * se->sched_delayed should imply both: se->on_rq == 1.
+	 * se->sched_delayed should imply: se->on_rq == 1.
 	 * Because a delayed entity is one that is still on
 	 * the runqueue competing until elegibility.
 	 */
@@ -6927,8 +6923,8 @@ requeue_delayed_entity(struct sched_entity *se)
 		}
 	}
 
-	se->sched_delayed = 0;
 	update_load_avg(cfs_rq, se, 0);
+	se->sched_delayed = 0;
 }
 
 /*
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Valentin Schneider 1 year, 4 months ago
On 14/08/24 09:25, Peter Zijlstra wrote:
> On Wed, Aug 14, 2024 at 12:18:07AM +0200, Peter Zijlstra wrote:
>> On Tue, Aug 13, 2024 at 02:43:56PM +0200, Valentin Schneider wrote:
>> > On 27/07/24 12:27, Peter Zijlstra wrote:
>> > > @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
>> > >       }
>> > >
>> > >       se->sched_delayed = 0;
>> > > +	update_load_avg(cfs_rq, se, 0);
>> >
>> > Ditto on the ordering
>>
>> Bah, so I remember thinking about it and then I obviously go and do it
>> the exact wrong way around eh? Let me double check this tomorrow morning
>> with the brain slightly more awake :/
>
> OK, so I went over it again and I ended up with the below diff -- which
> assuming I didn't make a giant mess of things *again*, I should go fold
> back into various other patches ...
>

Looks right to me, thanks! I'll go test the newer stack of patches.
Re: [PATCH 19/24] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by Peter Zijlstra 1 year, 4 months ago
On Wed, Aug 14, 2024 at 09:25:48AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 14, 2024 at 12:18:07AM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 13, 2024 at 02:43:56PM +0200, Valentin Schneider wrote:
> > > On 27/07/24 12:27, Peter Zijlstra wrote:
> > > > Note that tasks that are kept on the runqueue to burn off negative
> > > > lag, are not in fact runnable anymore, they'll get dequeued the moment
> > > > they get picked.
> > > >
> > > > As such, don't count this time towards runnable.
> > > >
> > > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > > ---
> > > >  kernel/sched/fair.c  |    2 ++
> > > >  kernel/sched/sched.h |    6 ++++++
> > > >  2 files changed, 8 insertions(+)
> > > >
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -5388,6 +5388,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> > > >                       if (cfs_rq->next == se)
> > > >                               cfs_rq->next = NULL;
> > > >                       se->sched_delayed = 1;
> > > > +			update_load_avg(cfs_rq, se, 0);
> > > 
> > > Shouldn't this be before setting ->sched_delayed? accumulate_sum() should
> > > see the time delta as spent being runnable.
> > > 
> > > >                       return false;
> > > >               }
> > > >       }
> > > > @@ -6814,6 +6815,7 @@ requeue_delayed_entity(struct sched_enti
> > > >       }
> > > >
> > > >       se->sched_delayed = 0;
> > > > +	update_load_avg(cfs_rq, se, 0);
> > > 
> > > Ditto on the ordering
> > 
> > Bah, so I remember thinking about it and then I obviously go and do it
> > the exact wrong way around eh? Let me double check this tomorrow morning
> > with the brain slightly more awake :/
> 
> OK, so I went over it again and I ended up with the below diff -- which
> assuming I didn't make a giant mess of things *again*, I should go fold
> back into various other patches ...
> 
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1b15dbfb1ce5..fa8907f2c716 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5461,14 +5461,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  {
>  	bool sleep = flags & DEQUEUE_SLEEP;
>  
> +	update_curr(cfs_rq);
> +
>  	if (flags & DEQUEUE_DELAYED) {
> -		/*
> -		 * DEQUEUE_DELAYED is typically called from pick_next_entity()
> -		 * at which point we've already done update_curr() and do not
> -		 * want to do so again.
> -		 */
>  		SCHED_WARN_ON(!se->sched_delayed);
> -		se->sched_delayed = 0;
>  	} else {
>  		bool delay = sleep;
>  		/*

Because a repeated update_curr() is harmless (I think I was thinking it
would move the clock, but it doesn't do that), while a missed
update_curr() makes a mess.
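
Roughly, the delta computation in update_curr() bails when the clock
hasn't moved since the last update (paraphrased, not the exact code):

	delta_exec = now - curr->exec_start;
	if (unlikely(delta_exec <= 0))
		return;		/* second call in the same path accounts nothing */

	curr->exec_start = now;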

> @@ -5479,14 +5475,13 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  			delay = false;
>  
>  		SCHED_WARN_ON(delay && se->sched_delayed);
> -		update_curr(cfs_rq);
>  
>  		if (sched_feat(DELAY_DEQUEUE) && delay &&
>  		    !entity_eligible(cfs_rq, se)) {
>  			if (cfs_rq->next == se)
>  				cfs_rq->next = NULL;
> -			se->sched_delayed = 1;
>  			update_load_avg(cfs_rq, se, 0);
> +			se->sched_delayed = 1;
>  			return false;
>  		}
>  	}

As you said, update to now, then mark delayed.

> @@ -5536,6 +5531,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
>  		update_min_vruntime(cfs_rq);
>  
> +	if (flags & DEQUEUE_DELAYED) {
> +		se->sched_delayed = 0;
> +		if (sched_feat(DELAY_ZERO) && se->vlag > 0)
> +			se->vlag = 0;
> +	}
> +
>  	if (cfs_rq->nr_running == 0)
>  		update_idle_cfs_rq_clock_pelt(cfs_rq);
>  
> @@ -5611,11 +5612,6 @@ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
>  	struct sched_entity *se = pick_eevdf(cfs_rq);
>  	if (se->sched_delayed) {
>  		dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
> -		SCHED_WARN_ON(se->sched_delayed);
> -		SCHED_WARN_ON(se->on_rq);
> -		if (sched_feat(DELAY_ZERO) && se->vlag > 0)
> -			se->vlag = 0;
> -
>  		return NULL;
>  	}
>  	return se;

Because if we have to clear delayed at the end of dequeue, we might as
well do all of the fixups at that point.

> @@ -6906,7 +6902,7 @@ requeue_delayed_entity(struct sched_entity *se)
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  
>  	/*
> -	 * se->sched_delayed should imply both: se->on_rq == 1.
> +	 * se->sched_delayed should imply: se->on_rq == 1.
>  	 * Because a delayed entity is one that is still on
>  	 * the runqueue competing until elegibility.
>  	 */

edit fail, somewhere along history.

> @@ -6927,8 +6923,8 @@ requeue_delayed_entity(struct sched_entity *se)
>  		}
>  	}
>  
> -	se->sched_delayed = 0;
>  	update_load_avg(cfs_rq, se, 0);
> +	se->sched_delayed = 0;
>  }

What you said..
[tip: sched/core] sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Posted by tip-bot2 for Peter Zijlstra 1 year, 3 months ago
The following commit has been merged into the sched/core branch of tip:

Commit-ID:     fc1892becd5672f52329a75c73117b60ac7841b7
Gitweb:        https://git.kernel.org/tip/fc1892becd5672f52329a75c73117b60ac7841b7
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 26 Apr 2024 13:00:50 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Sat, 17 Aug 2024 11:06:45 +02:00

sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE

Note that tasks that are kept on the runqueue to burn off negative
lag, are not in fact runnable anymore, they'll get dequeued the moment
they get picked.

As such, don't count this time towards runnable.

Thanks to Valentin for spotting I had this backwards initially.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Tested-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lkml.kernel.org/r/20240727105030.514088302@infradead.org
---
 kernel/sched/fair.c  | 2 ++
 kernel/sched/sched.h | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a59339..0eb1bbf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5402,6 +5402,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		    !entity_eligible(cfs_rq, se)) {
 			if (cfs_rq->next == se)
 				cfs_rq->next = NULL;
+			update_load_avg(cfs_rq, se, 0);
 			se->sched_delayed = 1;
 			return false;
 		}
@@ -6841,6 +6842,7 @@ requeue_delayed_entity(struct sched_entity *se)
 		}
 	}
 
+	update_load_avg(cfs_rq, se, 0);
 	se->sched_delayed = 0;
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 263b4de..2f5d658 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -820,6 +820,9 @@ static inline void se_update_runnable(struct sched_entity *se)
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	if (entity_is_task(se))
 		return !!se->on_rq;
 	else
@@ -834,6 +837,9 @@ static inline void se_update_runnable(struct sched_entity *se) { }
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	return !!se->on_rq;
 }