[PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 2 months ago
On Thu, 2024-11-21 at 06:56 -0500, Phil Auld wrote:
> On Wed, Nov 20, 2024 at 07:37:39PM +0100 Mike Galbraith wrote:
> > On Tue, 2024-11-19 at 12:51 +0100, Mike Galbraith wrote:
> > > On Tue, 2024-11-19 at 06:30 -0500, Phil Auld wrote:
> > > >
> > > > This, below, by itself, did not help and caused a small slowdown on some
> > > > other tests.  Did this need to be on top of the wakeup change?
> > >
> > > No, that made a mess.
> >
> > Rashly speculating that turning the mobile kthread component loose is what
> > helped your write regression...
> >
> > You could try adding (p->flags & PF_KTHREAD) to the wakeup patch to
> > only turn hard working kthreads loose to try to dodge service latency.
> > Seems unlikely wakeup frequency * instances would combine to shred fio
> > the way turning tbench loose did.
> >
>
> Thanks, I'll try that.

You may still want to try that, but my box says probably not.  Playing
with your write command line, the players I see are pinned kworkers and
mobile fio instances.

Maybe try the below instead. The changelog is obsolete BS unless you
say otherwise, but while the twiddled V2 will still migrate tbench a
bit, and per trace_printk() still lets all kinds of stuff wander off
to roll the SIS dice, it does not even scratch the paint of the
formerly obliterated tbench progression.

Question: did wiping off the evil leave any meaningful goodness behind?

---

sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU

Phil Auld (Red Hat) reported an fio benchmark regression found to have
been caused by the addition of the DELAY_DEQUEUE feature, suggested it
may be related to wakees losing the ability to migrate, and confirmed
that restoring that ability did restore the previous performance.

V2: do not rip buddies apart, convenient on/off switch

Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
Signed-off-by: Mike Galbraith <efault@gmx.de>
---
 kernel/sched/core.c     |   51 ++++++++++++++++++++++++++++++------------------
 kernel/sched/features.h |    5 ++++
 kernel/sched/sched.h    |    5 ++++
 3 files changed, 42 insertions(+), 19 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3783,28 +3783,41 @@ ttwu_do_activate(struct rq *rq, struct t
  */
 static int ttwu_runnable(struct task_struct *p, int wake_flags)
 {
-	struct rq_flags rf;
-	struct rq *rq;
-	int ret = 0;
-
-	rq = __task_rq_lock(p, &rf);
-	if (task_on_rq_queued(p)) {
-		update_rq_clock(rq);
-		if (p->se.sched_delayed)
-			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
-		if (!task_on_cpu(rq, p)) {
-			/*
-			 * When on_rq && !on_cpu the task is preempted, see if
-			 * it should preempt the task that is current now.
-			 */
-			wakeup_preempt(rq, p, wake_flags);
+	CLASS(__task_rq_lock, rq_guard)(p);
+	struct rq *rq = rq_guard.rq;
+
+	if (!task_on_rq_queued(p))
+		return 0;
+
+	update_rq_clock(rq);
+	if (p->se.sched_delayed) {
+		int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
+		int dequeue = sched_feat(DEQUEUE_DELAYED);
+
+		/*
+		 * Since sched_delayed means we cannot be current anywhere,
+		 * dequeue it here and have it fall through to the
+		 * select_task_rq() case further along in ttwu() path.
+		 * Note: Do not rip buddies apart else chaos follows.
+		 */
+		if (dequeue && rq->nr_running > 1 && p->nr_cpus_allowed > 1 &&
+		    !(rq->curr->last_wakee == p || p->last_wakee == rq->curr)) {
+			dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
+			return 0;
 		}
-		ttwu_do_wakeup(p);
-		ret = 1;
+
+		enqueue_task(rq, p, queue_flags);
+	}
+	if (!task_on_cpu(rq, p)) {
+		/*
+		 * When on_rq && !on_cpu the task is preempted, see if
+		 * it should preempt the task that is current now.
+		 */
+		wakeup_preempt(rq, p, wake_flags);
 	}
-	__task_rq_unlock(rq, &rf);
+	ttwu_do_wakeup(p);

-	return ret;
+	return 1;
 }

 #ifdef CONFIG_SMP
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -47,6 +47,11 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
  * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
  */
 SCHED_FEAT(DELAY_DEQUEUE, true)
+/*
+ * Free ONLY non-buddy delayed tasks to wakeup-migrate to avoid taking
+ * an unnecessary latency hit.  Rending buddies asunder inflicts harm.
+ */
+SCHED_FEAT(DEQUEUE_DELAYED, true)
 SCHED_FEAT(DELAY_ZERO, true)

 /*
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1783,6 +1783,11 @@ task_rq_unlock(struct rq *rq, struct tas
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
 }

+DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
+		    _T->rq = __task_rq_lock(_T->lock, &_T->rf),
+		    __task_rq_unlock(_T->rq, &_T->rf),
+		    struct rq *rq; struct rq_flags rf)
+
 DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
 		    _T->rq = task_rq_lock(_T->lock, &_T->rf),
 		    task_rq_unlock(_T->rq, _T->lock, &_T->rf),
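
For anyone unfamiliar with the guard used above: DEFINE_LOCK_GUARD_1()
from include/linux/cleanup.h generates a small class whose constructor
takes the lock and whose destructor, run via __attribute__((cleanup)),
drops it at scope exit, so every return path in ttwu_runnable() unlocks.
A rough sketch of what the __task_rq_lock guard expands to (names
approximate, not the literal macro output):

	typedef struct {
		struct task_struct *lock;	/* guarded object */
		struct rq *rq;			/* extra member: locked rq */
		struct rq_flags rf;		/* extra member: saved flags */
	} class___task_rq_lock_t;

	static inline class___task_rq_lock_t
	class___task_rq_lock_constructor(struct task_struct *p)
	{
		class___task_rq_lock_t _t = { .lock = p };
		/* lock on construction... */
		_t.rq = __task_rq_lock(_t.lock, &_t.rf);
		return _t;
	}

	static inline void
	class___task_rq_lock_destructor(class___task_rq_lock_t *_t)
	{
		/* ...unlock automatically at end of scope */
		__task_rq_unlock(_t->rq, &_t->rf);
	}

CLASS(__task_rq_lock, rq_guard)(p) then declares rq_guard as an
instance with the cleanup attribute attached.  The DEQUEUE_DELAYED
switch itself can be flipped at runtime on SCHED_DEBUG kernels via
/sys/kernel/debug/sched/features (echo NO_DEQUEUE_DELAYED to disable),
which is what makes the on/off switch convenient for A/B testing.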
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Phil Auld 1 year, 2 months ago
On Sat, Nov 23, 2024 at 09:44:40AM +0100 Mike Galbraith wrote:
> On Thu, 2024-11-21 at 06:56 -0500, Phil Auld wrote:
> > On Wed, Nov 20, 2024 at 07:37:39PM +0100 Mike Galbraith wrote:
> > > On Tue, 2024-11-19 at 12:51 +0100, Mike Galbraith wrote:
> > > > On Tue, 2024-11-19 at 06:30 -0500, Phil Auld wrote:
> > > > >
> > > > > This, below, by itself, did not help and caused a small slowdown on some
> > > > > other tests.  Did this need to be on top of the wakeup change?
> > > >
> > > > No, that made a mess.
> > >
> > > Rashly speculating that turning the mobile kthread component loose is what
> > > helped your write regression...
> > >
> > > You could try adding (p->flags & PF_KTHREAD) to the wakeup patch to
> > > only turn hard working kthreads loose to try to dodge service latency.
> > > Seems unlikely wakeup frequency * instances would combine to shred fio
> > > the way turning tbench loose did.
> > >
> >
> > Thanks, I'll try that.
> 
> You may still want to try that, but my box says probably not.  Playing
> with your write command line, the players I see are pinned kworkers and
> mobile fio instances.
>

Yep. The PF_KTHREAD thing did not help. 


> Maybe try the below instead. The changelog is obsolete BS unless you
> say otherwise, but while the twiddled V2 will still migrate tbench a
> bit, and per trace_printk() still lets all kinds of stuff wander off
> to roll the SIS dice, it does not even scratch the paint of the
> formerly obliterated tbench progression.
>

Will give this one a try when I get caught up from being off all week for
US turkey day. 


Thanks!

> Question: did wiping off the evil leave any meaningful goodness behind?


Is that for this patch?  

If you mean for the original patch (which subsequently broke the reads) then
no. It was more or less even for all the other tests. It fixed the randwrite
issue by moving it to randread. Everything else we run regularly was about
the same. So no extra goodness to help decide :)


Cheers,
Phil

> 
> ---
> 
> sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
> 
> Phil Auld (Red Hat) reported an fio benchmark regression found to have
> been caused by the addition of the DELAY_DEQUEUE feature, suggested it
> may be related to wakees losing the ability to migrate, and confirmed
> that restoring that ability did restore the previous performance.
> 
> V2: do not rip buddies apart, convenient on/off switch
> 
> Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
> Signed-off-by: Mike Galbraith <efault@gmx.de>
> ---
>  kernel/sched/core.c     |   51 ++++++++++++++++++++++++++++++------------------
>  kernel/sched/features.h |    5 ++++
>  kernel/sched/sched.h    |    5 ++++
>  3 files changed, 42 insertions(+), 19 deletions(-)
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3783,28 +3783,41 @@ ttwu_do_activate(struct rq *rq, struct t
>   */
>  static int ttwu_runnable(struct task_struct *p, int wake_flags)
>  {
> -	struct rq_flags rf;
> -	struct rq *rq;
> -	int ret = 0;
> -
> -	rq = __task_rq_lock(p, &rf);
> -	if (task_on_rq_queued(p)) {
> -		update_rq_clock(rq);
> -		if (p->se.sched_delayed)
> -			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
> -		if (!task_on_cpu(rq, p)) {
> -			/*
> -			 * When on_rq && !on_cpu the task is preempted, see if
> -			 * it should preempt the task that is current now.
> -			 */
> -			wakeup_preempt(rq, p, wake_flags);
> +	CLASS(__task_rq_lock, rq_guard)(p);
> +	struct rq *rq = rq_guard.rq;
> +
> +	if (!task_on_rq_queued(p))
> +		return 0;
> +
> +	update_rq_clock(rq);
> +	if (p->se.sched_delayed) {
> +		int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
> +		int dequeue = sched_feat(DEQUEUE_DELAYED);
> +
> +		/*
> +		 * Since sched_delayed means we cannot be current anywhere,
> +		 * dequeue it here and have it fall through to the
> +		 * select_task_rq() case further along in ttwu() path.
> +		 * Note: Do not rip buddies apart else chaos follows.
> +		 */
> +		if (dequeue && rq->nr_running > 1 && p->nr_cpus_allowed > 1 &&
> +		    !(rq->curr->last_wakee == p || p->last_wakee == rq->curr)) {
> +			dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
> +			return 0;
>  		}
> -		ttwu_do_wakeup(p);
> -		ret = 1;
> +
> +		enqueue_task(rq, p, queue_flags);
> +	}
> +	if (!task_on_cpu(rq, p)) {
> +		/*
> +		 * When on_rq && !on_cpu the task is preempted, see if
> +		 * it should preempt the task that is current now.
> +		 */
> +		wakeup_preempt(rq, p, wake_flags);
>  	}
> -	__task_rq_unlock(rq, &rf);
> +	ttwu_do_wakeup(p);
> 
> -	return ret;
> +	return 1;
>  }
> 
>  #ifdef CONFIG_SMP
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -47,6 +47,11 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
>   * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
>   */
>  SCHED_FEAT(DELAY_DEQUEUE, true)
> +/*
> + * Free ONLY non-buddy delayed tasks to wakeup-migrate to avoid taking
> + * an unnecessary latency hit.  Rending buddies asunder inflicts harm.
> + */
> +SCHED_FEAT(DEQUEUE_DELAYED, true)
>  SCHED_FEAT(DELAY_ZERO, true)
> 
>  /*
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1783,6 +1783,11 @@ task_rq_unlock(struct rq *rq, struct tas
>  	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
>  }
> 
> +DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
> +		    _T->rq = __task_rq_lock(_T->lock, &_T->rf),
> +		    __task_rq_unlock(_T->rq, &_T->rf),
> +		    struct rq *rq; struct rq_flags rf)
> +
>  DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
>  		    _T->rq = task_rq_lock(_T->lock, &_T->rf),
>  		    task_rq_unlock(_T->rq, _T->lock, &_T->rf),
> 

-- 
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 2 months ago
On Mon, 2024-12-02 at 11:24 -0500, Phil Auld wrote:
> On Sat, Nov 23, 2024 at 09:44:40AM +0100 Mike Galbraith wrote:
>
>
> > Question: did wiping off the evil leave any meaningful goodness behind?
>
> Is that for this patch?

Yeah.  Trying it on my box with your write command line didn't improve
the confidence level either.  My box has one CPU handling IRQs and
waking pinned workers to service 8 fio instances.  Patch was useless
for that.

	-Mike
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Phil Auld 1 year, 2 months ago
On Mon, Dec 02, 2024 at 05:55:28PM +0100 Mike Galbraith wrote:
> On Mon, 2024-12-02 at 11:24 -0500, Phil Auld wrote:
> > On Sat, Nov 23, 2024 at 09:44:40AM +0100 Mike Galbraith wrote:
> >
> >
> > > Question: did wiping off the evil leave any meaningful goodness behind?
> >
> > Is that for this patch?
> 
> Yeah.  Trying it on my box with your write command line didn't improve
> the confidence level either.  My box has one CPU handling IRQs and
> waking pinned workers to service 8 fio instances.  Patch was useless
> for that.
>

I'll give it a try. Our "box" is multiple different boxes but the results
vary somewhat.  The one I sent info about earlier in this thread is just
one of the more egregious and is the one the perf team lent me for a while.


Cheers,
Phil


> 	-Mike
> 
> 

--
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Phil Auld 1 year, 2 months ago
Hi Mike et al.,

On Mon, Dec 02, 2024 at 02:12:52PM -0500 Phil Auld wrote:
> On Mon, Dec 02, 2024 at 05:55:28PM +0100 Mike Galbraith wrote:
> > On Mon, 2024-12-02 at 11:24 -0500, Phil Auld wrote:
> > > On Sat, Nov 23, 2024 at 09:44:40AM +0100 Mike Galbraith wrote:
> > >
> > >
> > > > Question: did wiping off the evil leave any meaningful goodness behind?
> > >
> > > Is that for this patch?
> > 
> > Yeah.  Trying it on my box with your write command line didn't improve
> > the confidence level either.  My box has one CPU handling IRQs and
> > waking pinned workers to service 8 fio instances.  Patch was useless
> > for that.
> >
> 
> I'll give it a try. Our "box" is multiple different boxes but the results
> vary somewhat.  The one I sent info about earlier in this thread is just
> one of the more egregious and is the one the perf team lent me for a while.
>

In our testing this has the same effect as the original dequeue-when-delayed
fix.  It solves the randwrite issue and introduces the ~10-15% randread
regression. 

Seems to be a real trade-off here. The same guys who benefit from spreading
in one case benefit from staying put in the other... 


Cheers,
Phil

--
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 1 month ago
On Mon, 2024-12-09 at 08:11 -0500, Phil Auld wrote:
>
> Hi Mike et al.,
>
> On Mon, Dec 02, 2024 at 02:12:52PM -0500 Phil Auld wrote:
> > On Mon, Dec 02, 2024 at 05:55:28PM +0100 Mike Galbraith wrote:
> > > On Mon, 2024-12-02 at 11:24 -0500, Phil Auld wrote:
> > > > On Sat, Nov 23, 2024 at 09:44:40AM +0100 Mike Galbraith wrote:
> > > >
> > > >
> > > > > Question: did wiping off the evil leave any meaningful goodness behind?
> > > >
> > > > Is that for this patch?
> > >
> > > Yeah.  Trying it on my box with your write command line didn't improve
> > > the confidence level either.  My box has one CPU handling IRQs and
> > > waking pinned workers to service 8 fio instances.  Patch was useless
> > > for that.
> > >
> >
> > I'll give it a try. Our "box" is multiple different boxes but the results
> > vary somewhat.  The one I sent info about earlier in this thread is just
> > one of the more egregious and is the one the perf team lent me for a while.
> >
>
> In our testing this has the same effect as the original dequeue-when-delayed
> fix.  It solves the randwrite issue and introduces the ~10-15% randread
> regression.
>
> Seems to be a real trade-off here. The same guys who benefit from spreading
> in one case benefit from staying put in the other...

Doing as much harm as it does good isn't the mark of a keeper.  Oh well.

	-Mike
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by K Prateek Nayak 1 year, 2 months ago
Hello Mike,

On 11/23/2024 2:14 PM, Mike Galbraith wrote:
> [..snip..]
> 
> Question: did wiping off the evil leave any meaningful goodness behind?
> 
> ---
> 
> sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
> 
> Phil Auld (Red Hat) reported an fio benchmark regression found to have
> been caused by the addition of the DELAY_DEQUEUE feature, suggested it
> may be related to wakees losing the ability to migrate, and confirmed
> that restoring that ability did restore the previous performance.
> 
> V2: do not rip buddies apart, convenient on/off switch
> 
> Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
> Signed-off-by: Mike Galbraith <efault@gmx.de>
> ---
>   kernel/sched/core.c     |   51 ++++++++++++++++++++++++++++++------------------
>   kernel/sched/features.h |    5 ++++
>   kernel/sched/sched.h    |    5 ++++
>   3 files changed, 42 insertions(+), 19 deletions(-)
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3783,28 +3783,41 @@ ttwu_do_activate(struct rq *rq, struct t
>    */
>   static int ttwu_runnable(struct task_struct *p, int wake_flags)
>   {
> -	struct rq_flags rf;
> -	struct rq *rq;
> -	int ret = 0;
> -
> -	rq = __task_rq_lock(p, &rf);
> -	if (task_on_rq_queued(p)) {
> -		update_rq_clock(rq);
> -		if (p->se.sched_delayed)
> -			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
> -		if (!task_on_cpu(rq, p)) {
> -			/*
> -			 * When on_rq && !on_cpu the task is preempted, see if
> -			 * it should preempt the task that is current now.
> -			 */
> -			wakeup_preempt(rq, p, wake_flags);
> +	CLASS(__task_rq_lock, rq_guard)(p);
> +	struct rq *rq = rq_guard.rq;
> +
> +	if (!task_on_rq_queued(p))
> +		return 0;
> +
> +	update_rq_clock(rq);
> +	if (p->se.sched_delayed) {
> +		int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
> +		int dequeue = sched_feat(DEQUEUE_DELAYED);
> +
> +		/*
> +		 * Since sched_delayed means we cannot be current anywhere,
> +		 * dequeue it here and have it fall through to the
> +		 * select_task_rq() case further along in ttwu() path.
> +		 * Note: Do not rip buddies apart else chaos follows.
> +		 */
> +		if (dequeue && rq->nr_running > 1 && p->nr_cpus_allowed > 1 &&

Do we really care if DEQUEUE_DELAYED is enabled / disabled here when we
encounter a delayed task?

> +		    !(rq->curr->last_wakee == p || p->last_wakee == rq->curr)) {

Technically, we are still looking at the last wakeup here since
record_wakee() is only called later. If we care about 1:1 buddies,
should we just check "current == p->last_wakee"? Otherwise, there is a
good chance "p" has an m:n waker-wakee relationship, in which case
perhaps a "want_affine" like heuristic can help?

For science, I was wondering if the decision to dequeue + migrate or
requeue the delayed task can be put off until after the whole
select_task_rq() target selection (note: without the h_nr_delayed
stuff, some of that wake_affine_idle() logic falls apart). Hackbench
(which saw some regression with EEVDF Complete) seems to like it
somewhat, but it still falls behind NO_DELAY_DEQUEUE.

    ==================================================================
     Test          : hackbench
     Units         : Normalized time in seconds
     Interpretation: Lower is better
     Statistic     : AMean
     ==================================================================
     NO_DELAY_DEQUEUE        Mike's v2    Full ttwu + requeue/migrate
     5.76                  5.72  (  1% )        5.82  ( -1% )
     6.53                  6.56  (  0% )        6.65  ( -2% )
     6.79                  7.04  ( -4% )        7.02  ( -3% )
     6.91                  7.04  ( -2% )        7.03  ( -2% )
     7.63                  8.05  ( -6% )        7.88  ( -3% )

Only subtle changes in IBS profiles; there aren't any obvious shifts
in hotspots with hackbench at least. Not sure if it is just the act of
needing to do a dequeue + enqueue from the wakeup context that adds to
the overall regression.

-- 
Thanks and Regards,
Prateek

> +			dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
> +			return 0;
>   		}
> -		ttwu_do_wakeup(p);
> -		ret = 1;
> +
> +		enqueue_task(rq, p, queue_flags);
> +	}
> +	if (!task_on_cpu(rq, p)) {
> +		/*
> +		 * When on_rq && !on_cpu the task is preempted, see if
> +		 * it should preempt the task that is current now.
> +		 */
> +		wakeup_preempt(rq, p, wake_flags);
>   	}
> -	__task_rq_unlock(rq, &rf);
> +	ttwu_do_wakeup(p);
> 
> -	return ret;
> +	return 1;
>   }
> 
>   #ifdef CONFIG_SMP
> [..snip..]
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 2 months ago
On Tue, 2024-11-26 at 11:02 +0530, K Prateek Nayak wrote:
> Hello Mike,
> 
> On 11/23/2024 2:14 PM, Mike Galbraith wrote:
> > [..snip..]
> > 
> > Question: did wiping off the evil leave any meaningful goodness behind?
> > 
> > ---
> > 
> > sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
> > 
> > Phil Auld (Red Hat) reported an fio benchmark regression found to have
> > been caused by the addition of the DELAY_DEQUEUE feature, suggested it
> > may be related to wakees losing the ability to migrate, and confirmed
> > that restoring that ability did restore the previous performance.
> > 
> > V2: do not rip buddies apart, convenient on/off switch
> > 
> > Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
> > Signed-off-by: Mike Galbraith <efault@gmx.de>
> > ---
> >   kernel/sched/core.c     |   51 ++++++++++++++++++++++++++++++------------------
> >   kernel/sched/features.h |    5 ++++
> >   kernel/sched/sched.h    |    5 ++++
> >   3 files changed, 42 insertions(+), 19 deletions(-)
> > 
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3783,28 +3783,41 @@ ttwu_do_activate(struct rq *rq, struct t
> >    */
> >   static int ttwu_runnable(struct task_struct *p, int wake_flags)
> >   {
> > -       struct rq_flags rf;
> > -       struct rq *rq;
> > -       int ret = 0;
> > -
> > -       rq = __task_rq_lock(p, &rf);
> > -       if (task_on_rq_queued(p)) {
> > -               update_rq_clock(rq);
> > -               if (p->se.sched_delayed)
> > -                       enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
> > -               if (!task_on_cpu(rq, p)) {
> > -                       /*
> > -                        * When on_rq && !on_cpu the task is preempted, see if
> > -                        * it should preempt the task that is current now.
> > -                        */
> > -                       wakeup_preempt(rq, p, wake_flags);
> > +       CLASS(__task_rq_lock, rq_guard)(p);
> > +       struct rq *rq = rq_guard.rq;
> > +
> > +       if (!task_on_rq_queued(p))
> > +               return 0;
> > +
> > +       update_rq_clock(rq);
> > +       if (p->se.sched_delayed) {
> > +               int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
> > +               int dequeue = sched_feat(DEQUEUE_DELAYED);
> > +
> > +               /*
> > +                * Since sched_delayed means we cannot be current anywhere,
> > +                * dequeue it here and have it fall through to the
> > +                * select_task_rq() case further along in ttwu() path.
> > +                * Note: Do not rip buddies apart else chaos follows.
> > +                */
> > +               if (dequeue && rq->nr_running > 1 && p->nr_cpus_allowed > 1 &&
> 
> Do we really care if DEQUEUE_DELAYED is enabled / disabled here when we
> encounter a delayed task?

The switch is for test convenience.

> > +                   !(rq->curr->last_wakee == p || p->last_wakee == rq->curr)) {
> 
> Technically, we are still looking at the last wakeup here since
> record_wakee() is only called later. If we care about 1:1 buddies,
> > should we just check "current == p->last_wakee"? Otherwise, there is a
> > good chance "p" has an m:n waker-wakee relationship, in which case
> perhaps a "want_affine" like heuristic can help?

The intent is to blunt the instrument a bit. Paul should have a highly
active interrupt source, which will give wakeup credit to whatever is
sitting on that CPU, breaking 1:1 connections.. a little bit.   That's
why it still migrates tbench buddies, but NOT at a rate that turns a
tbench progression into a new low regression.  The hope is that the
load shift caused by that active interrupt source is enough to give
Paul's regression some of the help it demonstrated wanting, without the
collateral damage.  It might now be so weak as to not meet the
"meaningful" in my question, in which case it lands on the ginormous
pile of meh, sorta works, but why would anyone care.
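
For reference, the wakeup credit mentioned above comes from
record_wakee() in kernel/sched/fair.c, which at the time looked
roughly like the below; it charges the wakeup to whatever task is
current on the waking CPU, interrupt-driven wakeups included, which
is what dilutes strict 1:1 last_wakee pairings:

	static void record_wakee(struct task_struct *p)
	{
		/*
		 * Only decay a single time; tasks that have less than 1
		 * wakeup per jiffy will not be decayed at all.
		 */
		if (time_after(jiffies, current->wakee_flip_decay_ts + HZ)) {
			current->wakee_flips >>= 1;
			current->wakee_flip_decay_ts = jiffies;
		}

		if (current->last_wakee != p) {
			current->last_wakee = p;
			current->wakee_flips++;
		}
	}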

> For science, I was wondering if the decision to dequeue + migrate or
> requeue the delayed task can be put off until after the whole
> select_task_rq() target selection (note: without the h_nr_delayed
> stuff, some of that wake_affine_idle() logic falls apart). Hackbench
> (which saw some regression with EEVDF Complete) seems to like it
> somewhat, but it still falls behind NO_DELAY_DEQUEUE.

You can, with a few more fast path cycles and some duplication, none of
which looks very desirable.
 
>     ==================================================================
>      Test          : hackbench
>      Units         : Normalized time in seconds
>      Interpretation: Lower is better
>      Statistic     : AMean
>      ==================================================================
>      NO_DELAY_DEQUEUE        Mike's v2    Full ttwu + requeue/migrate
>      5.76                  5.72  (  1% )        5.82  ( -1% )
>      6.53                  6.56  (  0% )        6.65  ( -2% )
>      6.79                  7.04  ( -4% )        7.02  ( -3% )
>      6.91                  7.04  ( -2% )        7.03  ( -2% )
>      7.63                  8.05  ( -6% )        7.88  ( -3% )
> 
> Only subtle changes in IBS profiles; there aren't any obvious shifts
> in hotspots with hackbench at least. Not sure if it is just the act of
> needing to do a dequeue + enqueue from the wakeup context that adds to
> the overall regression.

Those numbers say to me that hackbench doesn't care deeply.  That
works for me, because I don't care deeply about nutty fork bombs ;-)

	-Mike
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 2 months ago
On Tue, 2024-11-26 at 07:30 +0100, Mike Galbraith wrote:
>
> The intent is to blunt the instrument a bit. Paul should have a highly
> active interrupt source, which will give wakeup credit to whatever is
> sitting on that CPU, breaking 1:1 connections.. a little bit.   That's
> why it still migrates tbench buddies, but NOT at a rate that turns a
> tbench progression into a new low regression.

BTW, the reason for the tbench wreckage being so bad is that when the
box is near saturation, not only are a portion of the surviving
sched_delayed tasks affine wakeups (always the optimal configuration
for this fast mover client/server pair in an L3 equipped box), they are
exclusively affine wakeups.  That is most definitely gonna hurt.

When saturating that becomes the best option for a lot of client/server
pairs, even those with a lot of concurrency.  Turning them loose to
migrate at that time is far more likely than not to hurt a LOT, so V1
was doomed.

> The hope is that the
> load shift caused by that active interrupt source is enough to give
> Paul's regression some of the help it demonstrated wanting, without the
> collateral damage.  It might now be so weak as to not meet the
> "meaningful" in my question, in which case it lands on the ginormous
> pile of meh, sorta works, but why would anyone care.

On my IO challenged box, the patch is useless to fio; nothing can help a
load where all of the IO action, and wimpy action at that, is nailed to
one CPU.  I can see it helping other latency sensitive stuff, like say
1:N mother of all work and/or control threads (and ilk), but if Phil's
problematic box looks anything like this box.. nah, it's a long reach.

	-Mike
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Mike Galbraith 1 year, 2 months ago
On Tue, 2024-11-26 at 07:30 +0100, Mike Galbraith wrote:
>
> The intent is to blunt the instrument a bit. Paul should have

Yeah I did... ahem, I meant of course Phil.

	-Mike
Re: [PATCH V2] sched/fair: Dequeue sched_delayed tasks when waking to a busy CPU
Posted by Phil Auld 1 year, 2 months ago
On Tue, Nov 26, 2024 at 10:42:37AM +0100 Mike Galbraith wrote:
> On Tue, 2024-11-26 at 07:30 +0100, Mike Galbraith wrote:
> >
> > The intent is to blunt the instrument a bit. Paul should have
> 
> Yeah I did... ahem, I meant of course Phil.
>

Heh, you are not alone, Mike  :)


> 	-Mike
> 

--