[PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks

Posted by John Stultz 7 months, 2 weeks ago
It was reported that in 6.12, smpboot_create_threads() was
taking much longer than in 6.6.

I narrowed down the call path to:
 smpboot_create_threads()
 -> kthread_create_on_cpu()
    -> kthread_bind()
       -> __kthread_bind_mask()
          -> wait_task_inactive()

Where in wait_task_inactive() we were regularly hitting the
queued case, which sets a 1 tick timeout, which when called
multiple times in a row, accumulates quickly into a long
delay.

I noticed disabling the DELAY_DEQUEUE sched feature recovered
the performance, and it seems the newly created tasks are usually
sched_delayed and left on the runqueue.

So in wait_task_inactive() when we see the task
p->se.sched_delayed, manually dequeue the sched_delayed task
with DEQUEUE_DELAYED, so we don't have to constantly wait a
tick.

This seems to work, but I've only lightly tested it, so I'd love
close review and feedback in case I've missed something in
wait_task_inactive(), or if there is a simpler alternative
approach.

NOTE: Peter did highlight[1] his general distaste for the
kthread_bind() through wait_task_inactive() functions, which
suggests a deeper rework might be better, but I'm not familiar
enough with all its users to have a sense of how that might be
done, and this fix seems to address the problem and be more
easily backported to 6.12-stable, so I wanted to submit it
again, as a potentially more short-term solution.

[1]: https://lore.kernel.org/lkml/20250422085628.GA14170@noisy.programming.kicks-ass.net/

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: kernel-team@android.com
Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
Reported-by: peter-yc.chang@mediatek.com
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: John Stultz <jstultz@google.com>
---
v2:
* Rework & simplify the check as suggested by K Prateek Nayak
* Added Reported-by tag for proper attribution
v3:
* Add Fixes: and Tested-by tags as suggested by Prateek
---
 kernel/sched/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c81cf642dba05..b986cd2fb19b7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
 		 * just go back and repeat.
 		 */
 		rq = task_rq_lock(p, &rf);
+		/*
+		 * If task is sched_delayed, force dequeue it, to avoid always
+		 * hitting the tick timeout in the queued case
+		 */
+		if (p->se.sched_delayed)
+			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
 		trace_sched_wait_task(p);
 		running = task_on_cpu(rq, p);
 		queued = task_on_rq_queued(p);
-- 
2.49.0.901.g37484f566f-goog
Re: [PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by Peter Zijlstra 7 months, 2 weeks ago
On Tue, Apr 29, 2025 at 08:07:26AM -0700, John Stultz wrote:
> It was reported that in 6.12, smpboot_create_threads() was
> taking much longer than in 6.6.
> 
> I narrowed down the call path to:
>  smpboot_create_threads()
>  -> kthread_create_on_cpu()
>     -> kthread_bind()
>        -> __kthread_bind_mask()
>           -> wait_task_inactive()
> 
> Where in wait_task_inactive() we were regularly hitting the
> queued case, which sets a 1 tick timeout, which when called
> multiple times in a row, accumulates quickly into a long
> delay.
> 
> I noticed disabling the DELAY_DEQUEUE sched feature recovered
> the performance, and it seems the newly created tasks are usually
> sched_delayed and left on the runqueue.
> 
> So in wait_task_inactive() when we see the task
> p->se.sched_delayed, manually dequeue the sched_delayed task
> with DEQUEUE_DELAYED, so we don't have to constantly wait a
> tick.

---

(that is, I'll trim the Changelog at this point, seeing how the rest is
'discussion')

> This seems to work, but I've only lightly tested it, so I'd love
> close review and feedback in case I've missed something in
> wait_task_inactive(), or if there is a simpler alternative
> approach.

There might be. I suspect:

	queued = task_on_rq_queued() && !p->se.sched_delayed;

might just work, but that is indeed pushing things quite far. That gets
us into the position of changing ->cpus_allowed while still enqueued,
and while it all might just work out, it is fairly tricky and not worth
the mental pain.

> NOTE: Peter did highlight[1] his general distaste for the
> kthread_bind() through wait_task_inactive() functions, which
> suggests a deeper rework might be better, but I'm not familiar
> enough with all its users to have a sense of how that might be
> done, and this fix seems to address the problem and be more
> easily backported to 6.12-stable, so I wanted to submit it
> again, as a potentially more short-term solution.

Right, so my distaste is with wait_task_inactive() for basically random
waiting for the condition to become true. The neater solution would be a
completion of sorts, but then we need the dequeue path to do a wakeup
and urgh.

So yeah, this is what we have.

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c81cf642dba05..b986cd2fb19b7 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
>  		 * just go back and repeat.
>  		 */
>  		rq = task_rq_lock(p, &rf);
> +		/*
> +		 * If task is sched_delayed, force dequeue it, to avoid always
> +		 * hitting the tick timeout in the queued case
> +		 */
> +		if (p->se.sched_delayed)
> +			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
>  		trace_sched_wait_task(p);
>  		running = task_on_cpu(rq, p);
>  		queued = task_on_rq_queued(p);

Let's just do this. I'll stick it in queue/sched/core.
Re: [PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by John Stultz 7 months, 2 weeks ago
On Wed, Apr 30, 2025 at 5:43 AM Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Apr 29, 2025 at 08:07:26AM -0700, John Stultz wrote:
> > It was reported that in 6.12, smpboot_create_threads() was
> > taking much longer than in 6.6.
> >
> > I narrowed down the call path to:
> >  smpboot_create_threads()
> >  -> kthread_create_on_cpu()
> >     -> kthread_bind()
> >        -> __kthread_bind_mask()
> >           -> wait_task_inactive()
> >
> > Where in wait_task_inactive() we were regularly hitting the
> > queued case, which sets a 1 tick timeout, which when called
> > multiple times in a row, accumulates quickly into a long
> > delay.
> >
> > I noticed disabling the DELAY_DEQUEUE sched feature recovered
> > the performance, and it seems the newly created tasks are usually
> > sched_delayed and left on the runqueue.
> >
> > So in wait_task_inactive() when we see the task
> > p->se.sched_delayed, manually dequeue the sched_delayed task
> > with DEQUEUE_DELAYED, so we don't have to constantly wait a
> > tick.
>
> ---
>
> (that is, I'll trim the Changelog at this point, seeing how the rest is
> 'discussion')
>

Ah, thanks. I've noticed you tweaking my commit messages before merging,
so I'll try to do better about leaving ephemeral notes (and Cc lists,
apparently) after the "---" fold.
My apologies for the trouble!


> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index c81cf642dba05..b986cd2fb19b7 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
> >                * just go back and repeat.
> >                */
> >               rq = task_rq_lock(p, &rf);
> > +             /*
> > +              * If task is sched_delayed, force dequeue it, to avoid always
> > +              * hitting the tick timeout in the queued case
> > +              */
> > +             if (p->se.sched_delayed)
> > +                     dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
> >               trace_sched_wait_task(p);
> >               running = task_on_cpu(rq, p);
> >               queued = task_on_rq_queued(p);
>
> Let's just do this. I'll stick it in queue/sched/core.

Ok, thanks so much!
-john
Re: [PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by Phil Auld 7 months, 2 weeks ago
Hi John,

On Tue, Apr 29, 2025 at 08:07:26AM -0700 John Stultz wrote:
> It was reported that in 6.12, smpboot_create_threads() was
> taking much longer than in 6.6.
> 
> I narrowed down the call path to:
>  smpboot_create_threads()
>  -> kthread_create_on_cpu()
>     -> kthread_bind()
>        -> __kthread_bind_mask()
>           -> wait_task_inactive()
> 
> Where in wait_task_inactive() we were regularly hitting the
> queued case, which sets a 1 tick timeout, which when called
> multiple times in a row, accumulates quickly into a long
> delay.
> 
> I noticed disabling the DELAY_DEQUEUE sched feature recovered
> the performance, and it seems the newly created tasks are usually
> sched_delayed and left on the runqueue.

This seems odd to me. Maybe I'm just misunderstanding something but
I don't see how newly created tasks should have accumulated enough
runtime to have negative lag that needs to be decayed. 

That said, I think it does make sense to dequeue in this case. 

Cheers,
Phil

> 
> So in wait_task_inactive() when we see the task
> p->se.sched_delayed, manually dequeue the sched_delayed task
> with DEQUEUE_DELAYED, so we don't have to constantly wait a
> tick.
> 
> This seems to work, but I've only lightly tested it, so I'd love
> close review and feedback in case I've missed something in
> wait_task_inactive(), or if there is a simpler alternative
> approach.
> 
> NOTE: Peter did highlight[1] his general distaste for the
> kthread_bind() through wait_task_inactive() functions, which
> suggests a deeper rework might be better, but I'm not familiar
> enough with all its users to have a sense of how that might be
> done, and this fix seems to address the problem and be more
> easily backported to 6.12-stable, so I wanted to submit it
> again, as a potentially more short-term solution.
> 
> [1]: https://lore.kernel.org/lkml/20250422085628.GA14170@noisy.programming.kicks-ass.net/
> 
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: K Prateek Nayak <kprateek.nayak@amd.com>
> Cc: kernel-team@android.com
> Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
> Reported-by: peter-yc.chang@mediatek.com
> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Signed-off-by: John Stultz <jstultz@google.com>
> ---
> v2:
> * Rework & simplify the check as suggested by K Prateek Nayak
> * Added Reported-by tag for proper attribution
> v3:
> * Add Fixes: and Tested-by tags as suggested by Prateek
> ---
>  kernel/sched/core.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c81cf642dba05..b986cd2fb19b7 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
>  		 * just go back and repeat.
>  		 */
>  		rq = task_rq_lock(p, &rf);
> +		/*
> +		 * If task is sched_delayed, force dequeue it, to avoid always
> +		 * hitting the tick timeout in the queued case
> +		 */
> +		if (p->se.sched_delayed)
> +			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
>  		trace_sched_wait_task(p);
>  		running = task_on_cpu(rq, p);
>  		queued = task_on_rq_queued(p);
> -- 
> 2.49.0.901.g37484f566f-goog
> 
> 

Re: [PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by Peter Zijlstra 7 months, 2 weeks ago
On Tue, Apr 29, 2025 at 11:36:05AM -0400, Phil Auld wrote:
> Hi John,
> 
> On Tue, Apr 29, 2025 at 08:07:26AM -0700 John Stultz wrote:
> > It was reported that in 6.12, smpboot_create_threads() was
> > taking much longer than in 6.6.
> > 
> > I narrowed down the call path to:
> >  smpboot_create_threads()
> >  -> kthread_create_on_cpu()
> >     -> kthread_bind()
> >        -> __kthread_bind_mask()
> >           -> wait_task_inactive()
> > 
> > Where in wait_task_inactive() we were regularly hitting the
> > queued case, which sets a 1 tick timeout, which when called
> > multiple times in a row, accumulates quickly into a long
> > delay.
> > 
> > I noticed disabling the DELAY_DEQUEUE sched feature recovered
> > the performance, and it seems the newly created tasks are usually
> > sched_delayed and left on the runqueue.
> 
> This seems odd to me. Maybe I'm just misunderstanding something but
> I don't see how newly created tasks should have accumulated enough
> runtime to have negative lag that needs to be decayed. 
> 
> That said, I think it does make sense to dequeue in this case. 

Well, they start at 0, any runtime will likely push them negative.
Re: [PATCH v3] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by Phil Auld 7 months, 2 weeks ago
On Wed, Apr 30, 2025 at 02:44:25PM +0200 Peter Zijlstra wrote:
> On Tue, Apr 29, 2025 at 11:36:05AM -0400, Phil Auld wrote:
> > Hi John,
> > 
> > On Tue, Apr 29, 2025 at 08:07:26AM -0700 John Stultz wrote:
> > > It was reported that in 6.12, smpboot_create_threads() was
> > > taking much longer than in 6.6.
> > > 
> > > I narrowed down the call path to:
> > >  smpboot_create_threads()
> > >  -> kthread_create_on_cpu()
> > >     -> kthread_bind()
> > >        -> __kthread_bind_mask()
> > >           -> wait_task_inactive()
> > > 
> > > Where in wait_task_inactive() we were regularly hitting the
> > > queued case, which sets a 1 tick timeout, which when called
> > > multiple times in a row, accumulates quickly into a long
> > > delay.
> > > 
> > > I noticed disabling the DELAY_DEQUEUE sched feature recovered
> > > the performance, and it seems the newly created tasks are usually
> > > sched_delayed and left on the runqueue.
> > 
> > This seems odd to me. Maybe I'm just misunderstanding something but
> > I don't see how newly created tasks should have accumulated enough
> > runtime to have negative lag that needs to be decayed. 
> > 
> > That said, I think it does make sense to dequeue in this case. 
> 
> Well, they start at 0, any runtime will likely push them negative.
> 

I thought they "made a request" and got a slice when entering the
competition so would not immediately go negative when executing.
It's now been a while since I read the paper though...

Starting at 0 (service that it ought to have is none) and going
immediately negative seems to imply never having positive lag. But,
like I said, probably just misunderstanding something :)



Cheers,
Phil
[tip: sched/core] sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
Posted by tip-bot2 for John Stultz 7 months, 1 week ago
The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b7ca5743a2604156d6083b88cefacef983f3a3a6
Gitweb:        https://git.kernel.org/tip/b7ca5743a2604156d6083b88cefacef983f3a3a6
Author:        John Stultz <jstultz@google.com>
AuthorDate:    Tue, 29 Apr 2025 08:07:26 -07:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 30 Apr 2025 14:45:41 +02:00

sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks

It was reported that in 6.12, smpboot_create_threads() was
taking much longer than in 6.6.

I narrowed down the call path to:
 smpboot_create_threads()
 -> kthread_create_on_cpu()
    -> kthread_bind()
       -> __kthread_bind_mask()
          -> wait_task_inactive()

Where in wait_task_inactive() we were regularly hitting the
queued case, which sets a 1 tick timeout, which when called
multiple times in a row, accumulates quickly into a long
delay.

I noticed disabling the DELAY_DEQUEUE sched feature recovered
the performance, and it seems the newly created tasks are usually
sched_delayed and left on the runqueue.

So in wait_task_inactive() when we see the task
p->se.sched_delayed, manually dequeue the sched_delayed task
with DEQUEUE_DELAYED, so we don't have to constantly wait a
tick.

Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
Reported-by: peter-yc.chang@mediatek.com
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Link: https://lkml.kernel.org/r/20250429150736.3778580-1-jstultz@google.com
---
 kernel/sched/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 79692f8..a3507ed 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
 		 * just go back and repeat.
 		 */
 		rq = task_rq_lock(p, &rf);
+		/*
+		 * If task is sched_delayed, force dequeue it, to avoid always
+		 * hitting the tick timeout in the queued case
+		 */
+		if (p->se.sched_delayed)
+			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
 		trace_sched_wait_task(p);
 		running = task_on_cpu(rq, p);
 		queued = task_on_rq_queued(p);