[RESEND][PATCH v21 3/6] sched: Add logic to zap balance callbacks if we pick again

Posted by John Stultz 4 weeks, 1 day ago
With proxy-exec, a task is selected to run via pick_next_task(),
and if it is a mutex-blocked task, we call find_proxy_task() to
find a runnable owner. If the runnable owner is on another CPU,
we need to migrate the selected donor task away, after which we
goto pick_again and call pick_next_task() to choose something
else.

However, in the first call to pick_next_task(), a balance_callback
may have been set up by the task's scheduling class. After we pick
again, it's possible pick_next_task_fair() will be called, which
calls sched_balance_newidle() and then sched_balance_rq().

This will throw a warning:
[    8.796467] rq->balance_callback && rq->balance_callback != &balance_push_callback
[    8.796467] WARNING: CPU: 32 PID: 458 at kernel/sched/sched.h:1750 sched_balance_rq+0xe92/0x1250
...
[    8.796467] Call Trace:
[    8.796467]  <TASK>
[    8.796467]  ? __warn.cold+0xb2/0x14e
[    8.796467]  ? sched_balance_rq+0xe92/0x1250
[    8.796467]  ? report_bug+0x107/0x1a0
[    8.796467]  ? handle_bug+0x54/0x90
[    8.796467]  ? exc_invalid_op+0x17/0x70
[    8.796467]  ? asm_exc_invalid_op+0x1a/0x20
[    8.796467]  ? sched_balance_rq+0xe92/0x1250
[    8.796467]  sched_balance_newidle+0x295/0x820
[    8.796467]  pick_next_task_fair+0x51/0x3f0
[    8.796467]  __schedule+0x23a/0x14b0
[    8.796467]  ? lock_release+0x16d/0x2e0
[    8.796467]  schedule+0x3d/0x150
[    8.796467]  worker_thread+0xb5/0x350
[    8.796467]  ? __pfx_worker_thread+0x10/0x10
[    8.796467]  kthread+0xee/0x120
[    8.796467]  ? __pfx_kthread+0x10/0x10
[    8.796467]  ret_from_fork+0x31/0x50
[    8.796467]  ? __pfx_kthread+0x10/0x10
[    8.796467]  ret_from_fork_asm+0x1a/0x30
[    8.796467]  </TASK>
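
(For context: the check that fires here is the sanity check done
when the rq lock is pinned (rq_pin_lock() in kernel/sched/sched.h),
which is effectively:

	WARN_ON_ONCE(rq->balance_callback &&
		     rq->balance_callback != &balance_push_callback);

and sched_balance_rq() trips it when it (re)pins the rq lock while
an unprocessed callback is still queued.)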

This is because if an RT task was originally picked, it will have
set up rq->balance_callback to run push_rt_tasks() via
set_next_task_rt().

Once the task is migrated away and we pick again, we haven't
processed any balance callbacks, so rq->balance_callback is not
in the same state as it was the first time pick_next_task was
called.
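
For reference, the queueing path that leaves this callback behind
looks roughly like the following (paraphrased and trimmed from
kernel/sched/rt.c and kernel/sched/sched.h, so details may differ):

	/* rt.c: called from set_next_task_rt() */
	static inline void rt_queue_push_tasks(struct rq *rq)
	{
		if (!has_pushable_tasks(rq))
			return;

		queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu),
				       push_rt_tasks);
	}

	/* sched.h: hangs the callback off rq->balance_callback */
	static inline void
	queue_balance_callback(struct rq *rq, struct balance_callback *head,
			       void (*func)(struct rq *rq))
	{
		lockdep_assert_rq_held(rq);

		/* don't requeue, and don't queue over balance_push */
		if (head->next ||
		    rq->balance_callback == &balance_push_callback)
			return;

		head->func = func;
		head->next = rq->balance_callback;
		rq->balance_callback = head;
	}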

To handle this, add a zap_balance_callbacks() helper function
which cleans up the blance callbacks without running them. This
should be ok, as we are effectively undoing the state set in
the first call to pick_next_task(), and when we pick again,
the new callback can be configured for the donor task actually
selected.
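
For comparison, the normal consumption path walks the same list but
invokes each callback as it is unlinked; roughly (simplified from
do_balance_callbacks() in kernel/sched/core.c):

	static void do_balance_callbacks(struct rq *rq,
					 struct balance_callback *head)
	{
		void (*func)(struct rq *rq);
		struct balance_callback *next;

		lockdep_assert_rq_held(rq);

		while (head) {
			func = head->func;
			next = head->next;
			head->next = NULL;
			head = next;

			/* the only difference vs. zapping: run it */
			func(rq);
		}
	}

zap_balance_callbacks() does the same unlinking but skips the
func(rq) call, and preserves balance_push_callback if it was queued.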

Signed-off-by: John Stultz <jstultz@google.com>
---
v20:
* Tweaked to avoid build issues with different configs

Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
 kernel/sched/core.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e0007660161fa..01bf5ef8d9fcc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5001,6 +5001,40 @@ static inline void finish_task(struct task_struct *prev)
 	smp_store_release(&prev->on_cpu, 0);
 }
 
+#if defined(CONFIG_SCHED_PROXY_EXEC)
+/*
+ * Only called from __schedule context
+ *
+ * There are some cases where we are going to re-do the action
+ * that added the balance callbacks. We may not be in a state
+ * where we can run them, so just zap them so they can be
+ * properly re-added on the next time around. This is similar
+ * handling to running the callbacks, except we just don't call
+ * them.
+ */
+static void zap_balance_callbacks(struct rq *rq)
+{
+	struct balance_callback *next, *head;
+	bool found = false;
+
+	lockdep_assert_rq_held(rq);
+
+	head = rq->balance_callback;
+	while (head) {
+		if (head == &balance_push_callback)
+			found = true;
+		next = head->next;
+		head->next = NULL;
+		head = next;
+	}
+	rq->balance_callback = found ? &balance_push_callback : NULL;
+}
+#else
+static inline void zap_balance_callbacks(struct rq *rq)
+{
+}
+#endif
+
 static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
 {
 	void (*func)(struct rq *rq);
@@ -6941,8 +6975,11 @@ static void __sched notrace __schedule(int sched_mode)
 	rq_set_donor(rq, next);
 	if (unlikely(task_is_blocked(next))) {
 		next = find_proxy_task(rq, next, &rf);
-		if (!next)
+		if (!next) {
+			/* zap the balance_callbacks before picking again */
+			zap_balance_callbacks(rq);
 			goto pick_again;
+		}
 		if (next == rq->idle)
 			goto keep_resched;
 	}
-- 
2.51.0.338.gd7d06c2dae-goog
Re: [RESEND][PATCH v21 3/6] sched: Add logic to zap balance callbacks if we pick again
Posted by K Prateek Nayak 2 weeks, 3 days ago
Hello John,

On 9/4/2025 5:51 AM, John Stultz wrote:
> With proxy-exec, a task is selected to run via pick_next_task(),
> and if it is a mutex-blocked task, we call find_proxy_task() to
> find a runnable owner. If the runnable owner is on another CPU,
> we need to migrate the selected donor task away, after which we
> goto pick_again and call pick_next_task() to choose something
> else.
> 
> However, in the first call to pick_next_task(), a balance_callback
> may have been set up by the task's scheduling class. After we pick
> again, it's possible pick_next_task_fair() will be called, which
> calls sched_balance_newidle() and then sched_balance_rq().
> 
> This will throw a warning:
> [    8.796467] rq->balance_callback && rq->balance_callback != &balance_push_callback
> [    8.796467] WARNING: CPU: 32 PID: 458 at kernel/sched/sched.h:1750 sched_balance_rq+0xe92/0x1250
> ...
> [    8.796467] Call Trace:
> [    8.796467]  <TASK>
> [    8.796467]  ? __warn.cold+0xb2/0x14e
> [    8.796467]  ? sched_balance_rq+0xe92/0x1250
> [    8.796467]  ? report_bug+0x107/0x1a0
> [    8.796467]  ? handle_bug+0x54/0x90
> [    8.796467]  ? exc_invalid_op+0x17/0x70
> [    8.796467]  ? asm_exc_invalid_op+0x1a/0x20
> [    8.796467]  ? sched_balance_rq+0xe92/0x1250
> [    8.796467]  sched_balance_newidle+0x295/0x820
> [    8.796467]  pick_next_task_fair+0x51/0x3f0
> [    8.796467]  __schedule+0x23a/0x14b0
> [    8.796467]  ? lock_release+0x16d/0x2e0
> [    8.796467]  schedule+0x3d/0x150
> [    8.796467]  worker_thread+0xb5/0x350
> [    8.796467]  ? __pfx_worker_thread+0x10/0x10
> [    8.796467]  kthread+0xee/0x120
> [    8.796467]  ? __pfx_kthread+0x10/0x10
> [    8.796467]  ret_from_fork+0x31/0x50
> [    8.796467]  ? __pfx_kthread+0x10/0x10
> [    8.796467]  ret_from_fork_asm+0x1a/0x30
> [    8.796467]  </TASK>
> 
> This is because if an RT task was originally picked, it will have
> set up rq->balance_callback to run push_rt_tasks() via
> set_next_task_rt().
> 
> Once the task is migrated away and we pick again, we haven't
> processed any balance callbacks, so rq->balance_callback is not
> in the same state as it was the first time pick_next_task was
> called.
> 
> To handle this, add a zap_balance_callbacks() helper function
> which cleans up the blance callbacks without running them. This

s/blance/balance/

> should be ok, as we are effectively undoing the state set in
> the first call to pick_next_task(), and when we pick again,
> the new callback can be configured for the donor task actually
> selected.
> 
> Signed-off-by: John Stultz <jstultz@google.com>
> ---
> v20:
> * Tweaked to avoid build issues with different configs
> 
> Cc: Joel Fernandes <joelagnelf@nvidia.com>
> Cc: Qais Yousef <qyousef@layalina.io>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Zimuzo Ezeozue <zezeozue@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Will Deacon <will@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: "Paul E. McKenney" <paulmck@kernel.org>
> Cc: Metin Kaya <Metin.Kaya@arm.com>
> Cc: Xuewen Yan <xuewen.yan94@gmail.com>
> Cc: K Prateek Nayak <kprateek.nayak@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
> Cc: Suleiman Souhlal <suleiman@google.com>
> Cc: kuyo chang <kuyo.chang@mediatek.com>
> Cc: hupu <hupu.gm@gmail.com>
> Cc: kernel-team@android.com
> ---
>  kernel/sched/core.c | 39 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e0007660161fa..01bf5ef8d9fcc 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5001,6 +5001,40 @@ static inline void finish_task(struct task_struct *prev)
>  	smp_store_release(&prev->on_cpu, 0);
>  }
>  
> +#if defined(CONFIG_SCHED_PROXY_EXEC)

nit. This can be an "#ifdef CONFIG_SCHED_PROXY_EXEC" now.

> +/*
> + * Only called from __schedule context
> + *
> + * There are some cases where we are going to re-do the action
> + * that added the balance callbacks. We may not be in a state
> + * where we can run them, so just zap them so they can be
> + * properly re-added on the next time around. This is similar
> + * handling to running the callbacks, except we just don't call
> + * them.
> + */
> +static void zap_balance_callbacks(struct rq *rq)
> +{
> +	struct balance_callback *next, *head;
> +	bool found = false;
> +
> +	lockdep_assert_rq_held(rq);
> +
> +	head = rq->balance_callback;
> +	while (head) {
> +		if (head == &balance_push_callback)
> +			found = true;
> +		next = head->next;
> +		head->next = NULL;
> +		head = next;
> +	}
> +	rq->balance_callback = found ? &balance_push_callback : NULL;
> +}
> +#else
> +static inline void zap_balance_callbacks(struct rq *rq)
> +{
> +}

nit.

This can perhaps be reduced to a single line in light of Thomas' recent
work to condense the stubs elsewhere:
https://lore.kernel.org/lkml/20250908212925.389031537@linutronix.de/
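
i.e., collapsing the !CONFIG_SCHED_PROXY_EXEC stub to something like:

	static inline void zap_balance_callbacks(struct rq *rq) { }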

> +#endif
> +
>  static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
>  {
>  	void (*func)(struct rq *rq);
> @@ -6941,8 +6975,11 @@ static void __sched notrace __schedule(int sched_mode)
>  	rq_set_donor(rq, next);
>  	if (unlikely(task_is_blocked(next))) {
>  		next = find_proxy_task(rq, next, &rf);
> -		if (!next)
> +		if (!next) {
> +			/* zap the balance_callbacks before picking again */
> +			zap_balance_callbacks(rq);
>  			goto pick_again;
> +		}
>  		if (next == rq->idle)
>  			goto keep_resched;

Should we zap the callbacks if we are planning to go through schedule()
again via rq->idle since it essentially voids the last pick too?

>  	}

-- 
Thanks and Regards,
Prateek
Re: [RESEND][PATCH v21 3/6] sched: Add logic to zap balance callbacks if we pick again
Posted by John Stultz 1 week, 6 days ago
On Mon, Sep 15, 2025 at 1:32 AM K Prateek Nayak <kprateek.nayak@amd.com> wrote:
> On 9/4/2025 5:51 AM, John Stultz wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index e0007660161fa..01bf5ef8d9fcc 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5001,6 +5001,40 @@ static inline void finish_task(struct task_struct *prev)
> >       smp_store_release(&prev->on_cpu, 0);
> >  }
> >
> > +#if defined(CONFIG_SCHED_PROXY_EXEC)
>
> nit. This can be an "#ifdef CONFIG_SCHED_PROXY_EXEC" now.

Ah. Yes, this is leftover from when it previously checked for
PROXY_EXEC && CONFIG_SMP. I'll be sure to clean that up.

> > +#else
> > +static inline void zap_balance_callbacks(struct rq *rq)
> > +{
> > +}
>
> nit.
>
> This can perhaps be reduced to a single line in light of Thomas' recent
> work to condense the stubs elsewhere:
> https://lore.kernel.org/lkml/20250908212925.389031537@linutronix.de/

Ah, if folks are ok with that, I'd prefer it as well!  Thanks for the
suggestion! I'll try to work that throughout the series.

> > +#endif
> > +
> >  static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
> >  {
> >       void (*func)(struct rq *rq);
> > @@ -6941,8 +6975,11 @@ static void __sched notrace __schedule(int sched_mode)
> >       rq_set_donor(rq, next);
> >       if (unlikely(task_is_blocked(next))) {
> >               next = find_proxy_task(rq, next, &rf);
> > -             if (!next)
> > +             if (!next) {
> > +                     /* zap the balance_callbacks before picking again */
> > +                     zap_balance_callbacks(rq);
> >                       goto pick_again;
> > +             }
> >               if (next == rq->idle)
> >                       goto keep_resched;
>
> Should we zap the callbacks if we are planning to go through schedule()
> again via rq->idle since it essentially voids the last pick too?

Hrm. So I don't think it's strictly necessary, because we will run the
set callback as part of finish_task_switch() when we switch briefly to
idle. So we don't end up with stale callbacks in the next
pick_next_task().

But I guess zapping them could help avoid running them spuriously.
I'll give that a shot and see how it affects things.

Thanks again for all the suggestions!
-john