[RFC][PATCH] sched: Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()

John Stultz posted 1 patch 1 month ago
There is a newer version of this series
Posted by John Stultz 1 month ago
With proxy-execution, the scheduler selects the donor, but for
blocked donors, we end up running the lock owner.

This caused some complexity, because the class schedulers make
sure to remove the task they pick from their pushable task
lists, which prevents the donor from being migrated, but there
wasn't then anything to prevent rq->curr from being migrated
if rq->curr != rq->donor.

This was sort of hacked around by calling proxy_tag_curr() on
the rq->curr task if we were running something other than the
donor. proxy_tag_curr() did a dequeue/enqueue pair on the
rq->curr task, allowing the class schedulers to remove it from
their pushable list.

The dequeue/enqueue pair was wasteful, and additionally K Prateek
highlighted that we didn't properly undo things when we stopped
proxying, leaving the lock owner off the pushable list.

After some alternative approaches were considered, Peter
suggested just having the RT/DL classes avoid migrating
a task when it is task_on_cpu().

So rework pick_next_pushable_dl_task() and the rt
pick_next_pushable_task() functions so that they skip over
pushable tasks that are on_cpu.

Then just drop all of the proxy_tag_curr() logic.

Fixes: be39617e38e0 ("sched: Fix proxy/current (push,pull)ability")
Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
Closes: https://lore.kernel.org/lkml/e735cae0-2cc9-4bae-b761-fcb082ed3e94@amd.com/
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: John Stultz <jstultz@google.com>
---
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
CC: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: zhidao su <suzhidao@xiaomi.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
 kernel/sched/core.c     | 24 ------------------------
 kernel/sched/deadline.c | 16 ++++++++++++++--
 kernel/sched/rt.c       | 15 ++++++++++++---
 3 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6960c1bfc741a..88db2b2bf3d46 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6725,23 +6725,6 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 }
 #endif /* SCHED_PROXY_EXEC */
 
-static inline void proxy_tag_curr(struct rq *rq, struct task_struct *owner)
-{
-	if (!sched_proxy_exec())
-		return;
-	/*
-	 * pick_next_task() calls set_next_task() on the chosen task
-	 * at some point, which ensures it is not push/pullable.
-	 * However, the chosen/donor task *and* the mutex owner form an
-	 * atomic pair wrt push/pull.
-	 *
-	 * Make sure owner we run is not pushable. Unfortunately we can
-	 * only deal with that by means of a dequeue/enqueue cycle. :-/
-	 */
-	dequeue_task(rq, owner, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
-	enqueue_task(rq, owner, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
-}
-
 /*
  * __schedule() is the main scheduler function.
  *
@@ -6891,9 +6874,6 @@ static void __sched notrace __schedule(int sched_mode)
 		 */
 		RCU_INIT_POINTER(rq->curr, next);
 
-		if (!task_current_donor(rq, next))
-			proxy_tag_curr(rq, next);
-
 		/*
 		 * The membarrier system call requires each architecture
 		 * to have a full memory barrier after updating
@@ -6928,10 +6908,6 @@ static void __sched notrace __schedule(int sched_mode)
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
-		/* In case next was already curr but just got blocked_donor */
-		if (!task_current_donor(rq, next))
-			proxy_tag_curr(rq, next);
-
 		rq_unpin_lock(rq, &rf);
 		__balance_callbacks(rq);
 		raw_spin_rq_unlock_irq(rq);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index c4402542ef44f..2cf2c1ac83493 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2556,12 +2556,24 @@ static int find_later_rq(struct task_struct *task)
 
 static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
 {
-	struct task_struct *p;
+	struct task_struct *p = NULL;
+	struct rb_node *next_node;
 
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
-	p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
+	next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+	while (next_node) {
+		p = __node_2_pdl(next_node);
+		/* make sure task isn't on_cpu (possible with proxy-exec) */
+		if (!task_on_cpu(rq, p))
+			break;
+		p = NULL;
+		next_node = rb_next(next_node);
+	}
+
+	if (!p)
+		return NULL;
 
 	WARN_ON_ONCE(rq->cpu != task_cpu(p));
 	WARN_ON_ONCE(task_current(rq, p));
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index fb07dcfc60a24..5dcbe776aadd2 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1847,13 +1847,22 @@ static int find_lowest_rq(struct task_struct *task)
 
 static struct task_struct *pick_next_pushable_task(struct rq *rq)
 {
-	struct task_struct *p;
+	struct plist_head *head = &rq->rt.pushable_tasks;
+	struct task_struct *i, *p = NULL;
 
 	if (!has_pushable_tasks(rq))
 		return NULL;
 
-	p = plist_first_entry(&rq->rt.pushable_tasks,
-			      struct task_struct, pushable_tasks);
+	plist_for_each_entry(i, head, pushable_tasks) {
+		/* make sure task isn't on_cpu (possible with proxy-exec) */
+		if (!task_on_cpu(rq, i)) {
+			p = i;
+			break;
+		}
+	}
+
+	if (!p)
+		return NULL;
 
 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));
-- 
2.53.0.473.g4a7958ca14-goog
Re: [RFC][PATCH] sched: Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()
Posted by Peter Zijlstra 1 month ago
On Sat, Mar 07, 2026 at 07:39:29AM +0000, John Stultz wrote:
> With proxy-execution, the scheduler selects the donor, but for
> blocked donors, we end up running the lock owner.
> 
> This caused some complexity, because the class schedulers make
> sure to remove the task they pick from their pushable task
> lists, which prevents the donor from being migrated, but there
> wasn't then anything to prevent rq->curr from being migrated
> if rq->curr != rq->donor.
> 
> This was sort of hacked around by calling proxy_tag_curr() on
> the rq->curr task if we were running something other then the
> donor. proxy_tag_curr() did a dequeue/enqueue pair on the
> rq->curr task, allowing the class schedulers to remove it from
> their pushable list.
> 
> The dequeue/enqueue pair was wasteful, and additionally K Prateek
> highlighted that we didn't properly undo things when we stopped
> proxying, leaving the lock owner off the pushable list.
> 
> After some alternative approaches were considered, Peter
> suggested just having the RT/DL classes just avoid migrating
> when task_on_cpu().
> 
> So rework pick_next_pushable_dl_task() and the rt
> pick_next_pushable_task() functions so that they skip over the
> first pushable task if it is on_cpu.
> 
> Then just drop all of the proxy_tag_curr() logic.
> 
> Fixes: be39617e38e0 ("sched: Fix proxy/current (push,pull)ability")
> Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Closes: https://lore.kernel.org/lkml/e735cae0-2cc9-4bae-b761-fcb082ed3e94@amd.com/
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: John Stultz <jstultz@google.com>

Right, that works for me ;-)

Some bits I also had in my 'patch' that didn't make it, and quite
frankly don't belong in the same patch anyway, are below.

Compilers are really bad at (as in they utterly refuse) optimizing
the static branch things (even when marked with __pure), and will
happily emit multiple identical branches in a row.

So pull out the one obvious sched_proxy_exec() branch in __schedule()
and remove some of the 'implicit' ones in that path.

Hmm?

---
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6597,11 +6597,7 @@ find_proxy_task(struct rq *rq, struct ta
 	struct mutex *mutex;
 
 	/* Follow blocked_on chain. */
-	for (p = donor; task_is_blocked(p); p = owner) {
-		mutex = p->blocked_on;
-		/* Something changed in the chain, so pick again */
-		if (!mutex)
-			return NULL;
+	for (p = donor; (mutex = p->blocked_on); p = owner) {
 		/*
 		 * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
 		 * and ensure @owner sticks around.
@@ -6829,14 +6825,16 @@ static void __sched notrace __schedule(i
 
 pick_again:
 	next = pick_next_task(rq, rq->donor, &rf);
-	rq_set_donor(rq, next);
 	rq->next_class = next->sched_class;
-	if (unlikely(task_is_blocked(next))) {
-		next = find_proxy_task(rq, next, &rf);
-		if (!next)
-			goto pick_again;
-		if (next == rq->idle)
-			goto keep_resched;
+	if (sched_proxy_exec()) {
+		rq_set_donor(rq, next);
+		if (next->blocked_on) {
+			next = find_proxy_task(rq, next, &rf);
+			if (!next)
+				goto pick_again;
+			if (next == rq->idle)
+				goto keep_resched;
+		}
 	}
 picked:
 	clear_tsk_need_resched(prev);
@@ -6886,10 +6884,6 @@ static void __sched notrace __schedule(i
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
-		/* In case next was already curr but just got blocked_donor */
-		if (!task_current_donor(rq, next))
-			proxy_tag_curr(rq, next);
-
 		rq_unpin_lock(rq, &rf);
 		__balance_callbacks(rq, NULL);
 		raw_spin_rq_unlock_irq(rq);