[PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by John Stultz 4 months, 2 weeks ago
As we add functionality to proxy execution, we may migrate a
donor task to a runqueue where it can't run due to cpu affinity.
Thus, we must be careful to ensure we return-migrate the task
back to a cpu in its cpumask when it becomes unblocked.

This means we need more than just a binary notion of the task
being blocked on a mutex or not.

So add a blocked_on_state value to the task, which allows the
task to move through BO_RUNNABLE -> BO_BLOCKED -> BO_WAKING
and back to BO_RUNNABLE. BO_WAKING provides a guard state, so
we know the task is no longer blocked but we don't want to run
it until we have potentially done return migration back to a
usable cpu.
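
Roughly, the transitions map onto the locking paths as follows:

  __set_task_blocked_on()                BO_RUNNABLE -> BO_BLOCKED
  mutex unlock / ww_mutex die or wound   BO_BLOCKED  -> BO_WAKING
  try_to_wake_up() / return migration    BO_WAKING   -> BO_RUNNABLE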

Signed-off-by: John Stultz <jstultz@google.com>
---
v15:
* Split blocked_on_state into its own patch later in the
  series, as the tri-state isn't necessary until we deal
  with proxy/return migrations
v16:
* Handle the case where a task in the chain is being set to
  BO_WAKING by another cpu (usually via ww_mutex die code).
  Make sure we release the rq lock so the wakeup can
  complete.
* Rework to use guard() in find_proxy_task() as suggested
  by Peter
v18:
* Add initialization of blocked_on_state for init_task
v19:
* PREEMPT_RT build fixups and rework suggested by
  K Prateek Nayak
v20:
* Simplify one of the blocked_on_state changes to avoid extra
  PREEMPT_RT conditionals
v21:
* Slight reworks due to avoiding nested blocked_lock locking
* Be consistent in use of blocked_on_state helper functions
* Rework calls to proxy_deactivate() to do proper locking
  around blocked_on_state changes, where we were cheating in
  previous versions.
* Minor cleanups, some comment improvements
v22:
* Re-order blocked_on_state helpers to try to make it clearer that
  set_task_blocked_on() and clear_task_blocked_on() are the main
  entry/exit points and that the blocked_on_state helpers manage the
  transition states within. Per feedback from K Prateek Nayak.
* Rework blocked_on_state to be defined within
  CONFIG_SCHED_PROXY_EXEC as suggested by K Prateek Nayak.
* Reworked empty stub functions to just take one line, as
  suggested by K Prateek
* Avoid using gotos out of a guard() scope, as highlighted by
  K Prateek, and instead rework logic to break and switch()
  on an action value.

Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
 include/linux/sched.h     | 92 +++++++++++++++++++++++++++++++++------
 init/init_task.c          |  3 ++
 kernel/fork.c             |  3 ++
 kernel/locking/mutex.c    | 15 ++++---
 kernel/locking/ww_mutex.h | 20 ++++-----
 kernel/sched/core.c       | 45 +++++++++++++++++--
 kernel/sched/sched.h      |  6 ++-
 7 files changed, 146 insertions(+), 38 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index cb4e81d9d9b67..8245940783c77 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -813,6 +813,12 @@ struct kmap_ctrl {
 #endif
 };
 
+enum blocked_on_state {
+	BO_RUNNABLE,
+	BO_BLOCKED,
+	BO_WAKING,
+};
+
 struct task_struct {
 #ifdef CONFIG_THREAD_INFO_IN_TASK
 	/*
@@ -1234,6 +1240,9 @@ struct task_struct {
 
 	struct mutex			*blocked_on;	/* lock we're blocked on */
 	raw_spinlock_t			blocked_lock;
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	enum blocked_on_state		blocked_on_state;
+#endif
 
 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
 	/*
@@ -2139,7 +2148,6 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
 	__cond_resched_rwlock_write(lock);					\
 })
 
-#ifndef CONFIG_PREEMPT_RT
 static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
 {
 	lockdep_assert_held_once(&p->blocked_lock);
@@ -2152,6 +2160,13 @@ static inline struct mutex *get_task_blocked_on(struct task_struct *p)
 	return __get_task_blocked_on(p);
 }
 
+static inline void __force_blocked_on_blocked(struct task_struct *p);
+static inline void __force_blocked_on_runnable(struct task_struct *p);
+
+/*
+ * These helpers set and clear the task blocked_on pointer, as well
+ * as setting the initial blocked_on_state, or clearing it
+ */
 static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
 {
 	WARN_ON_ONCE(!m);
@@ -2161,24 +2176,23 @@ static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
 	lockdep_assert_held_once(&p->blocked_lock);
 	/*
 	 * Check ensure we don't overwrite existing mutex value
-	 * with a different mutex. Note, setting it to the same
-	 * lock repeatedly is ok.
+	 * with a different mutex.
 	 */
-	WARN_ON_ONCE(p->blocked_on && p->blocked_on != m);
+	WARN_ON_ONCE(p->blocked_on);
 	p->blocked_on = m;
+	__force_blocked_on_blocked(p);
 }
 
 static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *m)
 {
+	/* The task should only be clearing itself */
+	WARN_ON_ONCE(p != current);
 	/* Currently we serialize blocked_on under the task::blocked_lock */
 	lockdep_assert_held_once(&p->blocked_lock);
-	/*
-	 * There may be cases where we re-clear already cleared
-	 * blocked_on relationships, but make sure we are not
-	 * clearing the relationship with a different lock.
-	 */
-	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
+	/* Make sure we are clearing the relationship with the right lock */
+	WARN_ON_ONCE(m && p->blocked_on != m);
 	p->blocked_on = NULL;
+	__force_blocked_on_runnable(p);
 }
 
 static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
@@ -2186,15 +2200,65 @@ static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
 	guard(raw_spinlock_irqsave)(&p->blocked_lock);
 	__clear_task_blocked_on(p, m);
 }
-#else
-static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
+
+/*
+ * The following helpers manage the blocked_on_state transitions while
+ * the blocked_on pointer is set.
+ */
+#ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void __force_blocked_on_blocked(struct task_struct *p)
+{
+	lockdep_assert_held(&p->blocked_lock);
+	p->blocked_on_state = BO_BLOCKED;
+}
+
+static inline void __set_blocked_on_waking(struct task_struct *p)
+{
+	lockdep_assert_held(&p->blocked_lock);
+	if (p->blocked_on_state == BO_BLOCKED)
+		p->blocked_on_state = BO_WAKING;
+}
+
+static inline void set_blocked_on_waking(struct task_struct *p)
+{
+	guard(raw_spinlock_irqsave)(&p->blocked_lock);
+	__set_blocked_on_waking(p);
+}
+
+static inline void __force_blocked_on_runnable(struct task_struct *p)
 {
+	lockdep_assert_held(&p->blocked_lock);
+	p->blocked_on_state = BO_RUNNABLE;
 }
 
-static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
+static inline void force_blocked_on_runnable(struct task_struct *p)
 {
+	guard(raw_spinlock_irqsave)(&p->blocked_lock);
+	__force_blocked_on_runnable(p);
+}
+
+static inline void __set_blocked_on_runnable(struct task_struct *p)
+{
+	lockdep_assert_held(&p->blocked_lock);
+	if (p->blocked_on_state == BO_WAKING)
+		p->blocked_on_state = BO_RUNNABLE;
+}
+
+static inline void set_blocked_on_runnable(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return;
+	guard(raw_spinlock_irqsave)(&p->blocked_lock);
+	__set_blocked_on_runnable(p);
 }
-#endif /* !CONFIG_PREEMPT_RT */
+#else  /* CONFIG_SCHED_PROXY_EXEC */
+static inline void __force_blocked_on_blocked(struct task_struct *p) {}
+static inline void __set_blocked_on_waking(struct task_struct *p) {}
+static inline void set_blocked_on_waking(struct task_struct *p) {}
+static inline void __force_blocked_on_runnable(struct task_struct *p) {}
+static inline void __set_blocked_on_runnable(struct task_struct *p) {}
+static inline void set_blocked_on_runnable(struct task_struct *p) {}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
 
 static __always_inline bool need_resched(void)
 {
diff --git a/init/init_task.c b/init/init_task.c
index 7e29d86153d9f..63b66b4aa585a 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -174,6 +174,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
 						 &init_task.alloc_lock),
 #endif
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	.blocked_on_state = BO_RUNNABLE,
+#endif
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
 	.pi_top_task	= NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 796cfceb2bbda..d8eb66e5be918 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2130,6 +2130,9 @@ __latent_entropy struct task_struct *copy_process(
 #endif
 
 	p->blocked_on = NULL; /* not blocked yet */
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	p->blocked_on_state = BO_RUNNABLE;
+#endif
 
 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c44fc63d4476e..d8cf2e9a22a65 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -682,11 +682,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		raw_spin_lock_irqsave(&lock->wait_lock, flags);
 		raw_spin_lock(&current->blocked_lock);
 		/*
-		 * As we likely have been woken up by task
-		 * that has cleared our blocked_on state, re-set
-		 * it to the lock we are trying to acquire.
+		 * Re-set blocked_on_state as unlock path set it to WAKING/RUNNABLE
 		 */
-		__set_task_blocked_on(current, lock);
+		__force_blocked_on_blocked(current);
 		set_current_state(state);
 		/*
 		 * Here we order against unlock; we must either see it change
@@ -705,7 +703,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 			 * and clear blocked on so we don't become unselectable
 			 * to run.
 			 */
-			__clear_task_blocked_on(current, lock);
+			__force_blocked_on_runnable(current);
 			raw_spin_unlock(&current->blocked_lock);
 			raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
@@ -714,7 +712,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 			raw_spin_lock_irqsave(&lock->wait_lock, flags);
 			raw_spin_lock(&current->blocked_lock);
-			__set_task_blocked_on(current, lock);
+			__force_blocked_on_blocked(current);
 
 			if (opt_acquired)
 				break;
@@ -966,8 +964,11 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 
 		next = waiter->task;
 
+		raw_spin_lock(&next->blocked_lock);
 		debug_mutex_wake_waiter(lock, waiter);
-		clear_task_blocked_on(next, lock);
+		WARN_ON_ONCE(__get_task_blocked_on(next) != lock);
+		__set_blocked_on_waking(next);
+		raw_spin_unlock(&next->blocked_lock);
 		wake_q_add(&wake_q, next);
 	}
 
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index e4a81790ea7dd..f34363615eb34 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 		debug_mutex_wake_waiter(lock, waiter);
 #endif
 		/*
-		 * When waking up the task to die, be sure to clear the
-		 * blocked_on pointer. Otherwise we can see circular
-		 * blocked_on relationships that can't resolve.
+		 * When waking up the task to die, be sure to set the
+		 * blocked_on_state to BO_WAKING. Otherwise we can see
+		 * circular blocked_on relationships that can't resolve.
 		 */
-		clear_task_blocked_on(waiter->task, lock);
+		set_blocked_on_waking(waiter->task);
 		wake_q_add(wake_q, waiter->task);
 	}
 
@@ -339,15 +339,11 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		 */
 		if (owner != current) {
 			/*
-			 * When waking up the task to wound, be sure to clear the
-			 * blocked_on pointer. Otherwise we can see circular
-			 * blocked_on relationships that can't resolve.
-			 *
-			 * NOTE: We pass NULL here instead of lock, because we
-			 * are waking the mutex owner, who may be currently
-			 * blocked on a different mutex.
+			 * When waking up the task to wound, be sure to set the
+			 * blocked_on_state to BO_WAKING. Otherwise we can see
+			 * circular blocked_on relationships that can't resolve.
 			 */
-			clear_task_blocked_on(owner, NULL);
+			set_blocked_on_waking(owner);
 			wake_q_add(wake_q, owner);
 		}
 		return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 007459d42ae4a..abecd2411e29e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4328,6 +4328,12 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		ttwu_queue(p, cpu, wake_flags);
 	}
 out:
+	/*
+	 * For now, if we've been woken up, set us as BO_RUNNABLE
+	 * We will need to be more careful later when handling
+	 * proxy migration
+	 */
+	set_blocked_on_runnable(p);
 	if (success)
 		ttwu_stat(p, task_cpu(p), wake_flags);
 
@@ -6623,7 +6629,7 @@ static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *d
 		 * as unblocked, as we aren't doing proxy-migrations
 		 * yet (more logic will be needed then).
 		 */
-		donor->blocked_on = NULL;
+		force_blocked_on_runnable(donor);
 	}
 	return NULL;
 }
@@ -6651,6 +6657,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	int this_cpu = cpu_of(rq);
 	struct task_struct *p;
 	struct mutex *mutex;
+	enum { FOUND, DEACTIVATE_DONOR } action = FOUND;
 
 	/* Follow blocked_on chain. */
 	for (p = donor; task_is_blocked(p); p = owner) {
@@ -6676,20 +6683,43 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 			return NULL;
 		}
 
+		/*
+		 * If a ww_mutex hits the die/wound case, it marks the task as
+		 * BO_WAKING and calls try_to_wake_up(), so that the mutex
+		 * cycle can be broken and we avoid a deadlock.
+		 *
+		 * However, if at that moment, we are here on the cpu which the
+		 * die/wounded task is enqueued, we might loop on the cycle as
+		 * BO_WAKING still causes task_is_blocked() to return true
+		 * (since we want return migration to occur before we run the
+		 * task).
+		 *
+		 * Unfortunately since we hold the rq lock, it will block
+		 * try_to_wake_up from completing and doing the return
+		 * migration.
+		 *
+		 * So when we hit a !BO_BLOCKED task briefly schedule idle
+		 * so we release the rq and let the wakeup complete.
+		 */
+		if (p->blocked_on_state != BO_BLOCKED)
+			return proxy_resched_idle(rq);
+
 		owner = __mutex_owner(mutex);
 		if (!owner) {
-			__clear_task_blocked_on(p, mutex);
+			__force_blocked_on_runnable(p);
 			return p;
 		}
 
 		if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) {
 			/* XXX Don't handle blocked owners/delayed dequeue yet */
-			return proxy_deactivate(rq, donor);
+			action = DEACTIVATE_DONOR;
+			break;
 		}
 
 		if (task_cpu(owner) != this_cpu) {
 			/* XXX Don't handle migrations yet */
-			return proxy_deactivate(rq, donor);
+			action = DEACTIVATE_DONOR;
+			break;
 		}
 
 		if (task_on_rq_migrating(owner)) {
@@ -6747,6 +6777,13 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		 */
 	}
 
+	/* Handle actions we need to do outside of the guard() scope */
+	switch (action) {
+	case DEACTIVATE_DONOR:
+		return proxy_deactivate(rq, donor);
+	case FOUND:
+		/* fallthrough */;
+	}
 	WARN_ON_ONCE(owner && !owner->on_rq);
 	return owner;
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cf2109b67f9a3..03deb68ee5f86 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2284,13 +2284,17 @@ static inline int task_current_donor(struct rq *rq, struct task_struct *p)
 	return rq->donor == p;
 }
 
+#ifdef CONFIG_SCHED_PROXY_EXEC
 static inline bool task_is_blocked(struct task_struct *p)
 {
 	if (!sched_proxy_exec())
 		return false;
 
-	return !!p->blocked_on;
+	return !!p->blocked_on && p->blocked_on_state != BO_RUNNABLE;
 }
+#else
+static inline bool task_is_blocked(struct task_struct *p) { return false; }
+#endif
 
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
 {
-- 
2.51.0.536.g15c5d4f767-goog
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by Peter Zijlstra 4 months ago
On Fri, Sep 26, 2025 at 03:29:10AM +0000, John Stultz wrote:
> As we add functionality to proxy execution, we may migrate a
> donor task to a runqueue where it can't run due to cpu affinity.
> Thus, we must be careful to ensure we return-migrate the task
> back to a cpu in its cpumask when it becomes unblocked.
> 
> Thus we need more then just a binary concept of the task being
> blocked on a mutex or not.
> 
> So add a blocked_on_state value to the task, that allows the
> task to move through BO_RUNNING -> BO_BLOCKED -> BO_WAKING
> and back to BO_RUNNING. This provides a guard state in
> BO_WAKING so we can know the task is no longer blocked
> but we don't want to run it until we have potentially
> done return migration, back to a usable cpu.
> 
> Signed-off-by: John Stultz <jstultz@google.com>
> ---
>  include/linux/sched.h     | 92 +++++++++++++++++++++++++++++++++------
>  init/init_task.c          |  3 ++
>  kernel/fork.c             |  3 ++
>  kernel/locking/mutex.c    | 15 ++++---
>  kernel/locking/ww_mutex.h | 20 ++++-----
>  kernel/sched/core.c       | 45 +++++++++++++++++--
>  kernel/sched/sched.h      |  6 ++-
>  7 files changed, 146 insertions(+), 38 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index cb4e81d9d9b67..8245940783c77 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -813,6 +813,12 @@ struct kmap_ctrl {
>  #endif
>  };
>  
> +enum blocked_on_state {
> +	BO_RUNNABLE,
> +	BO_BLOCKED,
> +	BO_WAKING,
> +};

I am still struggling with all this.

  RUNNABLE is !p->blocked_on
  BLOCKED is !!p->blocked_on
  WAKING is !!p->blocked_on but you need magical beans

I'm not sure I follow the argument above, and there is a distinct lack
of comments with this enum explaining the states (although there are
some comments scattered across the patch itself).

Last time we talked about this:

  https://lkml.kernel.org/r/20241216165419.GE35539@noisy.programming.kicks-ass.net

I was equally confused; and suggested not having the WAKING state by
simply dequeueing the offending task and letting ttwu() sort it out --
since we know a wakeup will be coming our way.

I'm thinking that suggestion didn't work out somehow, but I'm still not
sure I understand why.

There is this comment:


+               /*
+                * If a ww_mutex hits the die/wound case, it marks the task as
+                * BO_WAKING and calls try_to_wake_up(), so that the mutex
+                * cycle can be broken and we avoid a deadlock.
+                *
+                * However, if at that moment, we are here on the cpu which the
+                * die/wounded task is enqueued, we might loop on the cycle as
+                * BO_WAKING still causes task_is_blocked() to return true
+                * (since we want return migration to occur before we run the
+                * task).
+                *
+                * Unfortunately since we hold the rq lock, it will block
+                * try_to_wake_up from completing and doing the return
+                * migration.
+                *
+                * So when we hit a !BO_BLOCKED task briefly schedule idle
+                * so we release the rq and let the wakeup complete.
+                */
+               if (p->blocked_on_state != BO_BLOCKED)
+                       return proxy_resched_idle(rq);


Which I presume tries to clarify things, but that only had me scratching
my head again. Why would you need task_is_blocked() to affect return
migration?
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by John Stultz 4 months ago
On Wed, Oct 8, 2025 at 4:26 AM Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, Sep 26, 2025 at 03:29:10AM +0000, John Stultz wrote:
> > As we add functionality to proxy execution, we may migrate a
> > donor task to a runqueue where it can't run due to cpu affinity.
> > Thus, we must be careful to ensure we return-migrate the task
> > back to a cpu in its cpumask when it becomes unblocked.
> >
> > Thus we need more then just a binary concept of the task being
> > blocked on a mutex or not.
> >
> > So add a blocked_on_state value to the task, that allows the
> > task to move through BO_RUNNING -> BO_BLOCKED -> BO_WAKING
> > and back to BO_RUNNING. This provides a guard state in
> > BO_WAKING so we can know the task is no longer blocked
> > but we don't want to run it until we have potentially
> > done return migration, back to a usable cpu.
> >
> > Signed-off-by: John Stultz <jstultz@google.com>
> > ---
> >  include/linux/sched.h     | 92 +++++++++++++++++++++++++++++++++------
> >  init/init_task.c          |  3 ++
> >  kernel/fork.c             |  3 ++
> >  kernel/locking/mutex.c    | 15 ++++---
> >  kernel/locking/ww_mutex.h | 20 ++++-----
> >  kernel/sched/core.c       | 45 +++++++++++++++++--
> >  kernel/sched/sched.h      |  6 ++-
> >  7 files changed, 146 insertions(+), 38 deletions(-)
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index cb4e81d9d9b67..8245940783c77 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -813,6 +813,12 @@ struct kmap_ctrl {
> >  #endif
> >  };
> >
> > +enum blocked_on_state {
> > +     BO_RUNNABLE,
> > +     BO_BLOCKED,
> > +     BO_WAKING,
> > +};
>
> I am still struggling with all this.

My apologies. I really appreciate you taking the time to look it over!

>   RUNNABLE is !p->blocked_on
>   BLOCKED is !!p->blocked_on
>   WAKING is !!p->blocked_on but you need magical beans
>
> I'm not sure I follow the argument above, and there is a distinct lack
> of comments with this enum explaining the states (although there are
> some comments scattered across the patch itself).

That's fair. I'll try to improve the comments there.

So the blocked_on_state values don't quite map onto blocked_on as you
listed above; the tri-state evolved out of the fact that just having
the blocked_on pointer didn't give us enough state and led to lots of
subtle bugs, so having more state helped stabilize this. I do agree it
has some duplicative aspects with task->__state, so I'd love to
flatten it down, but so far I've not found a good way.

So p->blocked_on can be considered separately, as it is managed
entirely by the __mutex_lock_common() path. It's set to the mutex
we're trying to take, and cleared when we get it.

Whereas p->blocked_on_state tells us:
BO_RUNNABLE: If the task was picked from the runqueue, it can be run on that cpu.
BO_BLOCKED: The task can be picked, but cannot be executed; it can
only act as a donor task. It may migrate to the runqueue of a cpu
that it is not allowed to run on.
BO_WAKING: An intermediate "gate" state. This task was BO_BLOCKED, and
we'd like it to be BO_RUNNABLE, but we have to address that it might
be on a runqueue it can't run on. So this prevents tasks from being
run until they have been evaluated for return migration.  Ideally ttwu
will handle the return migration, but there are cases where we will do
it manually in find_proxy_task() if we come across a task in the chain
with this state.
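
To capture that in the code itself, I'm thinking of comments on the
enum roughly like the below (wording still rough, I'll fold something
like this into the next version):

	enum blocked_on_state {
		BO_RUNNABLE,	/* Task may be selected and run on the cpu
				 * it was picked from */
		BO_BLOCKED,	/* Task may only be selected as a donor; it
				 * may sit on the runqueue of a cpu it is
				 * not allowed to run on */
		BO_WAKING,	/* Task is no longer blocked, but must be
				 * evaluated for return migration before it
				 * can become BO_RUNNABLE */
	};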

So, just to clarify your summary: a task can have
p->blocked_on_state == BO_RUNNABLE while p->blocked_on is still set,
since we need to run the task in order for it to complete
__mutex_lock_common() and clear its own blocked_on pointer. BO_BLOCKED
does imply !!p->blocked_on, and BO_WAKING implies !!p->blocked_on but
also that we need to evaluate return migration before we run it.

> Last time we talked about this:
>
>   https://lkml.kernel.org/r/20241216165419.GE35539@noisy.programming.kicks-ass.net
>
> I was equally confused; and suggested not having the WAKING state by
> simply dequeueing the offending task and letting ttwu() sort it out --
> since we know a wakeup will be coming our way.

So yeah, and I really appreciated that suggestion. I used that dequeue
and wake approach in the "manual" return-migration path
(proxy_force_return()), and it did simplify things, but I haven't been
able to apply it everywhere.

> I'm thinking that suggesting didn't work out somehow, but I'm still not
> sure I understand why.

So the main issue is about where we end up setting the task to
BO_WAKING (via set_blocked_on_waking()). This is done in
__mutex_unlock_slowpath(), __ww_mutex_die(), and __ww_mutex_wound().
And in those cases, we are already holding the mutex->wait_lock, and
sometimes a task's blocked_lock, without the rq lock.  So we can't
just grab the rq lock out of order, and we probably shouldn't drop and
try to reacquire the blocked_lock and wait_lock there.

Though, one approach that I just thought of would be to have a
special wake_up_q call, which would handle dequeuing the blocked_on
tasks on the wake_q before doing the wakeup. I can give that a try.
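
Something very rough like the below (completely untested; the name
and the locking details are made up just for illustration):

	static void wake_up_q_return(struct wake_q_head *head)
	{
		struct wake_q_node *node = head->first;

		while (node != WAKE_Q_TAIL) {
			struct task_struct *task;
			struct rq_flags rf;
			struct rq *rq;

			task = container_of(node, struct task_struct, wake_q);
			node = node->next;
			task->wake_q.next = NULL;

			/* Dequeue still-blocked tasks so ttwu re-selects a usable cpu */
			rq = task_rq_lock(task, &rf);
			if (task_is_blocked(task) && task_on_rq_queued(task)) {
				update_rq_clock(rq);
				block_task(rq, task, 0);
			}
			task_rq_unlock(rq, task, &rf);

			wake_up_process(task);
			put_task_struct(task);
		}
	}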

Though I'm not sure if that will still enable us to drop the
blocked_on_state tri-state. Since I worry we may be able to get
spurious wakeups on blocked_on tasks outside the mutex_unlock_slowpath
or ww_mutex_die/wound paths. Then we risk running a proxy-migrated
task on a cpu outside its affinity set. Without proxy migration,
spurious wakeups are ok, as the task will just loop back into
schedule(); but with proxy migration, we have to be sure we
return-migrate first.

> There is this comment:
>
>
> +               /*
> +                * If a ww_mutex hits the die/wound case, it marks the task as
> +                * BO_WAKING and calls try_to_wake_up(), so that the mutex
> +                * cycle can be broken and we avoid a deadlock.
> +                *
> +                * However, if at that moment, we are here on the cpu which the
> +                * die/wounded task is enqueued, we might loop on the cycle as
> +                * BO_WAKING still causes task_is_blocked() to return true
> +                * (since we want return migration to occur before we run the
> +                * task).
> +                *
> +                * Unfortunately since we hold the rq lock, it will block
> +                * try_to_wake_up from completing and doing the return
> +                * migration.
> +                *
> +                * So when we hit a !BO_BLOCKED task briefly schedule idle
> +                * so we release the rq and let the wakeup complete.
> +                */
> +               if (p->blocked_on_state != BO_BLOCKED)
> +                       return proxy_resched_idle(rq);
>
>
> Which I presume tries to clarify things, but that only had me scratching
> my head again. Why would you need task_is_blocked() to affect return
> migration?

So task_is_blocked() returns true when p->blocked_on is set and
p->blocked_on_state != BO_RUNNABLE.  So BO_WAKING tasks are still
prevented from being selected to run, until they have had a chance to
be return-migrated (because, as a donor, they may be on a runqueue
they can't actually run on).

The problem this comment tries to describe is that due to ww_mutexes,
there may be a loop in the blocked_on chain. So the cpu running
find_proxy_task() might spin following this loop. The ww_mutex logic
will fix the loop via ww_mutex_die/wound, which sets BO_WAKING, and
wakes the task up to release the lock.   However, the try_to_wake_up()
can get stuck waiting for the rqlock that the cpu looping in
find_proxy_task() is holding. So for the case where it's not
BO_BLOCKED, we resched_idle for a moment to drop the lock and let
try_to_wake_up() complete.

Though I worry I've just repeated the comment here, so let me know if
this wasn't helpful in clarifying things.

thanks
-john
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by Peter Zijlstra 4 months ago
On Wed, Oct 08, 2025 at 05:07:26PM -0700, John Stultz wrote:

> > I'm thinking that suggesting didn't work out somehow, but I'm still not
> > sure I understand why.
> 
> So the main issue is about where we end up setting the task to
> BO_WAKING (via set_blocked_on_waking()). This is done in
> __mutex_unlock_slowpath(), __ww_mutex_die(), and __ww_mutex_wound().
> And in those cases, we are already holding the mutex->wait_lock, and
> sometimes a task's blocked_lock, without the rq lock.  So we can't
> just grab the rq lock out of order, and we probably shouldn't drop and
> try to reacquire the blocked_lock and wait_lock there.

Oh bugger. In my head the scheduler locks still nest inside wait_lock,
but we've flipped that such that schedule() / find_proxy_task() can take
it inside rq->lock.

Yes that does complicate things.

So suppose we have this ww_mutex cycle thing:

		  ,-+-*	Mutex-1 <-.
	Task-A ---' |		  | ,--	Task-B
		    `->	Mutex-2 *-+-'

Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and
where Task-B holds Mutex-2 and tries to acquire Mutex-1.

Then the blocked_on->owner chain will go in circles.

        Task-A  -> Mutex-2
          ^          |
          |          v
        Mutex-1 <- Task-B

We need two things:

 - find_proxy_task() to stop iterating the circle;

 - the woken task to 'unblock' and run, such that it can back-off and
   re-try the transaction.


Now, the current code does:

	__clear_task_blocked_on();
	wake_q_add();

And surely clearing ->blocked_on is sufficient to break the cycle.

Suppose it is Task-B that is made to back-off, then we have:

  Task-A -> Mutex-2 -> Task-B (no further blocked_on)

and it would attempt to run Task-B. Or worse, it could directly pick
Task-B and run it, without ever getting into find_proxy_task().

Now, here is a problem because Task-B might not be runnable on the CPU
it is currently on; and because !task_is_blocked() we don't get into the
proxy paths, so nobody is going to fix this up.

Ideally we would have dequeued Task-B alongside of clearing
->blocked_on, but alas, lock inversion spoils things.

> Though, one approach that I just thought of would be to have a special
> wake_up_q call, which would handle dequeuing the blocked_on tasks on
> the wake_q before doing the wakeup?  I can give that a try.

I think this is racier than you considered. CPU1 could be inside
schedule() trying to pick Task-B while CPU2 does that wound/die thing.
No spurious wakeup required.


Anyway, since the actual value of ->blocked_on doesn't matter in this
case (we really want it to be NULL, but can't because we need someone to
go back migrate the thing), why not simply use something like:

#define PROXY_STOP ((struct mutex *)(-1L))

	__set_task_blocked_on(task, PROXY_STOP);

Then, have find_proxy_task() fix it up?
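
E.g. roughly (uncompiled), near the top of the chain walk; whether it
just gets out of the way or does the return migration right there is
a detail:

	mutex = get_task_blocked_on(p);
	if (mutex == PROXY_STOP) {
		/*
		 * A wakeup (and return migration) is already in flight;
		 * don't chase the stale chain, just get out of the way.
		 */
		return proxy_resched_idle(rq);
	}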


Random thoughts:

 - we should probably have something like:

	next = pick_next_task();
	rq_set_donor(next)
	if (unlikely(task_is_blocked(next))) {
		...
	}
+	WARN_ON_ONCE(next->__state);

   at all times the task we end up picking should be in RUNNABLE state.

 - similarly, we should have ttwu() check ->blocked_on is NULL ||
   PROXY_STOP, waking a task that still has a blocked_on relation can't
   be right -- ooh, dang race conditions :/ perhaps DEBUG_MUTEX and
   serialize on wait_lock.

 - I'm conflicted on having TTWU fix up PROXY_STOP; strictly not required
   I think, but it might improve performance -- if so, include numbers in
   the patch that adds it -- which should be a separate patch from the one
   that adds PROXY_STOP.

 - since find_proxy_task() can do a lock-break, it should probably
   re-try the pick if, at the end, a higher runqueue is modified than
   the task we ended up with.

   Also see this thread:

      https://lkml.kernel.org/r/20251006105453.522934521@infradead.org

   eg. something like:

	rq->queue_mask = 0;
	// code with rq-lock-break
   	if (rq_modified_above(rq, next->sched_class))
		return NULL;


I'm still confused on BO_RUNNABLE -- you set that around
optimistic-spin, probably because you want to retain the ->blocked_on
relation, but also you have to run that thing to make progress. There
are a few other sites that use it, but those are more confusing still.

Please try and clarify this.

Anyway, if that is indeed it, you could do this by (ab)using the LSB of
the ->blocked_on pointer I suppose (you could make PROXY_STOP -2).
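
That is, something like (uncompiled, names made up):

	#define PROXY_STOP	((struct mutex *)(-2L))
	#define BO_RUNNABLE_BIT	1UL

	/* The mutex we're blocked on, without the 'runnable' tag bit */
	static inline struct mutex *__blocked_on_mutex(struct task_struct *p)
	{
		return (struct mutex *)((unsigned long)p->blocked_on & ~BO_RUNNABLE_BIT);
	}

	/* Keep the ->blocked_on relation, but let the task run */
	static inline void __blocked_on_mark_runnable(struct task_struct *p)
	{
		p->blocked_on = (struct mutex *)((unsigned long)p->blocked_on | BO_RUNNABLE_BIT);
	}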
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by John Stultz 3 months, 3 weeks ago
On Thu, Oct 9, 2025 at 4:43 AM Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, Oct 08, 2025 at 05:07:26PM -0700, John Stultz wrote:
>
> > > I'm thinking that suggesting didn't work out somehow, but I'm still not
> > > sure I understand why.
> >
> > So the main issue is about where we end up setting the task to
> > BO_WAKING (via set_blocked_on_waking()). This is done in
> > __mutex_unlock_slowpath(), __ww_mutex_die(), and __ww_mutex_wound().
> > And in those cases, we are already holding the mutex->wait_lock, and
> > sometimes a task's blocked_lock, without the rq lock.  So we can't
> > just grab the rq lock out of order, and we probably shouldn't drop and
> > try to reacquire the blocked_lock and wait_lock there.
>
> Oh bugger. In my head the scheduler locks still nest inside wait_lock,
> but we've flipped that such that schedule() / find_proxy_task() can take
> it inside rq->lock.
>
> Yes that does complicate things.
>
> So suppose we have this ww_mutex cycle thing:
>
>                   ,-+-* Mutex-1 <-.
>         Task-A ---' |             | ,-- Task-B
>                     `-> Mutex-2 *-+-'
>
> Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and
> where Task-B holds Mutex-2 and tries to acquire Mutex-1.
>
> Then the blocked_on->owner chain will go in circles.
>
>         Task-A  -> Mutex-2
>           ^          |
>           |          v
>         Mutex-1 <- Task-B
>
> We need two things:
>
>  - find_proxy_task() to stop iterating the circle;
>
>  - the woken task to 'unblock' and run, such that it can back-off and
>    re-try the transaction.
>
>
> Now, the current code does:
>
>         __clear_task_blocked_on();
>         wake_q_add();
>
> And surely clearing ->blocked_on is sufficient to break the cycle.
>
> Suppose it is Task-B that is made to back-off, then we have:
>
>   Task-A -> Mutex-2 -> Task-B (no further blocked_on)
>
> and it would attempt to run Task-B. Or worse, it could directly pick
> Task-B and run it, without ever getting into find_proxy_task().
>
> Now, here is a problem because Task-B might not be runnable on the CPU
> it is currently on; and because !task_is_blocked() we don't get into the
> proxy paths, so nobody is going to fix this up.
>
> Ideally we would have dequeued Task-B alongside of clearing
> ->blocked_on, but alas, lock inversion spoils things.

Right. Thus my adding of the blocked_on_state to try to gate the task
from running until we evaluate it for return migration.

> > Though, one approach that I just thought of would be to have a special
> > wake_up_q call, which would handle dequeuing the blocked_on tasks on
> > the wake_q before doing the wakeup?  I can give that a try.
>
> I think this is racy worse than you considered. CPU1 could be inside
> schedule() trying to pick Task-B while CPU2 does that wound/die thing.
> No spurious wakeup required.

Yeah. I took a bit of a try at it, but couldn't manage to rework
things without preserving the BO_WAKING guard.

And trying to do the dequeue in wake_up_q() really isn't that far
from just doing it in ttwu() a little deeper in the call stack, as we
still have to take task_rq_lock() to call block_task().

> Anyway, since the actual value of ->blocked_on doesn't matter in this
> case (we really want it to be NULL, but can't because we need someone to
> go back migrate the thing), why not simply use something like:
>
> #define PROXY_STOP ((struct mutex *)(-1L))
>
>         __set_task_blocked_on(task, PROXY_STOP);
>
> Then, have find_proxy_task() fix it up?

Ok, so this sounds like it sort of matches the BO_WAKING state I
currently have (replacing the BO_WAKING state with PROXY_STOP). Not
much of a logic change, but would indeed save a bit of space.
I'll take a stab at it.

> Random thoughts:
>
>  - we should probably have something like:
>
>         next = pick_next_task();
>         rq_set_donor(next)
>         if (unlikely(task_is_blocked()) {
>                 ...
>         }
> +       WARN_ON_ONCE(next->__state);
>
>    at all times the task we end up picking should be in RUNNABLE state.
>
>  - similarly, we should have ttwu() check ->blocked_on is NULL ||
>    PROXY_STOP, waking a task that still has a blocked_on relation can't
>    be right -- ooh, dang race conditions :/ perhaps DEBUG_MUTEX and
>    serialize on wait_lock.
>
>  - I'm confliced on having TTWU fix up PROXY_STOP; strictly not required
>    I think, but might improve performance -- if so, include numbers in
>    patch that adds it -- which should be a separate patch from the one
>    that adds PROXY_STOP.

Ok, I'll work to split that logic out. The nice thing in ttwu is we
already end up taking the rq lock in ttwu_runnable() when we do the
dequeue so yeah I expect it would help performance.

>  - since find_proxy_task() can do a lock-break, it should probably
>    re-try the pick if, at the end, a higher runqueue is modified than
>    the task we ended up with.

So, I think find_proxy_task() will always pick-again if it releases
the rqlock.  So I'm not sure I'm quite following this bit. Could you
clarify?

>    Also see this thread:
>
>       https://lkml.kernel.org/r/20251006105453.522934521@infradead.org
>
>    eg. something like:
>
>         rq->queue_mask = 0;
>         // code with rq-lock-break
>         if (rq_modified_above(rq, next->sched_class))
>                 return NULL;
>
>
> I'm still confused on BO_RUNNABLE -- you set that around
> optimistic-spin, probably because you want to retain the ->blocked_on
> relation, but also you have to run that thing to make progress. There
> are a few other sites that use it, but those are more confusing still.

Mostly I liked managing the blocked_on_state separately from the
blocked_on pointer, as I found it simplified my thinking on the mutex
lock side for cases where we wake up and then loop again. But let me
take a pass at reworking it a bit as you suggest and see how it goes.

> Please try and clarify this.

Will try to add more comments to explain.

> Anyway, if that is indeed it, you could do this by (ab)using the LSB of
> the ->blocked_on pointer I suppose (you could make PROXY_STOP -2).

One complication with using the LSB of the pointer: Suleiman was
thinking about stealing those bits to extend the blocked_on pointer
for use with other lock types (rw_sem in his case).
Currently he's got it in a structure with an enum:
  https://github.com/johnstultz-work/linux-dev/commit/e61b487d240782302199f6dc1d99851c3449b547

We talked a little about potentially squishing that together, but it
sort of depends on how many locking primitives we end up using
proxy-exec with.

As always, thanks again for all the feedback, I really appreciate it!
-john
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by John Stultz 3 months, 3 weeks ago
On Mon, Oct 13, 2025 at 7:43 PM John Stultz <jstultz@google.com> wrote:
> On Thu, Oct 9, 2025 at 4:43 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >  - I'm confliced on having TTWU fix up PROXY_STOP; strictly not required
> >    I think, but might improve performance -- if so, include numbers in
> >    patch that adds it -- which should be a separate patch from the one
> >    that adds PROXY_STOP.
>
> Ok, I'll work to split that logic out. The nice thing in ttwu is we
> already end up taking the rq lock in ttwu_runnable() when we do the
> dequeue so yeah I expect it would help performance.

So, I thought this wouldn't be hard, but it turns out there's some
subtlety in trying to separate out the ttwu changes.

First, I am using PROXY_WAKING instead of PROXY_STOP, since it seemed
clearer and better aligned with my previous mental model of BO_WAKING.

One of the issues is when we go through the:
  mutex_unlock_slowpath()/ww_mutex_die()/ww_mutex_wound()
  ->  tsk->blocked_on = PROXY_WAKING
      wake_q_add(tsk)
      ...
      wake_up_q()
      ->  wake_up_process()

The wake_up_process() call through try_to_wake_up() will hit the
ttwu_runnable() case and set the task state to TASK_RUNNING.

Then on the cpu where that task is enqueued:
  __schedule()
  -> find_proxy_task()
     -> if (p->blocked_on == PROXY_WAKING)
           proxy_force_return(rq, p);

In v22, proxy_force_return() logic would block_task(p),
clear_task_blocked_on(p) and then call wake_up_process(p).
https://github.com/johnstultz-work/linux-dev/blob/proxy-exec-v22-6.17-rc6/kernel/sched/core.c#L7117

However, since the task state has already been set to TASK_RUNNING,
the second wakeup ends up short-circuiting at ttwu_state_match(), and
the now blocked task would end up left dequeued forever.

So, I've reworked proxy_force_return() to be sort of an open-coded
try_to_wake_up(): it calls select_task_rq() to pick the return cpu and
then basically deactivates/activates the task to migrate it over.
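
The core of it ends up looking roughly like this (heavily simplified;
the target-rq locking and the blocked_on fixup are elided):

	/* Open coded-ish ttwu: pick a cpu the task is actually allowed on */
	cpu = select_task_rq(p, task_cpu(p), WF_TTWU);
	if (cpu != cpu_of(rq)) {
		/* Migrate p over rather than waking it a second time */
		deactivate_task(rq, p, DEQUEUE_NOCLOCK);
		set_task_cpu(p, cpu);
		/* lock the target rq, activate_task() and check preemption there */
	}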
It was nice to reuse block_task() and wake_up_process() previously,
but that wake/block/wake behavior tripping into the dequeued-forever
issue makes me worry it could also have been hit in rare cases with my
earlier series (despite having a check after ttwu_state_match() for
this case). So either I'll keep this approach, or maybe we should add
some extra checking in ttwu_state_match() for on_rq before bailing?
Let me know if you have thoughts there.

Hopefully will have the patches cleaned up and out again soon.

thanks
-john
Re: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration
Posted by Peter Zijlstra 4 months ago
On Thu, Oct 09, 2025 at 01:43:02PM +0200, Peter Zijlstra wrote:
>  - we should probably have something like:
> 
> 	next = pick_next_task();
> 	rq_set_donor(next)
> 	if (unlikely(task_is_blocked()) {
> 		...
> 	}
> +	WARN_ON_ONCE(next->__state);
> 
>    at all times the task we end up picking should be in RUNNABLE state.

Pfff.. PREEMPT won't like that. Ignore this.