From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:42 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-2-jstultz@google.com>
Subject: [PATCH v23 1/9] locking: Add task::blocked_lock to serialize blocked_on state
From: John Stultz
To: LKML
Cc: John Stultz, K Prateek Nayak, Joel Fernandes, Qais Yousef, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
    Mel Gorman, Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney",
    Metin Kaya, Xuewen Yan, Thomas Gleixner, Daniel Lezcano,
    Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com

So far, we have been able to use the mutex::wait_lock to serialize the
blocked_on state, but when we move to proxying across runqueues, we
will need to add more state and a way to serialize changes to this
state in contexts where we don't hold the mutex::wait_lock.

So introduce the task::blocked_lock, which nests under the
mutex::wait_lock in the locking order, and rework the locking to
use it.
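As a quick orientation, here is an illustrative sketch (not part of the
patch itself) of the nesting order this change establishes: blocked_lock
nests under wait_lock, so a path already holding wait_lock may take
blocked_lock, never the reverse:

	/*
	 * Illustrative sketch only: blocked_on writes are now
	 * serialized by the inner task::blocked_lock.
	 */
	raw_spin_lock_irqsave(&lock->wait_lock, flags);	/* outer lock */
	raw_spin_lock(&current->blocked_lock);		/* inner lock */
	__set_task_blocked_on(current, lock);		/* blocked_on write */
	raw_spin_unlock(&current->blocked_lock);
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);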
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- include/linux/sched.h | 48 +++++++++++++----------------------- init/init_task.c | 1 + kernel/fork.c | 1 + kernel/locking/mutex-debug.c | 4 +-- kernel/locking/mutex.c | 40 +++++++++++++++++++----------- kernel/locking/mutex.h | 6 +++++ kernel/locking/ww_mutex.h | 4 +-- kernel/sched/core.c | 4 ++- 8 files changed, 58 insertions(+), 50 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index cbb7340c5866f..16122c2a2a224 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1241,6 +1241,7 @@ struct task_struct { #endif =20 struct mutex *blocked_on; /* lock we're blocked on */ + raw_spinlock_t blocked_lock; =20 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER /* @@ -2149,57 +2150,42 @@ extern int __cond_resched_rwlock_write(rwlock_t *lo= ck); #ifndef CONFIG_PREEMPT_RT static inline struct mutex *__get_task_blocked_on(struct task_struct *p) { - struct mutex *m =3D p->blocked_on; - - if (m) - lockdep_assert_held_once(&m->wait_lock); - return m; + lockdep_assert_held_once(&p->blocked_lock); + return p->blocked_on; } =20 static inline void __set_task_blocked_on(struct task_struct *p, struct mut= ex *m) { - struct mutex *blocked_on =3D READ_ONCE(p->blocked_on); - WARN_ON_ONCE(!m); /* The task should only be setting itself as blocked */ WARN_ON_ONCE(p !=3D current); - /* Currently we serialize blocked_on under the mutex::wait_lock */ - lockdep_assert_held_once(&m->wait_lock); + /* Currently we serialize blocked_on under the task::blocked_lock */ + lockdep_assert_held_once(&p->blocked_lock); /* * Check ensure we don't overwrite existing mutex value * with a different mutex. Note, setting it to the same * lock repeatedly is ok. */ - WARN_ON_ONCE(blocked_on && blocked_on !=3D m); - WRITE_ONCE(p->blocked_on, m); -} - -static inline void set_task_blocked_on(struct task_struct *p, struct mutex= *m) -{ - guard(raw_spinlock_irqsave)(&m->wait_lock); - __set_task_blocked_on(p, m); + WARN_ON_ONCE(p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D m; } =20 static inline void __clear_task_blocked_on(struct task_struct *p, struct m= utex *m) { - if (m) { - struct mutex *blocked_on =3D READ_ONCE(p->blocked_on); - - /* Currently we serialize blocked_on under the mutex::wait_lock */ - lockdep_assert_held_once(&m->wait_lock); - /* - * There may be cases where we re-clear already cleared - * blocked_on relationships, but make sure we are not - * clearing the relationship with a different lock. - */ - WARN_ON_ONCE(blocked_on && blocked_on !=3D m); - } - WRITE_ONCE(p->blocked_on, NULL); + /* Currently we serialize blocked_on under the task::blocked_lock */ + lockdep_assert_held_once(&p->blocked_lock); + /* + * There may be cases where we re-clear already cleared + * blocked_on relationships, but make sure we are not + * clearing the relationship with a different lock. 
+ */ + WARN_ON_ONCE(m && p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D NULL; } =20 static inline void clear_task_blocked_on(struct task_struct *p, struct mut= ex *m) { - guard(raw_spinlock_irqsave)(&m->wait_lock); + guard(raw_spinlock_irqsave)(&p->blocked_lock); __clear_task_blocked_on(p, m); } #else diff --git a/init/init_task.c b/init/init_task.c index a55e2189206fa..60477d74546e0 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -143,6 +143,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = =3D { .journal_info =3D NULL, INIT_CPU_TIMERS(init_task) .pi_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock), + .blocked_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock), .timer_slack_ns =3D 50000, /* 50 usec default slack */ .thread_pid =3D &init_struct_pid, .thread_node =3D LIST_HEAD_INIT(init_signals.thread_head), diff --git a/kernel/fork.c b/kernel/fork.c index 3da0f08615a95..0697084be202f 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2038,6 +2038,7 @@ __latent_entropy struct task_struct *copy_process( ftrace_graph_init_task(p); =20 rt_mutex_init_task(p); + raw_spin_lock_init(&p->blocked_lock); =20 lockdep_assert_irqs_enabled(); #ifdef CONFIG_PROVE_LOCKING diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index 949103fd8e9b5..1d8cff71f65e1 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -54,13 +54,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct = mutex_waiter *waiter, lockdep_assert_held(&lock->wait_lock); =20 /* Current thread can't be already blocked (since it's executing!) */ - DEBUG_LOCKS_WARN_ON(__get_task_blocked_on(task)); + DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task)); } =20 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *wa= iter, struct task_struct *task) { - struct mutex *blocked_on =3D __get_task_blocked_on(task); + struct mutex *blocked_on =3D get_task_blocked_on(task); =20 DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list)); DEBUG_LOCKS_WARN_ON(waiter->task !=3D task); diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index de7d6702cd96c..c44fc63d4476e 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -640,6 +640,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err_early_kill; } =20 + raw_spin_lock(¤t->blocked_lock); __set_task_blocked_on(current, lock); set_current_state(state); trace_contention_begin(lock, LCB_F_MUTEX); @@ -653,8 +654,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas * the handoff. */ if (__mutex_trylock(lock)) - goto acquired; + break; =20 + raw_spin_unlock(¤t->blocked_lock); /* * Check for signals and kill conditions while holding * wait_lock. This ensures the lock cancellation is ordered @@ -677,12 +679,14 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas =20 first =3D __mutex_waiter_is_first(lock, &waiter); =20 + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); /* * As we likely have been woken up by task * that has cleared our blocked_on state, re-set * it to the lock we are trying to acquire. 
*/ - set_task_blocked_on(current, lock); + __set_task_blocked_on(current, lock); set_current_state(state); /* * Here we order against unlock; we must either see it change @@ -693,25 +697,33 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas break; =20 if (first) { - trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN); + bool opt_acquired; + /* * mutex_optimistic_spin() can call schedule(), so - * clear blocked on so we don't become unselectable + * we need to release these locks before calling it, + * and clear blocked on so we don't become unselectable * to run. */ - clear_task_blocked_on(current, lock); - if (mutex_optimistic_spin(lock, ww_ctx, &waiter)) + __clear_task_blocked_on(current, lock); + raw_spin_unlock(¤t->blocked_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); + + trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN); + opt_acquired =3D mutex_optimistic_spin(lock, ww_ctx, &waiter); + + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); + __set_task_blocked_on(current, lock); + + if (opt_acquired) break; - set_task_blocked_on(current, lock); trace_contention_begin(lock, LCB_F_MUTEX); } - - raw_spin_lock_irqsave(&lock->wait_lock, flags); } - raw_spin_lock_irqsave(&lock->wait_lock, flags); -acquired: __clear_task_blocked_on(current, lock); __set_current_state(TASK_RUNNING); + raw_spin_unlock(¤t->blocked_lock); =20 if (ww_ctx) { /* @@ -740,11 +752,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas return 0; =20 err: - __clear_task_blocked_on(current, lock); + clear_task_blocked_on(current, lock); __set_current_state(TASK_RUNNING); __mutex_remove_waiter(lock, &waiter); err_early_kill: - WARN_ON(__get_task_blocked_on(current)); + WARN_ON(get_task_blocked_on(current)); trace_contention_end(lock, ret); raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q); debug_mutex_free_waiter(&waiter); @@ -955,7 +967,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne next =3D waiter->task; =20 debug_mutex_wake_waiter(lock, waiter); - __clear_task_blocked_on(next, lock); + clear_task_blocked_on(next, lock); wake_q_add(&wake_q, next); } =20 diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h index 2e8080a9bee37..5cfd663e2c011 100644 --- a/kernel/locking/mutex.h +++ b/kernel/locking/mutex.h @@ -47,6 +47,12 @@ static inline struct task_struct *__mutex_owner(struct m= utex *lock) return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLA= GS); } =20 +static inline struct mutex *get_task_blocked_on(struct task_struct *p) +{ + guard(raw_spinlock_irqsave)(&p->blocked_lock); + return __get_task_blocked_on(p); +} + #ifdef CONFIG_DEBUG_MUTEXES extern void debug_mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter); diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index 31a785afee6c0..e4a81790ea7dd 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -289,7 +289,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER = *waiter, * blocked_on pointer. Otherwise we can see circular * blocked_on relationships that can't resolve. */ - __clear_task_blocked_on(waiter->task, lock); + clear_task_blocked_on(waiter->task, lock); wake_q_add(wake_q, waiter->task); } =20 @@ -347,7 +347,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock, * are waking the mutex owner, who may be currently * blocked on a different mutex. 
*/ - __clear_task_blocked_on(owner, NULL); + clear_task_blocked_on(owner, NULL); wake_q_add(wake_q, owner); } return true; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index cb4f6d91d4455..517b26c515bc5 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6607,6 +6607,7 @@ static struct task_struct *proxy_deactivate(struct rq= *rq, struct task_struct *d * p->pi_lock * rq->lock * mutex->wait_lock + * p->blocked_lock * * Returns the task that is going to be used as execution context (the one * that is actually going to be run on cpu_of(rq)). @@ -6630,8 +6631,9 @@ find_proxy_task(struct rq *rq, struct task_struct *do= nor, struct rq_flags *rf) * and ensure @owner sticks around. */ guard(raw_spinlock)(&mutex->wait_lock); + guard(raw_spinlock)(&p->blocked_lock); =20 - /* Check again that p is blocked with wait_lock held */ + /* Check again that p is blocked with blocked_lock held */ if (mutex !=3D __get_task_blocked_on(p)) { /* * Something changed in the blocked_on chain and --=20 2.51.1.930.gacf6e81ea2-goog From nobody Sun Feb 8 02:24:40 2026 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 83A021C84C6 for ; Thu, 30 Oct 2025 00:19:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783548; cv=none; b=T0+DMe5pNTyLqnyOyMgTJwsob30qyOJn7R99ecE/j8RE1noVtMTGBvU2vOWYsGFfRF/DDcR2SPn3RYmhgGrFJ5RGobmz7bySBYeYzKinYqRAeY/VeHNdw2I7Xiuvu5S5CzM2BISPKJrj1vWqC6o32iW2/p2McNp9lOICZDEONM4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783548; c=relaxed/simple; bh=4lKVWMzCVAA+UKev1Y/i+yoP3fT7r+NXsqiPCaMN3lc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=dP/bzOyk2b7IybI0caHH83XJTSjlghvrzan3v0/6x+jldnH/hdvJZh5SaCPJhrx0cf43+C0FhkF3PUkc7t/IsEyDHU5EradfucaqITRjzEKsQr8XSugerS+Y8lFTSNlOHi8BoSFVvQ5Zyd4wOOCEcfibtXzEkDtv4e5nEt7TgRY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=EGk3xee5; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="EGk3xee5" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-33428befc08so962101a91.2 for ; Wed, 29 Oct 2025 17:19:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1761783546; x=1762388346; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=3VbL2RLeOjLSUJQM2/KV7qhgh5vfmiQYmlgBlL4D9dM=; b=EGk3xee5J43B3C9oschtIjFyWGF5jLrh6Yhi9328YYYV55OJPpQFOIFKhcdFJNdaFf KKCoiEViIHU5E7OUsHLDZL0R5eeSiyUqYVeWnOlqsv02XEFgjSSwdV2q9hJSwpOd5Ghr eenADrwIkGlO49ficckSHQmBoP17HvSpsUqvoJukrSpXEarVrd056darpM1+l3Tm5tf5 UxjPjNhfrq/zSXuKZ4mnYf3K3qSTr2efo8gTKuqZZ+H6z7RqE+MoXoDwev5ld9Ojc2wT 
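A note on the __mutex_lock_common() rework in the patch above: because
mutex_optimistic_spin() can call schedule(), both locks must be dropped
(inner blocked_lock first, then wait_lock) around that call and retaken
afterwards in lock order. Stripped of the surrounding loop, the shape is
roughly this (an illustrative sketch, not a copy of the final code):

	__clear_task_blocked_on(current, lock);	/* don't look blocked while spinning */
	raw_spin_unlock(&current->blocked_lock);	/* release inner lock first... */
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);	/* ...then outer */

	opt_acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);	/* may schedule() */

	raw_spin_lock_irqsave(&lock->wait_lock, flags);	/* retake in lock order */
	raw_spin_lock(&current->blocked_lock);
	__set_task_blocked_on(current, lock);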
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:43 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-3-jstultz@google.com>
Subject: [PATCH v23 2/9] sched: Fix modifying donor->blocked_on without proper locking
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
    Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
    K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
    kuyo chang, hupu, kernel-team@android.com

Introduce an action enum in find_proxy_task() which allows us to
handle work that needs to be done outside the mutex.wait_lock and
task.blocked_lock guard scopes. This ensures proper locking when we
clear the donor's blocked_on pointer in proxy_deactivate(), and the
switch statement will be useful as we add more cases to handle later
in this series.
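The shape of the fix is a common one when using guard()-style scoped
locks: record what needs doing while the locks are held, then act after
the scope closes. A generic sketch (illustrative condition, names from
the patch below):

	enum { FOUND, DEACTIVATE_DONOR } action = FOUND;

	{
		guard(raw_spinlock)(&mutex->wait_lock);
		guard(raw_spinlock)(&p->blocked_lock);

		if (!task_is_blocked(p))	/* illustrative condition */
			action = DEACTIVATE_DONOR;
	}	/* both guards drop here, in reverse order */

	/* now act with no spinlocks held */
	switch (action) {
	case DEACTIVATE_DONOR:
		return proxy_deactivate(rq, donor);
	case FOUND:
		break;
	}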
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- kernel/sched/core.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 517b26c515bc5..0533a14ce5935 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6591,7 +6591,7 @@ static struct task_struct *proxy_deactivate(struct rq= *rq, struct task_struct *d * as unblocked, as we aren't doing proxy-migrations * yet (more logic will be needed then). */ - donor->blocked_on =3D NULL; + clear_task_blocked_on(donor, NULL); } return NULL; } @@ -6619,6 +6619,7 @@ find_proxy_task(struct rq *rq, struct task_struct *do= nor, struct rq_flags *rf) int this_cpu =3D cpu_of(rq); struct task_struct *p; struct mutex *mutex; + enum { FOUND, DEACTIVATE_DONOR } action =3D FOUND; =20 /* Follow blocked_on chain. */ for (p =3D donor; task_is_blocked(p); p =3D owner) { @@ -6652,12 +6653,14 @@ find_proxy_task(struct rq *rq, struct task_struct *= donor, struct rq_flags *rf) =20 if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) { /* XXX Don't handle blocked owners/delayed dequeue yet */ - return proxy_deactivate(rq, donor); + action =3D DEACTIVATE_DONOR; + break; } =20 if (task_cpu(owner) !=3D this_cpu) { /* XXX Don't handle migrations yet */ - return proxy_deactivate(rq, donor); + action =3D DEACTIVATE_DONOR; + break; } =20 if (task_on_rq_migrating(owner)) { @@ -6715,6 +6718,13 @@ find_proxy_task(struct rq *rq, struct task_struct *d= onor, struct rq_flags *rf) */ } =20 + /* Handle actions we need to do outside of the guard() scope */ + switch (action) { + case DEACTIVATE_DONOR: + return proxy_deactivate(rq, donor); + case FOUND: + /* fallthrough */; + } WARN_ON_ONCE(owner && !owner->on_rq); return owner; } --=20 2.51.1.930.gacf6e81ea2-goog From nobody Sun Feb 8 02:24:40 2026 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 365A51DF742 for ; Thu, 30 Oct 2025 00:19:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783551; cv=none; b=upCvBS7v+U5didqi8Ngki9YQ/annMDXMr9AQ50wbA2Zt3qc8EUKDbNsNdFTZUJuFDDrScyubGYtS+VxD0XE8Uot/BaUP9268SaLNt9c8n2KrG+CXYzhle4QoPCAVZXQ9HQ1IfsVdX21FLjTAsXXjTNy0haMcf4pw9P6M5bLt7i8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783551; c=relaxed/simple; bh=Bit341BKq1vucYs9RxadFacWVU44IGEp8BA6kj8RRhI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=RkOTuIPZQbc7ZAOwiKnEemH0UGMOzrdazE7rCsJiC7P+oWuxHP7pJ1HN65yZl5QOpxo9+i8D5Xu6lwtm632tD6XNm8CDQskZaBTZLVCCH8Q2tAD2exjNTqZEf31c2nHfERFNl8faJKIBcRpTFkRqf7ZELEpw+kyTlCVWBD8JeSQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Z0sIeMrl; arc=none smtp.client-ip=209.85.215.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: 
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:44 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-4-jstultz@google.com>
Subject: [PATCH v23 3/9] sched/locking: Add special p->blocked_on==PROXY_WAKING value for proxy return-migration
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
    Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
    K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
    kuyo chang, hupu, kernel-team@android.com

As we add functionality to proxy execution, we may migrate a donor
task to a runqueue where it can't run due to cpu affinity. Thus, we
must be careful to ensure we return-migrate the task back to a cpu
in its cpumask when it becomes unblocked.
Peter helpfully provided the following example with pictures:

 "Suppose we have a ww_mutex cycle:

        ,-+-* Mutex-1 <-.
  Task-A ---'           |
    |              ,-- Task-B
    `-> Mutex-2 *-+-'

  Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and where
  Task-B holds Mutex-2 and tries to acquire Mutex-1. Then the
  blocked_on->owner chain will go in circles.

    Task-A -> Mutex-2
      ^          |
      |          v
    Mutex-1 <- Task-B

  We need two things:

   - find_proxy_task() to stop iterating the circle;

   - the woken task to 'unblock' and run, such that it can back-off
     and re-try the transaction.

  Now, the current code [without this patch] does:

    __clear_task_blocked_on();
    wake_q_add();

  And surely clearing ->blocked_on is sufficient to break the cycle.
  Suppose it is Task-B that is made to back-off, then we have:

    Task-A -> Mutex-2 -> Task-B (no further blocked_on)

  and it would attempt to run Task-B. Or worse, it could directly pick
  Task-B and run it, without ever getting into find_proxy_task().

  Now, here is a problem because Task-B might not be runnable on the
  CPU it is currently on; and because !task_is_blocked() we don't get
  into the proxy paths, so nobody is going to fix this up.

  Ideally we would have dequeued Task-B alongside of clearing
  ->blocked_on, but alas, [the lock ordering prevents us from getting
  the task_rq_lock() and] spoils things."

Thus we need more than just a binary concept of the task being blocked
on a mutex or not. So allow setting blocked_on to PROXY_WAKING as a
special value which specifies the task is no longer blocked, but needs
to be evaluated for return migration *before* it can be run. This will
then be used in a later patch to handle proxy return-migration.
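In code terms, the scheme boils down to a sentinel pointer value; the
sketch below simply mirrors the definitions in the diff that follows:

	/* Not a real mutex address: "woken, but check return-migration first" */
	#define PROXY_WAKING ((struct mutex *)(-1L))

	/* Wakers mark the task waking instead of fully clearing blocked_on */
	p->blocked_on = PROXY_WAKING;

	/* Readers treat a PROXY_WAKING task as not blocked on any mutex */
	return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;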
Signed-off-by: John Stultz
Reviewed-by: K Prateek Nayak
---
v15:
* Split blocked_on_state into its own patch later in the series, as
  the tri-state isn't necessary until we deal with proxy/return
  migrations
v16:
* Handle case where task in the chain is being set as BO_WAKING by
  another cpu (usually via ww_mutex die code). Make sure we release
  the rq lock so the wakeup can complete.
* Rework to use guard() in find_proxy_task() as suggested by Peter
v18:
* Add initialization of blocked_on_state for init_task
v19:
* PREEMPT_RT build fixups and rework suggested by K Prateek Nayak
v20:
* Simplify one of the blocked_on_state changes to avoid extra
  PREEMPT_RT conditionals
v21:
* Slight reworks due to avoiding nested blocked_lock locking
* Be consistent in use of blocked_on_state helper functions
* Rework calls to proxy_deactivate() to do proper locking around
  blocked_on_state changes that we were cheating in previous versions.
* Minor cleanups, some comment improvements
v22:
* Re-order blocked_on_state helpers to try to make it clearer that
  set_task_blocked_on() and clear_task_blocked_on() are the main
  enter/exit states and the blocked_on_state helpers help manage the
  transition states within. Per feedback from K Prateek Nayak.
* Rework blocked_on_state to be defined within CONFIG_SCHED_PROXY_EXEC
  as suggested by K Prateek Nayak.
* Reworked empty stub functions to just take one line as suggested by
  K Prateek
* Avoid using gotos out of a guard() scope, as highlighted by
  K Prateek, and instead rework logic to break and switch() on an
  action value.
v23:
* Big rework to using PROXY_WAKING instead of blocked_on_state as
  suggested by Peter.
* Reworked commit message to include Peter's nice diagrams and example
  for why this extra state is necessary.
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 include/linux/sched.h     | 51 +++++++++++++++++++++++++++++++++++++--
 kernel/locking/mutex.c    |  2 +-
 kernel/locking/ww_mutex.h | 16 ++++++------
 kernel/sched/core.c       | 17 +++++++++++++
 4 files changed, 75 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 16122c2a2a224..863c54685684c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2148,10 +2148,20 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
 })

 #ifndef CONFIG_PREEMPT_RT
+
+/*
+ * With proxy exec, if a task has been proxy-migrated, it may be a donor
+ * on a cpu that it can't actually run on. Thus we need a special state
+ * to denote that the task is being woken, but that it needs to be
+ * evaluated for return-migration before it is run. So if the task is
+ * blocked_on PROXY_WAKING, return migrate it before running it.
+ */
+#define PROXY_WAKING ((struct mutex *)(-1L))
+
 static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
 {
 	lockdep_assert_held_once(&p->blocked_lock);
-	return p->blocked_on;
+	return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;
 }

 static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
@@ -2179,7 +2189,7 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
 	 * blocked_on relationships, but make sure we are not
 	 * clearing the relationship with a different lock.
 	 */
-	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
+	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
 	p->blocked_on = NULL;
 }

@@ -2188,6 +2198,35 @@ static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
 	guard(raw_spinlock_irqsave)(&p->blocked_lock);
 	__clear_task_blocked_on(p, m);
 }
+
+static inline void __set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
+{
+	/* Currently we serialize blocked_on under the task::blocked_lock */
+	lockdep_assert_held_once(&p->blocked_lock);
+
+	if (!sched_proxy_exec()) {
+		__clear_task_blocked_on(p, m);
+		return;
+	}
+
+	/* Don't set PROXY_WAKING if blocked_on was already cleared */
+	if (!p->blocked_on)
+		return;
+	/*
+	 * There may be cases where we set PROXY_WAKING on tasks that were
+	 * already set to waking, but make sure we are not changing
+	 * the relationship with a different lock.
+	 */
+	WARN_ON_ONCE(m && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
+	p->blocked_on = PROXY_WAKING;
+}
+
+static inline void set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
+{
+	guard(raw_spinlock_irqsave)(&p->blocked_lock);
+	__set_task_blocked_on_waking(p, m);
+}
+
 #else
 static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
@@ -2196,6 +2235,14 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mute
 static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
 }
+
+static inline void __set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
+{
+}
+
+static inline void set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
+{
+}
 #endif /* !CONFIG_PREEMPT_RT */

 static __always_inline bool need_resched(void)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c44fc63d4476e..3cb9001d15119 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -967,7 +967,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		next = waiter->task;

 		debug_mutex_wake_waiter(lock, waiter);
-		clear_task_blocked_on(next, lock);
+		set_task_blocked_on_waking(next, lock);
 		wake_q_add(&wake_q, next);
 	}

diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index e4a81790ea7dd..5cd9dfa4b31e6 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 	debug_mutex_wake_waiter(lock, waiter);
 #endif
 	/*
-	 * When waking up the task to die, be sure to clear the
-	 * blocked_on pointer. Otherwise we can see circular
-	 * blocked_on relationships that can't resolve.
+	 * When waking up the task to die, be sure to set the
+	 * blocked_on to PROXY_WAKING. Otherwise we can see
+	 * circular blocked_on relationships that can't resolve.
 	 */
-	clear_task_blocked_on(waiter->task, lock);
+	set_task_blocked_on_waking(waiter->task, lock);
 	wake_q_add(wake_q, waiter->task);
 }

@@ -339,15 +339,15 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 	 */
 	if (owner != current) {
 		/*
-		 * When waking up the task to wound, be sure to clear the
-		 * blocked_on pointer. Otherwise we can see circular
-		 * blocked_on relationships that can't resolve.
+		 * When waking up the task to wound, be sure to set the
+		 * blocked_on to PROXY_WAKING. Otherwise we can see
+		 * circular blocked_on relationships that can't resolve.
 		 *
 		 * NOTE: We pass NULL here instead of lock, because we
 		 * are waking the mutex owner, who may be currently
 		 * blocked on a different mutex.
 		 */
-		clear_task_blocked_on(owner, NULL);
+		set_task_blocked_on_waking(owner, NULL);
 		wake_q_add(wake_q, owner);
 	}
 	return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0533a14ce5935..da6dd2fc8e705 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4293,6 +4293,13 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		ttwu_queue(p, cpu, wake_flags);
 	}
 out:
+	/*
+	 * For now, if we've been woken up, clear the task->blocked_on
+	 * regardless if it was set to a mutex or PROXY_WAKING so the
+	 * task can run. We will need to be more careful later when
+	 * properly handling proxy migration
+	 */
+	clear_task_blocked_on(p, NULL);
 	if (success)
 		ttwu_stat(p, task_cpu(p), wake_flags);

@@ -6627,6 +6634,11 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		/* Something changed in the chain, so pick again */
 		if (!mutex)
 			return NULL;
+
+		/* if its PROXY_WAKING, resched_idle so ttwu can complete */
+		if (mutex == PROXY_WAKING)
+			return proxy_resched_idle(rq);
+
 		/*
 		 * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
 		 * and ensure @owner sticks around.
@@ -6647,6 +6659,11 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)

 		owner = __mutex_owner(mutex);
 		if (!owner) {
+			/*
+			 * If there is no owner, clear blocked_on
+			 * and return p so it can run and try to
+			 * acquire the lock
+			 */
 			__clear_task_blocked_on(p, mutex);
 			return p;
 		}
-- 
2.51.1.930.gacf6e81ea2-goog
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:45 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-5-jstultz@google.com>
Subject: [PATCH v23 4/9] sched: Add assert_balance_callbacks_empty helper
From: John Stultz
To: LKML
Cc: John Stultz, Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
    Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
    K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
    kuyo chang, hupu, kernel-team@android.com

With proxy-exec utilizing pick-again logic, we can end up having
balance callbacks set by the previous pick_next_task() call left on
the list. So pull the warning out into a helper function, and make
sure we check it when we pick again.
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- kernel/sched/core.c | 1 + kernel/sched/sched.h | 11 ++++++++++- 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index da6dd2fc8e705..680ff147d270d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6896,6 +6896,7 @@ static void __sched notrace __schedule(int sched_mode) } =20 pick_again: + assert_balance_callbacks_empty(rq); next =3D pick_next_task(rq, rq->donor, &rf); rq_set_donor(rq, next); if (unlikely(task_is_blocked(next))) { diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 361f9101cef97..de77b3313ab18 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1779,6 +1779,15 @@ static inline void scx_rq_clock_update(struct rq *rq= , u64 clock) {} static inline void scx_rq_clock_invalidate(struct rq *rq) {} #endif /* !CONFIG_SCHED_CLASS_EXT */ =20 +#ifdef CONFIG_PROVE_LOCKING +static inline void assert_balance_callbacks_empty(struct rq *rq) +{ + WARN_ON_ONCE(rq->balance_callback && rq->balance_callback !=3D &balance_p= ush_callback); +} +#else +static inline void assert_balance_callbacks_empty(struct rq *rq) {} +#endif + /* * Lockdep annotation that avoids accidental unlocks; it's like a * sticky/continuous lockdep_assert_held(). @@ -1795,7 +1804,7 @@ static inline void rq_pin_lock(struct rq *rq, struct = rq_flags *rf) =20 rq->clock_update_flags &=3D (RQCF_REQ_SKIP|RQCF_ACT_SKIP); rf->clock_update_flags =3D 0; - WARN_ON_ONCE(rq->balance_callback && rq->balance_callback !=3D &balance_p= ush_callback); + assert_balance_callbacks_empty(rq); } =20 static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf) --=20 2.51.1.930.gacf6e81ea2-goog From nobody Sun Feb 8 02:24:40 2026 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C5F5D1ADC7E for ; Thu, 30 Oct 2025 00:19:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783552; cv=none; b=rHVSBikXCgbEoXKKpc78YpWEn4ky42HAcTa/4lWbGheH+hJkCXsKVHk4JGWnZq/ObTUldKRpVwb3fxO965/OC2+KpM2zdUisJR5lhDxjy5fiXWiWNlxOFvkcNW60rYzuRMr4yrbJeLPgqgqQz8cSeVc9dVsXL4a2XoHsv9Qe2EE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761783552; c=relaxed/simple; bh=g+QujWJnUwHRhuZe5J7seS3AUGCqreuCnAqkj/1MMkc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=N8vxROfJMxVEEuYaWNUBMbXGfwjqfD1lU+DyN8wyKc2snhM39AxvMhi5N3S4mbWpfKULaPy0ElvSReSc2NC385Zb6j0116TM/w3RJBSmZ3ik9/mmJtZKi6EbpzXVHNk4c/hP/weePi+SIQBsObOLOLbmfb18PstvXo9mI3et1pk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=0u05ljFc; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com 
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:46 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-6-jstultz@google.com>
Subject: [PATCH v23 5/9] sched: Add logic to zap balance callbacks if we pick again
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
    Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
    K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
    kuyo chang, hupu, kernel-team@android.com

With proxy-exec, a task is selected to run via pick_next_task(), and
then, if it is a mutex-blocked task, we call find_proxy_task() to find
a runnable owner. If the runnable owner is on another cpu, we will
need to migrate the selected donor task away, after which we jump back
to pick_again and call pick_next_task() to choose something else.
However, in the first call to pick_next_task(), we may have had a
balance_callback set up by the class scheduler. After we pick again,
it's possible pick_next_task_fair() will be called, which calls
sched_balance_newidle() and sched_balance_rq(). This will throw a
warning:

[    8.796467] rq->balance_callback && rq->balance_callback != &balance_push_callback
[    8.796467] WARNING: CPU: 32 PID: 458 at kernel/sched/sched.h:1750 sched_balance_rq+0xe92/0x1250
...
[    8.796467] Call Trace:
[    8.796467]
[    8.796467]  ? __warn.cold+0xb2/0x14e
[    8.796467]  ? sched_balance_rq+0xe92/0x1250
[    8.796467]  ? report_bug+0x107/0x1a0
[    8.796467]  ? handle_bug+0x54/0x90
[    8.796467]  ? exc_invalid_op+0x17/0x70
[    8.796467]  ? asm_exc_invalid_op+0x1a/0x20
[    8.796467]  ? sched_balance_rq+0xe92/0x1250
[    8.796467]  sched_balance_newidle+0x295/0x820
[    8.796467]  pick_next_task_fair+0x51/0x3f0
[    8.796467]  __schedule+0x23a/0x14b0
[    8.796467]  ? lock_release+0x16d/0x2e0
[    8.796467]  schedule+0x3d/0x150
[    8.796467]  worker_thread+0xb5/0x350
[    8.796467]  ? __pfx_worker_thread+0x10/0x10
[    8.796467]  kthread+0xee/0x120
[    8.796467]  ? __pfx_kthread+0x10/0x10
[    8.796467]  ret_from_fork+0x31/0x50
[    8.796467]  ? __pfx_kthread+0x10/0x10
[    8.796467]  ret_from_fork_asm+0x1a/0x30
[    8.796467]

This is because if an RT task was originally picked, it will set up
the rq->balance_callback with push_rt_tasks() via set_next_task_rt().
Once the task is migrated away and we pick again, we haven't processed
any balance callbacks, so rq->balance_callback is not in the same
state as it was the first time pick_next_task was called.

To handle this, add a zap_balance_callbacks() helper function which
cleans up the balance callbacks without running them. This should be
ok, as we are effectively undoing the state set in the first call to
pick_next_task(), and when we pick again, the new callback can be
configured for the donor task actually selected.
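For context on how such a callback ends up queued in the first place,
here is a paraphrase of the RT class path the changelog describes
(based on kernel/sched/rt.c; shown only to illustrate the mechanism,
not part of this patch):

	static inline void rt_queue_push_tasks(struct rq *rq)
	{
		if (!has_pushable_tasks(rq))
			return;

		/* leaves a callback on rq->balance_callback for later */
		queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu),
				       push_rt_tasks);
	}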
Signed-off-by: John Stultz
Reviewed-by: K Prateek Nayak
---
v20:
* Tweaked to avoid build issues with different configs
v22:
* Spelling fix suggested by K Prateek
* Collapsed the stub implementation to one line as suggested by
  K Prateek
* Zap callbacks when we resched idle, as suggested by K Prateek
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 kernel/sched/core.c | 41 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 680ff147d270d..ab6e14259bdf2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4970,6 +4970,38 @@ static inline void finish_task(struct task_struct *prev)
 	smp_store_release(&prev->on_cpu, 0);
 }

+#ifdef CONFIG_SCHED_PROXY_EXEC
+/*
+ * Only called from __schedule context
+ *
+ * There are some cases where we are going to re-do the action
+ * that added the balance callbacks. We may not be in a state
+ * where we can run them, so just zap them so they can be
+ * properly re-added on the next time around. This is similar
+ * handling to running the callbacks, except we just don't call
+ * them.
+ */
+static void zap_balance_callbacks(struct rq *rq)
+{
+	struct balance_callback *next, *head;
+	bool found = false;
+
+	lockdep_assert_rq_held(rq);
+
+	head = rq->balance_callback;
+	while (head) {
+		if (head == &balance_push_callback)
+			found = true;
+		next = head->next;
+		head->next = NULL;
+		head = next;
+	}
+	rq->balance_callback = found ? &balance_push_callback : NULL;
+}
+#else
+static inline void zap_balance_callbacks(struct rq *rq) {}
+#endif
+
 static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
 {
 	void (*func)(struct rq *rq);
@@ -6901,10 +6933,15 @@ static void __sched notrace __schedule(int sched_mode)
 	rq_set_donor(rq, next);
 	if (unlikely(task_is_blocked(next))) {
 		next = find_proxy_task(rq, next, &rf);
-		if (!next)
+		if (!next) {
+			/* zap the balance_callbacks before picking again */
+			zap_balance_callbacks(rq);
 			goto pick_again;
-		if (next == rq->idle)
+		}
+		if (next == rq->idle) {
+			zap_balance_callbacks(rq);
 			goto keep_resched;
+		}
 	}
 picked:
 	clear_tsk_need_resched(prev);
-- 
2.51.1.930.gacf6e81ea2-goog
Date: Thu, 30 Oct 2025 00:18:47 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-7-jstultz@google.com>
Subject: [PATCH v23 6/9] sched: Handle blocked-waiter migration (and return migration)
From: John Stultz
To: LKML

Add logic to handle migrating a blocked waiter to a remote cpu where
the lock owner is runnable.

Additionally, as the blocked task may not be able to run on the remote
cpu, add logic to handle return migration once the waiting task is
given the mutex.

Because tasks may get migrated to where they cannot run, also modify
the scheduling classes to avoid sched class migrations on
mutex-blocked tasks, leaving find_proxy_task() and related logic to do
the migrations and return migrations.

This was split out from the larger proxy patch, and significantly
reworked.
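As a condensed, illustrative sketch of the two new outcomes this adds to
find_proxy_task() (names match the diff below, but the real hunks handle
many more races and edge cases; locking is elided here):

    owner_cpu = task_cpu(owner);
    if (owner_cpu != this_cpu) {
            /* owner is runnable elsewhere: move the waiter to it */
            proxy_migrate_task(rq, rf, p, owner_cpu);       /* MIGRATE */
    } else if (p->blocked_on == PROXY_WAKING && !task_current(rq, p)) {
            /* waiter was granted the lock while off its wake_cpu */
            proxy_force_return(rq, rf, p);                  /* NEEDS_RETURN */
    }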
Credits for the original patch go to:
  Peter Zijlstra (Intel)
  Juri Lelli
  Valentin Schneider
  Connor O'Brien

Signed-off-by: John Stultz
---
v6:
* Integrated sched_proxy_exec() check in proxy_return_migration()
* Minor cleanups to diff
* Unpin the rq before calling __balance_callbacks()
* Tweak proxy migrate to migrate deeper task in chain, to avoid tasks
  ping-ponging between rqs
v7:
* Fixup for unused function arguments
* Switch from that_rq -> target_rq, other minor tweaks, and typo
  fixes suggested by Metin Kaya
* Switch back to doing return migration in the ttwu path, which
  avoids nasty lock juggling and performance issues
* Fixes for UP builds
v8:
* More simplifications from Metin Kaya
* Fixes for null owner case, including doing return migration
* Cleanup proxy_needs_return logic
v9:
* Narrow logic in ttwu that sets BO_RUNNABLE, to avoid missed
  return migrations
* Switch to using zap_balance_callbacks rather than running them
  when we are dropping rq locks for proxy migration.
* Drop task_is_blocked check in sched_submit_work as suggested
  by Metin (may re-add later if this causes trouble)
* Do return migration when we're not on wake_cpu. This avoids
  bad task placement caused by proxy migrations, raised by
  Xuewen Yan
* Fix to call set_next_task(rq->curr) prior to dropping rq lock
  to avoid rq->curr getting migrated before we have actually
  switched from it
* Cleanup to re-use proxy_resched_idle() instead of open coding
  it in proxy_migrate_task()
* Fix return migration not to use DEQUEUE_SLEEP, so that we
  properly see the task as task_on_rq_migrating() after it is
  dequeued but before set_task_cpu() has been called on it
* Fix to broaden find_proxy_task() checks to avoid race where a
  task is dequeued off the rq due to return migration, but
  set_task_cpu() and the enqueue on another rq happened after we
  checked task_cpu(owner). This ensures we don't proxy using a
  task that is not actually on our runqueue.
* Cleanup to avoid the locked BO_WAKING->BO_RUNNABLE transition
  in try_to_wake_up() if proxy execution isn't enabled.
* Cleanup to improve comment in proxy_migrate_task() explaining
  the set_next_task(rq->curr) logic
* Cleanup deadline.c change to stylistically match rt.c change
* Numerous cleanups suggested by Metin
v10:
* Drop WARN_ON(task_is_blocked(p)) in ttwu current case
v11:
* Include proxy_set_task_cpu from later in the series to this
  change so we can use it, rather than reworking logic later in
  the series.
* Fix problem with return migration, where affinity was changed
  and wake_cpu was left outside the affinity mask.
* Avoid reading the owner's cpu twice (as it might change in
  between) to avoid occasional migration-to-same-cpu edge cases
* Add extra WARN_ON checks for wake_cpu and return migration
  edge cases.
* Typo fix from Metin
v13:
* As we set ret, return it, not just NULL (pulling this change
  in from later patch)
* Avoid deadlock between try_to_wake_up() and find_proxy_task()
  when blocked_on cycle with ww_mutex is trying a mid-chain
  wakeup.
* Tweaks to use new __set_blocked_on_runnable() helper
* Potential fix for incorrectly updated task->dl_server issues
* Minor comment improvements
* Add logic to handle missed wakeups, in that case doing return
  migration from the find_proxy_task() path
* Minor cleanups
v14:
* Improve edge cases where we wouldn't set the task as BO_RUNNABLE
v15:
* Added comment to better describe proxy_needs_return() as
  suggested by Qais
* Build fixes for !CONFIG_SMP reported by Maciej Żenczykowski
* Add a fix for re-evaluating proxy_needs_return when
  sched_proxy_exec() is disabled, reported and diagnosed by
  kuyo chang
v16:
* Larger rework of needs_return logic in find_proxy_task, in
  order to avoid problems with CPU hotplug
* Rework to use guard() as suggested by Peter
v18:
* Integrate optimization suggested by Suleiman to do the checks
  for sleeping owners before checking if the task_cpu is this_cpu,
  so that we can avoid needlessly proxy-migrating tasks to only
  then dequeue them. Also check if migrating last.
* Improve comments around guard locking
* Include tweak to ttwu_runnable() as suggested by hupu
* Rework the logic releasing the rq->donor reference before
  letting go of the rq lock. Just use rq->idle.
* Go back to doing return migration on BO_WAKING owners, as I was
  hitting some softlockups caused by running tasks not making it
  out of BO_WAKING.
v19:
* Fixed proxy_force_return() logic for !SMP cases
v21:
* Reworked donor deactivation for unhandled sleeping owners
* Commit message tweaks
v22:
* Add comments around zap_balance_callbacks in proxy migration logic
* Rework logic to avoid gotos out of guard() scopes, and instead
  use break and switch() on action value, as suggested by K Prateek
* K Prateek suggested simplifications around putting donor and
  setting idle as next task in the migration paths, which I further
  simplified to using proxy_resched_idle()
* Comment improvements
* Dropped curr != donor check in pick_next_task_fair() suggested
  by K Prateek
v23:
* Rework to use the PROXY_WAKING approach suggested by Peter
* Drop unnecessarily setting wake_cpu when affinity changes as
  noticed by Peter
* Split out the ttwu() logic changes into a later separate patch
  as suggested by Peter
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 kernel/sched/core.c | 230 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 200 insertions(+), 30 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ab6e14259bdf2..3cf5e75abf21e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3682,6 +3682,23 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
 	trace_sched_wakeup(p);
 }

+#ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
+{
+	unsigned int wake_cpu;
+
+	/*
+	 * Since we are enqueuing a blocked task on a cpu it may
+	 * not be able to run on, preserve wake_cpu when we
+	 * __set_task_cpu so we can return the task to where it
+	 * was previously runnable.
+	 */
+	wake_cpu = p->wake_cpu;
+	__set_task_cpu(p, cpu);
+	p->wake_cpu = wake_cpu;
+}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
+
 static void
 ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 		 struct rq_flags *rf)
@@ -4293,13 +4310,6 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		ttwu_queue(p, cpu, wake_flags);
 	}
 out:
-	/*
-	 * For now, if we've been woken up, clear the task->blocked_on
-	 * regardless if it was set to a mutex or PROXY_WAKING so the
-	 * task can run. We will need to be more careful later when
-	 * properly handling proxy migration
-	 */
-	clear_task_blocked_on(p, NULL);
 	if (success)
 		ttwu_stat(p, task_cpu(p), wake_flags);

@@ -6602,7 +6612,7 @@ static inline struct task_struct *proxy_resched_idle(struct rq *rq)
 	return rq->idle;
 }

-static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
+static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
 {
 	unsigned long state = READ_ONCE(donor->__state);

@@ -6622,17 +6632,144 @@ static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
 	return try_to_block_task(rq, donor, &state, true);
 }

-static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *donor)
+/*
+ * If the blocked-on relationship crosses CPUs, migrate @p to the
+ * owner's CPU.
+ *
+ * This is because we must respect the CPU affinity of execution
+ * contexts (owner) but we can ignore affinity for scheduling
+ * contexts (@p). So we have to move scheduling contexts towards
+ * potential execution contexts.
+ *
+ * Note: The owner can disappear, but simply migrate to @target_cpu
+ * and leave that CPU to sort things out.
+ */
+static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
+			       struct task_struct *p, int target_cpu)
 {
-	if (!__proxy_deactivate(rq, donor)) {
-		/*
-		 * XXX: For now, if deactivation failed, set donor
-		 * as unblocked, as we aren't doing proxy-migrations
-		 * yet (more logic will be needed then).
-		 */
-		clear_task_blocked_on(donor, NULL);
+	struct rq *target_rq = cpu_rq(target_cpu);
+
+	lockdep_assert_rq_held(rq);
+
+	/*
+	 * Since we're going to drop @rq, we have to put(@rq->donor) first,
+	 * otherwise we have a reference that no longer belongs to us.
+	 *
+	 * Additionally, as we put_prev_task(prev) earlier, it's possible that
+	 * prev will migrate away as soon as we drop the rq lock, however we
+	 * still have it marked as rq->curr, as we've not yet switched tasks.
+	 *
+	 * So call proxy_resched_idle() to let go of the references before
+	 * we release the lock.
+	 */
+	proxy_resched_idle(rq);
+
+	WARN_ON(p == rq->curr);
+
+	deactivate_task(rq, p, 0);
+	proxy_set_task_cpu(p, target_cpu);
+
+	/*
+	 * We have to zap callbacks before unlocking the rq
+	 * as another CPU may jump in and call sched_balance_rq
+	 * which can trip the warning in rq_pin_lock() if we
+	 * leave callbacks set.
+	 */
+	zap_balance_callbacks(rq);
+	rq_unpin_lock(rq, rf);
+	raw_spin_rq_unlock(rq);
+	raw_spin_rq_lock(target_rq);
+
+	activate_task(target_rq, p, 0);
+	wakeup_preempt(target_rq, p, 0);
+
+	raw_spin_rq_unlock(target_rq);
+	raw_spin_rq_lock(rq);
+	rq_repin_lock(rq, rf);
+}
+
+static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
+			       struct task_struct *p)
+{
+	struct rq *this_rq, *target_rq;
+	struct rq_flags this_rf;
+	int cpu, wake_flag = 0;
+
+	lockdep_assert_rq_held(rq);
+	WARN_ON(p == rq->curr);
+
+	get_task_struct(p);
+
+	/*
+	 * We have to zap callbacks before unlocking the rq
+	 * as another CPU may jump in and call sched_balance_rq
+	 * which can trip the warning in rq_pin_lock() if we
+	 * leave callbacks set.
+	 */
+	zap_balance_callbacks(rq);
+	rq_unpin_lock(rq, rf);
+	raw_spin_rq_unlock(rq);
+
+	/*
+	 * We drop the rq lock, and re-grab task_rq_lock to get
+	 * the pi_lock (needed for select_task_rq) as well.
+	 */
+	this_rq = task_rq_lock(p, &this_rf);
+	update_rq_clock(this_rq);
+
+	/*
+	 * Since we let go of the rq lock, the task may have been
+	 * woken or migrated to another rq before we got the
+	 * task_rq_lock. So re-check we're on the same RQ. If
+	 * not, the task has already been migrated and that CPU
+	 * will handle any further migrations.
+	 */
+	if (this_rq != rq)
+		goto err_out;
+
+	/* Similarly, if we've been dequeued, someone else will wake us */
+	if (!task_on_rq_queued(p))
+		goto err_out;
+
+	/*
+	 * Since we should only be calling here from __schedule()
+	 * -> find_proxy_task(), no one else should have
+	 * assigned current out from under us. But check and warn
+	 * if we see this, then bail.
+	 */
+	if (task_current(this_rq, p) || task_on_cpu(this_rq, p)) {
+		WARN_ONCE(1, "%s rq: %i current/on_cpu task %s %d on_cpu: %i\n",
+			  __func__, cpu_of(this_rq),
+			  p->comm, p->pid, p->on_cpu);
+		goto err_out;
 	}
-	return NULL;
+
+	proxy_resched_idle(this_rq);
+	deactivate_task(this_rq, p, 0);
+	cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
+	set_task_cpu(p, cpu);
+	target_rq = cpu_rq(cpu);
+	clear_task_blocked_on(p, NULL);
+	task_rq_unlock(this_rq, p, &this_rf);
+
+	/* Drop this_rq and grab target_rq for activation */
+	raw_spin_rq_lock(target_rq);
+	activate_task(target_rq, p, 0);
+	wakeup_preempt(target_rq, p, 0);
+	put_task_struct(p);
+	raw_spin_rq_unlock(target_rq);
+
+	/* Finally, re-grab the original rq lock and return to pick-again */
+	raw_spin_rq_lock(rq);
+	rq_repin_lock(rq, rf);
+	return;
+
+err_out:
+	put_task_struct(p);
+	task_rq_unlock(this_rq, p, &this_rf);
+	raw_spin_rq_lock(rq);
+	rq_repin_lock(rq, rf);
+	return;
 }

 /*
@@ -6655,10 +6792,12 @@ static struct task_struct *
 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 {
 	struct task_struct *owner = NULL;
+	bool curr_in_chain = false;
 	int this_cpu = cpu_of(rq);
 	struct task_struct *p;
 	struct mutex *mutex;
-	enum { FOUND, DEACTIVATE_DONOR } action = FOUND;
+	int owner_cpu;
+	enum { FOUND, DEACTIVATE_DONOR, MIGRATE, NEEDS_RETURN } action = FOUND;

 	/* Follow blocked_on chain. */
 	for (p = donor; task_is_blocked(p); p = owner) {
@@ -6667,9 +6806,15 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		if (!mutex)
 			return NULL;

-		/* if its PROXY_WAKING, resched_idle so ttwu can complete */
-		if (mutex == PROXY_WAKING)
-			return proxy_resched_idle(rq);
+		/* if it's PROXY_WAKING, do return migration or run if current */
+		if (mutex == PROXY_WAKING) {
+			if (task_current(rq, p)) {
+				clear_task_blocked_on(p, PROXY_WAKING);
+				return p;
+			}
+			action = NEEDS_RETURN;
+			break;
+		}

 		/*
 		 * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
@@ -6689,26 +6834,41 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 			return NULL;
 		}

+		if (task_current(rq, p))
+			curr_in_chain = true;
+
 		owner = __mutex_owner(mutex);
 		if (!owner) {
 			/*
-			 * If there is no owner, clear blocked_on
-			 * and return p so it can run and try to
-			 * acquire the lock
+			 * If there is no owner, either clear blocked_on
+			 * and return p (if it is current and safe to
+			 * just run on this rq), or return-migrate the task.
 			 */
-			__clear_task_blocked_on(p, mutex);
-			return p;
+			if (task_current(rq, p)) {
+				__clear_task_blocked_on(p, NULL);
+				return p;
+			}
+			action = NEEDS_RETURN;
+			break;
 		}

 		if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) {
 			/* XXX Don't handle blocked owners/delayed dequeue yet */
+			if (curr_in_chain)
+				return proxy_resched_idle(rq);
 			action = DEACTIVATE_DONOR;
 			break;
 		}

-		if (task_cpu(owner) != this_cpu) {
-			/* XXX Don't handle migrations yet */
-			action = DEACTIVATE_DONOR;
+		owner_cpu = task_cpu(owner);
+		if (owner_cpu != this_cpu) {
+			/*
+			 * @owner can disappear, simply migrate to @owner_cpu
+			 * and leave that CPU to sort things out.
+			 */
+			if (curr_in_chain)
+				return proxy_resched_idle(rq);
+			action = MIGRATE;
 			break;
 		}

@@ -6770,7 +6930,17 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	/* Handle actions we need to do outside of the guard() scope */
 	switch (action) {
 	case DEACTIVATE_DONOR:
-		return proxy_deactivate(rq, donor);
+		if (proxy_deactivate(rq, donor))
+			return NULL;
+		/* If deactivate fails, force return */
+		p = donor;
+		fallthrough;
+	case NEEDS_RETURN:
+		proxy_force_return(rq, rf, p);
+		return NULL;
+	case MIGRATE:
+		proxy_migrate_task(rq, rf, p, owner_cpu);
+		return NULL;
 	case FOUND:
 		/* fallthrough */;
 	}
--
2.51.1.930.gacf6e81ea2-goog
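(Worked example, for illustration only: say task A on CPU0 blocks on a
mutex whose owner B is runnable on CPU2. When CPU0 picks A,
find_proxy_task() selects the MIGRATE action, proxy_migrate_task() moves
A to CPU2 while preserving A->wake_cpu, and CPU0 goes back to pick_again.
Later, once B releases the mutex and marks A as PROXY_WAKING, A is still
on CPU2, where it may not be allowed to run; the NEEDS_RETURN path then
uses proxy_force_return() to deactivate A and re-select a CPU starting
from A->wake_cpu via select_task_rq().)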
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:48 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-8-jstultz@google.com>
Subject: [PATCH v23 7/9] sched: Have try_to_wake_up() handle return-migration for PROXY_WAKING case
From: John Stultz
To: LKML
This patch adds logic so try_to_wake_up() will notice if we are waking
a task where blocked_on == PROXY_WAKING, and if necessary dequeue the
task so the wakeup will naturally return-migrate the donor task back
to a cpu it can run on.

This helps performance, as we do the dequeue and wakeup under the
locks normally taken in try_to_wake_up(), and avoid having to do
proxy_force_return() from __schedule(), which has to re-take similar
locks and then force a pick-again loop.

This was split out from the larger proxy patch, and significantly
reworked.
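A condensed sketch of the check this adds on the wakeup side
(illustrative only; the full proxy_needs_return() in the diff below also
handles the donor hand-back, blocked_lock locking, and rescheduling):

    if (p->blocked_on == PROXY_WAKING &&
        !task_current(rq, p) && p->wake_cpu != cpu_of(rq)) {
            /* proxy-migrated: dequeue so ttwu re-selects a legal CPU */
            deactivate_task(rq, p, DEQUEUE_NOCLOCK);
    }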
Credits for the original patch go to:
  Peter Zijlstra (Intel)
  Juri Lelli
  Valentin Schneider
  Connor O'Brien

Signed-off-by: John Stultz
---
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 kernel/sched/core.c | 74 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 72 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3cf5e75abf21e..4546ceb8eae56 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3697,6 +3697,56 @@ static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
 	__set_task_cpu(p, cpu);
 	p->wake_cpu = wake_cpu;
 }
+
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return false;
+	return (READ_ONCE(p->__state) == TASK_RUNNING &&
+		READ_ONCE(p->blocked_on) == PROXY_WAKING);
+}
+
+static inline struct task_struct *proxy_resched_idle(struct rq *rq);
+
+/*
+ * Checks to see if task p has been proxy-migrated to another rq
+ * and needs to be returned. If so, we deactivate the task here
+ * so that it can be properly woken up on the p->wake_cpu
+ * (or whichever cpu select_task_rq() picks at the bottom of
+ * try_to_wake_up()).
+ */
+static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
+{
+	bool ret = false;
+
+	if (!sched_proxy_exec())
+		return false;
+
+	raw_spin_lock(&p->blocked_lock);
+	if (p->blocked_on == PROXY_WAKING) {
+		if (!task_current(rq, p) && p->wake_cpu != cpu_of(rq)) {
+			if (task_current_donor(rq, p))
+				proxy_resched_idle(rq);
+
+			deactivate_task(rq, p, DEQUEUE_NOCLOCK);
+			ret = true;
+		}
+		__clear_task_blocked_on(p, PROXY_WAKING);
+		resched_curr(rq);
+	}
+	raw_spin_unlock(&p->blocked_lock);
+	return ret;
+}
+#else /* !CONFIG_SCHED_PROXY_EXEC */
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+	return false;
+}
+
+static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
+{
+	return false;
+}
 #endif /* CONFIG_SCHED_PROXY_EXEC */

 static void
@@ -3784,6 +3834,8 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 		update_rq_clock(rq);
 		if (p->se.sched_delayed)
 			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+		if (proxy_needs_return(rq, p))
+			goto out;
 		if (!task_on_cpu(rq, p)) {
 			/*
 			 * When on_rq && !on_cpu the task is preempted, see if
@@ -3794,6 +3846,7 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 			ttwu_do_wakeup(p);
 			ret = 1;
 		}
+out:
 		__task_rq_unlock(rq, &rf);

 	return ret;
@@ -3924,6 +3977,14 @@ static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
 		return false;
 #endif

+	/*
+	 * If we're PROXY_WAKING, we have deactivated on this cpu, so we should
+	 * activate it here as well, to avoid IPI'ing a cpu that is stuck in
+	 * task_rq_lock() spinning on p->on_rq, deadlocking that cpu.
+	 */
+	if (task_on_rq_migrating(p))
+		return false;
+
 	/*
 	 * Do not complicate things with the async wake_list while the CPU is
 	 * in hotplug state.
@@ -4181,6 +4242,8 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 * it disabling IRQs (this allows not taking ->pi_lock).
 	 */
 	WARN_ON_ONCE(p->se.sched_delayed);
+	/* If p is current, we know we can run here, so clear blocked_on */
+	clear_task_blocked_on(p, NULL);
 	if (!ttwu_state_match(p, state, &success))
 		goto out;

@@ -4197,8 +4260,15 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 */
 	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
 		smp_mb__after_spinlock();
-		if (!ttwu_state_match(p, state, &success))
-			break;
+		if (!ttwu_state_match(p, state, &success)) {
+			/*
+			 * If we're already TASK_RUNNING and PROXY_WAKING,
+			 * continue on to the ttwu_runnable() check to force
+			 * proxy_needs_return() evaluation.
+			 */
+			if (!proxy_task_runnable_but_waking(p))
+				break;
+		}

 		trace_sched_waking(p);
--
2.51.1.930.gacf6e81ea2-goog
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:49 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-9-jstultz@google.com>
Subject: [PATCH v23 8/9] sched: Add blocked_donor link to task for smarter mutex handoffs
From: John Stultz
To: LKML

From: Peter Zijlstra

Add link to the task this task is proxying for, and use it so the
mutex owner can do an intelligent hand-off of the mutex to the task
on whose behalf the owner is running.
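Condensed sketch of the unlock-path behavior this enables (illustrative
only; locking is elided, and first_waiter() here is shorthand for the
wait-list lookup the real code below does inline):

    next = NULL;
    donor = current->blocked_donor;
    if (donor && __get_task_blocked_on(donor) == lock) {
            /* hand off to the scheduler-selected boosting task */
            next = donor;
            current->blocked_donor = NULL;
    }
    if (!next)
            next = first_waiter(lock);  /* fall back to the wait list */
    __mutex_handoff(lock, next);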
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Signed-off-by: Valentin Schneider
Signed-off-by: Connor O'Brien
[jstultz: This patch was split out from larger proxy patch]
Signed-off-by: John Stultz
---
v5:
* Split out from larger proxy patch
v6:
* Moved proxied value from earlier patch to this one where it is
  actually used
* Rework logic to check sched_proxy_exec() instead of using ifdefs
* Moved comment change to this patch where it makes sense
v7:
* Use more descriptive term than "us" in comments, as suggested
  by Metin Kaya.
* Minor typo fixup from Metin Kaya
* Reworked proxied variable to prev_not_proxied to simplify usage
v8:
* Use helper for donor blocked_on_state transition
v9:
* Re-add mutex lock handoff in the unlock path, but only when we
  have a blocked donor
* Slight reword of commit message suggested by Metin
v18:
* Add task_init initialization for blocked_donor, suggested by
  Suleiman
v23:
* Reworks for PROXY_WAKING approach suggested by PeterZ
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 include/linux/sched.h  |  1 +
 init/init_task.c       |  1 +
 kernel/fork.c          |  1 +
 kernel/locking/mutex.c | 44 +++++++++++++++++++++++++++++++++++++++---
 kernel/sched/core.c    | 18 +++++++++++++++--
 5 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 863c54685684c..bac1b956027e2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1241,6 +1241,7 @@ struct task_struct {
 #endif

 	struct mutex		*blocked_on;	/* lock we're blocked on */
+	struct task_struct	*blocked_donor;	/* task that is boosting this task */
 	raw_spinlock_t		blocked_lock;

 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
diff --git a/init/init_task.c b/init/init_task.c
index 60477d74546e0..34853a511b4d8 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -177,6 +177,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
 						 &init_task.alloc_lock),
 #endif
+	.blocked_donor	= NULL,
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
 	.pi_top_task	= NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 0697084be202f..0a9a17e25b85d 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2136,6 +2136,7 @@ __latent_entropy struct task_struct *copy_process(
 	lockdep_init_task(p);

 	p->blocked_on = NULL;    /* not blocked yet */
+	p->blocked_donor = NULL; /* nobody is boosting p yet */

 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 3cb9001d15119..08f438a54f56f 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -926,7 +926,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
  */
 static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 {
-	struct task_struct *next = NULL;
+	struct task_struct *donor, *next = NULL;
 	DEFINE_WAKE_Q(wake_q);
 	unsigned long owner;
 	unsigned long flags;
@@ -945,6 +945,12 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 		MUTEX_WARN_ON(__owner_task(owner) != current);
 		MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);

+		if (sched_proxy_exec() && current->blocked_donor) {
+			/* force handoff if we have a blocked_donor */
+			owner = MUTEX_FLAG_HANDOFF;
+			break;
+		}
+
 		if (owner & MUTEX_FLAG_HANDOFF)
 			break;

@@ -958,7 +964,34 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)

 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	debug_mutex_unlock(lock);
-	if (!list_empty(&lock->wait_list)) {
+
+	if (sched_proxy_exec()) {
+		raw_spin_lock(&current->blocked_lock);
+		/*
+		 * If we have a task boosting current, and that task was boosting
+		 * current through this lock, hand the lock to that task, as that
+		 * is the highest waiter, as selected by the scheduling function.
+		 */
+		donor = current->blocked_donor;
+		if (donor) {
+			struct mutex *next_lock;
+
+			raw_spin_lock_nested(&donor->blocked_lock, SINGLE_DEPTH_NESTING);
+			next_lock = __get_task_blocked_on(donor);
+			if (next_lock == lock) {
+				next = donor;
+				__set_task_blocked_on_waking(donor, next_lock);
+				wake_q_add(&wake_q, donor);
+				current->blocked_donor = NULL;
+			}
+			raw_spin_unlock(&donor->blocked_lock);
+		}
+	}
+
+	/*
+	 * Failing that, pick any on the wait list.
+	 */
+	if (!next && !list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
 		struct mutex_waiter *waiter =
 			list_first_entry(&lock->wait_list,
@@ -966,14 +999,19 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)

 		next = waiter->task;

+		raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING);
 		debug_mutex_wake_waiter(lock, waiter);
-		set_task_blocked_on_waking(next, lock);
+		__set_task_blocked_on_waking(next, lock);
+		raw_spin_unlock(&next->blocked_lock);
 		wake_q_add(&wake_q, next);
+	}

 	if (owner & MUTEX_FLAG_HANDOFF)
 		__mutex_handoff(lock, next);

+	if (sched_proxy_exec())
+		raw_spin_unlock(&current->blocked_lock);
 	raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
 }

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4546ceb8eae56..eabde9706981a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6846,7 +6846,17 @@ static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
  * Find runnable lock owner to proxy for mutex blocked donor
  *
  * Follow the blocked-on relation:
- *   task->blocked_on -> mutex->owner -> task...
+ *
+ *                  ,-> task
+ *                  |     | blocked-on
+ *                  |     v
+ *   blocked_donor  |   mutex
+ *                  |     | owner
+ *                  |     v
+ *                  `-- task
+ *
+ * and set the blocked_donor relation, this latter is used by the mutex
+ * code to find which (blocked) task to hand-off to.
  *
  * Lock order:
  *
@@ -6995,6 +7005,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		 * rq, therefore holding @rq->lock is sufficient to
 		 * guarantee its existence, as per ttwu_remote().
 		 */
+		owner->blocked_donor = p;
 	}

 	/* Handle actions we need to do outside of the guard() scope */
@@ -7095,6 +7106,7 @@ static void __sched notrace __schedule(int sched_mode)
 	unsigned long prev_state;
 	struct rq_flags rf;
 	struct rq *rq;
+	bool prev_not_proxied;
 	int cpu;

 	/* Trace preemptions consistently with task switches */
@@ -7167,10 +7179,12 @@ static void __sched notrace __schedule(int sched_mode)
 		switch_count = &prev->nvcsw;
 	}

+	prev_not_proxied = !prev->blocked_donor;
 pick_again:
 	assert_balance_callbacks_empty(rq);
 	next = pick_next_task(rq, rq->donor, &rf);
 	rq_set_donor(rq, next);
+	next->blocked_donor = NULL;
 	if (unlikely(task_is_blocked(next))) {
 		next = find_proxy_task(rq, next, &rf);
 		if (!next) {
@@ -7236,7 +7250,7 @@ static void __sched notrace __schedule(int sched_mode)
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
 		/* In case next was already curr but just got blocked_donor */
-		if (!task_current_donor(rq, next))
+		if (prev_not_proxied && next->blocked_donor)
 			proxy_tag_curr(rq, next);

 		rq_unpin_lock(rq, &rf);
--
2.51.1.930.gacf6e81ea2-goog
From nobody Sun Feb 8 02:24:40 2026
Date: Thu, 30 Oct 2025 00:18:50 +0000
In-Reply-To: <20251030001857.681432-1-jstultz@google.com>
References: <20251030001857.681432-1-jstultz@google.com>
Message-ID: <20251030001857.681432-10-jstultz@google.com>
Subject: [PATCH v23 9/9] sched: Migrate whole chain in proxy_migrate_task()
From: John Stultz
To: LKML

Instead of migrating one task each time through find_proxy_task(), we
can walk up the blocked_donor ptrs and migrate the entire current
chain in one go.

This was broken out of earlier patches and held back while the series
was being stabilized, but I wanted to re-introduce it.
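(Illustration: if task A is blocked on a mutex held by B, and B is in
turn blocked on a mutex held by C running on another CPU, then by the
time B is proxy-migrated we have B->blocked_donor == A, set while A was
proxying for B. Previously each trip through find_proxy_task() migrated
one task at a time; with this change, migrating B also walks
B->blocked_donor and carries A along in the same rq-lock dance.)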
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- include/linux/sched.h | 3 +++ init/init_task.c | 3 +++ kernel/fork.c | 3 +++ kernel/sched/core.c | 25 +++++++++++++++++-------- 4 files changed, 26 insertions(+), 8 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index bac1b956027e2..cd2453c2085c1 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1243,6 +1243,9 @@ struct task_struct { struct mutex *blocked_on; /* lock we're blocked on */ struct task_struct *blocked_donor; /* task that is boosting this task */ raw_spinlock_t blocked_lock; +#ifdef CONFIG_SCHED_PROXY_EXEC + struct list_head migration_node; +#endif =20 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER /* diff --git a/init/init_task.c b/init/init_task.c index 34853a511b4d8..78fb7cb83fa5d 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -178,6 +178,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = =3D { &init_task.alloc_lock), #endif .blocked_donor =3D NULL, +#ifdef CONFIG_SCHED_PROXY_EXEC + .migration_node =3D LIST_HEAD_INIT(init_task.migration_node), +#endif #ifdef CONFIG_RT_MUTEXES .pi_waiters =3D RB_ROOT_CACHED, .pi_top_task =3D NULL, diff --git a/kernel/fork.c b/kernel/fork.c index 0a9a17e25b85d..a7561480e879e 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2137,6 +2137,9 @@ __latent_entropy struct task_struct *copy_process( =20 p->blocked_on =3D NULL; /* not blocked yet */ p->blocked_donor =3D NULL; /* nobody is boosting p yet */ +#ifdef CONFIG_SCHED_PROXY_EXEC + INIT_LIST_HEAD(&p->migration_node); +#endif =20 #ifdef CONFIG_BCACHE p->sequential_io =3D 0; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index eabde9706981a..c202ae19b4ac8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6718,6 +6718,7 @@ static void proxy_migrate_task(struct rq *rq, struct = rq_flags *rf, struct task_struct *p, int target_cpu) { struct rq *target_rq =3D cpu_rq(target_cpu); + LIST_HEAD(migrate_list); =20 lockdep_assert_rq_held(rq); =20 @@ -6734,11 +6735,16 @@ static void proxy_migrate_task(struct rq *rq, struc= t rq_flags *rf, */ proxy_resched_idle(rq); =20 - WARN_ON(p =3D=3D rq->curr); - - deactivate_task(rq, p, 0); - proxy_set_task_cpu(p, target_cpu); - + for (; p; p =3D p->blocked_donor) { + WARN_ON(p =3D=3D rq->curr); + deactivate_task(rq, p, 0); + proxy_set_task_cpu(p, target_cpu); + /* + * We can abuse blocked_node to migrate the thing, + * because @p was still on the rq. + */ + list_add(&p->migration_node, &migrate_list); + } /* * We have to zap callbacks before unlocking the rq * as another CPU may jump in and call sched_balance_rq @@ -6749,10 +6755,13 @@ static void proxy_migrate_task(struct rq *rq, struc= t rq_flags *rf, rq_unpin_lock(rq, rf); raw_spin_rq_unlock(rq); raw_spin_rq_lock(target_rq); + while (!list_empty(&migrate_list)) { + p =3D list_first_entry(&migrate_list, struct task_struct, migration_node= ); + list_del_init(&p->migration_node); =20 - activate_task(target_rq, p, 0); - wakeup_preempt(target_rq, p, 0); - + activate_task(target_rq, p, 0); + wakeup_preempt(target_rq, p, 0); + } raw_spin_rq_unlock(target_rq); raw_spin_rq_lock(rq); rq_repin_lock(rq, rf); --=20 2.51.1.930.gacf6e81ea2-goog