From nobody Wed Oct 1 23:30:09 2025
Date: Fri, 26 Sep 2025 03:29:09 +0000
In-Reply-To: <20250926032931.27663-1-jstultz@google.com>
Mime-Version: 1.0
References: <20250926032931.27663-1-jstultz@google.com>
Message-ID: <20250926032931.27663-2-jstultz@google.com>
Subject: [PATCH v22 1/6] locking: Add task::blocked_lock to serialize blocked_on state
From: John Stultz
To: LKML
Cc: John Stultz, K Prateek Nayak, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

So far, we have been able to utilize the mutex::wait_lock for serializing the blocked_on state, but when we move to proxying across runqueues, we will need to add more state and a way to serialize changes to this state in contexts where we don't hold the mutex::wait_lock.

So introduce the task::blocked_lock, which nests under the mutex::wait_lock in the locking order, and rework the locking to use it.

Signed-off-by: John Stultz
Reviewed-by: K Prateek Nayak
---
v15:
* Split back out into later in the series
v16:
* Fixups to mark tasks unblocked before sleeping in mutex_optimistic_spin()
* Rework to use guard() as suggested by Peter
v19:
* Rework logic for PREEMPT_RT issues reported by K Prateek Nayak
v21:
* After recently thinking more on ww_mutex code, I reworked the blocked_lock usage in mutex lock to avoid having to take nested locks in the ww_mutex paths, as I was concerned the lock ordering constraints weren't as strong as I had previously thought.
v22:
* Added some extra spaces to avoid dense code blocks suggested by K Prateek
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E.
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- include/linux/sched.h | 52 +++++++++++++++--------------------- init/init_task.c | 1 + kernel/fork.c | 1 + kernel/locking/mutex-debug.c | 4 +-- kernel/locking/mutex.c | 40 +++++++++++++++++---------- kernel/locking/ww_mutex.h | 4 +-- kernel/sched/core.c | 4 ++- 7 files changed, 57 insertions(+), 49 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index e4ce0a76831e5..cb4e81d9d9b67 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1233,6 +1233,7 @@ struct task_struct { #endif =20 struct mutex *blocked_on; /* lock we're blocked on */ + raw_spinlock_t blocked_lock; =20 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER /* @@ -2141,57 +2142,48 @@ extern int __cond_resched_rwlock_write(rwlock_t *lo= ck); #ifndef CONFIG_PREEMPT_RT static inline struct mutex *__get_task_blocked_on(struct task_struct *p) { - struct mutex *m =3D p->blocked_on; + lockdep_assert_held_once(&p->blocked_lock); + return p->blocked_on; +} =20 - if (m) - lockdep_assert_held_once(&m->wait_lock); - return m; +static inline struct mutex *get_task_blocked_on(struct task_struct *p) +{ + guard(raw_spinlock_irqsave)(&p->blocked_lock); + return __get_task_blocked_on(p); } =20 static inline void __set_task_blocked_on(struct task_struct *p, struct mut= ex *m) { - struct mutex *blocked_on =3D READ_ONCE(p->blocked_on); - WARN_ON_ONCE(!m); /* The task should only be setting itself as blocked */ WARN_ON_ONCE(p !=3D current); - /* Currently we serialize blocked_on under the mutex::wait_lock */ - lockdep_assert_held_once(&m->wait_lock); + /* Currently we serialize blocked_on under the task::blocked_lock */ + lockdep_assert_held_once(&p->blocked_lock); /* * Check ensure we don't overwrite existing mutex value * with a different mutex. Note, setting it to the same * lock repeatedly is ok. */ - WARN_ON_ONCE(blocked_on && blocked_on !=3D m); - WRITE_ONCE(p->blocked_on, m); -} - -static inline void set_task_blocked_on(struct task_struct *p, struct mutex= *m) -{ - guard(raw_spinlock_irqsave)(&m->wait_lock); - __set_task_blocked_on(p, m); + WARN_ON_ONCE(p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D m; } =20 static inline void __clear_task_blocked_on(struct task_struct *p, struct m= utex *m) { - if (m) { - struct mutex *blocked_on =3D READ_ONCE(p->blocked_on); - - /* Currently we serialize blocked_on under the mutex::wait_lock */ - lockdep_assert_held_once(&m->wait_lock); - /* - * There may be cases where we re-clear already cleared - * blocked_on relationships, but make sure we are not - * clearing the relationship with a different lock. - */ - WARN_ON_ONCE(blocked_on && blocked_on !=3D m); - } - WRITE_ONCE(p->blocked_on, NULL); + /* Currently we serialize blocked_on under the task::blocked_lock */ + lockdep_assert_held_once(&p->blocked_lock); + /* + * There may be cases where we re-clear already cleared + * blocked_on relationships, but make sure we are not + * clearing the relationship with a different lock. 
+ */ + WARN_ON_ONCE(m && p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D NULL; } =20 static inline void clear_task_blocked_on(struct task_struct *p, struct mut= ex *m) { - guard(raw_spinlock_irqsave)(&m->wait_lock); + guard(raw_spinlock_irqsave)(&p->blocked_lock); __clear_task_blocked_on(p, m); } #else diff --git a/init/init_task.c b/init/init_task.c index e557f622bd906..7e29d86153d9f 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -140,6 +140,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = =3D { .journal_info =3D NULL, INIT_CPU_TIMERS(init_task) .pi_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock), + .blocked_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock), .timer_slack_ns =3D 50000, /* 50 usec default slack */ .thread_pid =3D &init_struct_pid, .thread_node =3D LIST_HEAD_INIT(init_signals.thread_head), diff --git a/kernel/fork.c b/kernel/fork.c index c4ada32598bd5..796cfceb2bbda 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2030,6 +2030,7 @@ __latent_entropy struct task_struct *copy_process( ftrace_graph_init_task(p); =20 rt_mutex_init_task(p); + raw_spin_lock_init(&p->blocked_lock); =20 lockdep_assert_irqs_enabled(); #ifdef CONFIG_PROVE_LOCKING diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index 949103fd8e9b5..1d8cff71f65e1 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -54,13 +54,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct = mutex_waiter *waiter, lockdep_assert_held(&lock->wait_lock); =20 /* Current thread can't be already blocked (since it's executing!) */ - DEBUG_LOCKS_WARN_ON(__get_task_blocked_on(task)); + DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task)); } =20 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *wa= iter, struct task_struct *task) { - struct mutex *blocked_on =3D __get_task_blocked_on(task); + struct mutex *blocked_on =3D get_task_blocked_on(task); =20 DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list)); DEBUG_LOCKS_WARN_ON(waiter->task !=3D task); diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index de7d6702cd96c..c44fc63d4476e 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -640,6 +640,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err_early_kill; } =20 + raw_spin_lock(¤t->blocked_lock); __set_task_blocked_on(current, lock); set_current_state(state); trace_contention_begin(lock, LCB_F_MUTEX); @@ -653,8 +654,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas * the handoff. */ if (__mutex_trylock(lock)) - goto acquired; + break; =20 + raw_spin_unlock(¤t->blocked_lock); /* * Check for signals and kill conditions while holding * wait_lock. This ensures the lock cancellation is ordered @@ -677,12 +679,14 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas =20 first =3D __mutex_waiter_is_first(lock, &waiter); =20 + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); /* * As we likely have been woken up by task * that has cleared our blocked_on state, re-set * it to the lock we are trying to acquire. 
*/ - set_task_blocked_on(current, lock); + __set_task_blocked_on(current, lock); set_current_state(state); /* * Here we order against unlock; we must either see it change @@ -693,25 +697,33 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas break; =20 if (first) { - trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN); + bool opt_acquired; + /* * mutex_optimistic_spin() can call schedule(), so - * clear blocked on so we don't become unselectable + * we need to release these locks before calling it, + * and clear blocked on so we don't become unselectable * to run. */ - clear_task_blocked_on(current, lock); - if (mutex_optimistic_spin(lock, ww_ctx, &waiter)) + __clear_task_blocked_on(current, lock); + raw_spin_unlock(¤t->blocked_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); + + trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN); + opt_acquired =3D mutex_optimistic_spin(lock, ww_ctx, &waiter); + + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); + __set_task_blocked_on(current, lock); + + if (opt_acquired) break; - set_task_blocked_on(current, lock); trace_contention_begin(lock, LCB_F_MUTEX); } - - raw_spin_lock_irqsave(&lock->wait_lock, flags); } - raw_spin_lock_irqsave(&lock->wait_lock, flags); -acquired: __clear_task_blocked_on(current, lock); __set_current_state(TASK_RUNNING); + raw_spin_unlock(¤t->blocked_lock); =20 if (ww_ctx) { /* @@ -740,11 +752,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas return 0; =20 err: - __clear_task_blocked_on(current, lock); + clear_task_blocked_on(current, lock); __set_current_state(TASK_RUNNING); __mutex_remove_waiter(lock, &waiter); err_early_kill: - WARN_ON(__get_task_blocked_on(current)); + WARN_ON(get_task_blocked_on(current)); trace_contention_end(lock, ret); raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q); debug_mutex_free_waiter(&waiter); @@ -955,7 +967,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne next =3D waiter->task; =20 debug_mutex_wake_waiter(lock, waiter); - __clear_task_blocked_on(next, lock); + clear_task_blocked_on(next, lock); wake_q_add(&wake_q, next); } =20 diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index 31a785afee6c0..e4a81790ea7dd 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -289,7 +289,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER = *waiter, * blocked_on pointer. Otherwise we can see circular * blocked_on relationships that can't resolve. */ - __clear_task_blocked_on(waiter->task, lock); + clear_task_blocked_on(waiter->task, lock); wake_q_add(wake_q, waiter->task); } =20 @@ -347,7 +347,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock, * are waking the mutex owner, who may be currently * blocked on a different mutex. */ - __clear_task_blocked_on(owner, NULL); + clear_task_blocked_on(owner, NULL); wake_q_add(wake_q, owner); } return true; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 631e25ce15c66..007459d42ae4a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6639,6 +6639,7 @@ static struct task_struct *proxy_deactivate(struct rq= *rq, struct task_struct *d * p->pi_lock * rq->lock * mutex->wait_lock + * p->blocked_lock * * Returns the task that is going to be used as execution context (the one * that is actually going to be run on cpu_of(rq)). 
@@ -6662,8 +6663,9 @@ find_proxy_task(struct rq *rq, struct task_struct *do= nor, struct rq_flags *rf) * and ensure @owner sticks around. */ guard(raw_spinlock)(&mutex->wait_lock); + guard(raw_spinlock)(&p->blocked_lock); =20 - /* Check again that p is blocked with wait_lock held */ + /* Check again that p is blocked with blocked_lock held */ if (mutex !=3D __get_task_blocked_on(p)) { /* * Something changed in the blocked_on chain and --=20 2.51.0.536.g15c5d4f767-goog From nobody Wed Oct 1 23:30:09 2025 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F38192641CA for ; Fri, 26 Sep 2025 03:29:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857382; cv=none; b=kdZlL6HBpD9Vki/vGoXCXQOpLxvkqVSYbEbv9+bEdiEpM8PsPpX8qO5Rd3FkOdmfG35SW66bpiPWST4g7+B4tvqWGpLspFf5JHRUZhuNjthZ68dm8plBMITMFZarwfX8l+O1n3j8xlUCZY7+lNJ9hu/MXSxCL8v148LWbYJJpVk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857382; c=relaxed/simple; bh=dEg9UyuGsudqtxq6+yAU2e6OOjobmkXVJozNXA1WFpQ=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=PcKhwM4mhAhVhBNhqoIV/uCPqJybt6Zik/I5ezwQZurPFyu/mpzc4o37IBRxcVsm1QQBj3PFtbEpDvVPouFJT9YZ6/GMjXonfTs+ABFAN0H1MiP6sJWD+odXTqauoJgsjF2unXaa7Y+gfBufBbHc9sBOG4eaE9aVuiR+/TKAmDg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=p2h8u7iD; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="p2h8u7iD" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-27eeb9730d9so6565675ad.0 for ; Thu, 25 Sep 2025 20:29:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1758857379; x=1759462179; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=g5MKKS2eKGAyJpvV7ZR8zAZC2VVGeAKn36HWAP4dzS0=; b=p2h8u7iDKtpm10iPMaGfe8q1e8YbJ+NPxHCQhEhKaf3c+AM4cjGpCVuq0jJIqJUavO eAotRV0M+y5yzTbLsPUkfa9RF3hgr7HxzM7C2Va1xZyYCmbR0EKBJVh+Ghp+ZSD399Ou YSTEUBZXcwDGZ+NONL4ZgJ6WxUGS3VbtmfNur0D/5btMMIDkp22F6Fq5RyJrVnPJx2aX LD737eJX/n9KMRn9lPZ0yApC8iA47urhtg75rh+8WCQfhx5FXk4eLfGd2K1vAp6qDC48 xvYk8BimYTrNHTxgBNukZLnWnu3en5/GiWGhHgL5pyzNjnKLdKPBMfAQHBDclF+gikme kd1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1758857379; x=1759462179; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=g5MKKS2eKGAyJpvV7ZR8zAZC2VVGeAKn36HWAP4dzS0=; b=W4Eci57NDoxznK8VW1hGIe0P8gsldxUNEBTu6D0Wp71SvmCakL5ylAd9ikRZtYvdHB Y5LgxJZPos7X1uIRGI0bozGslXgTGj6+Re4AXS1xIDY0Lqd5Gqbmn2hLnHBrCd2gsWBk iHP5MbX1GE7F1feIOTgVWFrxt8+GrIrbebX0L9E2KIWIe56mGWE415QX1JSY1OI25yNZ 
2zq/uv9IR1FjFO6ptoVGgjSTjLotHyB1NpYQKAJ11qn1l2d2Br68HFj9Ex22b5Cpg3vQ TTD25TaRzM8UvSrcEQnY9UVVWH2qi5nOGCE5hjlGObhoaa8E61iOU6IgXZ0wRHo/JGju PfJg== X-Gm-Message-State: AOJu0YyD/idELhlPvi15TmCT/hdPGVHSYfSelSaEP2LNgGMZjUISYsjH pUoFrGxL3dTyhMgLwI52Xg1JbJxAWhjCSxMXYgWWqcHTjNqmEvP5TRQFH7DdLPfOes/BZa6cE1Q PZzECj3yIWQJI9/JPvirWaLsGb1IarAnG0OppBSnYK0O52NacV/aTcBq3qSDG+62DxI4On3REpi d94CgByq13HUpFKT/jyW9wV/4fHXAo/syn45RQARlhH852g/hf X-Google-Smtp-Source: AGHT+IGzEbXsUdqAWw7/El30HferXCqPoCj3etVBH1760vNuuynEcSG1c6jm5XddD8bGBhojVPg043lW95bq X-Received: from plhq1.prod.google.com ([2002:a17:903:11c1:b0:24c:1a91:d08a]) (user=jstultz job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:e84b:b0:267:bd66:14f3 with SMTP id d9443c01a7336-27ed4ad4194mr56199555ad.51.1758857379132; Thu, 25 Sep 2025 20:29:39 -0700 (PDT) Date: Fri, 26 Sep 2025 03:29:10 +0000 In-Reply-To: <20250926032931.27663-1-jstultz@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250926032931.27663-1-jstultz@google.com> X-Mailer: git-send-email 2.51.0.536.g15c5d4f767-goog Message-ID: <20250926032931.27663-3-jstultz@google.com> Subject: [PATCH v22 2/6] sched/locking: Add blocked_on_state to provide necessary tri-state for proxy return-migration From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Will Deacon , Waiman Long , Boqun Feng , "Paul E. McKenney" , Metin Kaya , Xuewen Yan , K Prateek Nayak , Thomas Gleixner , Daniel Lezcano , Suleiman Souhlal , kuyo chang , hupu , kernel-team@android.com Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" As we add functionality to proxy execution, we may migrate a donor task to a runqueue where it can't run due to cpu affinity. Thus, we must be careful to ensure we return-migrate the task back to a cpu in its cpumask when it becomes unblocked. Thus we need more then just a binary concept of the task being blocked on a mutex or not. So add a blocked_on_state value to the task, that allows the task to move through BO_RUNNING -> BO_BLOCKED -> BO_WAKING and back to BO_RUNNING. This provides a guard state in BO_WAKING so we can know the task is no longer blocked but we don't want to run it until we have potentially done return migration, back to a usable cpu. Signed-off-by: John Stultz --- v15: * Split blocked_on_state into its own patch later in the series, as the tri-state isn't necessary until we deal with proxy/return migrations v16: * Handle case where task in the chain is being set as BO_WAKING by another cpu (usually via ww_mutex die code). Make sure we release the rq lock so the wakeup can complete. * Rework to use guard() in find_proxy_task() as suggested by Peter v18: * Add initialization of blocked_on_state for init_task v19: * PREEMPT_RT build fixups and rework suggested by K Prateek Nayak v20: * Simplify one of the blocked_on_state changes to avoid extra PREMEPT_RT conditionals v21: * Slight reworks due to avoiding nested blocked_lock locking * Be consistent in use of blocked_on_state helper functions * Rework calls to proxy_deactivate() to do proper locking around blocked_on_state changes that we were cheating in previous versions. 
* Minor cleanups, some comment improvements v22: * Re-order blocked_on_state helpers to try to make it clearer the set_task_blocked_on() and clear_task_blocked_on() are the main enter/exit states and the blocked_on_state helpers help manage the transition states within. Per feedback from K Prateek Nayak. * Rework blocked_on_state to be defined within CONFIG_SCHED_PROXY_EXEC as suggested by K Prateek Nayak. * Reworked empty stub functions to just take one line as suggestd by K Prateek * Avoid using gotos out of a guard() scope, as highlighted by K Prateek, and instead rework logic to break and switch() on an action value. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E. McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- include/linux/sched.h | 92 +++++++++++++++++++++++++++++++++------ init/init_task.c | 3 ++ kernel/fork.c | 3 ++ kernel/locking/mutex.c | 15 ++++--- kernel/locking/ww_mutex.h | 20 ++++----- kernel/sched/core.c | 45 +++++++++++++++++-- kernel/sched/sched.h | 6 ++- 7 files changed, 146 insertions(+), 38 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index cb4e81d9d9b67..8245940783c77 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -813,6 +813,12 @@ struct kmap_ctrl { #endif }; =20 +enum blocked_on_state { + BO_RUNNABLE, + BO_BLOCKED, + BO_WAKING, +}; + struct task_struct { #ifdef CONFIG_THREAD_INFO_IN_TASK /* @@ -1234,6 +1240,9 @@ struct task_struct { =20 struct mutex *blocked_on; /* lock we're blocked on */ raw_spinlock_t blocked_lock; +#ifdef CONFIG_SCHED_PROXY_EXEC + enum blocked_on_state blocked_on_state; +#endif =20 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER /* @@ -2139,7 +2148,6 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock= ); __cond_resched_rwlock_write(lock); \ }) =20 -#ifndef CONFIG_PREEMPT_RT static inline struct mutex *__get_task_blocked_on(struct task_struct *p) { lockdep_assert_held_once(&p->blocked_lock); @@ -2152,6 +2160,13 @@ static inline struct mutex *get_task_blocked_on(stru= ct task_struct *p) return __get_task_blocked_on(p); } =20 +static inline void __force_blocked_on_blocked(struct task_struct *p); +static inline void __force_blocked_on_runnable(struct task_struct *p); + +/* + * These helpers set and clear the task blocked_on pointer, as well + * as setting the initial blocked_on_state, or clearing it + */ static inline void __set_task_blocked_on(struct task_struct *p, struct mut= ex *m) { WARN_ON_ONCE(!m); @@ -2161,24 +2176,23 @@ static inline void __set_task_blocked_on(struct tas= k_struct *p, struct mutex *m) lockdep_assert_held_once(&p->blocked_lock); /* * Check ensure we don't overwrite existing mutex value - * with a different mutex. Note, setting it to the same - * lock repeatedly is ok. + * with a different mutex. 
*/ - WARN_ON_ONCE(p->blocked_on && p->blocked_on !=3D m); + WARN_ON_ONCE(p->blocked_on); p->blocked_on =3D m; + __force_blocked_on_blocked(p); } =20 static inline void __clear_task_blocked_on(struct task_struct *p, struct m= utex *m) { + /* The task should only be clearing itself */ + WARN_ON_ONCE(p !=3D current); /* Currently we serialize blocked_on under the task::blocked_lock */ lockdep_assert_held_once(&p->blocked_lock); - /* - * There may be cases where we re-clear already cleared - * blocked_on relationships, but make sure we are not - * clearing the relationship with a different lock. - */ - WARN_ON_ONCE(m && p->blocked_on && p->blocked_on !=3D m); + /* Make sure we are clearing the relationship with the right lock */ + WARN_ON_ONCE(m && p->blocked_on !=3D m); p->blocked_on =3D NULL; + __force_blocked_on_runnable(p); } =20 static inline void clear_task_blocked_on(struct task_struct *p, struct mut= ex *m) @@ -2186,15 +2200,65 @@ static inline void clear_task_blocked_on(struct tas= k_struct *p, struct mutex *m) guard(raw_spinlock_irqsave)(&p->blocked_lock); __clear_task_blocked_on(p, m); } -#else -static inline void __clear_task_blocked_on(struct task_struct *p, struct r= t_mutex *m) + +/* + * The following helpers manage the blocked_on_state transitions while + * the blocked_on pointer is set. + */ +#ifdef CONFIG_SCHED_PROXY_EXEC +static inline void __force_blocked_on_blocked(struct task_struct *p) +{ + lockdep_assert_held(&p->blocked_lock); + p->blocked_on_state =3D BO_BLOCKED; +} + +static inline void __set_blocked_on_waking(struct task_struct *p) +{ + lockdep_assert_held(&p->blocked_lock); + if (p->blocked_on_state =3D=3D BO_BLOCKED) + p->blocked_on_state =3D BO_WAKING; +} + +static inline void set_blocked_on_waking(struct task_struct *p) +{ + guard(raw_spinlock_irqsave)(&p->blocked_lock); + __set_blocked_on_waking(p); +} + +static inline void __force_blocked_on_runnable(struct task_struct *p) { + lockdep_assert_held(&p->blocked_lock); + p->blocked_on_state =3D BO_RUNNABLE; } =20 -static inline void clear_task_blocked_on(struct task_struct *p, struct rt_= mutex *m) +static inline void force_blocked_on_runnable(struct task_struct *p) { + guard(raw_spinlock_irqsave)(&p->blocked_lock); + __force_blocked_on_runnable(p); +} + +static inline void __set_blocked_on_runnable(struct task_struct *p) +{ + lockdep_assert_held(&p->blocked_lock); + if (p->blocked_on_state =3D=3D BO_WAKING) + p->blocked_on_state =3D BO_RUNNABLE; +} + +static inline void set_blocked_on_runnable(struct task_struct *p) +{ + if (!sched_proxy_exec()) + return; + guard(raw_spinlock_irqsave)(&p->blocked_lock); + __set_blocked_on_runnable(p); } -#endif /* !CONFIG_PREEMPT_RT */ +#else /* CONFIG_SCHED_PROXY_EXEC */ +static inline void __force_blocked_on_blocked(struct task_struct *p) {} +static inline void __set_blocked_on_waking(struct task_struct *p) {} +static inline void set_blocked_on_waking(struct task_struct *p) {} +static inline void __force_blocked_on_runnable(struct task_struct *p) {} +static inline void __set_blocked_on_runnable(struct task_struct *p) {} +static inline void set_blocked_on_runnable(struct task_struct *p) {} +#endif /* CONFIG_SCHED_PROXY_EXEC */ =20 static __always_inline bool need_resched(void) { diff --git a/init/init_task.c b/init/init_task.c index 7e29d86153d9f..63b66b4aa585a 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -174,6 +174,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = =3D { .mems_allowed_seq =3D SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq, 
&init_task.alloc_lock), #endif +#ifdef CONFIG_SCHED_PROXY_EXEC + .blocked_on_state =3D BO_RUNNABLE, +#endif #ifdef CONFIG_RT_MUTEXES .pi_waiters =3D RB_ROOT_CACHED, .pi_top_task =3D NULL, diff --git a/kernel/fork.c b/kernel/fork.c index 796cfceb2bbda..d8eb66e5be918 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2130,6 +2130,9 @@ __latent_entropy struct task_struct *copy_process( #endif =20 p->blocked_on =3D NULL; /* not blocked yet */ +#ifdef CONFIG_SCHED_PROXY_EXEC + p->blocked_on_state =3D BO_RUNNABLE; +#endif =20 #ifdef CONFIG_BCACHE p->sequential_io =3D 0; diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index c44fc63d4476e..d8cf2e9a22a65 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -682,11 +682,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int s= tate, unsigned int subclas raw_spin_lock_irqsave(&lock->wait_lock, flags); raw_spin_lock(¤t->blocked_lock); /* - * As we likely have been woken up by task - * that has cleared our blocked_on state, re-set - * it to the lock we are trying to acquire. + * Re-set blocked_on_state as unlock path set it to WAKING/RUNNABLE */ - __set_task_blocked_on(current, lock); + __force_blocked_on_blocked(current); set_current_state(state); /* * Here we order against unlock; we must either see it change @@ -705,7 +703,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas * and clear blocked on so we don't become unselectable * to run. */ - __clear_task_blocked_on(current, lock); + __force_blocked_on_runnable(current); raw_spin_unlock(¤t->blocked_lock); raw_spin_unlock_irqrestore(&lock->wait_lock, flags); =20 @@ -714,7 +712,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas =20 raw_spin_lock_irqsave(&lock->wait_lock, flags); raw_spin_lock(¤t->blocked_lock); - __set_task_blocked_on(current, lock); + __force_blocked_on_blocked(current); =20 if (opt_acquired) break; @@ -966,8 +964,11 @@ static noinline void __sched __mutex_unlock_slowpath(s= truct mutex *lock, unsigne =20 next =3D waiter->task; =20 + raw_spin_lock(&next->blocked_lock); debug_mutex_wake_waiter(lock, waiter); - clear_task_blocked_on(next, lock); + WARN_ON_ONCE(__get_task_blocked_on(next) !=3D lock); + __set_blocked_on_waking(next); + raw_spin_unlock(&next->blocked_lock); wake_q_add(&wake_q, next); } =20 diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index e4a81790ea7dd..f34363615eb34 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITE= R *waiter, debug_mutex_wake_waiter(lock, waiter); #endif /* - * When waking up the task to die, be sure to clear the - * blocked_on pointer. Otherwise we can see circular - * blocked_on relationships that can't resolve. + * When waking up the task to die, be sure to set the + * blocked_on_state to BO_WAKING. Otherwise we can see + * circular blocked_on relationships that can't resolve. */ - clear_task_blocked_on(waiter->task, lock); + set_blocked_on_waking(waiter->task); wake_q_add(wake_q, waiter->task); } =20 @@ -339,15 +339,11 @@ static bool __ww_mutex_wound(struct MUTEX *lock, */ if (owner !=3D current) { /* - * When waking up the task to wound, be sure to clear the - * blocked_on pointer. Otherwise we can see circular - * blocked_on relationships that can't resolve. - * - * NOTE: We pass NULL here instead of lock, because we - * are waking the mutex owner, who may be currently - * blocked on a different mutex. 
+ * When waking up the task to wound, be sure to set the + * blocked_on_state to BO_WAKING. Otherwise we can see + * circular blocked_on relationships that can't resolve. */ - clear_task_blocked_on(owner, NULL); + set_blocked_on_waking(owner); wake_q_add(wake_q, owner); } return true; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 007459d42ae4a..abecd2411e29e 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4328,6 +4328,12 @@ int try_to_wake_up(struct task_struct *p, unsigned i= nt state, int wake_flags) ttwu_queue(p, cpu, wake_flags); } out: + /* + * For now, if we've been woken up, set us as BO_RUNNABLE + * We will need to be more careful later when handling + * proxy migration + */ + set_blocked_on_runnable(p); if (success) ttwu_stat(p, task_cpu(p), wake_flags); =20 @@ -6623,7 +6629,7 @@ static struct task_struct *proxy_deactivate(struct rq= *rq, struct task_struct *d * as unblocked, as we aren't doing proxy-migrations * yet (more logic will be needed then). */ - donor->blocked_on =3D NULL; + force_blocked_on_runnable(donor); } return NULL; } @@ -6651,6 +6657,7 @@ find_proxy_task(struct rq *rq, struct task_struct *do= nor, struct rq_flags *rf) int this_cpu =3D cpu_of(rq); struct task_struct *p; struct mutex *mutex; + enum { FOUND, DEACTIVATE_DONOR } action =3D FOUND; =20 /* Follow blocked_on chain. */ for (p =3D donor; task_is_blocked(p); p =3D owner) { @@ -6676,20 +6683,43 @@ find_proxy_task(struct rq *rq, struct task_struct *= donor, struct rq_flags *rf) return NULL; } =20 + /* + * If a ww_mutex hits the die/wound case, it marks the task as + * BO_WAKING and calls try_to_wake_up(), so that the mutex + * cycle can be broken and we avoid a deadlock. + * + * However, if at that moment, we are here on the cpu which the + * die/wounded task is enqueued, we might loop on the cycle as + * BO_WAKING still causes task_is_blocked() to return true + * (since we want return migration to occur before we run the + * task). + * + * Unfortunately since we hold the rq lock, it will block + * try_to_wake_up from completing and doing the return + * migration. + * + * So when we hit a !BO_BLOCKED task briefly schedule idle + * so we release the rq and let the wakeup complete. 
+ */ + if (p->blocked_on_state !=3D BO_BLOCKED) + return proxy_resched_idle(rq); + owner =3D __mutex_owner(mutex); if (!owner) { - __clear_task_blocked_on(p, mutex); + __force_blocked_on_runnable(p); return p; } =20 if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) { /* XXX Don't handle blocked owners/delayed dequeue yet */ - return proxy_deactivate(rq, donor); + action =3D DEACTIVATE_DONOR; + break; } =20 if (task_cpu(owner) !=3D this_cpu) { /* XXX Don't handle migrations yet */ - return proxy_deactivate(rq, donor); + action =3D DEACTIVATE_DONOR; + break; } =20 if (task_on_rq_migrating(owner)) { @@ -6747,6 +6777,13 @@ find_proxy_task(struct rq *rq, struct task_struct *d= onor, struct rq_flags *rf) */ } =20 + /* Handle actions we need to do outside of the guard() scope */ + switch (action) { + case DEACTIVATE_DONOR: + return proxy_deactivate(rq, donor); + case FOUND: + /* fallthrough */; + } WARN_ON_ONCE(owner && !owner->on_rq); return owner; } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index cf2109b67f9a3..03deb68ee5f86 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2284,13 +2284,17 @@ static inline int task_current_donor(struct rq *rq,= struct task_struct *p) return rq->donor =3D=3D p; } =20 +#ifdef CONFIG_SCHED_PROXY_EXEC static inline bool task_is_blocked(struct task_struct *p) { if (!sched_proxy_exec()) return false; =20 - return !!p->blocked_on; + return !!p->blocked_on && p->blocked_on_state !=3D BO_RUNNABLE; } +#else +static inline bool task_is_blocked(struct task_struct *p) { return false; } +#endif =20 static inline int task_on_cpu(struct rq *rq, struct task_struct *p) { --=20 2.51.0.536.g15c5d4f767-goog From nobody Wed Oct 1 23:30:09 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 67698265CA8 for ; Fri, 26 Sep 2025 03:29:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857383; cv=none; b=CGlG//0E/7tAB7P/IFdD8xW4ZM7Q+G9iABFAeiObhMIiAVXW+zznDHAkj7piZ5YM5RVayC27ud4MfGLHFgfEzTFTCViSsN2PNP6l28TovKy7UlOUXASM02eakqAl9NehOaxJXgk7FP6rDsbByqyVLDNhxqyi0S9DyUgmdwGJCvQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857383; c=relaxed/simple; bh=ByK8/x1tt5oPdvHfzNUqnZZefft3Yw5ZeiFyCsyeXdo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=eSFWH3SYOB6LeUcuBJ9O2qfETVQO/zS5lVeeuDUohNv8RA50IksPRnjGgIEfJWZoZ5lt2D9tsgicb5pc5rXs5vyF3KbW1EfRbnXIWa9X2N3dPiXp7pSIWmQtI1y82ZdPdqTPjbblDvDIgZpj6zrvKlLmmC0Osj3fnKqzh9ZTMXk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=cxxeJR5J; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="cxxeJR5J" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-32eb18b5500so2908092a91.2 for ; Thu, 25 Sep 2025 20:29:41 
-0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1758857381; x=1759462181; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=EjwZVObXSdDQzqr9JPtk/bu6D9iQBNEIElU8HnQUaaI=; b=cxxeJR5JaHGXFuC0GKaMboejRVNpbhirqVPUwNFY8aVrwYcvQQ1DzcrUpYKrJ/MEYY P1ps8U8Ojjp2C5mT8fmYYtygnHaUjvCR3/yx47dmfaInbdd32ARklJV11eO8K9AdOg3c zfh2XSjUlWY0BgrFqmSNtg/QFB5N4itlHS5XiPwfWGSN4jWHnyaJvGVd4RAzUHdm2E2u dnoVdCmfmoTe4mCkrUKM8xPmdgQaMg0AnKWatfsRRmwZQHKtwQnNNL+sS8Z0p9kk9in8 yLvYhHS4oKjl58OLhNZGNlvGZrUSvhaQ2vrr+fGIyJ4APQGcP+uTXvylUpSghG535dZo N3Zw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1758857381; x=1759462181; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=EjwZVObXSdDQzqr9JPtk/bu6D9iQBNEIElU8HnQUaaI=; b=Tv9d0NT1Y9aP6uzmIJ+1PwVNcY6rgBaAukJxgIy3zD0N+ptcAYEWXP4E++6srB77TA sjI1nNTa+Lxb04StPsvWeuy2/o5Hi9Y8NN4HIX6NqrKbohpvgzWpdDl4EbDwaJZRsVj5 t8ZV5wrOCZH2WVWxVxEKOwPYpoKc4ih4PqcqdEywyOgqgYBdUxvVeQV1ou20pZBE6A4b 1iUs9ihSAKEZs1nm2MwI4KXxF/FObYTiJ5HmNXfIIMaUAfUmDBqkAtiQzE/76eu0aBTH 33bvScRvWf6dJV+EFyv3Z9/xuT9hznYGNhQck7kx0Lg1s+OMqs7SqdzLFklY2flK+ueK hc+A== X-Gm-Message-State: AOJu0YxtIe9+QfChnpdqWm1eceJ3e+1Y5HEJJyOmc7/HLQnstnV1pC6C SH9uZ5pGKB0lhkLMZQFLyNtATRlyZcY02NEXrM5cO59IEVoL1C7R9NLjB6iRTPoW5I7vhhjIKWP oiyBQZKvDrWZMYiwEocCMDAI/dq+iSY220yUBME2wz0eejn4DEkqmyJe7zgx9aCYfEePeEqsMdg g2GDcZJoLCmJ/AkA6PHY9MHWzvDa5gQuHPFHvz5iY4ScyXCW1/ X-Google-Smtp-Source: AGHT+IHQHfHAJOJ99kyAbp3SU+NAOR0hzqg9heJZeiVkns8I+waWarOs6Ql+bJRJqop8tMiOydECBznl/i5b X-Received: from pjbmi6.prod.google.com ([2002:a17:90b:4b46:b0:330:793a:2e77]) (user=jstultz job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:380f:b0:32e:32e4:9789 with SMTP id 98e67ed59e1d1-3342a257486mr6517076a91.3.1758857380535; Thu, 25 Sep 2025 20:29:40 -0700 (PDT) Date: Fri, 26 Sep 2025 03:29:11 +0000 In-Reply-To: <20250926032931.27663-1-jstultz@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250926032931.27663-1-jstultz@google.com> X-Mailer: git-send-email 2.51.0.536.g15c5d4f767-goog Message-ID: <20250926032931.27663-4-jstultz@google.com> Subject: [PATCH v22 3/6] sched: Add logic to zap balance callbacks if we pick again From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Will Deacon , Waiman Long , Boqun Feng , "Paul E. McKenney" , Metin Kaya , Xuewen Yan , K Prateek Nayak , Thomas Gleixner , Daniel Lezcano , Suleiman Souhlal , kuyo chang , hupu , kernel-team@android.com Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" With proxy-exec, a task is selected to run via pick_next_task(), and then if it is a mutex blocked task, we call find_proxy_task() to find a runnable owner. If the runnable owner is on another cpu, we will need to migrate the selected donor task away, after which we will pick_again can call pick_next_task() to choose something else. However, in the first call to pick_next_task(), we may have had a balance_callback setup by the class scheduler. 
After we pick again, its possible pick_next_task_fair() will be called which calls sched_balance_newidle() and sched_balance_rq(). This will throw a warning: [ 8.796467] rq->balance_callback && rq->balance_callback !=3D &balance_p= ush_callback [ 8.796467] WARNING: CPU: 32 PID: 458 at kernel/sched/sched.h:1750 sched= _balance_rq+0xe92/0x1250 ... [ 8.796467] Call Trace: [ 8.796467] [ 8.796467] ? __warn.cold+0xb2/0x14e [ 8.796467] ? sched_balance_rq+0xe92/0x1250 [ 8.796467] ? report_bug+0x107/0x1a0 [ 8.796467] ? handle_bug+0x54/0x90 [ 8.796467] ? exc_invalid_op+0x17/0x70 [ 8.796467] ? asm_exc_invalid_op+0x1a/0x20 [ 8.796467] ? sched_balance_rq+0xe92/0x1250 [ 8.796467] sched_balance_newidle+0x295/0x820 [ 8.796467] pick_next_task_fair+0x51/0x3f0 [ 8.796467] __schedule+0x23a/0x14b0 [ 8.796467] ? lock_release+0x16d/0x2e0 [ 8.796467] schedule+0x3d/0x150 [ 8.796467] worker_thread+0xb5/0x350 [ 8.796467] ? __pfx_worker_thread+0x10/0x10 [ 8.796467] kthread+0xee/0x120 [ 8.796467] ? __pfx_kthread+0x10/0x10 [ 8.796467] ret_from_fork+0x31/0x50 [ 8.796467] ? __pfx_kthread+0x10/0x10 [ 8.796467] ret_from_fork_asm+0x1a/0x30 [ 8.796467] This is because if a RT task was originally picked, it will setup the rq->balance_callback with push_rt_tasks() via set_next_task_rt(). Once the task is migrated away and we pick again, we haven't processed any balance callbacks, so rq->balance_callback is not in the same state as it was the first time pick_next_task was called. To handle this, add a zap_balance_callbacks() helper function which cleans up the balance callbacks without running them. This should be ok, as we are effectively undoing the state set in the first call to pick_next_task(), and when we pick again, the new callback can be configured for the donor task actually selected. Signed-off-by: John Stultz --- v20: * Tweaked to avoid build issues with different configs v22: * Spelling fix suggested by K Prateek * Collapsed the stub implementation to one line as suggested by K Prateek * Zap callbacks when we resched idle, as suggested by K Prateek Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E. McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- kernel/sched/core.c | 41 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 39 insertions(+), 2 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index abecd2411e29e..7bba05c07a79d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5001,6 +5001,38 @@ static inline void finish_task(struct task_struct *p= rev) smp_store_release(&prev->on_cpu, 0); } =20 +#ifdef CONFIG_SCHED_PROXY_EXEC +/* + * Only called from __schedule context + * + * There are some cases where we are going to re-do the action + * that added the balance callbacks. We may not be in a state + * where we can run them, so just zap them so they can be + * properly re-added on the next time around. This is similar + * handling to running the callbacks, except we just don't call + * them. 
+ */ +static void zap_balance_callbacks(struct rq *rq) +{ + struct balance_callback *next, *head; + bool found =3D false; + + lockdep_assert_rq_held(rq); + + head =3D rq->balance_callback; + while (head) { + if (head =3D=3D &balance_push_callback) + found =3D true; + next =3D head->next; + head->next =3D NULL; + head =3D next; + } + rq->balance_callback =3D found ? &balance_push_callback : NULL; +} +#else +static inline void zap_balance_callbacks(struct rq *rq) {} +#endif + static void do_balance_callbacks(struct rq *rq, struct balance_callback *h= ead) { void (*func)(struct rq *rq); @@ -6942,10 +6974,15 @@ static void __sched notrace __schedule(int sched_mo= de) rq_set_donor(rq, next); if (unlikely(task_is_blocked(next))) { next =3D find_proxy_task(rq, next, &rf); - if (!next) + if (!next) { + /* zap the balance_callbacks before picking again */ + zap_balance_callbacks(rq); goto pick_again; - if (next =3D=3D rq->idle) + } + if (next =3D=3D rq->idle) { + zap_balance_callbacks(rq); goto keep_resched; + } } picked: clear_tsk_need_resched(prev); --=20 2.51.0.536.g15c5d4f767-goog From nobody Wed Oct 1 23:30:09 2025 Received: from mail-pg1-f201.google.com (mail-pg1-f201.google.com [209.85.215.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0B14C26B2D3 for ; Fri, 26 Sep 2025 03:29:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857385; cv=none; b=eMn5LeNO18NzH+FPjnwNgnevQTUu1vlhO74l4BlwzgZkocWGs/4FPz5O9E9203Yk99SBwn3a4274eKbiylDTEGHQGLyxO7HeFEWZf+j8b+SULsD1HDzgyZu2Q9ZbZtPyWsp/RTT1uedp4lX5rZ/bzBww2EuSiWCr5q5FKNPcQEc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758857385; c=relaxed/simple; bh=qJFXKPITvuT7X+XmrovEThel3iJ0hK0ygL8oow4WsJo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=BKipScont7Nb20hpg/ncm+6auMLBEgFz/mQyVF+OcEXcjNDdEUpVnIhHsAqMjgl4XnSDnd4lL2JpJo6+7hXUfdFrhQ16eJv+8LmSf5/boliGFnsykZMI3DM6fBVCCRGT3vVfK7YrcTUtnSgrJDgHCdxBrowXwradaV5uSir/X7U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=k49rkqrs; arc=none smtp.client-ip=209.85.215.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--jstultz.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="k49rkqrs" Received: by mail-pg1-f201.google.com with SMTP id 41be03b00d2f7-b54d0ffd16eso1159120a12.3 for ; Thu, 25 Sep 2025 20:29:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1758857382; x=1759462182; darn=vger.kernel.org; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id :reply-to; bh=sKOlWB7gZRYvUHqbh9o97tZMntRNxjlW0wwQyknH9Xo=; b=k49rkqrsP0m0e7eD2lVp1XioqAy258MO6t3Eqc2kEiru44UKDNpUZSW+cpIOEVNxvV s4mcLi6KFrxLEDo/hRmg+BbmufQ0KxWV6VvvIFEDO/dYghvqPErc7F8X6OxdVPwoVB1+ e74On4oQu+gR/S8xXzZDsGk0jBsLcyscphsSc92WoRFHW2xWKuav0Oil/xa3kMvKWxjC 
DYuRjZkLX2v/FvbfnIwI/4UmF1FEV7FjG/QJtpTEDTS9C9upYIyl9vplkAu78M+pMcB2 DriKUjCCY0IRz0p6Q5CeTTPbi+M4Q7HoCU0Xvw3RApTFqNh/EnL3upTPAJzGhnGKe0V+ sePw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1758857382; x=1759462182; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject :date:message-id:reply-to; bh=sKOlWB7gZRYvUHqbh9o97tZMntRNxjlW0wwQyknH9Xo=; b=LeD96iLtTDMvE1EM2lcmQ3vwRaif2E+pp3ty5asnUBblQ/n4EgEk4++TKO9UdF25jR 6QsvT9u3rbfYTcC1AxsdQbmgOMKivWxpcJwwMQQo3G3vd9jt/cb9hQCqatVp9K5Iaeew E5ZwT5fjjEZIqtaKisVwTvXnpJ3vLw5T011fD6XPJNENIBNh3mo2+nyOaMhVBjQgDP42 GhcrTxXaJqJz1MhfNpw8a+NI1caYsftMFhx4+dXj0YJtq55gaCP9+J2xxmFPbSUD7rPV IFuSNVcpd/QtW3UmxIrlzfNVTJZZhSlcmkIvHcmWw+GMZH2FK4X91K+B8pq2mFt6YaM9 RSlA== X-Gm-Message-State: AOJu0YzyH4A0E4R4/N2ilQ4qX+cXCdIWMwVQ6GZf5W4Ju66aoYp5dBRQ siMrZV0drCVx5M+r/n/Fwje9rGM4vdBM5ZxHVHx7lidbyibGT56WvPXu8x0ooSUwvm977KrECM5 BamPf688nxxPjjc6NgGqrAGCmg2Wr/G0WXxr3GHCxUWJ/q+BNk5j3aAQ/Csl5Red3ppTqhR6UTR 7CkUOehA+g0hmQrF27uKBO93dmAnXI7q9600xHAOA/tn6w5GBk X-Google-Smtp-Source: AGHT+IG/ccK5bm/w+8oPZuZ30pcjTo0nwl94Jv0byvDVM+/n28ATTMBJpPN5ZE1XzcXKHwQO4jRZqibg3dUl X-Received: from pgbfm8.prod.google.com ([2002:a05:6a02:4988:b0:b49:d191:459c]) (user=jstultz job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6300:880f:b0:2f5:ba02:a28f with SMTP id adf61e73a8af0-2f5ba6d74d5mr1189757637.19.1758857382282; Thu, 25 Sep 2025 20:29:42 -0700 (PDT) Date: Fri, 26 Sep 2025 03:29:12 +0000 In-Reply-To: <20250926032931.27663-1-jstultz@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250926032931.27663-1-jstultz@google.com> X-Mailer: git-send-email 2.51.0.536.g15c5d4f767-goog Message-ID: <20250926032931.27663-5-jstultz@google.com> Subject: [PATCH v22 4/6] sched: Handle blocked-waiter migration (and return migration) From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Will Deacon , Waiman Long , Boqun Feng , "Paul E. McKenney" , Metin Kaya , Xuewen Yan , K Prateek Nayak , Thomas Gleixner , Daniel Lezcano , Suleiman Souhlal , kuyo chang , hupu , kernel-team@android.com Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add logic to handle migrating a blocked waiter to a remote cpu where the lock owner is runnable. Additionally, as the blocked task may not be able to run on the remote cpu, add logic to handle return migration once the waiting task is given the mutex. Because tasks may get migrated to where they cannot run, also modify the scheduling classes to avoid sched class migrations on mutex blocked tasks, leaving find_proxy_task() and related logic to do the migrations and return migrations. This was split out from the larger proxy patch, and significantly reworked. 
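For readers new to the affinity problem described above, the following is a minimal, self-contained userspace sketch, not kernel code: the types and helper names (toy_task, proxy_migrate, return_migrate_on_wakeup) are illustrative assumptions rather than the scheduler's actual API. It only models the two decisions the patch describes: proxy-migrating a blocked donor toward the lock owner's CPU even when that CPU is outside the donor's affinity mask, and return-migrating the task to an allowed CPU (its saved wake_cpu) once it is woken holding the mutex.

/*
 * Illustrative userspace model only -- NOT the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_task {
	const char *name;
	unsigned int cpus_allowed;	/* bitmask of CPUs this task may run on */
	int cpu;			/* CPU it is currently enqueued on */
	int wake_cpu;			/* CPU to return to on wakeup */
	bool blocked_on_mutex;
};

static bool cpu_allowed(const struct toy_task *t, int cpu)
{
	return t->cpus_allowed & (1u << cpu);
}

/* Proxy migration: the donor follows the owner; affinity of the
 * scheduling context is ignored, but remember where it could run. */
static void proxy_migrate(struct toy_task *donor, int owner_cpu)
{
	donor->wake_cpu = donor->cpu;
	donor->cpu = owner_cpu;
	printf("%s proxy-migrated to CPU%d (allowed there? %s)\n",
	       donor->name, owner_cpu,
	       cpu_allowed(donor, owner_cpu) ? "yes" : "no");
}

/* Return migration: on wakeup, send the task back to a CPU it is
 * actually allowed to run on before letting it execute. */
static void return_migrate_on_wakeup(struct toy_task *t)
{
	t->blocked_on_mutex = false;
	if (!cpu_allowed(t, t->cpu)) {
		printf("%s return-migrated from CPU%d back to CPU%d\n",
		       t->name, t->cpu, t->wake_cpu);
		t->cpu = t->wake_cpu;
	}
}

int main(void)
{
	struct toy_task waiter = {
		.name = "waiter",
		.cpus_allowed = 0x3,	/* CPUs 0-1 only */
		.cpu = 1,
		.wake_cpu = 1,
		.blocked_on_mutex = true,
	};

	proxy_migrate(&waiter, 5);		/* lock owner runs on CPU5 */
	return_migrate_on_wakeup(&waiter);	/* mutex handed over, wake up */
	printf("%s now runnable on CPU%d\n", waiter.name, waiter.cpu);
	return 0;
}
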
Credits for the original patch go to: Peter Zijlstra (Intel) Juri Lelli Valentin Schneider Connor O'Brien Signed-off-by: John Stultz --- v6: * Integrated sched_proxy_exec() check in proxy_return_migration() * Minor cleanups to diff * Unpin the rq before calling __balance_callbacks() * Tweak proxy migrate to migrate deeper task in chain, to avoid tasks pingponging between rqs v7: * Fixup for unused function arguments * Switch from that_rq -> target_rq, other minor tweaks, and typo fixes suggested by Metin Kaya * Switch back to doing return migration in the ttwu path, which avoids nasty lock juggling and performance issues * Fixes for UP builds v8: * More simplifications from Metin Kaya * Fixes for null owner case, including doing return migration * Cleanup proxy_needs_return logic v9: * Narrow logic in ttwu that sets BO_RUNNABLE, to avoid missed return migrations * Switch to using zap_balance_callbacks rathern then running them when we are dropping rq locks for proxy_migration. * Drop task_is_blocked check in sched_submit_work as suggested by Metin (may re-add later if this causes trouble) * Do return migration when we're not on wake_cpu. This avoids bad task placement caused by proxy migrations raised by Xuewen Yan * Fix to call set_next_task(rq->curr) prior to dropping rq lock to avoid rq->curr getting migrated before we have actually switched from it * Cleanup to re-use proxy_resched_idle() instead of open coding it in proxy_migrate_task() * Fix return migration not to use DEQUEUE_SLEEP, so that we properly see the task as task_on_rq_migrating() after it is dequeued but before set_task_cpu() has been called on it * Fix to broaden find_proxy_task() checks to avoid race where a task is dequeued off the rq due to return migration, but set_task_cpu() and the enqueue on another rq happened after we checked task_cpu(owner). This ensures we don't proxy using a task that is not actually on our runqueue. * Cleanup to avoid the locked BO_WAKING->BO_RUNNABLE transition in try_to_wake_up() if proxy execution isn't enabled. * Cleanup to improve comment in proxy_migrate_task() explaining the set_next_task(rq->curr) logic * Cleanup deadline.c change to stylistically match rt.c change * Numerous cleanups suggested by Metin v10: * Drop WARN_ON(task_is_blocked(p)) in ttwu current case v11: * Include proxy_set_task_cpu from later in the series to this change so we can use it, rather then reworking logic later in the series. * Fix problem with return migration, where affinity was changed and wake_cpu was left outside the affinity mask. * Avoid reading the owner's cpu twice (as it might change inbetween) to avoid occasional migration-to-same-cpu edge cases * Add extra WARN_ON checks for wake_cpu and return migration edge cases. * Typo fix from Metin v13: * As we set ret, return it, not just NULL (pulling this change in from later patch) * Avoid deadlock between try_to_wake_up() and find_proxy_task() when blocked_on cycle with ww_mutex is trying a mid-chain wakeup. 
* Tweaks to use new __set_blocked_on_runnable() helper * Potential fix for incorrectly updated task->dl_server issues * Minor comment improvements * Add logic to handle missed wakeups, in that case doing return migration from the find_proxy_task() path * Minor cleanups v14: * Improve edge cases where we wouldn't set the task as BO_RUNNABLE v15: * Added comment to better describe proxy_needs_return() as suggested by Qais * Build fixes for !CONFIG_SMP reported by Maciej =C5=BBenczykowski * Adds fix for re-evaluating proxy_needs_return when sched_proxy_exec() is disabled, reported and diagnosed by: kuyo chang v16: * Larger rework of needs_return logic in find_proxy_task, in order to avoid problems with cpuhotplug * Rework to use guard() as suggested by Peter v18: * Integrate optimization suggested by Suleiman to do the checks for sleeping owners before checking if the task_cpu is this_cpu, so that we can avoid needlessly proxy-migrating tasks to only then dequeue them. Also check if migrating last. * Improve comments around guard locking * Include tweak to ttwu_runnable() as suggested by hupu * Rework the logic releasing the rq->donor reference before letting go of the rqlock. Just use rq->idle. * Go back to doing return migration on BO_WAKING owners, as I was hitting some softlockups caused by running tasks not making it out of BO_WAKING. v19: * Fixed proxy_force_return() logic for !SMP cases v21: * Reworked donor deactivation for unhandled sleeping owners * Commit message tweaks v22: * Add comments around zap_balance_callbacks in proxy_migration logic * Rework logic to avoid gotos out of guard() scopes, and instead use break and switch() on action value, as suggested by K Prateek * K Prateek suggested simplifications around putting donor and setting idle as next task in the migration paths, which I further simplified to using proxy_resched_idle() * Comment improvements * Dropped curr !=3D donor check in pick_next_task_fair() suggested by K Prateek Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E. 
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 kernel/sched/core.c | 256 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 228 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7bba05c07a79d..d063d2c9bd5aa 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3157,6 +3157,14 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, ctx);
 
+	/*
+	 * It might be that the p->wake_cpu is no longer
+	 * allowed, so set it to the dest_cpu so return
+	 * migration doesn't send it to an invalid cpu
+	 */
+	if (!is_cpu_allowed(p, p->wake_cpu))
+		p->wake_cpu = dest_cpu;
+
 	return affine_move_task(rq, p, rf, dest_cpu, ctx->flags);
 
 out:
@@ -3717,6 +3725,72 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
 	trace_sched_wakeup(p);
 }
 
+#ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
+{
+	unsigned int wake_cpu;
+
+	/*
+	 * Since we are enqueuing a blocked task on a cpu it may
+	 * not be able to run on, preserve wake_cpu when we
+	 * __set_task_cpu so we can return the task to where it
+	 * was previously runnable.
+	 */
+	wake_cpu = p->wake_cpu;
+	__set_task_cpu(p, cpu);
+	p->wake_cpu = wake_cpu;
+}
+
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return false;
+	return (READ_ONCE(p->__state) == TASK_RUNNING &&
+		READ_ONCE(p->blocked_on_state) == BO_WAKING);
+}
+
+/*
+ * Checks to see if task p has been proxy-migrated to another rq
+ * and needs to be returned. If so, we deactivate the task here
+ * so that it can be properly woken up on the p->wake_cpu
+ * (or whichever cpu select_task_rq() picks at the bottom of
+ * try_to_wake_up()
+ */
+static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
+{
+	bool ret = false;
+
+	if (!sched_proxy_exec())
+		return false;
+
+	raw_spin_lock(&p->blocked_lock);
+	if (__get_task_blocked_on(p) && p->blocked_on_state == BO_WAKING) {
+		if (!task_current(rq, p) && (p->wake_cpu != cpu_of(rq))) {
+			if (task_current_donor(rq, p)) {
+				put_prev_task(rq, p);
+				rq_set_donor(rq, rq->idle);
+			}
+			deactivate_task(rq, p, DEQUEUE_NOCLOCK);
+			ret = true;
+		}
+		__set_blocked_on_runnable(p);
+		resched_curr(rq);
+	}
+	raw_spin_unlock(&p->blocked_lock);
+	return ret;
+}
+#else /* !CONFIG_SCHED_PROXY_EXEC */
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+	return false;
+}
+
+static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
+{
+	return false;
+}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
+
 static void ttwu_do_activate(struct rq *rq, struct task_struct *p,
 			     int wake_flags, struct rq_flags *rf)
@@ -3802,6 +3876,8 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 	update_rq_clock(rq);
 	if (p->se.sched_delayed)
 		enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+	if (proxy_needs_return(rq, p))
+		goto out;
 	if (!task_on_cpu(rq, p)) {
 		/*
 		 * When on_rq && !on_cpu the task is preempted, see if
@@ -3812,6 +3888,7 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 		ttwu_do_wakeup(p);
 		ret = 1;
 	}
+out:
 	__task_rq_unlock(rq, &rf);
 
 	return ret;
@@ -4199,6 +4276,8 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 * it disabling IRQs (this allows not taking ->pi_lock).
 	 */
 	WARN_ON_ONCE(p->se.sched_delayed);
+	/* If current is waking up, we know we can run here, so set BO_RUNNABLE */
+	set_blocked_on_runnable(p);
 	if (!ttwu_state_match(p, state, &success))
 		goto out;
 
@@ -4215,8 +4294,15 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 */
 	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
 		smp_mb__after_spinlock();
-		if (!ttwu_state_match(p, state, &success))
-			break;
+		if (!ttwu_state_match(p, state, &success)) {
+			/*
+			 * If we're already TASK_RUNNING and BO_WAKING,
+			 * continue on to the ttwu_runnable check to force
+			 * proxy_needs_return evaluation
+			 */
+			if (!proxy_task_runnable_but_waking(p))
+				break;
+		}
 
 		trace_sched_waking(p);
 
@@ -4278,6 +4364,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		 * enqueue, such as ttwu_queue_wakelist().
 		 */
 		WRITE_ONCE(p->__state, TASK_WAKING);
+		set_blocked_on_runnable(p);
 
 		/*
 		 * If the owning (remote) CPU is still in the middle of schedule() with
@@ -4328,12 +4415,6 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		ttwu_queue(p, cpu, wake_flags);
 	}
 out:
-	/*
-	 * For now, if we've been woken up, set us as BO_RUNNABLE
-	 * We will need to be more careful later when handling
-	 * proxy migration
-	 */
-	set_blocked_on_runnable(p);
 	if (success)
 		ttwu_stat(p, task_cpu(p), wake_flags);
 
@@ -6633,7 +6714,7 @@ static inline struct task_struct *proxy_resched_idle(struct rq *rq)
 	return rq->idle;
 }
 
-static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
+static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
 {
 	unsigned long state = READ_ONCE(donor->__state);
 
@@ -6653,17 +6734,97 @@ static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
 	return try_to_block_task(rq, donor, &state, true);
 }
 
-static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *donor)
+/*
+ * If the blocked-on relationship crosses CPUs, migrate @p to the
+ * owner's CPU.
+ *
+ * This is because we must respect the CPU affinity of execution
+ * contexts (owner) but we can ignore affinity for scheduling
+ * contexts (@p). So we have to move scheduling contexts towards
+ * potential execution contexts.
+ *
+ * Note: The owner can disappear, but simply migrate to @target_cpu
+ * and leave that CPU to sort things out.
+ */
+static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
+			       struct task_struct *p, int target_cpu)
 {
-	if (!__proxy_deactivate(rq, donor)) {
-		/*
-		 * XXX: For now, if deactivation failed, set donor
-		 * as unblocked, as we aren't doing proxy-migrations
-		 * yet (more logic will be needed then).
-		 */
-		force_blocked_on_runnable(donor);
-	}
-	return NULL;
+	struct rq *target_rq = cpu_rq(target_cpu);
+
+	lockdep_assert_rq_held(rq);
+
+	/*
+	 * Since we're going to drop @rq, we have to put(@rq->donor) first,
+	 * otherwise we have a reference that no longer belongs to us.
+	 *
+	 * Additionally, as we put_prev_task(prev) earlier, it's possible that
+	 * prev will migrate away as soon as we drop the rq lock, however we
+	 * still have it marked as rq->curr, as we've not yet switched tasks.
+	 *
+	 * So call proxy_resched_idle() to let go of the references before
+	 * we release the lock.
+	 */
+	proxy_resched_idle(rq);
+
+	WARN_ON(p == rq->curr);
+
+	deactivate_task(rq, p, 0);
+	proxy_set_task_cpu(p, target_cpu);
+
+	/*
+	 * We have to zap callbacks before unlocking the rq
+	 * as another CPU may jump in and call sched_balance_rq
+	 * which can trip the warning in rq_pin_lock() if we
+	 * leave callbacks set.
+	 */
+	zap_balance_callbacks(rq);
+	rq_unpin_lock(rq, rf);
+	raw_spin_rq_unlock(rq);
+	raw_spin_rq_lock(target_rq);
+
+	activate_task(target_rq, p, 0);
+	wakeup_preempt(target_rq, p, 0);
+
+	raw_spin_rq_unlock(target_rq);
+	raw_spin_rq_lock(rq);
+	rq_repin_lock(rq, rf);
+}
+
+static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
+			       struct task_struct *p)
+{
+	lockdep_assert_rq_held(rq);
+
+	proxy_resched_idle(rq);
+
+	WARN_ON(p == rq->curr);
+
+	set_blocked_on_waking(p);
+	get_task_struct(p);
+	block_task(rq, p, 0);
+
+	/*
+	 * We have to zap callbacks before unlocking the rq
+	 * as another CPU may jump in and call sched_balance_rq
+	 * which can trip the warning in rq_pin_lock() if we
+	 * leave callbacks set.
+	 */
+	zap_balance_callbacks(rq);
+	rq_unpin_lock(rq, rf);
+	raw_spin_rq_unlock(rq);
+
+	wake_up_process(p);
+	put_task_struct(p);
+
+	raw_spin_rq_lock(rq);
+	rq_repin_lock(rq, rf);
+}
+
+static inline bool proxy_can_run_here(struct rq *rq, struct task_struct *p)
+{
+	if (p == rq->curr || p->wake_cpu == cpu_of(rq))
+		return true;
+	return false;
 }
 
 /*
@@ -6686,10 +6847,12 @@ static struct task_struct *
 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 {
 	struct task_struct *owner = NULL;
+	bool curr_in_chain = false;
 	int this_cpu = cpu_of(rq);
 	struct task_struct *p;
 	struct mutex *mutex;
-	enum { FOUND, DEACTIVATE_DONOR } action = FOUND;
+	int owner_cpu;
+	enum { FOUND, DEACTIVATE_DONOR, MIGRATE, NEEDS_RETURN } action = FOUND;
 
 	/* Follow blocked_on chain. */
 	for (p = donor; task_is_blocked(p); p = owner) {
@@ -6715,6 +6878,10 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 			return NULL;
 		}
 
+		/* Double check blocked_on_state now we're holding the lock */
+		if (p->blocked_on_state == BO_RUNNABLE)
+			return p;
+
 		/*
 		 * If a ww_mutex hits the die/wound case, it marks the task as
 		 * BO_WAKING and calls try_to_wake_up(), so that the mutex
@@ -6730,27 +6897,50 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		 * try_to_wake_up from completing and doing the return
 		 * migration.
 		 *
-		 * So when we hit a !BO_BLOCKED task briefly schedule idle
-		 * so we release the rq and let the wakeup complete.
+		 * So when we hit a BO_WAKING task try to wake it up ourselves.
 		 */
-		if (p->blocked_on_state != BO_BLOCKED)
-			return proxy_resched_idle(rq);
+		if (p->blocked_on_state == BO_WAKING) {
+			if (task_current(rq, p)) {
+				/* If it's current, just set it runnable */
+				__force_blocked_on_runnable(p);
+				return p;
+			}
+			action = NEEDS_RETURN;
+			break;
+		}
+
+		if (task_current(rq, p))
+			curr_in_chain = true;
 
 		owner = __mutex_owner(mutex);
 		if (!owner) {
+			/* If the owner is null, we may have some work to do */
+			if (!proxy_can_run_here(rq, p)) {
+				action = NEEDS_RETURN;
+				break;
+			}
+
+			__force_blocked_on_runnable(p);
 			return p;
 		}
 
 		if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) {
 			/* XXX Don't handle blocked owners/delayed dequeue yet */
+			if (curr_in_chain)
+				return proxy_resched_idle(rq);
+
 			action = DEACTIVATE_DONOR;
 			break;
 		}
 
-		if (task_cpu(owner) != this_cpu) {
-			/* XXX Don't handle migrations yet */
-			action = DEACTIVATE_DONOR;
+		owner_cpu = task_cpu(owner);
+		if (owner_cpu != this_cpu) {
+			/*
+			 * @owner can disappear, simply migrate to @owner_cpu
+			 * and leave that CPU to sort things out.
+			 */
+			if (curr_in_chain)
+				return proxy_resched_idle(rq);
+			action = MIGRATE;
 			break;
 		}
 
@@ -6812,7 +7002,17 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	/* Handle actions we need to do outside of the guard() scope */
 	switch (action) {
 	case DEACTIVATE_DONOR:
-		return proxy_deactivate(rq, donor);
+		if (proxy_deactivate(rq, donor))
+			return NULL;
+		/* If deactivate fails, force return */
+		p = donor;
+		fallthrough;
+	case NEEDS_RETURN:
+		proxy_force_return(rq, rf, p);
+		return NULL;
+	case MIGRATE:
+		proxy_migrate_task(rq, rf, p, owner_cpu);
+		return NULL;
 	case FOUND:
 		/* fallthrough */;
 	}
-- 
2.51.0.536.g15c5d4f767-goog

From nobody Wed Oct 1 23:30:09 2025
Date: Fri, 26 Sep 2025 03:29:13 +0000
In-Reply-To: <20250926032931.27663-1-jstultz@google.com>
References: <20250926032931.27663-1-jstultz@google.com>
Message-ID: <20250926032931.27663-6-jstultz@google.com>
Subject: [PATCH v22 5/6] sched: Add blocked_donor link to task for smarter mutex handoffs
From: John Stultz
To: LKML
Cc: Peter Zijlstra, Juri Lelli, Valentin Schneider, "Connor O'Brien",
 John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
 Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya,
 Xuewen Yan, K Prateek Nayak, Thomas Gleixner, Daniel Lezcano,
 Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com

From: Peter Zijlstra

Add a link to the task this task is proxying for, and use it so the
mutex owner can do an intelligent hand-off of the mutex to the task
that the owner is running on behalf of.
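To illustrate the preference this introduces, here is a minimal user-space
sketch (not kernel code; the task/mutex types, the fixed-size wait list, and
the missing locking are simplified stand-ins for illustration only). It shows
the selection order the unlock path uses: prefer the blocked_donor if that
donor is actually waiting on this lock, otherwise fall back to the head of
the wait list.

#include <stdio.h>
#include <stddef.h>

struct mutex;

struct task {
	const char *name;
	struct mutex *blocked_on;	/* lock this task waits for */
	struct task *blocked_donor;	/* task boosting this task */
};

struct mutex {
	struct task *owner;
	struct task *waiters[4];	/* simplified stand-in wait list */
	int nr_waiters;
};

/*
 * Pick who the unlocking owner should hand the lock to: prefer its
 * blocked_donor when that donor is waiting on this lock, otherwise
 * fall back to the first waiter on the wait list.
 */
static struct task *pick_next_owner(struct mutex *m, struct task *owner)
{
	struct task *donor = owner->blocked_donor;

	if (donor && donor->blocked_on == m) {
		owner->blocked_donor = NULL;
		return donor;
	}
	return m->nr_waiters ? m->waiters[0] : NULL;
}

int main(void)
{
	struct mutex m = {0};
	struct task owner  = { .name = "owner" };
	struct task waiter = { .name = "first-waiter", .blocked_on = &m };
	struct task donor  = { .name = "donor", .blocked_on = &m };

	m.owner = &owner;
	m.waiters[m.nr_waiters++] = &waiter;
	m.waiters[m.nr_waiters++] = &donor;

	/* donor has been proxying for owner, so it gets the lock first */
	owner.blocked_donor = &donor;
	printf("hand off to: %s\n", pick_next_owner(&m, &owner)->name);

	/* without a donor, we fall back to the head of the wait list */
	printf("hand off to: %s\n", pick_next_owner(&m, &owner)->name);
	return 0;
}

The real patch performs this decision under lock->wait_lock and the tasks'
blocked_lock and still honors MUTEX_FLAG_HANDOFF; the sketch only models the
selection order.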
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Signed-off-by: Valentin Schneider
Signed-off-by: Connor O'Brien
[jstultz: This patch was split out from the larger proxy patch]
Signed-off-by: John Stultz
---
v5:
* Split out from larger proxy patch
v6:
* Moved proxied value from earlier patch to this one where it is
  actually used
* Rework logic to check sched_proxy_exec() instead of using ifdefs
* Moved comment change to this patch where it makes sense
v7:
* Use a more descriptive term than "us" in comments, as suggested by
  Metin Kaya
* Minor typo fixup from Metin Kaya
* Reworked proxied variable to prev_not_proxied to simplify usage
v8:
* Use helper for donor blocked_on_state transition
v9:
* Re-add mutex lock handoff in the unlock path, but only when we have
  a blocked donor
* Slight reword of commit message suggested by Metin
v18:
* Add init_task initialization for blocked_donor, suggested by Suleiman
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 include/linux/sched.h  |  1 +
 init/init_task.c       |  1 +
 kernel/fork.c          |  2 +-
 kernel/locking/mutex.c | 41 ++++++++++++++++++++++++++++++++++++++---
 kernel/sched/core.c    | 18 ++++++++++++++++--
 5 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8245940783c77..5ca495d5d0a2d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1239,6 +1239,7 @@ struct task_struct {
 #endif
 
 	struct mutex *blocked_on;		/* lock we're blocked on */
+	struct task_struct *blocked_donor;	/* task that is boosting this task */
 	raw_spinlock_t blocked_lock;
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	enum blocked_on_state blocked_on_state;
diff --git a/init/init_task.c b/init/init_task.c
index 63b66b4aa585a..4fb95ab1810a3 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -177,6 +177,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	.blocked_on_state	= BO_RUNNABLE,
 #endif
+	.blocked_donor	= NULL,
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
 	.pi_top_task	= NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index d8eb66e5be918..651ebe85e1521 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2130,10 +2130,10 @@ __latent_entropy struct task_struct *copy_process(
 #endif
 
 	p->blocked_on = NULL;		/* not blocked yet */
+	p->blocked_donor = NULL;	/* nobody is boosting p yet */
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	p->blocked_on_state = BO_RUNNABLE;
 #endif
-
 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
 	p->sequential_io_avg	= 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d8cf2e9a22a65..fca2ee0756b1f 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -924,7 +924,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
  */
 static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 {
-	struct task_struct *next = NULL;
+	struct task_struct *donor, *next = NULL;
 	DEFINE_WAKE_Q(wake_q);
 	unsigned long owner;
 	unsigned long flags;
@@ -943,6 +943,12 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 		MUTEX_WARN_ON(__owner_task(owner) != current);
 		MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);
 
+		if (sched_proxy_exec() && current->blocked_donor) {
+			/* force handoff if we have a blocked_donor */
+			owner = MUTEX_FLAG_HANDOFF;
+			break;
+		}
+
 		if (owner & MUTEX_FLAG_HANDOFF)
 			break;
 
@@ -956,7 +962,34 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	debug_mutex_unlock(lock);
-	if (!list_empty(&lock->wait_list)) {
+
+	if (sched_proxy_exec()) {
+		raw_spin_lock(&current->blocked_lock);
+		/*
+		 * If we have a task boosting current, and that task was boosting
+		 * current through this lock, hand the lock to that task, as that
+		 * is the highest waiter, as selected by the scheduling function.
+		 */
+		donor = current->blocked_donor;
+		if (donor) {
+			struct mutex *next_lock;
+
+			raw_spin_lock_nested(&donor->blocked_lock, SINGLE_DEPTH_NESTING);
+			next_lock = __get_task_blocked_on(donor);
+			if (next_lock == lock) {
+				next = donor;
+				__set_blocked_on_waking(donor);
+				wake_q_add(&wake_q, donor);
+				current->blocked_donor = NULL;
+			}
+			raw_spin_unlock(&donor->blocked_lock);
+		}
+	}
+
+	/*
+	 * Failing that, pick any on the wait list.
+	 */
+	if (!next && !list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
 		struct mutex_waiter *waiter =
 			list_first_entry(&lock->wait_list,
@@ -964,7 +997,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 
 		next = waiter->task;
 
-		raw_spin_lock(&next->blocked_lock);
+		raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING);
 		debug_mutex_wake_waiter(lock, waiter);
 		WARN_ON_ONCE(__get_task_blocked_on(next) != lock);
 		__set_blocked_on_waking(next);
@@ -975,6 +1008,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 	if (owner & MUTEX_FLAG_HANDOFF)
 		__mutex_handoff(lock, next);
 
+	if (sched_proxy_exec())
+		raw_spin_unlock(&current->blocked_lock);
 	raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
 }
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d063d2c9bd5aa..bccaa4bf41b7d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6831,7 +6841,17 @@ static inline bool proxy_can_run_here(struct rq *rq, struct task_struct *p)
  * Find runnable lock owner to proxy for mutex blocked donor
  *
  * Follow the blocked-on relation:
- *   task->blocked_on -> mutex->owner -> task...
+ *
+ *                ,-> task
+ *                |   | blocked-on
+ *                |   v
+ *  blocked_donor |   mutex
+ *                |   | owner
+ *                |   v
+ *                `-- task
+ *
+ * and set the blocked_donor relation, this latter is used by the mutex
+ * code to find which (blocked) task to hand-off to.
 *
 * Lock order:
 *
@@ -6997,6 +7007,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		 * rq, therefore holding @rq->lock is sufficient to
 		 * guarantee its existence, as per ttwu_remote().
 		 */
+		owner->blocked_donor = p;
 	}
 
 	/* Handle actions we need to do outside of the guard() scope */
@@ -7097,6 +7108,7 @@ static void __sched notrace __schedule(int sched_mode)
 	unsigned long prev_state;
 	struct rq_flags rf;
 	struct rq *rq;
+	bool prev_not_proxied;
 	int cpu;
 
 	/* Trace preemptions consistently with task switches */
@@ -7169,9 +7181,11 @@ static void __sched notrace __schedule(int sched_mode)
 		switch_count = &prev->nvcsw;
 	}
 
+	prev_not_proxied = !prev->blocked_donor;
 pick_again:
 	next = pick_next_task(rq, rq->donor, &rf);
 	rq_set_donor(rq, next);
+	next->blocked_donor = NULL;
 	if (unlikely(task_is_blocked(next))) {
 		next = find_proxy_task(rq, next, &rf);
 		if (!next) {
@@ -7237,7 +7251,7 @@ static void __sched notrace __schedule(int sched_mode)
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
 		/* In case next was already curr but just got blocked_donor */
-		if (!task_current_donor(rq, next))
+		if (prev_not_proxied && next->blocked_donor)
 			proxy_tag_curr(rq, next);
 
 		rq_unpin_lock(rq, &rf);
-- 
2.51.0.536.g15c5d4f767-goog

From nobody Wed Oct 1 23:30:09 2025
Date: Fri, 26 Sep 2025 03:29:14 +0000
In-Reply-To: <20250926032931.27663-1-jstultz@google.com>
References: <20250926032931.27663-1-jstultz@google.com>
Message-ID: <20250926032931.27663-7-jstultz@google.com>
Subject: [PATCH v22 6/6] sched: Migrate whole chain in proxy_migrate_task()
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
 Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
 Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
 K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
 kuyo chang, hupu, kernel-team@android.com

Instead of migrating one task each time through find_proxy_task(), we
can walk up the blocked_donor pointers and migrate the entire current
chain in one go (a simplified sketch of this walk follows the patch
below).

This was broken out of earlier patches and held back while the series
was being stabilized, but I wanted to re-introduce it.

Signed-off-by: John Stultz
---
v12:
* Earlier this was re-using blocked_node, but I hit a race with
  activating blocked entities, and to avoid it I introduced a new
  migration_node listhead
v18:
* Add init_task initialization of migration_node as suggested by
  Suleiman
v22:
* Move migration_node under CONFIG_SCHED_PROXY_EXEC as suggested by
  K Prateek
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 include/linux/sched.h |  1 +
 init/init_task.c      |  3 ++-
 kernel/fork.c         |  1 +
 kernel/sched/core.c   | 25 +++++++++++++++++--------
 4 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5ca495d5d0a2d..4a3c836d0bab3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1243,6 +1243,7 @@ struct task_struct {
 	raw_spinlock_t blocked_lock;
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	enum blocked_on_state blocked_on_state;
+	struct list_head migration_node;
 #endif
 
 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
diff --git a/init/init_task.c b/init/init_task.c
index 4fb95ab1810a3..26dc30e2827cd 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -174,10 +174,11 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
 						 &init_task.alloc_lock),
 #endif
+	.blocked_donor	= NULL,
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	.blocked_on_state	= BO_RUNNABLE,
+	.migration_node	= LIST_HEAD_INIT(init_task.migration_node),
 #endif
-	.blocked_donor	= NULL,
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
 	.pi_top_task	= NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 651ebe85e1521..f195aff7470ce 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2133,6 +2133,7 @@ __latent_entropy struct task_struct *copy_process(
 	p->blocked_donor = NULL;	/* nobody is boosting p yet */
 #ifdef CONFIG_SCHED_PROXY_EXEC
 	p->blocked_on_state = BO_RUNNABLE;
+	INIT_LIST_HEAD(&p->migration_node);
 #endif
 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bccaa4bf41b7d..9dfc4d705e295 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6750,6 +6750,7 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 			       struct task_struct *p, int target_cpu)
 {
 	struct rq *target_rq = cpu_rq(target_cpu);
+	LIST_HEAD(migrate_list);
 
 	lockdep_assert_rq_held(rq);
 
@@ -6766,11 +6767,16 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 	 */
 	proxy_resched_idle(rq);
 
-	WARN_ON(p == rq->curr);
-
-	deactivate_task(rq, p, 0);
-	proxy_set_task_cpu(p, target_cpu);
-
+	for (; p; p = p->blocked_donor) {
+		WARN_ON(p == rq->curr);
+		deactivate_task(rq, p, 0);
+		proxy_set_task_cpu(p, target_cpu);
+		/*
+		 * We can use migration_node to queue the task for migration,
+		 * because @p was still on the rq.
+		 */
+		list_add(&p->migration_node, &migrate_list);
+	}
 	/*
 	 * We have to zap callbacks before unlocking the rq
 	 * as another CPU may jump in and call sched_balance_rq
@@ -6781,10 +6787,13 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock(rq);
 	raw_spin_rq_lock(target_rq);
+	while (!list_empty(&migrate_list)) {
+		p = list_first_entry(&migrate_list, struct task_struct, migration_node);
+		list_del_init(&p->migration_node);
 
-	activate_task(target_rq, p, 0);
-	wakeup_preempt(target_rq, p, 0);
-
+		activate_task(target_rq, p, 0);
+		wakeup_preempt(target_rq, p, 0);
+	}
 	raw_spin_rq_unlock(target_rq);
 	raw_spin_rq_lock(rq);
 	rq_repin_lock(rq, rf);
-- 
2.51.0.536.g15c5d4f767-goog
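As referenced in the commit message above, here is a minimal user-space
sketch of the blocked_donor chain walk that proxy_migrate_task() now performs
(illustration only: the task type, the "next" pointer standing in for
migration_node, and the cpu field are simplified assumptions; runqueues and
locking are omitted).

#include <stdio.h>
#include <stddef.h>

struct task {
	const char *name;
	int cpu;
	struct task *blocked_donor;	/* task boosting this one */
	struct task *next;		/* stand-in for migration_node */
};

/* Walk up the donor chain from p, queue everything on a local list,
 * then "migrate" the whole chain to the target cpu in one pass. */
static void migrate_chain(struct task *p, int target_cpu)
{
	struct task *list = NULL;

	for (; p; p = p->blocked_donor) {
		p->next = list;		/* queue on the local migrate list */
		list = p;
	}
	for (; list; list = list->next) {
		list->cpu = target_cpu;	/* stand-in for activate on target rq */
		printf("%s -> cpu%d\n", list->name, target_cpu);
	}
}

int main(void)
{
	struct task picked = { .name = "picked", .cpu = 0 };
	struct task mid    = { .name = "mid", .cpu = 0 };
	struct task donor  = { .name = "donor", .cpu = 0 };

	/* donor boosts mid, mid boosts the task we picked to migrate */
	picked.blocked_donor = &mid;
	mid.blocked_donor = &donor;

	migrate_chain(&picked, 3);
	return 0;
}

The actual patch additionally preserves each task's wake_cpu via
proxy_set_task_cpu() and re-activates the tasks under the target runqueue
lock; the sketch only models the one-pass list walk.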