Date: Sat, 12 Jul 2025 03:33:43 +0000
In-Reply-To: <20250712033407.2383110-1-jstultz@google.com>
References: <20250712033407.2383110-1-jstultz@google.com>
Message-ID: <20250712033407.2383110-3-jstultz@google.com>
Subject: [PATCH v19 2/8] locking/mutex: Rework task_struct::blocked_on
From: John Stultz
To: LKML
Cc: Peter Zijlstra, K Prateek Nayak, Juri Lelli, "Connor O'Brien",
    John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
    Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
    Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney",
    Metin Kaya, Xuewen Yan, Thomas Gleixner, Daniel Lezcano,
    Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com

From: Peter Zijlstra

Track the blocked-on relation for mutexes, to allow following this
relation at schedule time:

         task
           | blocked-on
           v
         mutex
           | owner
           v
         task

All of this will be used for tracking blocked-task/mutex chains with
the proxy-execution patches, in a similar fashion to how priority
inheritance is done with rt_mutexes.

For serialization, blocked_on is only set by the task itself
(current). Both setting and clearing it (the latter potentially done
by other tasks) are done while holding mutex::wait_lock.

Tested-by: K Prateek Nayak
Signed-off-by: Peter Zijlstra (Intel)
[minor changes while rebasing]
Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Connor O'Brien
[jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths]
Signed-off-by: John Stultz
---
v2:
* Fixed blocked_on tracking in error paths that was causing crashes
v4:
* Ensure we clear blocked_on when waking ww_mutexes to die or wound.
  This is critical so we don't get circular blocked_on relationships
  that can't be resolved.
v5:
* Fix potential bug where the skip_wait path might clear blocked_on
  when that path never set it
* Slight tweaks to where we set blocked_on to make it consistent,
  along with extra WARN_ON correctness checking
* Minor comment changes
v7:
* Minor commit message change suggested by Metin Kaya
* Fix WARN_ON conditionals in the unlock path (as blocked_on might
  already be cleared), found while looking at an issue Metin Kaya
  raised
* Minor tweaks to be consistent in what we do under the blocked_on
  lock, also tweaked a variable name to avoid confusion with a label,
  and fixed comment typos, as suggested by Metin Kaya
* Minor tweak for the CONFIG_SCHED_PROXY_EXEC name change
* Moved an unused block of code to later in the series, as suggested
  by Metin Kaya
* Switch to a tri-state to be able to distinguish between waking and
  runnable, so we can later safely do return migration from ttwu
v8:
* Fix issue leaving the task BO_BLOCKED when calling into the
  optimistic spinning path.
* Include helper to better handle BO_BLOCKED->BO_WAKING transitions
v9:
* Typo fixup pointed out by Metin
* Cleanup BO_WAKING->BO_RUNNABLE transitions for the !proxy case
* Many cleanups and simplifications suggested by Metin
v11:
* Whitespace fixup pointed out by Metin
v13:
* Refactor set_blocked_on helpers to clean things up a bit
v14:
* Small build fixup with PREEMPT_RT
v15:
* Improve consistency of names for functions that assume blocked_lock
  is held, as suggested by Peter
* Use guard instead of separate spinlock/unlock calls, also suggested
  by Peter
* Drop the blocked_on_state tri-state for now, as it's not needed
  until later in the series, when we get to proxy-migration and
  return-migration
v16:
* Clear blocked_on before optimistic spinning
v19:
* Build fixup for PREEMPT_RT suggested by K Prateek Nayak

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 include/linux/sched.h        |  5 +----
 kernel/fork.c                |  3 +--
 kernel/locking/mutex-debug.c |  9 +++++----
 kernel/locking/mutex.c       | 22 ++++++++++++++++++++++
 kernel/locking/ww_mutex.h    | 18 ++++++++++++++++--
 5 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ba2a02ead8c7e..aa7966141a090 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1238,10 +1238,7 @@ struct task_struct {
 	struct rt_mutex_waiter		*pi_blocked_on;
 #endif
 
-#ifdef CONFIG_DEBUG_MUTEXES
-	/* Mutex deadlock detection: */
-	struct mutex_waiter		*blocked_on;
-#endif
+	struct mutex			*blocked_on;	/* lock we're blocked on */
 
 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
 	/*
diff --git a/kernel/fork.c b/kernel/fork.c
index 1ee8eb11f38ba..5f87f05aff4a0 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2123,9 +2123,8 @@ __latent_entropy struct task_struct *copy_process(
 	lockdep_init_task(p);
 #endif
 
-#ifdef CONFIG_DEBUG_MUTEXES
 	p->blocked_on = NULL; /* not blocked yet */
-#endif
+
 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
 	p->sequential_io_avg	= 0;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 6e6f6071cfa27..758b7a6792b0c 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -53,17 +53,18 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
 {
 	lockdep_assert_held(&lock->wait_lock);
 
-	/* Mark the current thread as blocked on the lock: */
-	task->blocked_on = waiter;
+	/* Current thread can't be already blocked (since it's executing!) */
+	DEBUG_LOCKS_WARN_ON(task->blocked_on);
 }
 
 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
 			       struct task_struct *task)
 {
+	struct mutex *blocked_on = READ_ONCE(task->blocked_on);
+
 	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
 	DEBUG_LOCKS_WARN_ON(waiter->task != task);
-	DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
-	task->blocked_on = NULL;
+	DEBUG_LOCKS_WARN_ON(blocked_on && blocked_on != lock);
 
 	INIT_LIST_HEAD(&waiter->list);
 	waiter->task = NULL;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a39ecccbd106a..e2f59863a866e 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -644,6 +644,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		goto err_early_kill;
 	}
 
+	WARN_ON(current->blocked_on);
+	current->blocked_on = lock;
 	set_current_state(state);
 	trace_contention_begin(lock, LCB_F_MUTEX);
 	for (;;) {
@@ -680,6 +682,12 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 		first = __mutex_waiter_is_first(lock, &waiter);
 
+		/*
+		 * As we likely have been woken up by a task
+		 * that has cleared our blocked_on state, re-set
+		 * it to the lock we are trying to acquire.
+		 */
+		current->blocked_on = lock;
 		set_current_state(state);
 		/*
 		 * Here we order against unlock; we must either see it change
@@ -691,8 +699,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 		if (first) {
 			trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
+			/* clear blocked_on as mutex_optimistic_spin may schedule() */
+			current->blocked_on = NULL;
 			if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
 				break;
+			current->blocked_on = lock;
 			trace_contention_begin(lock, LCB_F_MUTEX);
 		}
 
@@ -700,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	}
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 acquired:
+	current->blocked_on = NULL;
 	__set_current_state(TASK_RUNNING);
 
 	if (ww_ctx) {
@@ -729,9 +741,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	return 0;
 
 err:
+	current->blocked_on = NULL;
 	__set_current_state(TASK_RUNNING);
 	__mutex_remove_waiter(lock, &waiter);
 err_early_kill:
+	WARN_ON(current->blocked_on);
 	trace_contention_end(lock, ret);
 	raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
 	debug_mutex_free_waiter(&waiter);
@@ -942,6 +956,14 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		next = waiter->task;
 
 		debug_mutex_wake_waiter(lock, waiter);
+		/*
+		 * Unlock wakeups can be happening in parallel
+		 * (when optimistic spinners steal and release
+		 * the lock), so blocked_on may already be
+		 * cleared here.
+		 */
+		WARN_ON(next->blocked_on && next->blocked_on != lock);
+		next->blocked_on = NULL;
 		wake_q_add(&wake_q, next);
 	}
 
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 37f025a096c9d..45fe05e51db18 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -283,7 +283,15 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 	if (waiter->ww_ctx->acquired > 0 && __ww_ctx_less(waiter->ww_ctx, ww_ctx)) {
 #ifndef WW_RT
 		debug_mutex_wake_waiter(lock, waiter);
+		/*
+		 * When waking up the task to die, be sure to clear the
+		 * blocked_on pointer. Otherwise we can see circular
+		 * blocked_on relationships that can't resolve.
+		 */
+		WARN_ON(waiter->task->blocked_on &&
+			waiter->task->blocked_on != lock);
 #endif
+		waiter->task->blocked_on = NULL;
 		wake_q_add(wake_q, waiter->task);
 	}
 
@@ -331,9 +339,15 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		 * it's wounded in __ww_mutex_check_kill() or has a
 		 * wakeup pending to re-read the wounded state.
 		 */
-		if (owner != current)
+		if (owner != current) {
+			/*
+			 * When waking up the task to wound, be sure to clear the
+			 * blocked_on pointer. Otherwise we can see circular
+			 * blocked_on relationships that can't resolve.
+			 */
+			owner->blocked_on = NULL;
 			wake_q_add(wake_q, owner);
-
+		}
 		return true;
 	}
 
-- 
2.50.0.727.gbf7dc18ff4-goog
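
[Appended for illustration, not part of the patch: a minimal userspace
sketch of the relation the commit message describes. All the type and
function names below are hypothetical stand-ins for task_struct and
struct mutex, and the mutex::wait_lock serialization the patch relies
on is deliberately not modeled. The walker follows
task -> blocked_on -> mutex -> owner until it reaches a task that is
not blocked, which is roughly what a proxy-execution scheduler wants
to do at schedule time.]

#include <stdio.h>

struct task;

struct mutex {
	struct task *owner;		/* current lock holder, NULL if unlocked */
};

struct task {
	const char *name;
	struct mutex *blocked_on;	/* mutex this task waits on, NULL if runnable */
};

/*
 * Follow the blocked-on relation from @t until we reach a task that
 * is not itself blocked (or a mutex with no owner yet). Real kernel
 * code would do this under the relevant wait_lock; no locking here.
 */
static struct task *blocked_on_chain_tail(struct task *t)
{
	while (t->blocked_on && t->blocked_on->owner)
		t = t->blocked_on->owner;
	return t;
}

int main(void)
{
	struct task a = { "A", NULL }, b = { "B", NULL }, c = { "C", NULL };
	struct mutex m1 = { &b };	/* m1 is held by B */
	struct mutex m2 = { &c };	/* m2 is held by C */

	a.blocked_on = &m1;		/* A blocks on m1 */
	b.blocked_on = &m2;		/* B blocks on m2 */

	/* C is runnable: the task a proxy scheduler could run on A's behalf */
	printf("tail of A's chain: %s\n", blocked_on_chain_tail(&a)->name);
	return 0;
}

[A cycle in this relation would make the walk above never terminate,
which is one way to read the ww_mutex comments in the patch: the
die/wound wakeup paths must clear blocked_on so that circular
blocked_on relationships cannot persist.]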