Date: Mon, 7 Jul 2025 20:43:50 +0000
In-Reply-To: <20250707204409.1028494-1-jstultz@google.com>
References: <20250707204409.1028494-1-jstultz@google.com>
Message-ID: <20250707204409.1028494-4-jstultz@google.com>
Subject: [RESEND][PATCH v18 3/8] locking/mutex: Add p->blocked_on wrappers for correctness checks
From: John Stultz
To: LKML
Cc: Valentin Schneider, K Prateek Nayak, "Connor O'Brien", John Stultz,
    Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
    Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon, Waiman Long,
    Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
    Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang, hupu,
    kernel-team@android.com

From: Valentin Schneider

This lets us assert mutex::wait_lock is held whenever we access
p->blocked_on, as well as warn us for unexpected state changes.

Tested-by: K Prateek Nayak
Signed-off-by: Valentin Schneider
[fix conflicts, call in more places]
Signed-off-by: Connor O'Brien
[jstultz: tweaked commit subject, reworked a good bit]
Signed-off-by: John Stultz
---
v2:
* Added get_task_blocked_on() accessor
v4:
* Address READ_ONCE usage that was dropped in v2
* Reordered to be a later add on to the main patch series as Peter was
  unhappy with similar wrappers in other patches
v5:
* Added some extra correctness checking in wrappers
v7:
* Tweaks to reorder this change in the patch series
* Minor cleanup to set_task_blocked_on() suggested by Metin Kaya
v15:
* Split out into its own patch again.
* Further improve assumption checks in helpers.
v16:
* Fix optimistic spin case that can call schedule()
v17:
* Fix typos caught by Metin Kaya
* Add lockdep_assert_held_once and drop the READ_ONCE in
  __get_task_blocked_on(), as suggested by Juri Lelli

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
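A rough userspace sketch of the invariant these wrappers enforce, for
reviewers who want it at a glance: blocked_on is only touched with the
owning mutex's wait_lock held, setting or clearing the same mutex again
is fine, but re-pointing it at a different mutex is flagged. The
toy_task/toy_mutex types and the pthread/assert plumbing below are
illustrative stand-ins only, not kernel API; the real helpers rely on
lockdep_assert_held_once() and WARN_ON_ONCE() as in the diff further
down.

#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct toy_mutex {
	pthread_mutex_t wait_lock;	/* stand-in for mutex::wait_lock */
};

struct toy_task {
	struct toy_mutex *blocked_on;	/* stand-in for task_struct::blocked_on */
};

/* Caller must hold m->wait_lock, mirroring __set_task_blocked_on(). */
static void toy_set_blocked_on(struct toy_task *p, struct toy_mutex *m)
{
	assert(m);
	/* Setting the same mutex again is ok; a different one is a bug. */
	assert(!p->blocked_on || p->blocked_on == m);
	p->blocked_on = m;
}

/* Caller must hold m->wait_lock, mirroring __clear_task_blocked_on(). */
static void toy_clear_blocked_on(struct toy_task *p, struct toy_mutex *m)
{
	/* Re-clearing is ok; clearing against a different mutex is a bug. */
	assert(!p->blocked_on || p->blocked_on == m);
	p->blocked_on = NULL;
}

int main(void)
{
	struct toy_mutex m = { .wait_lock = PTHREAD_MUTEX_INITIALIZER };
	struct toy_task t = { .blocked_on = NULL };

	pthread_mutex_lock(&m.wait_lock);	/* "wait_lock" serializes blocked_on */
	toy_set_blocked_on(&t, &m);		/* waiter marks itself blocked on m */
	toy_clear_blocked_on(&t, &m);		/* wake path clears the relationship */
	pthread_mutex_unlock(&m.wait_lock);
	return 0;
}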
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: Daniel Lezcano Cc: Suleiman Souhlal Cc: kuyo chang Cc: hupu Cc: kernel-team@android.com --- include/linux/sched.h | 54 ++++++++++++++++++++++++++++++++++-- kernel/locking/mutex-debug.c | 4 +-- kernel/locking/mutex.c | 32 ++++++++++----------- kernel/locking/ww_mutex.h | 6 ++-- 4 files changed, 70 insertions(+), 26 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index aa7966141a090..1d7f625adbb5e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -2177,6 +2178,57 @@ extern int __cond_resched_rwlock_write(rwlock_t *loc= k); __cond_resched_rwlock_write(lock); \ }) =20 +static inline void __set_task_blocked_on(struct task_struct *p, struct mut= ex *m) +{ + WARN_ON_ONCE(!m); + /* The task should only be setting itself as blocked */ + WARN_ON_ONCE(p !=3D current); + /* Currently we serialize blocked_on under the mutex::wait_lock */ + lockdep_assert_held_once(&m->wait_lock); + /* + * Check ensure we don't overwrite existing mutex value + * with a different mutex. Note, setting it to the same + * lock repeatedly is ok. + */ + WARN_ON_ONCE(p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D m; +} + +static inline void set_task_blocked_on(struct task_struct *p, struct mutex= *m) +{ + guard(raw_spinlock_irqsave)(&m->wait_lock); + __set_task_blocked_on(p, m); +} + +static inline void __clear_task_blocked_on(struct task_struct *p, struct m= utex *m) +{ + WARN_ON_ONCE(!m); + /* Currently we serialize blocked_on under the mutex::wait_lock */ + lockdep_assert_held_once(&m->wait_lock); + /* + * There may be cases where we re-clear already cleared + * blocked_on relationships, but make sure we are not + * clearing the relationship with a different lock. + */ + WARN_ON_ONCE(m && p->blocked_on && p->blocked_on !=3D m); + p->blocked_on =3D NULL; +} + +static inline void clear_task_blocked_on(struct task_struct *p, struct mut= ex *m) +{ + guard(raw_spinlock_irqsave)(&m->wait_lock); + __clear_task_blocked_on(p, m); +} + +static inline struct mutex *__get_task_blocked_on(struct task_struct *p) +{ + struct mutex *m =3D p->blocked_on; + + if (m) + lockdep_assert_held_once(&m->wait_lock); + return m; +} + static __always_inline bool need_resched(void) { return unlikely(tif_need_resched()); @@ -2216,8 +2268,6 @@ extern bool sched_task_on_rq(struct task_struct *p); extern unsigned long get_wchan(struct task_struct *p); extern struct task_struct *cpu_curr_snapshot(int cpu); =20 -#include - /* * In order to reduce various lock holder preemption latencies provide an * interface to see if a vCPU is currently running or not. diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index 758b7a6792b0c..949103fd8e9b5 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -54,13 +54,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct = mutex_waiter *waiter, lockdep_assert_held(&lock->wait_lock); =20 /* Current thread can't be already blocked (since it's executing!) 
-	DEBUG_LOCKS_WARN_ON(task->blocked_on);
+	DEBUG_LOCKS_WARN_ON(__get_task_blocked_on(task));
 }

 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
 			       struct task_struct *task)
 {
-	struct mutex *blocked_on = READ_ONCE(task->blocked_on);
+	struct mutex *blocked_on = __get_task_blocked_on(task);

 	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
 	DEBUG_LOCKS_WARN_ON(waiter->task != task);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index e2f59863a866e..80d778fedd605 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -644,8 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		goto err_early_kill;
 	}

-	WARN_ON(current->blocked_on);
-	current->blocked_on = lock;
+	__set_task_blocked_on(current, lock);
 	set_current_state(state);
 	trace_contention_begin(lock, LCB_F_MUTEX);
 	for (;;) {
@@ -685,9 +684,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		/*
 		 * As we likely have been woken up by task
 		 * that has cleared our blocked_on state, re-set
-		 * it to the lock we are trying to aquire.
+		 * it to the lock we are trying to acquire.
 		 */
-		current->blocked_on = lock;
+		set_task_blocked_on(current, lock);
 		set_current_state(state);
 		/*
 		 * Here we order against unlock; we must either see it change
@@ -699,11 +698,15 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

 		if (first) {
 			trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
-			/* clear blocked_on as mutex_optimistic_spin may schedule() */
-			current->blocked_on = NULL;
+			/*
+			 * mutex_optimistic_spin() can call schedule(), so
+			 * clear blocked_on so we don't become unselectable
+			 * to run.
+			 */
+			clear_task_blocked_on(current, lock);
 			if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
 				break;
-			current->blocked_on = lock;
+			set_task_blocked_on(current, lock);
 			trace_contention_begin(lock, LCB_F_MUTEX);
 		}

@@ -711,7 +714,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	}
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 acquired:
-	current->blocked_on = NULL;
+	__clear_task_blocked_on(current, lock);
 	__set_current_state(TASK_RUNNING);

 	if (ww_ctx) {
@@ -741,11 +744,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	return 0;

 err:
-	current->blocked_on = NULL;
+	__clear_task_blocked_on(current, lock);
 	__set_current_state(TASK_RUNNING);
 	__mutex_remove_waiter(lock, &waiter);
 err_early_kill:
-	WARN_ON(current->blocked_on);
+	WARN_ON(__get_task_blocked_on(current));
 	trace_contention_end(lock, ret);
 	raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
 	debug_mutex_free_waiter(&waiter);
@@ -956,14 +959,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		next = waiter->task;

 		debug_mutex_wake_waiter(lock, waiter);
-		/*
-		 * Unlock wakeups can be happening in parallel
-		 * (when optimistic spinners steal and release
-		 * the lock), so blocked_on may already be
-		 * cleared here.
-		 */
-		WARN_ON(next->blocked_on && next->blocked_on != lock);
-		next->blocked_on = NULL;
+		__clear_task_blocked_on(next, lock);
 		wake_q_add(&wake_q, next);
 	}

diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 00db40946328e..086fd5487ca77 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -289,9 +289,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 		 * blocked_on pointer. Otherwise we can see circular
 		 * blocked_on relationships that can't resolve.
 		 */
-		WARN_ON(waiter->task->blocked_on &&
-			waiter->task->blocked_on != lock);
-		waiter->task->blocked_on = NULL;
+		__clear_task_blocked_on(waiter->task, lock);
 		wake_q_add(wake_q, waiter->task);
 	}

@@ -345,7 +343,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		 * blocked_on pointer. Otherwise we can see circular
 		 * blocked_on relationships that can't resolve.
 		 */
-		owner->blocked_on = NULL;
+		__clear_task_blocked_on(owner, lock);
 		wake_q_add(wake_q, owner);
 	}
 	return true;
-- 
2.50.0.727.gbf7dc18ff4-goog