[PATCH 2/3] hung_task: extend hung task blocker tracking to rwsems

Lance Yang posted 3 patches 3 months, 1 week ago
From: Lance Yang <lance.yang@linux.dev>

Inspired by mutex blocker tracking[1], and having already extended it to
semaphores, let's now add support for reader-writer semaphores (rwsems).

The approach is simple: when a task enters TASK_UNINTERRUPTIBLE while
waiting for an rwsem, we just call hung_task_set_blocker(). The hung task
detector can then query the rwsem's owner to identify the lock holder.

Tracking works reliably for writers, as there can only be a single writer
holding the lock, and its task struct is stored in the owner field.

The main challenge lies with readers. The owner field points to only one
of many concurrent readers, so we might lose track of the blocker if that
specific reader unlocks, even while others remain. This is not a
significant issue, however. In practice, long-lasting lock contention is
almost always caused by a writer. Therefore, reliably tracking the writer
is the primary goal of this patch series ;)

With this change, the hung task detector can now show the blocker task's info
as below:

[Fri Jun 27 15:21:34 2025] INFO: task cat:28631 blocked for more than 122 seconds.
[Fri Jun 27 15:21:34 2025]       Tainted: G S                  6.16.0-rc3 #8
[Fri Jun 27 15:21:34 2025] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Fri Jun 27 15:21:34 2025] task:cat             state:D stack:0     pid:28631 tgid:28631 ppid:28501  task_flags:0x400000 flags:0x00004000
[Fri Jun 27 15:21:34 2025] Call Trace:
[Fri Jun 27 15:21:34 2025]  <TASK>
[Fri Jun 27 15:21:34 2025]  __schedule+0x7c7/0x1930
[Fri Jun 27 15:21:34 2025]  ? __pfx___schedule+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? policy_nodemask+0x215/0x340
[Fri Jun 27 15:21:34 2025]  ? _raw_spin_lock_irq+0x8a/0xe0
[Fri Jun 27 15:21:34 2025]  ? __pfx__raw_spin_lock_irq+0x10/0x10
[Fri Jun 27 15:21:34 2025]  schedule+0x6a/0x180
[Fri Jun 27 15:21:34 2025]  schedule_preempt_disabled+0x15/0x30
[Fri Jun 27 15:21:34 2025]  rwsem_down_read_slowpath+0x55e/0xe10
[Fri Jun 27 15:21:34 2025]  ? __pfx_rwsem_down_read_slowpath+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? __pfx___might_resched+0x10/0x10
[Fri Jun 27 15:21:34 2025]  down_read+0xc9/0x230
[Fri Jun 27 15:21:34 2025]  ? __pfx_down_read+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? __debugfs_file_get+0x14d/0x700
[Fri Jun 27 15:21:34 2025]  ? __pfx___debugfs_file_get+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? handle_pte_fault+0x52a/0x710
[Fri Jun 27 15:21:34 2025]  ? selinux_file_permission+0x3a9/0x590
[Fri Jun 27 15:21:34 2025]  read_dummy_rwsem_read+0x4a/0x90
[Fri Jun 27 15:21:34 2025]  full_proxy_read+0xff/0x1c0
[Fri Jun 27 15:21:34 2025]  ? rw_verify_area+0x6d/0x410
[Fri Jun 27 15:21:34 2025]  vfs_read+0x177/0xa50
[Fri Jun 27 15:21:34 2025]  ? __pfx_vfs_read+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? fdget_pos+0x1cf/0x4c0
[Fri Jun 27 15:21:34 2025]  ksys_read+0xfc/0x1d0
[Fri Jun 27 15:21:34 2025]  ? __pfx_ksys_read+0x10/0x10
[Fri Jun 27 15:21:34 2025]  do_syscall_64+0x66/0x2d0
[Fri Jun 27 15:21:34 2025]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[Fri Jun 27 15:21:34 2025] RIP: 0033:0x7f3f8faefb40
[Fri Jun 27 15:21:34 2025] RSP: 002b:00007ffdeda5ab98 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Fri Jun 27 15:21:34 2025] RAX: ffffffffffffffda RBX: 0000000000010000 RCX: 00007f3f8faefb40
[Fri Jun 27 15:21:34 2025] RDX: 0000000000010000 RSI: 00000000010fa000 RDI: 0000000000000003
[Fri Jun 27 15:21:34 2025] RBP: 00000000010fa000 R08: 0000000000000000 R09: 0000000000010fff
[Fri Jun 27 15:21:34 2025] R10: 00007ffdeda59fe0 R11: 0000000000000246 R12: 00000000010fa000
[Fri Jun 27 15:21:34 2025] R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000fff
[Fri Jun 27 15:21:34 2025]  </TASK>
[Fri Jun 27 15:21:34 2025] INFO: task cat:28631 <reader> blocked on an rw-semaphore likely owned by task cat:28630 <writer>
[Fri Jun 27 15:21:34 2025] task:cat             state:S stack:0     pid:28630 tgid:28630 ppid:28501  task_flags:0x400000 flags:0x00004000
[Fri Jun 27 15:21:34 2025] Call Trace:
[Fri Jun 27 15:21:34 2025]  <TASK>
[Fri Jun 27 15:21:34 2025]  __schedule+0x7c7/0x1930
[Fri Jun 27 15:21:34 2025]  ? __pfx___schedule+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? __mod_timer+0x304/0xa80
[Fri Jun 27 15:21:34 2025]  schedule+0x6a/0x180
[Fri Jun 27 15:21:34 2025]  schedule_timeout+0xfb/0x230
[Fri Jun 27 15:21:34 2025]  ? __pfx_schedule_timeout+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? __pfx_process_timeout+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? down_write+0xc4/0x140
[Fri Jun 27 15:21:34 2025]  msleep_interruptible+0xbe/0x150
[Fri Jun 27 15:21:34 2025]  read_dummy_rwsem_write+0x54/0x90
[Fri Jun 27 15:21:34 2025]  full_proxy_read+0xff/0x1c0
[Fri Jun 27 15:21:34 2025]  ? rw_verify_area+0x6d/0x410
[Fri Jun 27 15:21:34 2025]  vfs_read+0x177/0xa50
[Fri Jun 27 15:21:34 2025]  ? __pfx_vfs_read+0x10/0x10
[Fri Jun 27 15:21:34 2025]  ? fdget_pos+0x1cf/0x4c0
[Fri Jun 27 15:21:34 2025]  ksys_read+0xfc/0x1d0
[Fri Jun 27 15:21:34 2025]  ? __pfx_ksys_read+0x10/0x10
[Fri Jun 27 15:21:34 2025]  do_syscall_64+0x66/0x2d0
[Fri Jun 27 15:21:34 2025]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[Fri Jun 27 15:21:34 2025] RIP: 0033:0x7f8f288efb40
[Fri Jun 27 15:21:34 2025] RSP: 002b:00007ffffb631038 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Fri Jun 27 15:21:34 2025] RAX: ffffffffffffffda RBX: 0000000000010000 RCX: 00007f8f288efb40
[Fri Jun 27 15:21:34 2025] RDX: 0000000000010000 RSI: 000000002a4b5000 RDI: 0000000000000003
[Fri Jun 27 15:21:34 2025] RBP: 000000002a4b5000 R08: 0000000000000000 R09: 0000000000010fff
[Fri Jun 27 15:21:34 2025] R10: 00007ffffb630460 R11: 0000000000000246 R12: 000000002a4b5000
[Fri Jun 27 15:21:34 2025] R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000fff
[Fri Jun 27 15:21:34 2025]  </TASK>

[1] https://lore.kernel.org/all/174046694331.2194069.15472952050240807469.stgit@mhiramat.tok.corp.google.com/

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 include/linux/hung_task.h | 18 +++++++++---------
 kernel/hung_task.c        | 29 +++++++++++++++++++++++++----
 kernel/locking/rwsem.c    | 17 ++++++++++++++++-
 3 files changed, 50 insertions(+), 14 deletions(-)

diff --git a/include/linux/hung_task.h b/include/linux/hung_task.h
index 1bc2b3244613..34e615c76ca5 100644
--- a/include/linux/hung_task.h
+++ b/include/linux/hung_task.h
@@ -21,17 +21,17 @@
  * type.
  *
  * Type encoding:
- * 00 - Blocked on mutex        (BLOCKER_TYPE_MUTEX)
- * 01 - Blocked on semaphore    (BLOCKER_TYPE_SEM)
- * 10 - Blocked on rt-mutex     (BLOCKER_TYPE_RTMUTEX)
- * 11 - Blocked on rw-semaphore (BLOCKER_TYPE_RWSEM)
+ * 00 - Blocked on mutex			(BLOCKER_TYPE_MUTEX)
+ * 01 - Blocked on semaphore			(BLOCKER_TYPE_SEM)
+ * 10 - Blocked on rw-semaphore as READER	(BLOCKER_TYPE_RWSEM_READER)
+ * 11 - Blocked on rw-semaphore as WRITER	(BLOCKER_TYPE_RWSEM_WRITER)
  */
-#define BLOCKER_TYPE_MUTEX      0x00UL
-#define BLOCKER_TYPE_SEM        0x01UL
-#define BLOCKER_TYPE_RTMUTEX    0x02UL
-#define BLOCKER_TYPE_RWSEM      0x03UL
+#define BLOCKER_TYPE_MUTEX		0x00UL
+#define BLOCKER_TYPE_SEM		0x01UL
+#define BLOCKER_TYPE_RWSEM_READER	0x02UL
+#define BLOCKER_TYPE_RWSEM_WRITER	0x03UL
 
-#define BLOCKER_TYPE_MASK       0x03UL
+#define BLOCKER_TYPE_MASK		0x03UL
 
 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
 static inline void hung_task_set_blocker(void *lock, unsigned long type)
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index d2432df2b905..8708a1205f82 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -23,6 +23,7 @@
 #include <linux/sched/debug.h>
 #include <linux/sched/sysctl.h>
 #include <linux/hung_task.h>
+#include <linux/rwsem.h>
 
 #include <trace/events/sched.h>
 
@@ -100,6 +101,7 @@ static void debug_show_blocker(struct task_struct *task)
 {
 	struct task_struct *g, *t;
 	unsigned long owner, blocker, blocker_type;
+	const char *rwsem_blocked_by, *rwsem_blocked_as;
 
 	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "No rcu lock held");
 
@@ -111,12 +113,20 @@ static void debug_show_blocker(struct task_struct *task)
 
 	switch (blocker_type) {
 	case BLOCKER_TYPE_MUTEX:
-		owner = mutex_get_owner(
-			(struct mutex *)hung_task_blocker_to_lock(blocker));
+		owner = mutex_get_owner(hung_task_blocker_to_lock(blocker));
 		break;
 	case BLOCKER_TYPE_SEM:
-		owner = sem_last_holder(
-			(struct semaphore *)hung_task_blocker_to_lock(blocker));
+		owner = sem_last_holder(hung_task_blocker_to_lock(blocker));
+		break;
+	case BLOCKER_TYPE_RWSEM_READER:
+	case BLOCKER_TYPE_RWSEM_WRITER:
+		owner = (unsigned long)rwsem_owner(
+					hung_task_blocker_to_lock(blocker));
+		rwsem_blocked_as = (blocker_type == BLOCKER_TYPE_RWSEM_READER) ?
+					"reader" : "writer";
+		rwsem_blocked_by = is_rwsem_reader_owned(
+					hung_task_blocker_to_lock(blocker)) ?
+					"reader" : "writer";
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -134,6 +144,11 @@ static void debug_show_blocker(struct task_struct *task)
 			pr_err("INFO: task %s:%d is blocked on a semaphore, but the last holder is not found.\n",
 			       task->comm, task->pid);
 			break;
+		case BLOCKER_TYPE_RWSEM_READER:
+		case BLOCKER_TYPE_RWSEM_WRITER:
+			pr_err("INFO: task %s:%d is blocked on an rw-semaphore, but the owner is not found.\n",
+			       task->comm, task->pid);
+			break;
 		}
 		return;
 	}
@@ -152,6 +167,12 @@ static void debug_show_blocker(struct task_struct *task)
 			pr_err("INFO: task %s:%d blocked on a semaphore likely last held by task %s:%d\n",
 			       task->comm, task->pid, t->comm, t->pid);
 			break;
+		case BLOCKER_TYPE_RWSEM_READER:
+		case BLOCKER_TYPE_RWSEM_WRITER:
+			pr_err("INFO: task %s:%d <%s> blocked on an rw-semaphore likely owned by task %s:%d <%s>\n",
+			       task->comm, task->pid, rwsem_blocked_as, t->comm,
+			       t->pid, rwsem_blocked_by);
+			break;
 		}
 		sched_show_task(t);
 		return;
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index a310eb9896de..92c6332da401 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/rwsem.h>
 #include <linux/atomic.h>
+#include <linux/hung_task.h>
 #include <trace/events/lock.h>
 
 #ifndef CONFIG_PREEMPT_RT
@@ -1065,10 +1066,13 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 		wake_up_q(&wake_q);
 
 	trace_contention_begin(sem, LCB_F_READ);
+	set_current_state(state);
+
+	if (state == TASK_UNINTERRUPTIBLE)
+		hung_task_set_blocker(sem, BLOCKER_TYPE_RWSEM_READER);
 
 	/* wait to be given the lock */
 	for (;;) {
-		set_current_state(state);
 		if (!smp_load_acquire(&waiter.task)) {
 			/* Matches rwsem_mark_wake()'s smp_store_release(). */
 			break;
@@ -1083,8 +1087,12 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 		}
 		schedule_preempt_disabled();
 		lockevent_inc(rwsem_sleep_reader);
+		set_current_state(state);
 	}
 
+	if (state == TASK_UNINTERRUPTIBLE)
+		hung_task_clear_blocker();
+
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock);
 	trace_contention_end(sem, 0);
@@ -1146,6 +1154,9 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	set_current_state(state);
 	trace_contention_begin(sem, LCB_F_WRITE);
 
+	if (state == TASK_UNINTERRUPTIBLE)
+		hung_task_set_blocker(sem, BLOCKER_TYPE_RWSEM_WRITER);
+
 	for (;;) {
 		if (rwsem_try_write_lock(sem, &waiter)) {
 			/* rwsem_try_write_lock() implies ACQUIRE on success */
@@ -1179,6 +1190,10 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 trylock_again:
 		raw_spin_lock_irq(&sem->wait_lock);
 	}
+
+	if (state == TASK_UNINTERRUPTIBLE)
+		hung_task_clear_blocker();
+
 	__set_current_state(TASK_RUNNING);
 	raw_spin_unlock_irq(&sem->wait_lock);
 	lockevent_inc(rwsem_wlock);
-- 
2.49.0
Re: [PATCH 2/3] hung_task: extend hung task blocker tracking to rwsems
Posted by Geert Uytterhoeven 1 month, 2 weeks ago
Hi Lance,

On Fri, 27 Jun 2025 at 09:30, Lance Yang <ioworker0@gmail.com> wrote:
> From: Lance Yang <lance.yang@linux.dev>
>
> [...]
>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>

Thanks for your patch, which is now commit 194a9b9e843b4077
("hung_task: show the blocker task if the task is hung on
semaphore") in v6.16-rc1.

Eero reported [1] two WARNINGS seen with v6.16 on emulated Atari.
I managed to reproduce it on ARAnyM using the provided config (it does
not happen with atari_defconfig), and bisected it to this commit:

------------[ cut here ]------------
WARNING: CPU: 0 PID: 39 at include/linux/hung_task.h:48
__down_common+0x13a/0x1be
CPU: 0 UID: 0 PID: 39 Comm: getty Not tainted
6.15.0-rc6hatari-00018-g194a9b9e843b #1986 NONE
Stack from 01633d00:
        01633d00 00366e9e 00366e9e 00000000 002c9762 00360cb5 01633d24 0000873e
        00366e9e 01633d40 0002e0d4 00360cb5 00000030 00000009 0039c79a 00061408
        01633d78 000028e0 00360cb5 00000030 002c9762 00000009 00000000 00000000
        7fffffff 00000002 1185d266 01633eb0 01326c58 00000080 01633dc0 002c9762
        00360cb5 00000030 00000009 00000000 00002014 01326c00 1185d266 01633eb0
        002c93ea 00053d60 00061408 01326c58 0038db90 0038db90 01633e32 01633fb8
Call Trace: [<002c9762>] __down_common+0x13a/0x1be
 [<0000873e>] dump_stack+0x10/0x16
 [<0002e0d4>] __warn+0x7a/0xbc
 [<00061408>] msleep+0x0/0x2c
 [<000028e0>] warn_slowpath_fmt+0x42/0x62
 [<002c9762>] __down_common+0x13a/0x1be
 [<002c9762>] __down_common+0x13a/0x1be
 [<00002014>] arch_local_irq_enable+0xe/0x22
 [<002c93ea>] mutex_lock+0x0/0x28
 [<00053d60>] other_cpu_in_panic+0x0/0x26
 [<00061408>] msleep+0x0/0x2c
 [<002c97fc>] __down+0x16/0x1e
 [<002c9832>] down+0x2e/0x30
 [<00053dac>] console_lock+0x26/0x4c
 [<001aae4e>] do_con_write+0x3a/0x16d4
 [<002c93ea>] mutex_lock+0x0/0x28
 [<0004fa70>] __add_wait_queue+0x3a/0x6a
 [<001ac520>] con_write+0x1a/0x30
 [<0019cafa>] n_tty_write+0x2c6/0x35e
 [<00199456>] signal_pending+0x0/0x26
 [<000aba2a>] __kvmalloc_node_noprof+0x3a/0x114
 [<00004cc0>] io_uring_try_cancel_requests+0x98/0x318
 [<0004fb2e>] woken_wake_function+0x0/0x24
 [<0019a180>] file_tty_write.isra.0+0x144/0x1b8
 [<0019a206>] tty_write+0x12/0x16
 [<000b97c2>] vfs_write+0xec/0x148
 [<00028000>] fp_getdest+0x1b8/0x224
 [<00010000>] g_trace+0x16/0x28
 [<000b9916>] ksys_write+0x54/0x8a
 [<000b9962>] sys_write+0x16/0x1a
 [<000093da>] syscall+0x8/0xc
 [<0000c001>] arch_dma_prep_coherent+0x51/0x58

---[ end trace 0000000000000000 ]---
------------[ cut here ]------------
WARNING: CPU: 0 PID: 39 at include/linux/hung_task.h:56
__down_common+0x17a/0x1be
CPU: 0 UID: 0 PID: 39 Comm: getty Tainted: G        W
6.15.0-rc6hatari-00018-g194a9b9e843b #1986 NONE
Tainted: [W]=WARN
Stack from 01633d00:
        01633d00 00366e9e 00366e9e 00000000 002c97a2 00360cb5 01633d24 0000873e
        00366e9e 01633d40 0002e0d4 00360cb5 00000038 00000009 0039c79a 01633db2
        01633d78 000028e0 00360cb5 00000038 002c97a2 00000009 00000000 00000000
        00000000 00000002 00000000 00000000 01326c58 0039c79a 01633dc0 002c97a2
        00360cb5 00000038 00000009 00000000 00002014 01326c00 1185d266 01633eb0
        002c93ea 00053d60 00061408 01326c58 00380000 01000000 01220162 b64001b8
Call Trace: [<002c97a2>] __down_common+0x17a/0x1be
 [<0000873e>] dump_stack+0x10/0x16
 [<0002e0d4>] __warn+0x7a/0xbc
 [<000028e0>] warn_slowpath_fmt+0x42/0x62
 [<002c97a2>] __down_common+0x17a/0x1be
 [<002c97a2>] __down_common+0x17a/0x1be
 [<00002014>] arch_local_irq_enable+0xe/0x22
 [<002c93ea>] mutex_lock+0x0/0x28
 [<00053d60>] other_cpu_in_panic+0x0/0x26
 [<00061408>] msleep+0x0/0x2c
 [<002c97fc>] __down+0x16/0x1e
 [<002c9832>] down+0x2e/0x30
 [<00053dac>] console_lock+0x26/0x4c
 [<001aae4e>] do_con_write+0x3a/0x16d4
 [<002c93ea>] mutex_lock+0x0/0x28
 [<0004fa70>] __add_wait_queue+0x3a/0x6a
 [<001ac520>] con_write+0x1a/0x30
 [<0019cafa>] n_tty_write+0x2c6/0x35e
 [<00199456>] signal_pending+0x0/0x26
 [<000aba2a>] __kvmalloc_node_noprof+0x3a/0x114
 [<00004cc0>] io_uring_try_cancel_requests+0x98/0x318
 [<0004fb2e>] woken_wake_function+0x0/0x24
 [<0019a180>] file_tty_write.isra.0+0x144/0x1b8
 [<0019a206>] tty_write+0x12/0x16
 [<000b97c2>] vfs_write+0xec/0x148
 [<00028000>] fp_getdest+0x1b8/0x224
 [<00010000>] g_trace+0x16/0x28
 [<000b9916>] ksys_write+0x54/0x8a
 [<000b9962>] sys_write+0x16/0x1a
 [<000093da>] syscall+0x8/0xc
 [<0000c001>] arch_dma_prep_coherent+0x51/0x58

---[ end trace 0000000000000000 ]---

It still happens on v6.17-rc2.  Reverting commits 77da18de55ac6417
("hung_task: extend hung task blocker tracking to rwsems") and
194a9b9e843b4077 ("hung_task: show the blocker task if the task is
hung on semaphore") fixes the issue for me.

Thanks!

[1] "v6.16 console issues on Atari Falcon"
    https://lore.kernel.org/all/92518308-c763-4591-96ef-6b38c5d8f434@helsinkinet.fi

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds
Re: [PATCH 2/3] hung_task: extend hung task blocker tracking to rwsems
Posted by Sergey Senozhatsky 1 month, 2 weeks ago
Hi Geert,

On (25/08/22 09:34), Geert Uytterhoeven wrote:
> WARNING: CPU: 0 PID: 39 at include/linux/hung_task.h:48
> __down_common+0x13a/0x1be
> CPU: 0 UID: 0 PID: 39 Comm: getty Not tainted
> 6.15.0-rc6hatari-00018-g194a9b9e843b #1986 NONE
> Stack from 01633d00:
>         [...]
> Call Trace: [<002c9762>] __down_common+0x13a/0x1be
>  [<0000873e>] dump_stack+0x10/0x16
>  [<0002e0d4>] __warn+0x7a/0xbc
>  [<00061408>] msleep+0x0/0x2c
>  [<000028e0>] warn_slowpath_fmt+0x42/0x62
>  [<002c9762>] __down_common+0x13a/0x1be
>  [<002c9762>] __down_common+0x13a/0x1be
>  [<00002014>] arch_local_irq_enable+0xe/0x22
>  [<002c93ea>] mutex_lock+0x0/0x28
>  [<00053d60>] other_cpu_in_panic+0x0/0x26
>  [<00061408>] msleep+0x0/0x2c
>  [<002c97fc>] __down+0x16/0x1e
>  [<002c9832>] down+0x2e/0x30
>  [<00053dac>] console_lock+0x26/0x4c
>  [<001aae4e>] do_con_write+0x3a/0x16d4
>  [<002c93ea>] mutex_lock+0x0/0x28
>  [<0004fa70>] __add_wait_queue+0x3a/0x6a
>  [<001ac520>] con_write+0x1a/0x30
>  [<0019cafa>] n_tty_write+0x2c6/0x35e
>  [<00199456>] signal_pending+0x0/0x26
>  [<000aba2a>] __kvmalloc_node_noprof+0x3a/0x114
>  [<00004cc0>] io_uring_try_cancel_requests+0x98/0x318
>  [<0004fb2e>] woken_wake_function+0x0/0x24
>  [<0019a180>] file_tty_write.isra.0+0x144/0x1b8
>  [<0019a206>] tty_write+0x12/0x16
>  [<000b97c2>] vfs_write+0xec/0x148
>  [<00028000>] fp_getdest+0x1b8/0x224
>  [<00010000>] g_trace+0x16/0x28
>  [<000b9916>] ksys_write+0x54/0x8a
>  [<000b9962>] sys_write+0x16/0x1a
>  [<000093da>] syscall+0x8/0xc
>  [<0000c001>] arch_dma_prep_coherent+0x51/0x58

I'm struggling to make any sense out of this backtrace; it looks like
random ksym names.  Any chance you can get a more reliable/informative
backtrace?  A debug config, maybe, or something.
Re: [PATCH 2/3] hung_task: extend hung task blocker tracking to rwsems
Posted by Geert Uytterhoeven 1 month, 2 weeks ago
On Fri, 22 Aug 2025 at 09:34, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> On Fri, 27 Jun 2025 at 09:30, Lance Yang <ioworker0@gmail.com> wrote:
> > From: Lance Yang <lance.yang@linux.dev>
> >
> > [...]
> > Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Signed-off-by: Lance Yang <lance.yang@linux.dev>
>
> Thanks for your patch, which is now commit 194a9b9e843b4077
> ("hung_task: show the blocker task if the task is hung on
> semaphore") in v6.16-rc1.

Sorry, wrong email thread.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds