bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
path and the iter prog attempts to update a sockmap, which ends up spinning
at sock_map_update_elem()'s bh_lock_sock():
WARNING: possible recursive locking detected
test_progs/1393 is trying to acquire lock:
ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: sock_map_update_elem+0xdb/0x1f0
but task is already holding lock:
ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(slock-AF_UNIX);
lock(slock-AF_UNIX);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by test_progs/1393:
#0: ffff88814b59c790 (&p->lock){+.+.}-{4:4}, at: bpf_seq_read+0x59/0x10d0
#1: ffff88811ec25fd8 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: bpf_seq_read+0x42c/0x10d0
#2: ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
#3: ffffffff85a6a7c0 (rcu_read_lock){....}-{1:3}, at: bpf_iter_run_prog+0x51d/0xb00
Call Trace:
dump_stack_lvl+0x5d/0x80
print_deadlock_bug.cold+0xc0/0xce
__lock_acquire+0x130f/0x2590
lock_acquire+0x14e/0x2b0
_raw_spin_lock+0x30/0x40
sock_map_update_elem+0xdb/0x1f0
bpf_prog_2d0075e5d9b721cd_dump_unix+0x55/0x4f4
bpf_iter_run_prog+0x5b9/0xb00
bpf_iter_unix_seq_show+0x1f7/0x2e0
bpf_seq_read+0x42c/0x10d0
vfs_read+0x171/0xb20
ksys_read+0xff/0x200
do_syscall_64+0x6b/0x3a0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
Fixes: 2c860a43dd77 ("bpf: af_unix: Implement BPF iterator for UNIX domain socket.")
Signed-off-by: Michal Luczaj <mhal@rbox.co>
---
net/unix/af_unix.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 3756a93dc63a..3d2cfb4ecbcd 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -3729,15 +3729,14 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
 	struct bpf_prog *prog;
 	struct sock *sk = v;
 	uid_t uid;
-	bool slow;
 	int ret;
 
 	if (v == SEQ_START_TOKEN)
 		return 0;
 
-	slow = lock_sock_fast(sk);
+	lock_sock(sk);
 
-	if (unlikely(sk_unhashed(sk))) {
+	if (unlikely(sock_flag(sk, SOCK_DEAD))) {
 		ret = SEQ_SKIP;
 		goto unlock;
 	}
@@ -3747,7 +3746,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
 	prog = bpf_iter_get_info(&meta, false);
 	ret = unix_prog_seq_show(prog, &meta, v, uid);
 unlock:
-	unlock_sock_fast(sk, slow);
+	release_sock(sk);
 	return ret;
 }
 
--
2.52.0
On 3/6/26 7:30 AM, Michal Luczaj wrote:
> bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
> path and the iter prog attempts to update a sockmap. Which ends up spinning
> at sock_map_update_elem()'s bh_lock_sock():
>
> [...]
>
> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
> Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
> Fixes: 2c860a43dd77 ("bpf: af_unix: Implement BPF iterator for UNIX domain socket.")
> Signed-off-by: Michal Luczaj <mhal@rbox.co>
Reviewed-by: Jiayuan Chen <jiayuan.chen@linux.dev>
On 3/6/26 7:30 AM, Michal Luczaj wrote:
> bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
> path and the iter prog attempts to update a sockmap. Which ends up spinning
> at sock_map_update_elem()'s bh_lock_sock():
>
> [...]
>
> diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
> index 3756a93dc63a..3d2cfb4ecbcd 100644
> --- a/net/unix/af_unix.c
> +++ b/net/unix/af_unix.c
> @@ -3729,15 +3729,14 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> struct bpf_prog *prog;
> struct sock *sk = v;
> uid_t uid;
> - bool slow;
> int ret;
>
> if (v == SEQ_START_TOKEN)
> return 0;
>
> - slow = lock_sock_fast(sk);
> + lock_sock(sk);
>
> - if (unlikely(sk_unhashed(sk))) {
> + if (unlikely(sock_flag(sk, SOCK_DEAD))) {
> ret = SEQ_SKIP;
> goto unlock;
> }
Switching to lock_sock() fixes the deadlock, but it does not provide mutual
exclusion with unix_release_sock(), which uses unix_state_lock() exclusively
and does not touch lock_sock() at all. So a dying socket can still reach the
BPF prog concurrently with unix_release_sock() running on another CPU.
Both SOCK_DEAD and the clearing of unix_peer(sk) happen under
unix_state_lock() in unix_release_sock(). Without taking unix_state_lock()
before the SOCK_DEAD check, there is a window:
iter                             unix_release_sock()
----                             -------------------
lock_sock(sk)
SOCK_DEAD == 0 (check passes)
                                 unix_state_lock(sk)
                                 unix_peer(sk) = NULL
                                 sock_set_flag(sk, SOCK_DEAD)
                                 unix_state_unlock(sk)
BPF prog runs
→ accesses unix_peer(sk) == NULL → crash
This was not raised in the v2 discussion.
The natural fix is to check SOCK_DEAD under unix_state_lock(). However,
holding unix_state_lock() throughout BPF prog execution would conflict with
patch 5: sock_map_sk_acquire_fast() also takes unix_state_lock() for AF_UNIX
sockets, resulting in a recursive spinlock deadlock.
Kuniyuki, Martin — what is the right approach here?
> @@ -3747,7 +3746,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> prog = bpf_iter_get_info(&meta, false);
> ret = unix_prog_seq_show(prog, &meta, v, uid);
> unlock:
> - unlock_sock_fast(sk, slow);
> + release_sock(sk);
> return ret;
> }
>
>
On 3/6/26 07:04, Jiayuan Chen wrote:
> On 3/6/26 7:30 AM, Michal Luczaj wrote:
>> @@ -3729,15 +3729,14 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
>> struct bpf_prog *prog;
>> struct sock *sk = v;
>> uid_t uid;
>> - bool slow;
>> int ret;
>>
>> if (v == SEQ_START_TOKEN)
>> return 0;
>>
>> - slow = lock_sock_fast(sk);
>> + lock_sock(sk);
>>
>> - if (unlikely(sk_unhashed(sk))) {
>> + if (unlikely(sock_flag(sk, SOCK_DEAD))) {
>> ret = SEQ_SKIP;
>> goto unlock;
>> }
>
>
> Switching to lock_sock() fixes the deadlock, but it does not provide mutual
> exclusion with unix_release_sock(), which uses unix_state_lock() exclusively
> and does not touch lock_sock() at all. So a dying socket can still reach the
> BPF prog concurrently with unix_release_sock() running on another CPU.
That's right. Note that although the socket is dying, iter holds a
reference to it, so the socket is far from being freed (as in: memory
released).
> Both SOCK_DEAD and the clearing of unix_peer(sk) happen under
> unix_state_lock() in unix_release_sock(). Without taking unix_state_lock()
> before the SOCK_DEAD check, there is a window:
>
> iter                             unix_release_sock()
> ----                             -------------------
> lock_sock(sk)
> SOCK_DEAD == 0 (check passes)
>                                  unix_state_lock(sk)
>                                  unix_peer(sk) = NULL
>                                  sock_set_flag(sk, SOCK_DEAD)
>                                  unix_state_unlock(sk)
> BPF prog runs
> → accesses unix_peer(sk) == NULL → crash
>
> This was not raised in the v2 discussion.
It was raised in v1[1]. Conclusion was that bpf prog bytecode directly
accessing unix_peer(sk) is not an issue; bpf machinery will handle any
faults. That said, should a "bad" value of unix_peer(sk) end up as a
parameter of a bpf helper, yes, that is a well-known[2] problem (that has
a solution unrelated to this series).
[1]:
https://lore.kernel.org/bpf/6de6f1bf-c8ee-4dfb-9b8c-f89185946630@linux.dev/
[2]:
https://lore.kernel.org/bpf/CAADnVQK_93g_KkNFYXSr8ZvA1fYh4hoFRJCJFPS-zs4ox0HhAA@mail.gmail.com/
March 6, 2026 at 22:06, "Michal Luczaj" <mhal@rbox.co> wrote:
>
> On 3/6/26 07:04, Jiayuan Chen wrote:
>
> [...]
>
> It was raised in v1[1]. Conclusion was that bpf prog bytecode directly
> accessing unix_peer(sk) is not an issue; bpf machinery will handle any
> faults. That said, should a "bad" value of unix_peer(sk) end up as a
> parameter of a bpf helper, yes, that is a well known[2] problem (that have
> a solution unrelated to this series).
>
> [1]:
> https://lore.kernel.org/bpf/6de6f1bf-c8ee-4dfb-9b8c-f89185946630@linux.dev/
> [2]:
> https://lore.kernel.org/bpf/CAADnVQK_93g_KkNFYXSr8ZvA1fYh4hoFRJCJFPS-zs4ox0HhAA@mail.gmail.com/
>
Thanks for letting me know.
March 6, 2026 at 14:04, "Jiayuan Chen" <jiayuan.chen@linux.dev> wrote:
>
> On 3/6/26 7:30 AM, Michal Luczaj wrote:
>
[...]
> > [...]
> >
> Switching to lock_sock() fixes the deadlock, but it does not provide mutual
> exclusion with unix_release_sock(), which uses unix_state_lock() exclusively
> and does not touch lock_sock() at all. So a dying socket can still reach the
> BPF prog concurrently with unix_release_sock() running on another CPU.
>
> Both SOCK_DEAD and the clearing of unix_peer(sk) happen under
> unix_state_lock() in unix_release_sock(). Without taking unix_state_lock()
> before the SOCK_DEAD check, there is a window:
>
> iter unix_release_sock()
> --- lock_sock(sk)
> SOCK_DEAD == 0(check passes)
> unix_state_lock(sk)
> unix_peer(sk) = NULL
> sock_set_flag(sk, SOCK_DEAD)
> unix_state_unlock(sk)
> BPF prog runs
> → accesses unix_peer(sk) == NULL → crash
Sorry for malformed message.
Here is correct:
iter                             unix_release_sock()
----                             -------------------
lock_sock(sk)
SOCK_DEAD == 0 (check passes)
                                 unix_state_lock(sk)
                                 unix_peer(sk) = NULL
                                 sock_set_flag(sk, SOCK_DEAD)
                                 unix_state_unlock(sk)
BPF prog runs
→ accesses unix_peer(sk) == NULL → crash
On Thu, Mar 5, 2026 at 3:32 PM Michal Luczaj <mhal@rbox.co> wrote:
>
> bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
> path and the iter prog attempts to update a sockmap. Which ends up spinning
> at sock_map_update_elem()'s bh_lock_sock():
>
> [...]
>
> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
> Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
> Fixes: 2c860a43dd77 ("bpf: af_unix: Implement BPF iterator for UNIX domain socket.")
> Signed-off-by: Michal Luczaj <mhal@rbox.co>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>