Use an RCU read lock to protect sk->sk_socket, preventing it from being
concurrently closed and released by another thread.
Because the TCP/UDP receive paths already run within a relatively large RCU
critical section:
'''
ip_local_deliver_finish
rcu_read_lock
ip_protocol_deliver_rcu
tcp_rcv/udp_rcv
rcu_read_unlock
'''
adding rcu_read_{un}lock() at the entry and exit of sk_data_ready will not
add measurable performance overhead.
Reported-by: syzbot+dd90a702f518e0eac072@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/6734c033.050a0220.2a2fcc.0015.GAE@google.com/
Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
---
net/core/skmsg.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0ddc4c718833..1b71ae1d1bf5 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1222,27 +1222,35 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
static void sk_psock_verdict_data_ready(struct sock *sk)
{
- struct socket *sock = sk->sk_socket;
+ struct socket *sock;
const struct proto_ops *ops;
int copied;
trace_sk_data_ready(sk);
+ /* We need RCU to prevent the sk_socket from being released.
+ * Especially for Unix sockets, we are currently in the process
+ * context and do not have RCU protection.
+ */
+ rcu_read_lock();
+ sock = sk->sk_socket;
if (unlikely(!sock))
- return;
+ goto unlock;
+
ops = READ_ONCE(sock->ops);
if (!ops || !ops->read_skb)
- return;
+ goto unlock;
+
copied = ops->read_skb(sk, sk_psock_verdict_recv);
if (copied >= 0) {
struct sk_psock *psock;
- rcu_read_lock();
psock = sk_psock(sk);
if (psock)
sk_psock_data_ready(sk, psock);
- rcu_read_unlock();
}
+unlock:
+ rcu_read_unlock();
}
void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
--
2.47.1
On 2/26/25 5:22 AM, Jiayuan Chen wrote:
> Use RCU lock to protect sk_socket, preventing concurrent close and release
> by another thread.
>
> Because TCP/UDP are already within a relatively large critical section:
> '''
> ip_local_deliver_finish
> rcu_read_lock
> ip_protocol_deliver_rcu
> tcp_rcv/udp_rcv
> rcu_read_unlock
> '''
>
> Adding rcu_read_{un}lock() at the entrance and exit of sk_data_ready
> will not increase performance overhead.
Can it use a Fixes tag?
On Thu, Feb 27, 2025 at 03:04:26PM -0800, Martin KaFai Lau wrote:
> On 2/26/25 5:22 AM, Jiayuan Chen wrote:
> > Use RCU lock to protect sk_socket, preventing concurrent close and release
> > by another thread.
> >
> > Because TCP/UDP are already within a relatively large critical section:
> > '''
> > ip_local_deliver_finish
> > rcu_read_lock
> > ip_protocol_deliver_rcu
> > tcp_rcv/udp_rcv
> > rcu_read_unlock
> > '''
> >
> > Adding rcu_read_{un}lock() at the entrance and exit of sk_data_ready
> > will not increase performance overhead.
>
> Can it use a Fixes tag?
Thanks, Martin.
It seems this issue has existed since sockmap first supported unix sockets.
I'll find the corresponding commit and use it as the Fixes tag.
On Wed, Feb 26, 2025 at 09:22:40PM +0800, Jiayuan Chen wrote:
> Use RCU lock to protect sk_socket, preventing concurrent close and release
> by another thread.
>
> Because TCP/UDP are already within a relatively large critical section:
> '''
> ip_local_deliver_finish
> rcu_read_lock
> ip_protocol_deliver_rcu
> tcp_rcv/udp_rcv
> rcu_read_unlock
> '''
>
> Adding rcu_read_{un}lock() at the entrance and exit of sk_data_ready
> will not increase performance overhead.
>
> Reported-by: syzbot+dd90a702f518e0eac072@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/bpf/6734c033.050a0220.2a2fcc.0015.GAE@google.com/
> Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
sock_def_readable() already acquires the RCU read lock anyway.
Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com>
Thanks!
On 2025-02-27 11:45:53, Cong Wang wrote:
> On Wed, Feb 26, 2025 at 09:22:40PM +0800, Jiayuan Chen wrote:
> > Use RCU lock to protect sk_socket, preventing concurrent close and release
> > by another thread.
> >
> > Because TCP/UDP are already within a relatively large critical section:
> > '''
> > ip_local_deliver_finish
> > rcu_read_lock
> > ip_protocol_deliver_rcu
> > tcp_rcv/udp_rcv
> > rcu_read_unlock
> > '''
> >
> > Adding rcu_read_{un}lock() at the entrance and exit of sk_data_ready
> > will not increase performance overhead.
> >
> > Reported-by: syzbot+dd90a702f518e0eac072@syzkaller.appspotmail.com
> > Closes: https://lore.kernel.org/bpf/6734c033.050a0220.2a2fcc.0015.GAE@google.com/
> > Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
>
> sock_def_readable() already acquires RCU read lock anyway.
>
> Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com>
Thanks.
Reviewed-by: John Fastabend <john.fastabend@gmail.com>