From nobody Tue Dec  2 00:44:13 2025
From: Jiayuan Chen
To: bpf@vger.kernel.org
Cc: Jiayuan Chen, John Fastabend, Jakub Sitnicki, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman, Neal Cardwell,
    Kuniyuki Iwashima, David Ahern, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
    Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Shuah Khan, Michal Luczaj, Stefano Garzarella, Cong Wang,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Subject: [PATCH bpf-next v4 1/3] bpf, sockmap: Fix incorrect copied_seq calculation
Date: Tue, 25 Nov 2025 19:56:38 +0800
Message-ID: <20251125115709.249440-2-jiayuan.chen@linux.dev>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20251125115709.249440-1-jiayuan.chen@linux.dev>
References: <20251125115709.249440-1-jiayuan.chen@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

A socket using sockmap has its own independent receive queue: ingress_msg.
This queue may contain data from the socket's own protocol stack or from
other sockets.

The problem is that when reading from ingress_msg we update tp->copied_seq
unconditionally, but when the data did not come from the socket's own
protocol stack, tp->rcv_nxt was never advanced for it. If we later convert
this socket back to a native socket, reads from it may fail because
copied_seq can be significantly larger than rcv_nxt.

This patch marks the skmsg objects queued on ingress_msg; when reading, we
update copied_seq only if the data came from the socket's own protocol
stack. This also addresses the syzkaller-reported bug referenced in the
Closes tag.

                       FD1:read()
                    -- FD1->copied_seq++
                           |
                           | [read data]
                           |
                    [enqueue data]
                           v
     [sockmap]    -> ingress to self -> ingress_msg queue
FD1 native stack  ------>                    ^
-- FD1->rcv_nxt++ -> redirect to other       | [enqueue data]
                  |                          |
                  | ingress to FD1           |
                  v             ...          ^
                                             |
                      [sockmap]
                   FD2 native stack

Closes: https://syzkaller.appspot.com/bug?extid=06dbd397158ec0ea4983
Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: Jiayuan Chen
---
 include/linux/skmsg.h |  2 ++
 net/core/skmsg.c      | 25 ++++++++++++++++++++++---
 net/ipv4/tcp_bpf.c    |  5 +++--
 3 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 49847888c287..0323a2b6cf5e 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -141,6 +141,8 @@ int sk_msg_memcopy_from_iter(struct sock *sk, struct iov_iter *from,
 			     struct sk_msg *msg, u32 bytes);
 int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		   int len, int flags);
+int __sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+		     int len, int flags, int *from_self_copied);
 bool sk_msg_is_readable(struct sock *sk);
 
 static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 2ac7731e1e0a..d73e03f7713a 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -409,14 +409,14 @@ int sk_msg_memcopy_from_iter(struct sock *sk, struct iov_iter *from,
 }
 EXPORT_SYMBOL_GPL(sk_msg_memcopy_from_iter);
 
-/* Receive sk_msg from psock->ingress_msg to @msg. */
-int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
-		   int len, int flags)
+int __sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+		     int len, int flags, int *from_self_copied)
 {
 	struct iov_iter *iter = &msg->msg_iter;
 	int peek = flags & MSG_PEEK;
 	struct sk_msg *msg_rx;
 	int i, copied = 0;
+	bool to_self;
 
 	msg_rx = sk_psock_peek_msg(psock);
 	while (copied != len) {
@@ -425,6 +425,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		if (unlikely(!msg_rx))
 			break;
 
+		to_self = msg_rx->sk == sk;
 		i = msg_rx->sg.start;
 		do {
 			struct page *page;
@@ -443,6 +444,9 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		}
 
 		copied += copy;
+		if (to_self && from_self_copied)
+			*from_self_copied += copy;
+
 		if (likely(!peek)) {
 			sge->offset += copy;
 			sge->length -= copy;
@@ -487,6 +491,14 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 out:
 	return copied;
 }
+EXPORT_SYMBOL_GPL(__sk_msg_recvmsg);
+
+/* Receive sk_msg from psock->ingress_msg to @msg. */
+int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+		   int len, int flags)
+{
+	return __sk_msg_recvmsg(sk, psock, msg, len, flags, NULL);
+}
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
 
 bool sk_msg_is_readable(struct sock *sk)
@@ -616,6 +628,12 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
 	if (unlikely(!msg))
 		return -EAGAIN;
 	skb_set_owner_r(skb, sk);
+
+	/* This is used in tcp_bpf_recvmsg_parser() to determine whether the
+	 * data originates from the socket's own protocol stack. No need to
+	 * refcount sk because msg's lifetime is bound to sk via the ingress_msg.
+	 */
+	msg->sk = sk;
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, take_ref);
 	if (err < 0)
 		kfree(msg);
@@ -909,6 +927,7 @@ int sk_psock_msg_verdict(struct sock *sk, struct sk_psock *psock,
 	sk_msg_compute_data_pointers(msg);
 	msg->sk = sk;
 	ret = bpf_prog_run_pin_on_cpu(prog, msg);
+	msg->sk = NULL;
 	ret = sk_psock_map_verd(ret, msg->sk_redir);
 	psock->apply_bytes = msg->apply_bytes;
 	if (ret == __SK_REDIRECT) {
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index a268e1595b22..6332fc36ffe6 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -226,6 +226,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 	int peek = flags & MSG_PEEK;
 	struct sk_psock *psock;
 	struct tcp_sock *tcp;
+	int from_self_copied = 0;
 	int copied = 0;
 	u32 seq;
 
@@ -262,7 +263,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 	}
 
 msg_bytes_ready:
-	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	copied = __sk_msg_recvmsg(sk, psock, msg, len, flags, &from_self_copied);
 	/* The typical case for EFAULT is the socket was gracefully
 	 * shutdown with a FIN pkt. So check here the other case is
 	 * some error on copy_page_to_iter which would be unexpected.
@@ -277,7 +278,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 			goto out;
 		}
 	}
-	seq += copied;
+	seq += from_self_copied;
 	if (!copied) {
 		long timeo;
 		int data;
-- 
2.43.0