With CONFIG_PROVE_RCU_LIST=y and by executing
$ netcat -l --sctp &
$ netcat --sctp localhost &
$ ss --sctp
one can trigger the following Lockdep-RCU splat(s):
WARNING: suspicious RCU usage
6.18.0-rc1-00093-g7f864458e9a6 #5 Not tainted
-----------------------------
net/sctp/diag.c:76 RCU-list traversed in non-reader section!!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
2 locks held by ss/215:
#0: ffff9c740828bec0 (nlk_cb_mutex-SOCK_DIAG){+.+.}-{4:4}, at: __netlink_dump_start+0x84/0x2b0
#1: ffff9c7401d72cd0 (sk_lock-AF_INET6){+.+.}-{0:0}, at: sctp_sock_dump+0x38/0x200
stack backtrace:
CPU: 0 UID: 0 PID: 215 Comm: ss Not tainted 6.18.0-rc1-00093-g7f864458e9a6 #5 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x5d/0x90
lockdep_rcu_suspicious.cold+0x4e/0xa3
inet_sctp_diag_fill.isra.0+0x4b1/0x5d0
sctp_sock_dump+0x131/0x200
sctp_transport_traverse_process+0x170/0x1b0
? __pfx_sctp_sock_filter+0x10/0x10
? __pfx_sctp_sock_dump+0x10/0x10
sctp_diag_dump+0x103/0x140
__inet_diag_dump+0x70/0xb0
netlink_dump+0x148/0x490
__netlink_dump_start+0x1f3/0x2b0
inet_diag_handler_cmd+0xcd/0x100
? __pfx_inet_diag_dump_start+0x10/0x10
? __pfx_inet_diag_dump+0x10/0x10
? __pfx_inet_diag_dump_done+0x10/0x10
sock_diag_rcv_msg+0x18e/0x320
? __pfx_sock_diag_rcv_msg+0x10/0x10
netlink_rcv_skb+0x4d/0x100
netlink_unicast+0x1d7/0x2b0
netlink_sendmsg+0x203/0x450
____sys_sendmsg+0x30c/0x340
___sys_sendmsg+0x94/0xf0
__sys_sendmsg+0x83/0xf0
do_syscall_64+0xbb/0x390
entry_SYSCALL_64_after_hwframe+0x77/0x7f
...
</TASK>
Fixes: 8f840e47f190 ("sctp: add the sctp_diag.c file")
Signed-off-by: Stefan Wiehler <stefan.wiehler@nokia.com>
---
net/sctp/diag.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/sctp/diag.c b/net/sctp/diag.c
index 996c2018f0e6..1a8761f87bf1 100644
--- a/net/sctp/diag.c
+++ b/net/sctp/diag.c
@@ -73,19 +73,23 @@ static int inet_diag_msg_sctpladdrs_fill(struct sk_buff *skb,
 	struct nlattr *attr;
 	void *info = NULL;
 
+	rcu_read_lock();
 	list_for_each_entry_rcu(laddr, address_list, list)
 		addrcnt++;
+	rcu_read_unlock();
 
 	attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt);
 	if (!attr)
 		return -EMSGSIZE;
 
 	info = nla_data(attr);
+	rcu_read_lock();
 	list_for_each_entry_rcu(laddr, address_list, list) {
 		memcpy(info, &laddr->a, sizeof(laddr->a));
 		memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a));
 		info += addrlen;
 	}
+	rcu_read_unlock();
 
 	return 0;
 }
--
2.51.0
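
[Editor's note] For context, a minimal, illustrative-only sketch (not part of this patch; the helper names below are made up) of the two patterns that satisfy CONFIG_PROVE_RCU_LIST for list_for_each_entry_rcu(): either enter an RCU read-side critical section, as this patch does, or pass a lockdep expression as the optional fourth argument when the traversal is already serialized by a lock. Whether the socket lock actually protects this particular address list is an assumption made for the example, not something the sketch verifies.

#include <linux/rculist.h>
#include <net/sock.h>
#include <net/sctp/structs.h>

/* Hypothetical helper, for illustration only. */
static int count_laddrs_rcu(struct list_head *address_list)
{
	struct sctp_sockaddr_entry *laddr;
	int addrcnt = 0;

	/* Pattern used by this patch: RCU read-side critical section. */
	rcu_read_lock();
	list_for_each_entry_rcu(laddr, address_list, list)
		addrcnt++;
	rcu_read_unlock();

	return addrcnt;
}

/* Hypothetical helper, for illustration only. */
static int count_laddrs_locked(struct sock *sk, struct list_head *address_list)
{
	struct sctp_sockaddr_entry *laddr;
	int addrcnt = 0;

	/*
	 * Alternative: hand lockdep a condition instead of taking
	 * rcu_read_lock().  Assumes, for the sake of the example, that
	 * writers of this list hold the socket lock.
	 */
	list_for_each_entry_rcu(laddr, address_list, list,
				lockdep_sock_is_held(sk))
		addrcnt++;

	return addrcnt;
}

The lockdep-condition form avoids the extra rcu_read_lock()/rcu_read_unlock() pair, but it only applies if the traversal is in fact serialized against list updates by that lock.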
On Tue, Oct 28, 2025 at 9:15 AM Stefan Wiehler <stefan.wiehler@nokia.com> wrote:
>
> With CONFIG_PROVE_RCU_LIST=y and by executing
>
> $ netcat -l --sctp &
> $ netcat --sctp localhost &
> $ ss --sctp
>
> one can trigger the following Lockdep-RCU splat(s):
>
> WARNING: suspicious RCU usage
> 6.18.0-rc1-00093-g7f864458e9a6 #5 Not tainted
> -----------------------------
> net/sctp/diag.c:76 RCU-list traversed in non-reader section!!
>
> other info that might help us debug this:
>
> rcu_scheduler_active = 2, debug_locks = 1
> 2 locks held by ss/215:
> #0: ffff9c740828bec0 (nlk_cb_mutex-SOCK_DIAG){+.+.}-{4:4}, at: __netlink_dump_start+0x84/0x2b0
> #1: ffff9c7401d72cd0 (sk_lock-AF_INET6){+.+.}-{0:0}, at: sctp_sock_dump+0x38/0x200
>
> stack backtrace:
> CPU: 0 UID: 0 PID: 215 Comm: ss Not tainted 6.18.0-rc1-00093-g7f864458e9a6 #5 PREEMPT(voluntary)
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
> Call Trace:
> <TASK>
> dump_stack_lvl+0x5d/0x90
> lockdep_rcu_suspicious.cold+0x4e/0xa3
> inet_sctp_diag_fill.isra.0+0x4b1/0x5d0
> sctp_sock_dump+0x131/0x200
> sctp_transport_traverse_process+0x170/0x1b0
> ? __pfx_sctp_sock_filter+0x10/0x10
> ? __pfx_sctp_sock_dump+0x10/0x10
> sctp_diag_dump+0x103/0x140
> __inet_diag_dump+0x70/0xb0
> netlink_dump+0x148/0x490
> __netlink_dump_start+0x1f3/0x2b0
> inet_diag_handler_cmd+0xcd/0x100
> ? __pfx_inet_diag_dump_start+0x10/0x10
> ? __pfx_inet_diag_dump+0x10/0x10
> ? __pfx_inet_diag_dump_done+0x10/0x10
> sock_diag_rcv_msg+0x18e/0x320
> ? __pfx_sock_diag_rcv_msg+0x10/0x10
> netlink_rcv_skb+0x4d/0x100
> netlink_unicast+0x1d7/0x2b0
> netlink_sendmsg+0x203/0x450
> ____sys_sendmsg+0x30c/0x340
> ___sys_sendmsg+0x94/0xf0
> __sys_sendmsg+0x83/0xf0
> do_syscall_64+0xbb/0x390
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> ...
> </TASK>
>
> Fixes: 8f840e47f190 ("sctp: add the sctp_diag.c file")
> Signed-off-by: Stefan Wiehler <stefan.wiehler@nokia.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
On Tue, Oct 28, 2025 at 05:12:26PM +0100, Stefan Wiehler wrote:
> With CONFIG_PROVE_RCU_LIST=y and by executing
>
> $ netcat -l --sctp &
> $ netcat --sctp localhost &
> $ ss --sctp
>
> one can trigger the following Lockdep-RCU splat(s):
...
> diff --git a/net/sctp/diag.c b/net/sctp/diag.c
> index 996c2018f0e6..1a8761f87bf1 100644
> --- a/net/sctp/diag.c
> +++ b/net/sctp/diag.c
> @@ -73,19 +73,23 @@ static int inet_diag_msg_sctpladdrs_fill(struct sk_buff *skb,
> struct nlattr *attr;
> void *info = NULL;
>
> + rcu_read_lock();
> list_for_each_entry_rcu(laddr, address_list, list)
> addrcnt++;
> + rcu_read_unlock();
>
> attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt);
> if (!attr)
> return -EMSGSIZE;
>
> info = nla_data(attr);
Hi Stefan,
If the number of entries in the list increases while rcu_read_lock() is not
held, between when addrcnt is calculated and when info is written, can an
overrun occur while writing info?
> + rcu_read_lock();
> list_for_each_entry_rcu(laddr, address_list, list) {
> memcpy(info, &laddr->a, sizeof(laddr->a));
> memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a));
> info += addrlen;
> }
> + rcu_read_unlock();
>
> return 0;
> }
> --
> 2.51.0
>
On Wed, Oct 29, 2025 at 04:38:44PM +0000, Simon Horman wrote:
> On Tue, Oct 28, 2025 at 05:12:26PM +0100, Stefan Wiehler wrote:
> > With CONFIG_PROVE_RCU_LIST=y and by executing
> >
> > $ netcat -l --sctp &
> > $ netcat --sctp localhost &
> > $ ss --sctp
> >
> > one can trigger the following Lockdep-RCU splat(s):
>
> ...
>
> > diff --git a/net/sctp/diag.c b/net/sctp/diag.c
> > index 996c2018f0e6..1a8761f87bf1 100644
> > --- a/net/sctp/diag.c
> > +++ b/net/sctp/diag.c
> > @@ -73,19 +73,23 @@ static int inet_diag_msg_sctpladdrs_fill(struct sk_buff *skb,
> > struct nlattr *attr;
> > void *info = NULL;
> >
> > + rcu_read_lock();
> > list_for_each_entry_rcu(laddr, address_list, list)
> > addrcnt++;
> > + rcu_read_unlock();
> >
> > attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt);
> > if (!attr)
> > return -EMSGSIZE;
> >
> > info = nla_data(attr);
>
> Hi Stefan,
>
> If the number of entries in the list increases while rcu_read_lock() is not
> held, between when addrcnt is calculated and when info is written, can an
> overrun occur while writing info?
Oops, I now see that is addressed in patch 2/3.
Sorry for not reading that before sending my previous email.
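
[Editor's note] For reference, a minimal sketch (illustrative only, and not claimed to be what patch 2/3 actually does) of one way to make the second traversal robust against the list growing between the two walks: bound the copy loop by the addrcnt that was passed to nla_reserve(), so the writes can never run past the reserved attribute.

	/* Copy at most the addrcnt entries that were reserved above. */
	rcu_read_lock();
	list_for_each_entry_rcu(laddr, address_list, list) {
		if (!addrcnt)
			break;	/* list grew after counting; stop at the reserved size */
		memcpy(info, &laddr->a, sizeof(laddr->a));
		memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a));
		info += addrlen;
		addrcnt--;
	}
	rcu_read_unlock();

This only illustrates guarding against an overrun; it is not meant as a complete fix.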
>
> > + rcu_read_lock();
> > list_for_each_entry_rcu(laddr, address_list, list) {
> > memcpy(info, &laddr->a, sizeof(laddr->a));
> > memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a));
> > info += addrlen;
> > }
> > + rcu_read_unlock();
> >
> > return 0;
> > }
> > --
> > 2.51.0
> >