llc: fix resource exhaustion in llc_conn_handler()
llc_conn_handler() does not check the accept queue limit before
creating a new socket for an incoming connection. An attacker can
therefore flood a listening socket with SABME PDUs and exhaust
system memory through unbounded socket creation.
The issue is similar to the TCP SYN flood problem, but LLC lacks
the protection mechanisms that TCP has (like SYN cookies and
accept queue limits).
Add an sk_acceptq_is_full() check before creating the new socket,
and call sk_acceptq_added() after successful socket creation so
the accept queue length is tracked correctly. This ensures the
backlog limit set by listen() is respected.
Fixes: d389424e00f90 ("[LLC]: Fix the accept path")
Signed-off-by: Kery Qi <qikeyu2017@gmail.com>
---
net/llc/llc_conn.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
index 5c0ac243b248..9296b5d6b04a 100644
--- a/net/llc/llc_conn.c
+++ b/net/llc/llc_conn.c
@@ -802,10 +802,15 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb)
* in the newly created struct sock private area. -acme
*/
if (unlikely(sk->sk_state == TCP_LISTEN)) {
- struct sock *newsk = llc_create_incoming_sock(sk, skb->dev,
- &saddr, &daddr);
+ struct sock *newsk;
+
+ if (sk_acceptq_is_full(sk))
+ goto drop_unlock;
+ newsk = llc_create_incoming_sock(sk, skb->dev,
+ &saddr, &daddr);
if (!newsk)
goto drop_unlock;
+ sk_acceptq_added(sk);
skb_set_owner_r(skb, newsk);
} else {
/*
--
2.34.1
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
llc: fix resource exhaustion in llc_conn_handler()
This patch adds accept queue limit checking to llc_conn_handler() to
prevent resource exhaustion from unbounded socket creation: an
sk_acceptq_is_full() check before creating the socket, and an
sk_acceptq_added() call after successful creation.
> diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
> index 5c0ac243b248..9296b5d6b04a 100644
> --- a/net/llc/llc_conn.c
> +++ b/net/llc/llc_conn.c
> @@ -802,10 +802,15 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb)
> * in the newly created struct sock private area. -acme
> */
> if (unlikely(sk->sk_state == TCP_LISTEN)) {
> - struct sock *newsk = llc_create_incoming_sock(sk, skb->dev,
> - &saddr, &daddr);
> + struct sock *newsk;
> +
> + if (sk_acceptq_is_full(sk))
> + goto drop_unlock;
> + newsk = llc_create_incoming_sock(sk, skb->dev,
> + &saddr, &daddr);
> if (!newsk)
> goto drop_unlock;
> + sk_acceptq_added(sk);
> skb_set_owner_r(skb, newsk);
After sk_acceptq_added(sk) increments the accept queue counter, this code
continues to llc_conn_rcv() or sk_add_backlog(). If sk_add_backlog() fails
later in the function (around line 833), the code goes to drop_unlock which
only frees the skb:
drop_unlock:
	kfree_skb(skb);
	goto out;
Can newsk leak on this error path? The socket was created by
llc_create_incoming_sock() and added to the SAP via llc_sap_add_socket(),
but on sk_add_backlog() failure it appears never to be freed.
Additionally, sk_acceptq_added() incremented sk->sk_ack_backlog, but
drop_unlock does not call sk_acceptq_removed(). Does this cause the accept
queue counter to become permanently inflated?
For comparison, TCP handles this atomically in inet_csk_reqsk_queue_add()
where sk_acceptq_added() is called only after the socket is successfully
added to the accept queue, and inet_child_forget() cleans up on failure.
--
pw-bot: cr