From nobody Sat Oct 11 10:01:33 2025
From: Paolo Abeni
To: mptcp@lists.linux.dev
Subject: [PATCH v4 mptcp-next 8/8] mptcp: leverage the backlog for RX packet processing
Date: Fri, 3 Oct 2025 16:01:46 +0200
Message-ID: 
In-Reply-To: 
References: 

When the msk socket is owned or the msk receive buffer is full, move the
incoming skbs into an msk-level backlog list. This avoids traversing the
joined subflows and acquiring the subflow-level socket lock at reception
time, improving RX performance.

The skbs in the backlog keep using the incoming subflow receive space, to
allow backpressure on the subflow flow control. When processing the
backlog, skbs exceeding the msk receive space are not dropped; they are
instead re-inserted into the backlog, as dropping packets already acked at
the TCP level is explicitly discouraged by the RFC and would corrupt the
data stream for fallback sockets.

As a drawback, special care is needed to avoid adding skbs to the backlog
of a closed msk, and to avoid leaving dangling references in the backlog
at subflow closing time.

Note that we can't use sk_backlog, as that list is processed before
release_cb() and the latter can release and re-acquire the msk-level
socket spin lock. That would cause msk-level OoO packets, which in turn
are fatal in case of fallback.

Signed-off-by: Paolo Abeni
---
 net/mptcp/protocol.c | 204 ++++++++++++++++++++++++++++---------------
 net/mptcp/protocol.h |   5 +-
 2 files changed, 136 insertions(+), 73 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index e354f16f4a79f..1fcdb26b8e0a0 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -654,8 +654,35 @@ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
 	}
 }
 
+static void __mptcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *tail = NULL;
+	bool fragstolen;
+	int delta;
+
+	if (unlikely(sk->sk_state == TCP_CLOSE))
+		kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+
+	/* Try to coalesce with the last skb in our backlog */
+	if (!list_empty(&msk->backlog_list))
+		tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
+
+	if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)->end_seq &&
+	    skb->sk == tail->sk &&
+	    __mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
+		atomic_sub(skb->truesize - delta, &skb->sk->sk_rmem_alloc);
+		kfree_skb_partial(skb, fragstolen);
+		WRITE_ONCE(msk->backlog_len, msk->backlog_len + delta);
+		return;
+	}
+
+	list_add_tail(&skb->list, &msk->backlog_list);
+	WRITE_ONCE(msk->backlog_len, msk->backlog_len + skb->truesize);
+}
+
 static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
-					   struct sock *ssk)
+					   struct sock *ssk, bool own_msk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = (struct sock *)msk;
@@ -671,9 +698,6 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 		struct sk_buff *skb;
 		bool fin;
 
-		if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
-			break;
-
 		/* try to move as much data as available */
 		map_remaining = subflow->map_data_len -
 				mptcp_subflow_get_map_offset(subflow);
@@ -701,10 +725,18 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 			int bmem;
 
 			bmem = mptcp_init_skb(ssk, skb, offset, len);
-			skb->sk = NULL;
-			sk_forward_alloc_add(sk, bmem);
-			atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
-			ret = __mptcp_move_skb(sk, skb) || ret;
+			if (own_msk)
+				sk_forward_alloc_add(sk, bmem);
+			else
+				msk->borrowed_mem += bmem;
+
+			if (own_msk && sk_rmem_alloc_get(sk) < sk->sk_rcvbuf) {
+				skb->sk = NULL;
+				atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
+				ret |= __mptcp_move_skb(sk, skb);
+			} else {
+				__mptcp_add_backlog(sk, skb);
+			}
 			seq += len;
 
 			if (unlikely(map_remaining < len)) {
@@ -823,7 +855,7 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
 	struct sock *sk = (struct sock *)msk;
 	bool moved;
 
-	moved = __mptcp_move_skbs_from_subflow(msk, ssk);
+	moved = __mptcp_move_skbs_from_subflow(msk, ssk, true);
 	__mptcp_ofo_queue(msk);
 	if (unlikely(ssk->sk_err))
 		__mptcp_subflow_error_report(sk, ssk);
@@ -838,18 +870,10 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
 	return moved;
 }
 
-static void __mptcp_data_ready(struct sock *sk, struct sock *ssk)
-{
-	struct mptcp_sock *msk = mptcp_sk(sk);
-
-	/* Wake-up the reader only for in-sequence data */
-	if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
-		sk->sk_data_ready(sk);
-}
-
 void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+	struct mptcp_sock *msk = mptcp_sk(sk);
 
 	/* The peer can send data while we are shutting down this
 	 * subflow at msk destruction time, but we must avoid enqueuing
@@ -859,10 +883,13 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 		return;
 
 	mptcp_data_lock(sk);
-	if (!sock_owned_by_user(sk))
-		__mptcp_data_ready(sk, ssk);
-	else
-		__set_bit(MPTCP_DEQUEUE, &mptcp_sk(sk)->cb_flags);
+	if (!sock_owned_by_user(sk)) {
+		/* Wake-up the reader only for in-sequence data */
+		if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
+			sk->sk_data_ready(sk);
+	} else {
+		__mptcp_move_skbs_from_subflow(msk, ssk, false);
+	}
 	mptcp_data_unlock(sk);
 }
 
@@ -2096,60 +2123,61 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
 	msk->rcvq_space.time = mstamp;
 }
 
-static struct mptcp_subflow_context *
-__mptcp_first_ready_from(struct mptcp_sock *msk,
-			 struct mptcp_subflow_context *subflow)
+static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delta)
 {
-	struct mptcp_subflow_context *start_subflow = subflow;
-
-	while (!READ_ONCE(subflow->data_avail)) {
-		subflow = mptcp_next_subflow(msk, subflow);
-		if (subflow == start_subflow)
-			return NULL;
-	}
-	return subflow;
-}
-
-static bool __mptcp_move_skbs(struct sock *sk)
-{
-	struct mptcp_subflow_context *subflow;
+	struct sk_buff *skb = list_first_entry(skbs, struct sk_buff, list);
 	struct mptcp_sock *msk = mptcp_sk(sk);
-	bool ret = false;
-
-	if (list_empty(&msk->conn_list))
-		return false;
-
-	subflow = list_first_entry(&msk->conn_list,
-				   struct mptcp_subflow_context, node);
-	for (;;) {
-		struct sock *ssk;
-		bool slowpath;
+	bool moved = false;
 
-		/*
-		 * As an optimization avoid traversing the subflows list
-		 * and ev. acquiring the subflow socket lock before baling out
-		 */
+	while (1) {
+		/* If the msk recvbuf is full stop, don't drop */
 		if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
 			break;
 
-		subflow = __mptcp_first_ready_from(msk, subflow);
-		if (!subflow)
-			break;
+		prefetch(skb->next);
+		list_del(&skb->list);
+		*delta += skb->truesize;
 
-		ssk = mptcp_subflow_tcp_sock(subflow);
-		slowpath = lock_sock_fast(ssk);
-		ret = __mptcp_move_skbs_from_subflow(msk, ssk) || ret;
-		if (unlikely(ssk->sk_err))
-			__mptcp_error_report(sk);
-		unlock_sock_fast(ssk, slowpath);
+		/* Release the memory allocated on the incoming subflow before
+		 * moving it to the msk
+		 */
+		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+		skb->sk = NULL;
+		moved |= __mptcp_move_skb(sk, skb);
+		if (list_empty(skbs))
+			break;
 
-		subflow = mptcp_next_subflow(msk, subflow);
+		skb = list_first_entry(skbs, struct sk_buff, list);
 	}
 
 	__mptcp_ofo_queue(msk);
-	if (ret)
+	if (moved)
 		mptcp_check_data_fin((struct sock *)msk);
-	return ret;
+	return moved;
+}
+
+static bool mptcp_move_skbs(struct sock *sk)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	bool moved = false;
+	LIST_HEAD(skbs);
+	u32 delta = 0;
+
+	mptcp_data_lock(sk);
+	while (!list_empty(&msk->backlog_list)) {
+		list_splice_init(&msk->backlog_list, &skbs);
+		mptcp_data_unlock(sk);
+		moved |= __mptcp_move_skbs(sk, &skbs, &delta);
+
+		mptcp_data_lock(sk);
+		if (!list_empty(&skbs)) {
+			list_splice(&skbs, &msk->backlog_list);
+			break;
+		}
+	}
+	WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
+	mptcp_data_unlock(sk);
+	return moved;
 }
 
 static unsigned int mptcp_inq_hint(const struct sock *sk)
@@ -2215,7 +2243,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 
 		copied += bytes_read;
 
-		if (skb_queue_empty(&sk->sk_receive_queue) && __mptcp_move_skbs(sk))
+		if (!list_empty(&msk->backlog_list) && mptcp_move_skbs(sk))
 			continue;
 
 		/* only the MPTCP socket status is relevant here. The exit
@@ -2520,6 +2548,9 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		     struct mptcp_subflow_context *subflow)
 {
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *skb;
+
 	/* The first subflow can already be closed and still in the list */
 	if (subflow->close_event_done)
 		return;
@@ -2529,6 +2560,18 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	if (sk->sk_state == TCP_ESTABLISHED)
 		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
 
+	/* Remove any reference from the backlog to this ssk, accounting the
+	 * related skb directly to the main socket
+	 */
+	list_for_each_entry(skb, &msk->backlog_list, list) {
+		if (skb->sk != ssk)
+			continue;
+
+		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+		atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+		skb->sk = sk;
+	}
+
 	/* subflow aborted before reaching the fully_established status
 	 * attempt the creation of the next subflow
 	 */
@@ -2761,8 +2804,11 @@ static void mptcp_do_fastclose(struct sock *sk)
 {
 	struct mptcp_subflow_context *subflow, *tmp;
 	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *skb;
 
 	mptcp_set_state(sk, TCP_CLOSE);
+	list_for_each_entry(skb, &msk->backlog_list, list)
+		kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
 	mptcp_for_each_subflow_safe(msk, subflow, tmp)
 		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
 				  subflow, MPTCP_CF_FASTCLOSE);
@@ -2820,6 +2866,7 @@ static void __mptcp_init_sock(struct sock *sk)
 	INIT_LIST_HEAD(&msk->conn_list);
 	INIT_LIST_HEAD(&msk->join_list);
 	INIT_LIST_HEAD(&msk->rtx_queue);
+	INIT_LIST_HEAD(&msk->backlog_list);
 	INIT_WORK(&msk->work, mptcp_worker);
 	msk->out_of_order_queue = RB_ROOT;
 	msk->first_pending = NULL;
@@ -3199,9 +3246,13 @@ static void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
 {
 	struct mptcp_subflow_context *subflow, *tmp;
 	struct sock *sk = (struct sock *)msk;
+	struct sk_buff *skb;
 
 	__mptcp_clear_xmit(sk);
 
+	list_for_each_entry(skb, &msk->backlog_list, list)
+		kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+
 	/* join list will be eventually flushed (with rst) at sock lock release time */
 	mptcp_for_each_subflow_safe(msk, subflow, tmp)
 		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
@@ -3451,23 +3502,29 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 
 #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
 				      BIT(MPTCP_RETRANSMIT) | \
-				      BIT(MPTCP_FLUSH_JOIN_LIST) | \
-				      BIT(MPTCP_DEQUEUE))
+				      BIT(MPTCP_FLUSH_JOIN_LIST))
 
 /* processes deferred events and flush wmem */
 static void mptcp_release_cb(struct sock *sk)
 	__must_hold(&sk->sk_lock.slock)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
+	u32 delta = 0;
 
 	for (;;) {
 		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
-		struct list_head join_list;
+		LIST_HEAD(join_list);
+		LIST_HEAD(skbs);
+
+		sk_forward_alloc_add(sk, msk->borrowed_mem);
+		msk->borrowed_mem = 0;
+
+		if (sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
+			list_splice_init(&msk->backlog_list, &skbs);
 
-		if (!flags)
+		if (!flags && list_empty(&skbs))
 			break;
 
-		INIT_LIST_HEAD(&join_list);
 		list_splice_init(&msk->join_list, &join_list);
 
 		/* the following actions acquire the subflow socket lock
@@ -3486,7 +3543,8 @@ static void mptcp_release_cb(struct sock *sk)
 			__mptcp_push_pending(sk, 0);
 		if (flags & BIT(MPTCP_RETRANSMIT))
 			__mptcp_retrans(sk);
-		if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
+		if (!list_empty(&skbs) &&
+		    __mptcp_move_skbs(sk, &skbs, &delta)) {
			/* notify ack seq update */
 			mptcp_cleanup_rbuf(msk, 0);
 			sk->sk_data_ready(sk);
@@ -3494,7 +3552,9 @@ static void mptcp_release_cb(struct sock *sk)
 
 		cond_resched();
 		spin_lock_bh(&sk->sk_lock.slock);
+		list_splice(&skbs, &msk->backlog_list);
 	}
+	WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
 
 	if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
 		__mptcp_clean_una_wakeup(sk);
@@ -3726,7 +3786,7 @@ static int mptcp_ioctl(struct sock *sk, int cmd, int *karg)
 		return -EINVAL;
 
 	lock_sock(sk);
-	if (__mptcp_move_skbs(sk))
+	if (mptcp_move_skbs(sk))
 		mptcp_cleanup_rbuf(msk, 0);
 	*karg = mptcp_inq_hint(sk);
 	release_sock(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 46d8432c72ee7..c9c6582b4e1c4 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -124,7 +124,6 @@
 #define MPTCP_FLUSH_JOIN_LIST	5
 #define MPTCP_SYNC_STATE	6
 #define MPTCP_SYNC_SNDBUF	7
-#define MPTCP_DEQUEUE		8
 
 struct mptcp_skb_cb {
 	u64 map_seq;
@@ -301,6 +300,7 @@ struct mptcp_sock {
 	u32		last_ack_recv;
 	unsigned long	timer_ival;
 	u32		token;
+	u32		borrowed_mem;
 	unsigned long	flags;
 	unsigned long	cb_flags;
 	bool		recovery;	/* closing subflow write queue reinjected */
@@ -358,6 +358,8 @@ struct mptcp_sock {
 						 * allow_infinite_fallback and
 						 * allow_join
 						 */
+	struct list_head backlog_list;	/* protected by the data lock */
+	u32		backlog_len;
 };
 
 #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
@@ -408,6 +410,7 @@ static inline int mptcp_space_from_win(const struct sock *sk, int win)
 static inline int __mptcp_space(const struct sock *sk)
 {
 	return mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) -
+					READ_ONCE(mptcp_sk(sk)->backlog_len) -
 					sk_rmem_alloc_get(sk));
 }
 
-- 
2.51.0
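
[Editor's illustration, not part of the patch] The backlog accounting described
in the changelog can be summarised with a small user-space sketch. It is only a
model of the idea, not kernel code: the fake_skb structure, the backlog,
backlog_len and rmem_alloc variables and the fixed rcvbuf budget are all
invented stand-ins, and coalescing is reduced to merging adjacent sequence
ranges. It shows how each packet's truesize is charged to the backlog counter on
enqueue, how adjacent data coalesces with the tail entry, and how draining stops
(without dropping) once the receive budget is exceeded, releasing only the
drained memory via delta.

/* Minimal user-space model of the msk backlog accounting (illustrative only).
 * All names below are invented; they mirror the idea of __mptcp_add_backlog()
 * and __mptcp_move_skbs(), not the kernel API.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_skb {
	unsigned int map_seq;		/* first byte of the mapped data */
	unsigned int end_seq;		/* one past the last mapped byte */
	unsigned int truesize;		/* memory charged for this packet */
	struct fake_skb *next;
};

static struct fake_skb *backlog, *backlog_tail;
static unsigned int backlog_len;		/* plays the role of msk->backlog_len */
static unsigned int rmem_alloc;			/* plays the role of sk_rmem_alloc */
static const unsigned int rcvbuf = 2048;	/* receive buffer budget */

/* Enqueue into the backlog, coalescing with the tail when data is adjacent. */
static void add_backlog(unsigned int seq, unsigned int len, unsigned int truesize)
{
	if (backlog_tail && backlog_tail->end_seq == seq) {
		backlog_tail->end_seq += len;
		backlog_tail->truesize += truesize;
		backlog_len += truesize;	/* charge the extra memory brought by the new data */
		return;
	}

	struct fake_skb *skb = calloc(1, sizeof(*skb));

	skb->map_seq = seq;
	skb->end_seq = seq + len;
	skb->truesize = truesize;
	if (backlog_tail)
		backlog_tail->next = skb;
	else
		backlog = skb;
	backlog_tail = skb;
	backlog_len += truesize;
}

/* Drain the backlog while the receive budget allows it; leftovers are kept
 * queued for a later pass instead of being dropped.
 */
static void move_skbs(void)
{
	unsigned int delta = 0;

	while (backlog && rmem_alloc <= rcvbuf) {
		struct fake_skb *skb = backlog;

		backlog = skb->next;
		if (!backlog)
			backlog_tail = NULL;
		delta += skb->truesize;
		rmem_alloc += skb->truesize;	/* now charged to the receiver */
		printf("delivered seq %u..%u\n", skb->map_seq, skb->end_seq);
		free(skb);
	}
	backlog_len -= delta;			/* memory released from the backlog */
}

int main(void)
{
	add_backlog(0, 1000, 1500);
	add_backlog(1000, 1000, 1500);	/* adjacent: coalesced with the tail */
	add_backlog(5000, 1000, 1500);	/* gap: kept as a separate entry */
	printf("backlog_len before drain: %u\n", backlog_len);

	move_skbs();
	printf("backlog_len after drain: %u, rmem_alloc: %u\n",
	       backlog_len, rmem_alloc);
	return 0;
}

The property mirrored here is that memory stays accounted somewhere at all
times: it moves from the backlog counter to the receive-side counter when a
packet is delivered, and packets that do not fit simply remain queued for a
later pass, matching the rule that data already acked at the TCP level is never
dropped.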