From nobody Fri Oct 31 16:20:38 2025
From: Paolo Abeni
To: mptcp@lists.linux.dev
Cc: Mat Martineau, geliang@kernel.org
Subject: [PATCH RESEND v7 mptcp-next 1/4] mptcp: handle first subflow closing consistently
Date: Mon, 27 Oct 2025 15:57:59 +0100
X-Mailing-List: mptcp@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
 charset="utf-8"

Currently, as soon as the PM closes a subflow, the msk stops accepting
data from it, even if the TCP socket could formally still be open in the
incoming direction, with the notable exception of the first subflow.

The root cause of this behavior is that the code currently piggybacks
two separate semantics on the subflow->disposable bit: the subflow
context must be released, and the subflow must stop accepting incoming
data. The first subflow is never disposed, so it also never stops
accepting incoming data.

Use a separate bit to mark the latter status, and set such bit in
__mptcp_close_ssk() for all subflows. Beyond making the per-subflow
behaviour more consistent, this will also simplify the next patch.

Signed-off-by: Paolo Abeni
---
 net/mptcp/protocol.c | 14 +++++++++-----
 net/mptcp/protocol.h |  3 ++-
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f4e3d0be7c87..74be417be980 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -842,10 +842,10 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
 	/* The peer can send data while we are shutting down this
-	 * subflow at msk destruction time, but we must avoid enqueuing
+	 * subflow at subflow destruction time, but we must avoid enqueuing
	 * more data to the msk receive queue
	 */
-	if (unlikely(subflow->disposable))
+	if (unlikely(subflow->closing))
		return;
 
	mptcp_data_lock(sk);
@@ -2429,6 +2429,13 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
	struct mptcp_sock *msk = mptcp_sk(sk);
	bool dispose_it, need_push = false;
 
+	/* Do not pass RX data to the msk, even if the subflow socket is not
+	 * going to be freed (i.e. even for the first subflow on graceful
+	 * subflow close).
+	 */
+	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
+	subflow->closing = 1;
+
	/* If the first subflow moved to a close state before accept, e.g. due
	 * to an incoming reset or listener shutdown, the subflow socket is
	 * already deleted by inet_child_forget() and the mptcp socket can't
@@ -2439,7 +2446,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
		/* ensure later check in mptcp_worker() will dispose the msk */
		sock_set_flag(sk, SOCK_DEAD);
		mptcp_set_close_tout(sk, tcp_jiffies32 - (mptcp_close_timeout(sk) + 1));
-		lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
		mptcp_subflow_drop_ctx(ssk);
		goto out_release;
	}
@@ -2448,8 +2454,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
	if (dispose_it)
		list_del(&subflow->node);
 
-	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
-
	if ((flags & MPTCP_CF_FASTCLOSE) && !__mptcp_check_fallback(msk)) {
		/* be sure to force the tcp_close path
		 * to generate the egress reset
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index cd6350073144..9f7e5f2c964d 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -536,12 +536,13 @@ struct mptcp_subflow_context {
		send_infinite_map : 1,
		remote_key_valid : 1,	/* received the peer key from */
		disposable : 1,		/* ctx can be free at ulp release time */
+		closing : 1,		/* must not pass rx data to msk anymore */
		stale : 1,		/* unable to snd/rcv data, do not use for xmit */
		valid_csum_seen : 1,	/* at least one csum validated */
		is_mptfo : 1,		/* subflow is doing TFO */
		close_event_done : 1,	/* has done the post-closed part */
		mpc_drop : 1,		/* the MPC option has been dropped in a rtx */
-		__unused : 9;
+		__unused : 8;
	bool	data_avail;
	bool	scheduled;
	bool	pm_listener;	/* a listener managed by the kernel PM?
	 */

-- 
2.51.0

From nobody Fri Oct 31 16:20:38 2025
From: Paolo Abeni
To: mptcp@lists.linux.dev
Cc: Mat Martineau, geliang@kernel.org
Subject: [PATCH RESEND v7 mptcp-next 2/4] mptcp: borrow forward memory from subflow
Date: Mon, 27 Oct 2025 15:58:00 +0100
Message-ID: <501fc84688c32f846e262a9fd44683afa73ea509.1761576117.git.pabeni@redhat.com>
X-Mailing-List: mptcp@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

In the MPTCP receive path, we release the subflow-allocated fwd memory,
just to allocate it again shortly after for the msk. That could increase
the failure chances, especially once backlog processing is added: other
actions could consume the just-released memory before the msk socket has
a chance to do the rcv allocation.

Replace the skb_orphan() call with an open-coded variant that explicitly
borrows the fwd memory from the subflow socket instead of releasing it.

The borrowed memory does not have PAGE_SIZE granularity; rounding to the
page size would make the fwd allocated memory higher than what is
strictly required and could make the incoming subflow fwd mem
consistently negative. Instead, keep track of the accumulated frag and
borrow the full page at subflow close time.

This allows removing the last drop in the TCP to MPTCP transition, and
the associated, now unused, MIB.

Signed-off-by: Paolo Abeni
---
 net/mptcp/fastopen.c |  4 +++-
 net/mptcp/mib.c      |  1 -
 net/mptcp/mib.h      |  1 -
 net/mptcp/protocol.c | 23 +++++++++++++++--------
 net/mptcp/protocol.h | 23 +++++++++++++++++++++++
 5 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
index b9e451197902..82ec15bcfd7f 100644
--- a/net/mptcp/fastopen.c
+++ b/net/mptcp/fastopen.c
@@ -32,7 +32,8 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
	/* dequeue the skb from sk receive queue */
	__skb_unlink(skb, &ssk->sk_receive_queue);
	skb_ext_reset(skb);
-	skb_orphan(skb);
+
+	mptcp_subflow_lend_fwdmem(subflow, skb);
 
	/* We copy the fastopen data, but that don't belong to the mptcp sequence
	 * space, need to offset it in the subflow sequence, see mptcp_subflow_get_map_offset()
@@ -50,6 +51,7 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
	mptcp_data_lock(sk);
	DEBUG_NET_WARN_ON_ONCE(sock_owned_by_user_nocheck(sk));
 
+	mptcp_borrow_fwdmem(sk, skb);
	skb_set_owner_r(skb, sk);
	__skb_queue_tail(&sk->sk_receive_queue, skb);
	mptcp_sk(sk)->bytes_received += skb->len;
diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
index 171643815076..f23fda0c55a7 100644
--- a/net/mptcp/mib.c
+++ b/net/mptcp/mib.c
@@ -71,7 +71,6 @@ static const struct snmp_mib mptcp_snmp_list[] = {
	SNMP_MIB_ITEM("MPFastcloseRx", MPTCP_MIB_MPFASTCLOSERX),
	SNMP_MIB_ITEM("MPRstTx", MPTCP_MIB_MPRSTTX),
	SNMP_MIB_ITEM("MPRstRx", MPTCP_MIB_MPRSTRX),
-	SNMP_MIB_ITEM("RcvPruned", MPTCP_MIB_RCVPRUNED),
	SNMP_MIB_ITEM("SubflowStale", MPTCP_MIB_SUBFLOWSTALE),
	SNMP_MIB_ITEM("SubflowRecover", MPTCP_MIB_SUBFLOWRECOVER),
	SNMP_MIB_ITEM("SndWndShared", MPTCP_MIB_SNDWNDSHARED),
diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
index a1d3e9369fbb..812218b5ed2b 100644
--- a/net/mptcp/mib.h
+++ b/net/mptcp/mib.h
@@ -70,7 +70,6 @@ enum linux_mptcp_mib_field {
	MPTCP_MIB_MPFASTCLOSERX,	/* Received a MP_FASTCLOSE */
	MPTCP_MIB_MPRSTTX,		/* Transmit a MP_RST */
	MPTCP_MIB_MPRSTRX,		/* Received a MP_RST */
-	MPTCP_MIB_RCVPRUNED,		/* Incoming packet dropped due to memory limit */
	MPTCP_MIB_SUBFLOWSTALE,		/* Subflows entered 'stale' status */
	MPTCP_MIB_SUBFLOWRECOVER,	/* Subflows returned to active status after being stale */
	MPTCP_MIB_SNDWNDSHARED,		/* Subflow snd wnd is overridden by msk's one */
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 74be417be980..f6d96cb01e00 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -349,7 +349,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
 static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
			   int copy_len)
 {
-	const struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
	bool has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
 
	/* the skb map_seq accounts for the skb offset:
@@ -374,11 +374,7 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
	struct mptcp_sock *msk = mptcp_sk(sk);
	struct sk_buff *tail;
 
-	/* try to fetch required memory from subflow */
-	if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
-		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
-		goto drop;
-	}
+	mptcp_borrow_fwdmem(sk, skb);
 
	if (MPTCP_SKB_CB(skb)->map_seq == msk->ack_seq) {
		/* in sequence */
@@ -400,7 +396,6 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
	 * will retransmit as needed, if needed.
	 */
	MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
-drop:
	mptcp_drop(sk, skb);
	return false;
 }
@@ -701,7 +696,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
		size_t len = skb->len - offset;
 
		mptcp_init_skb(ssk, skb, offset, len);
-		skb_orphan(skb);
+		mptcp_subflow_lend_fwdmem(subflow, skb);
		ret = __mptcp_move_skb(sk, skb) || ret;
		seq += len;
 
@@ -2428,6 +2423,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 {
	struct mptcp_sock *msk = mptcp_sk(sk);
	bool dispose_it, need_push = false;
+	int fwd_remaining;
 
	/* Do not pass RX data to the msk, even if the subflow socket is not
	 * going to be freed (i.e. even for the first subflow on graceful
@@ -2436,6 +2432,17 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
	subflow->closing = 1;
 
+	/* Borrow the fwd allocated page left-over; fwd memory for the subflow
+	 * could be negative at this point, but will reach zero soon, when
+	 * the data allocated using such fragment will be freed.
+	 */
+	if (subflow->lent_mem_frag) {
+		fwd_remaining = PAGE_SIZE - subflow->lent_mem_frag;
+		sk_forward_alloc_add(sk, fwd_remaining);
+		sk_forward_alloc_add(ssk, -fwd_remaining);
+		subflow->lent_mem_frag = 0;
+	}
+
	/* If the first subflow moved to a close state before accept, e.g.
due
	 * to an incoming reset or listener shutdown, the subflow socket is
	 * already deleted by inet_child_forget() and the mptcp socket can't
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 9f7e5f2c964d..80d520888235 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -547,6 +547,7 @@ struct mptcp_subflow_context {
	bool	scheduled;
	bool	pm_listener;	/* a listener managed by the kernel PM? */
	bool	fully_established;	/* path validated */
+	u32	lent_mem_frag;
	u32	remote_nonce;
	u64	thmac;
	u32	local_nonce;
@@ -646,6 +647,28 @@ mptcp_send_active_reset_reason(struct sock *sk)
	tcp_send_active_reset(sk, GFP_ATOMIC, reason);
 }
 
+static inline void mptcp_borrow_fwdmem(struct sock *sk, struct sk_buff *skb)
+{
+	struct sock *ssk = skb->sk;
+
+	/* The subflow just lent the skb fwd memory, and we know that the skb
+	 * is only accounted on the incoming subflow rcvbuf.
+	 */
+	skb->sk = NULL;
+	sk_forward_alloc_add(sk, skb->truesize);
+	atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
+}
+
+static inline void
+mptcp_subflow_lend_fwdmem(struct mptcp_subflow_context *subflow,
+			  struct sk_buff *skb)
+{
+	int frag = (subflow->lent_mem_frag + skb->truesize) & (PAGE_SIZE - 1);
+
+	skb->destructor = NULL;
+	subflow->lent_mem_frag = frag;
+}
+
 static inline u64
 mptcp_subflow_get_map_offset(const struct mptcp_subflow_context *subflow)
 {
-- 
2.51.0

From nobody Fri Oct 31 16:20:38 2025
From: Paolo Abeni
To: mptcp@lists.linux.dev
Cc: Mat Martineau, geliang@kernel.org
Subject: [PATCH RESEND v7 mptcp-next 3/4] mptcp: introduce mptcp-level backlog
Date: Mon, 27 Oct 2025 15:58:01 +0100
Message-ID: <9757b415fac6235e6037f649e75cf097aa603898.1761576117.git.pabeni@redhat.com>
X-Mailing-List: mptcp@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

We will soon use the backlog for incoming data processing. MPTCP can't
leverage the sk_backlog, as the latter is processed before the release
callback, and such callback for MPTCP releases and re-acquires the
socket spinlock, breaking the sk_backlog processing assumption.

Add an skb backlog list inside the mptcp sock struct, and implement
basic helpers to transfer packets to such list and to purge it.
Packets in the backlog are memory-accounted and still use the incoming
subflow receive memory, to allow back-pressure. The backlog size is
implicitly bounded by the sum of the subflows' rcvbuf. When a subflow
is closed, references from the backlog to such sock are removed.

No packet is currently added to the backlog, so no functional change is
intended here.

Signed-off-by: Paolo Abeni
---
v6 -> v7:
 - real fwd memory accounting for the backlog
 - do not introduce a new destructor: we only have one call-site
   dropping packets from the backlog, handle that one explicitly
 - update to the new borrow fwd mem API
v5 -> v6:
 - call mptcp_bl_free() instead of inlining it
 - report the bl mem in diag mem info
 - moved here the mptcp_close_ssk chunk from the next patch
   (logically belongs here)
v4 -> v5:
 - split out of the next patch, to make the latter smaller
 - set a custom destructor for skbs in the backlog; this avoids
   duplicated code and fixes a few places where the needed ssk cleanup
   was not performed
 - factor out the backlog purge in a new helper, use spinlock
   protection, clear the backlog list and zero the backlog len
 - explicitly init the backlog_len at mptcp_init_sock() time
---
 net/mptcp/mptcp_diag.c |  3 +-
 net/mptcp/protocol.c   | 77 ++++++++++++++++++++++++++++++++++++++++--
 net/mptcp/protocol.h   | 25 ++++++++++----
 3 files changed, 96 insertions(+), 9 deletions(-)

diff --git a/net/mptcp/mptcp_diag.c b/net/mptcp/mptcp_diag.c
index ac974299de71..136c2d05c0ee 100644
--- a/net/mptcp/mptcp_diag.c
+++ b/net/mptcp/mptcp_diag.c
@@ -195,7 +195,8 @@ static void mptcp_diag_get_info(struct sock *sk, struct inet_diag_msg *r,
	struct mptcp_sock *msk = mptcp_sk(sk);
	struct mptcp_info *info = _info;
 
-	r->idiag_rqueue = sk_rmem_alloc_get(sk);
+	r->idiag_rqueue = sk_rmem_alloc_get(sk) +
+			  READ_ONCE(mptcp_sk(sk)->backlog_len);
	r->idiag_wqueue = sk_wmem_alloc_get(sk);
 
	if (inet_sk_state_load(sk) == TCP_LISTEN) {
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f6d96cb01e00..4c62de93e132 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -649,6 +649,38 @@ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
		mptcp_subflow_reset(ssk);
	}
 }
 
+static void __mptcp_add_backlog(struct sock *sk,
+				struct mptcp_subflow_context *subflow,
+				struct sk_buff *skb)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *tail = NULL;
+	bool fragstolen;
+	int delta;
+
+	if (unlikely(sk->sk_state == TCP_CLOSE)) {
+		kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+		return;
+	}
+
+	/* Try to coalesce with the last skb in our backlog */
+	if (!list_empty(&msk->backlog_list))
+		tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
+
+	if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)->end_seq &&
+	    skb->sk == tail->sk &&
+	    __mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
+		skb->truesize -= delta;
+		kfree_skb_partial(skb, fragstolen);
+		__mptcp_subflow_lend_fwdmem(subflow, delta);
+		WRITE_ONCE(msk->backlog_len, msk->backlog_len + delta);
+		return;
+	}
+
+	list_add_tail(&skb->list, &msk->backlog_list);
+	mptcp_subflow_lend_fwdmem(subflow, skb);
+	WRITE_ONCE(msk->backlog_len, msk->backlog_len + skb->truesize);
+}
 
 static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
					   struct sock *ssk)
@@ -696,8 +728,13 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
		size_t len = skb->len - offset;
 
		mptcp_init_skb(ssk, skb, offset, len);
-		mptcp_subflow_lend_fwdmem(subflow, skb);
-		ret = __mptcp_move_skb(sk, skb) || ret;
+
+		if (true) {
+			mptcp_subflow_lend_fwdmem(subflow, skb);
+			ret |= __mptcp_move_skb(sk, skb);
+		} else {
+			__mptcp_add_backlog(sk, subflow, skb);
+		}
		seq += len;
 
		if (unlikely(map_remaining < len)) {
@@ -2529,6 +2566,9 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
		     struct mptcp_subflow_context *subflow)
 {
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *skb;
+
	/* The first subflow can already be closed and still in the list */
	if (subflow->close_event_done)
		return;
@@ -2538,6 +2578,17 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
	if (sk->sk_state == TCP_ESTABLISHED)
		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
 
+	/* Remove any reference from the backlog to this ssk; backlog skbs
+	 * consume space in the msk receive queue, no need to touch
+	 * sk->sk_rmem_alloc
+	 */
+	list_for_each_entry(skb, &msk->backlog_list, list) {
+		if (skb->sk != ssk)
+			continue;
+
+		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+		skb->sk = NULL;
+	}
+
	/* subflow aborted before reaching the fully_established status
	 * attempt the creation of the next subflow
	 */
@@ -2766,12 +2817,31 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
	unlock_sock_fast(ssk, slow);
 }
 
+static void mptcp_backlog_purge(struct sock *sk)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct sk_buff *tmp, *skb;
+	LIST_HEAD(backlog);
+
+	mptcp_data_lock(sk);
+	list_splice_init(&msk->backlog_list, &backlog);
+	msk->backlog_len = 0;
+	mptcp_data_unlock(sk);
+
+	list_for_each_entry_safe(skb, tmp, &backlog, list) {
+		mptcp_borrow_fwdmem(sk, skb);
+		kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+	}
+	sk_mem_reclaim(sk);
+}
+
 static void mptcp_do_fastclose(struct sock *sk)
 {
	struct mptcp_subflow_context *subflow, *tmp;
	struct mptcp_sock *msk = mptcp_sk(sk);
 
	mptcp_set_state(sk, TCP_CLOSE);
+	mptcp_backlog_purge(sk);
	mptcp_for_each_subflow_safe(msk, subflow, tmp)
		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
				  subflow, MPTCP_CF_FASTCLOSE);
@@ -2829,11 +2899,13 @@ static void __mptcp_init_sock(struct sock *sk)
	INIT_LIST_HEAD(&msk->conn_list);
	INIT_LIST_HEAD(&msk->join_list);
	INIT_LIST_HEAD(&msk->rtx_queue);
+	INIT_LIST_HEAD(&msk->backlog_list);
	INIT_WORK(&msk->work, mptcp_worker);
	msk->out_of_order_queue = RB_ROOT;
	msk->first_pending = NULL;
	msk->timer_ival = TCP_RTO_MIN;
	msk->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
+	msk->backlog_len = 0;
 
	WRITE_ONCE(msk->first, NULL);
	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
@@ -3210,6 +3282,7 @@ static void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
	struct sock *sk = (struct sock *)msk;
 
	__mptcp_clear_xmit(sk);
+	mptcp_backlog_purge(sk);
 
	/* join list will be eventually flushed (with rst) at sock lock release time */
	mptcp_for_each_subflow_safe(msk, subflow, tmp)
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 80d520888235..cf82aefb5513 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -358,6 +358,9 @@ struct mptcp_sock {
					 * allow_infinite_fallback and
					 * allow_join
					 */
+
+	struct list_head backlog_list;	/* protected by the data lock */
+	u32		backlog_len;
 };
 
 #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
@@ -408,6 +411,7 @@ static inline int mptcp_space_from_win(const struct sock *sk, int win)
 static inline int __mptcp_space(const struct sock *sk)
 {
	return mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) -
+					READ_ONCE(mptcp_sk(sk)->backlog_len) -
					sk_rmem_alloc_get(sk));
 }
 
@@ -651,22 +655,31 @@ mptcp_send_active_reset_reason(struct sock *sk)
 static inline void mptcp_borrow_fwdmem(struct sock *sk, struct sk_buff *skb)
 {
	struct sock *ssk = skb->sk;
 
-	/* The subflow just lent the skb fwd memory, and we know that the skb
-	 * is only accounted on the incoming subflow rcvbuf.
+	/* The subflow just lent the skb fwd memory; if the subflow was
+	 * closed meanwhile, mptcp_close_ssk() already released the ssk
+	 * rcv memory.
	 */
-	skb->sk = NULL;
	sk_forward_alloc_add(sk, skb->truesize);
+	if (!ssk)
+		return;
+
	atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
+	skb->sk = NULL;
+}
+
+static inline void
+__mptcp_subflow_lend_fwdmem(struct mptcp_subflow_context *subflow, int size)
+{
+	int frag = (subflow->lent_mem_frag + size) & (PAGE_SIZE - 1);
+
+	subflow->lent_mem_frag = frag;
 }
 
 static inline void
 mptcp_subflow_lend_fwdmem(struct mptcp_subflow_context *subflow,
			  struct sk_buff *skb)
 {
-	int frag = (subflow->lent_mem_frag + skb->truesize) & (PAGE_SIZE - 1);
-
+	__mptcp_subflow_lend_fwdmem(subflow, skb->truesize);
	skb->destructor = NULL;
-	subflow->lent_mem_frag = frag;
 }
 
 static inline u64
-- 
2.51.0

From nobody Fri Oct 31 16:20:38 2025
From: Paolo Abeni
To: mptcp@lists.linux.dev
Cc: Mat Martineau, geliang@kernel.org
Subject: [PATCH RESEND v7 mptcp-next 4/4] mptcp: leverage the backlog for RX packet processing
Date: Mon, 27 Oct 2025 15:58:02 +0100
Message-ID: <08f8e227a749a28a88ce245fa36870173e32c54f.1761576117.git.pabeni@redhat.com>

When the msk socket is owned or the msk receive buffer is full, move
the incoming skbs to an msk-level backlog list. This avoids traversing
the joined subflows and acquiring the subflow-level socket lock at
reception time, improving RX performance.

When processing the backlog, use the fwd alloc memory borrowed from the
incoming subflow. skbs exceeding the msk receive space are not dropped;
instead they are kept in the backlog until the receive buffer is freed.
Dropping packets already acked at the TCP level is explicitly
discouraged by the RFC and would corrupt the data stream for fallback
sockets.

Special care is needed to avoid adding skbs to the backlog of a closed
msk and to avoid leaving dangling references in the backlog at subflow
closing time.

Signed-off-by: Paolo Abeni
---
v6 -> v7:
 - do not limit the overall backlog spooling loop; it's hard to do
   right, and the pre-backlog code did not bound the similar existing
   loop

v5 -> v6:
 - do the backlog len update as soon as possible, to advertise the
   correct window
 - explicitly bound the backlog processing loop to the maximum backlog
   len

v4 -> v5:
 - consolidate ssk rcvbuf accounting in __mptcp_move_skb(), removing
   some code duplication
 - return early in __mptcp_add_backlog() when dropping skbs because the
   msk is closed; this avoids a later UaF
---
 net/mptcp/protocol.c | 121 ++++++++++++++++++++++++-------------------
 net/mptcp/protocol.h |   1 -
 2 files changed, 68 insertions(+), 54 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 4c62de93e132..f93f973a4ffb 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -683,7 +683,7 @@ static void __mptcp_add_backlog(struct sock *sk,
 }
 
 static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
-					   struct sock *ssk)
+					   struct sock *ssk, bool own_msk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = (struct sock *)msk;
@@ -699,9 +699,6 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 		struct sk_buff *skb;
 		bool fin;
 
-		if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
-			break;
-
 		/* try to move as much data as available */
 		map_remaining = subflow->map_data_len -
 				mptcp_subflow_get_map_offset(subflow);
@@ -729,7 +726,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 
 		mptcp_init_skb(ssk, skb, offset, len);
 
-		if (true) {
+		if (own_msk && sk_rmem_alloc_get(sk) < sk->sk_rcvbuf) {
 			mptcp_subflow_lend_fwdmem(subflow, skb);
 			ret |= __mptcp_move_skb(sk, skb);
 		} else {
@@ -853,7 +850,7 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
 	struct sock *sk = (struct sock *)msk;
 	bool moved;
 
-	moved = __mptcp_move_skbs_from_subflow(msk, ssk);
+	moved = __mptcp_move_skbs_from_subflow(msk, ssk, true);
 	__mptcp_ofo_queue(msk);
 	if (unlikely(ssk->sk_err))
 		__mptcp_subflow_error_report(sk, ssk);
@@ -886,7 +883,7 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 		if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
 			sk->sk_data_ready(sk);
 	} else {
-		__set_bit(MPTCP_DEQUEUE, &mptcp_sk(sk)->cb_flags);
+		__mptcp_move_skbs_from_subflow(msk, ssk, false);
 	}
 	mptcp_data_unlock(sk);
 }
@@ -2126,60 +2123,74 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
 	msk->rcvq_space.time = mstamp;
 }
 
-static struct mptcp_subflow_context *
-__mptcp_first_ready_from(struct mptcp_sock *msk,
-			 struct mptcp_subflow_context *subflow)
+static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delta)
 {
-	struct mptcp_subflow_context *start_subflow = subflow;
+	struct sk_buff *skb = list_first_entry(skbs, struct sk_buff, list);
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	bool moved = false;
+
+	*delta = 0;
+	while (1) {
+		/* If the msk recvbuf is full stop, don't drop */
+		if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
+			break;
+
+		prefetch(skb->next);
+		list_del(&skb->list);
+		*delta += skb->truesize;
+
+		moved |= __mptcp_move_skb(sk, skb);
+		if (list_empty(skbs))
+			break;
 
-	while (!READ_ONCE(subflow->data_avail)) {
-		subflow = mptcp_next_subflow(msk, subflow);
-		if (subflow == start_subflow)
-			return NULL;
+		skb = list_first_entry(skbs, struct sk_buff, list);
 	}
-	return subflow;
+
+	__mptcp_ofo_queue(msk);
+	if (moved)
+		mptcp_check_data_fin((struct sock *)msk);
+	return moved;
 }
 
-static bool __mptcp_move_skbs(struct sock *sk)
+static bool mptcp_can_spool_backlog(struct sock *sk, struct list_head *skbs)
 {
-	struct mptcp_subflow_context *subflow;
 	struct mptcp_sock *msk = mptcp_sk(sk);
-	bool ret = false;
 
-	if (list_empty(&msk->conn_list))
+	/* Don't spool the backlog if the rcvbuf is full.
+	 */
+	if (list_empty(&msk->backlog_list) ||
+	    sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
 		return false;
 
-	subflow = list_first_entry(&msk->conn_list,
-				   struct mptcp_subflow_context, node);
-	for (;;) {
-		struct sock *ssk;
-		bool slowpath;
+	INIT_LIST_HEAD(skbs);
+	list_splice_init(&msk->backlog_list, skbs);
+	return true;
+}
 
-		/*
-		 * As an optimization avoid traversing the subflows list
-		 * and ev. acquiring the subflow socket lock before baling out
-		 */
-		if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
-			break;
+static void mptcp_backlog_spooled(struct sock *sk, u32 moved,
+				  struct list_head *skbs)
+{
+	struct mptcp_sock *msk = mptcp_sk(sk);
 
-		subflow = __mptcp_first_ready_from(msk, subflow);
-		if (!subflow)
-			break;
+	WRITE_ONCE(msk->backlog_len, msk->backlog_len - moved);
+	list_splice(skbs, &msk->backlog_list);
+}
 
-		ssk = mptcp_subflow_tcp_sock(subflow);
-		slowpath = lock_sock_fast(ssk);
-		ret = __mptcp_move_skbs_from_subflow(msk, ssk) || ret;
-		if (unlikely(ssk->sk_err))
-			__mptcp_error_report(sk);
-		unlock_sock_fast(ssk, slowpath);
+static bool mptcp_move_skbs(struct sock *sk)
+{
+	struct list_head skbs;
+	bool enqueued = false;
+	u32 moved;
 
-		subflow = mptcp_next_subflow(msk, subflow);
-	}
+	mptcp_data_lock(sk);
+	while (mptcp_can_spool_backlog(sk, &skbs)) {
+		mptcp_data_unlock(sk);
+		enqueued |= __mptcp_move_skbs(sk, &skbs, &moved);
 
-	__mptcp_ofo_queue(msk);
-	if (ret)
-		mptcp_check_data_fin((struct sock *)msk);
-	return ret;
+		mptcp_data_lock(sk);
+		mptcp_backlog_spooled(sk, moved, &skbs);
+	}
+	mptcp_data_unlock(sk);
+	return enqueued;
 }
 
 static unsigned int mptcp_inq_hint(const struct sock *sk)
@@ -2245,7 +2256,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 
 		copied += bytes_read;
 
-		if (skb_queue_empty(&sk->sk_receive_queue) && __mptcp_move_skbs(sk))
+		if (!list_empty(&msk->backlog_list) && mptcp_move_skbs(sk))
 			continue;
 
 		/* only the MPTCP socket status is relevant here. The exit
@@ -3530,8 +3541,7 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 
 #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
 				      BIT(MPTCP_RETRANSMIT) | \
-				      BIT(MPTCP_FLUSH_JOIN_LIST) | \
-				      BIT(MPTCP_DEQUEUE))
+				      BIT(MPTCP_FLUSH_JOIN_LIST))
 
 /* processes deferred events and flush wmem */
 static void mptcp_release_cb(struct sock *sk)
@@ -3541,9 +3551,12 @@ static void mptcp_release_cb(struct sock *sk)
 
 	for (;;) {
 		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
-		struct list_head join_list;
+		struct list_head join_list, skbs;
+		bool spool_bl;
+		u32 moved;
 
-		if (!flags)
+		spool_bl = mptcp_can_spool_backlog(sk, &skbs);
+		if (!flags && !spool_bl)
 			break;
 
 		INIT_LIST_HEAD(&join_list);
@@ -3565,7 +3578,7 @@ static void mptcp_release_cb(struct sock *sk)
 			__mptcp_push_pending(sk, 0);
 		if (flags & BIT(MPTCP_RETRANSMIT))
 			__mptcp_retrans(sk);
-		if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
+		if (spool_bl && __mptcp_move_skbs(sk, &skbs, &moved)) {
 			/* notify ack seq update */
 			mptcp_cleanup_rbuf(msk, 0);
 			sk->sk_data_ready(sk);
@@ -3573,6 +3586,8 @@ static void mptcp_release_cb(struct sock *sk)
 
 		cond_resched();
 		spin_lock_bh(&sk->sk_lock.slock);
+		if (spool_bl)
+			mptcp_backlog_spooled(sk, moved, &skbs);
 	}
 
 	if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
@@ -3805,7 +3820,7 @@ static int mptcp_ioctl(struct sock *sk, int cmd, int *karg)
 		return -EINVAL;
 
 	lock_sock(sk);
-	if (__mptcp_move_skbs(sk))
+	if (mptcp_move_skbs(sk))
 		mptcp_cleanup_rbuf(msk, 0);
 	*karg = mptcp_inq_hint(sk);
 	release_sock(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index cf82aefb5513..8e0f780e9210 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -124,7 +124,6 @@
 #define MPTCP_FLUSH_JOIN_LIST	5
 #define MPTCP_SYNC_STATE	6
 #define MPTCP_SYNC_SNDBUF	7
-#define MPTCP_DEQUEUE		8
 
 struct mptcp_skb_cb {
 	u64 map_seq;
-- 
2.51.0