From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH v5 mptcp-next 1/4] mptcp: fix delegated action races.
Date: Fri, 22 Sep 2023 09:42:59 +0200
Message-ID: <22c38fda1552a70f4809a39e0c2c5f8317c9b3dc.1695368456.git.pabeni@redhat.com>

The delegated action infrastructure is prone to the following race:
different CPUs can try to schedule different delegated actions on the
same subflow at the same time. Each of them will check different bits
via mptcp_subflow_delegate(), and will try to schedule the action on
the related per-CPU napi instance. Depending on the timing, both can
observe an empty delegated list node, causing the same entry to be
added simultaneously to two different lists.

The root cause is that the delegated actions infrastructure does not
provide a single synchronization point. Address the issue by reserving
an additional bit to mark the subflow as scheduled for delegation.
Acquiring that bit guarantees the caller owns the delegated list node
and can safely schedule the subflow. Clear the bit only when the
subflow scheduling is completed, ensuring the proper barriers are in
place.

Additionally, swap the meaning of the delegated_action bitmask, to
allow using the existing helper to set multiple bits at once.

Fixes: bcd97734318d ("mptcp: use delegate action to schedule 3rd ack retrans")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau
---
v4 -> v5:
 - 'mask' -> 'set_bits'
 - added more comments (Mat)
---
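[Editor's note, not part of the patch] A minimal standalone sketch of the
serialization scheme described above, written with C11 atomics instead of
the kernel bitops; delegate(), process_delegated() and the DELEGATE_*
masks are made-up names for the example, not the kernel API. Only the
caller that flips the SCHEDULED bit from 0 to 1 may enqueue the node, so
two CPUs can no longer add the same entry to two lists:

#include <stdatomic.h>
#include <stdio.h>

#define DELEGATE_SCHEDULED	(1UL << 0)
#define DELEGATE_SEND		(1UL << 1)
#define DELEGATE_ACK		(1UL << 2)

static _Atomic unsigned long delegated_status;

/* Set the requested action bit plus SCHEDULED in one atomic op; only the
 * caller that actually acquired SCHEDULED owns the (hypothetical) list
 * insertion.
 */
static int delegate(unsigned long action)
{
	unsigned long old;

	old = atomic_fetch_or(&delegated_status, DELEGATE_SCHEDULED | action);
	return !(old & DELEGATE_SCHEDULED);
}

/* Processing side: grab and clear all pending bits, SCHEDULED included,
 * in a single exchange - the single synchronization point.
 */
static unsigned long process_delegated(void)
{
	return atomic_exchange(&delegated_status, 0) & ~DELEGATE_SCHEDULED;
}

int main(void)
{
	if (delegate(DELEGATE_SEND))
		printf("first caller owns the enqueue\n");
	if (!delegate(DELEGATE_ACK))
		printf("second caller only adds its action bit\n");
	printf("pending actions: %#lx\n", process_delegated());
	return 0;
}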
 net/mptcp/protocol.c | 28 ++++++++++++++--------------
 net/mptcp/protocol.h | 35 ++++++++++++-----------------------
 net/mptcp/subflow.c  | 10 ++++++++--
 3 files changed, 34 insertions(+), 39 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 1a0b463f8c97..04eda1b8f7a4 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3425,24 +3425,21 @@ static void schedule_3rdack_retransmission(struct sock *ssk)
 	sk_reset_timer(ssk, &icsk->icsk_delack_timer, timeout);
 }
 
-void mptcp_subflow_process_delegated(struct sock *ssk)
+void mptcp_subflow_process_delegated(struct sock *ssk, long status)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = subflow->conn;
 
-	if (test_bit(MPTCP_DELEGATE_SEND, &subflow->delegated_status)) {
+	if (status & BIT(MPTCP_DELEGATE_SEND)) {
 		mptcp_data_lock(sk);
 		if (!sock_owned_by_user(sk))
 			__mptcp_subflow_push_pending(sk, ssk, true);
 		else
 			__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
 		mptcp_data_unlock(sk);
-		mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_SEND);
 	}
-	if (test_bit(MPTCP_DELEGATE_ACK, &subflow->delegated_status)) {
+	if (status & BIT(MPTCP_DELEGATE_ACK))
 		schedule_3rdack_retransmission(ssk);
-		mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_ACK);
-	}
 }
 
 static int mptcp_hash(struct sock *sk)
@@ -3968,14 +3965,17 @@ static int mptcp_napi_poll(struct napi_struct *napi, int budget)
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
 		bh_lock_sock_nested(ssk);
-		if (!sock_owned_by_user(ssk) &&
-		    mptcp_subflow_has_delegated_action(subflow))
-			mptcp_subflow_process_delegated(ssk);
-		/* ... elsewhere tcp_release_cb_override already processed
-		 * the action or will do at next release_sock().
-		 * In both case must dequeue the subflow here - on the same
-		 * CPU that scheduled it.
-		 */
+		if (!sock_owned_by_user(ssk)) {
+			mptcp_subflow_process_delegated(ssk, xchg(&subflow->delegated_status, 0));
+		} else {
+			/* tcp_release_cb_override already processed
+			 * the action or will do at next release_sock().
+			 * In both case must dequeue the subflow here - on the same
+			 * CPU that scheduled it.
+			 */
+			smp_wmb();
+			clear_bit(MPTCP_DELEGATE_SCHEDULED, &subflow->delegated_status);
+		}
 		bh_unlock_sock(ssk);
 		sock_put(ssk);
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 3c938e3560e4..0fe767a3fb9c 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -444,9 +444,11 @@ struct mptcp_delegated_action {
 
 DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
 
-#define MPTCP_DELEGATE_SEND		0
-#define MPTCP_DELEGATE_ACK		1
+#define MPTCP_DELEGATE_SCHEDULED	0
+#define MPTCP_DELEGATE_SEND		1
+#define MPTCP_DELEGATE_ACK		2
 
+#define MPTCP_DELEGATE_ACTIONS_MASK	(~BIT(MPTCP_DELEGATE_SCHEDULED))
 /* MPTCP subflow context */
 struct mptcp_subflow_context {
 	struct	list_head node;/* conn_list of subflows */
@@ -564,23 +566,24 @@ mptcp_subflow_get_mapped_dsn(const struct mptcp_subflow_context *subflow)
 	return subflow->map_seq + mptcp_subflow_get_map_offset(subflow);
 }
 
-void mptcp_subflow_process_delegated(struct sock *ssk);
+void mptcp_subflow_process_delegated(struct sock *ssk, long actions);
 
 static inline void mptcp_subflow_delegate(struct mptcp_subflow_context *subflow,
 					  int action)
 {
+	long old, set_bits = BIT(MPTCP_DELEGATE_SCHEDULED) | BIT(action);
 	struct mptcp_delegated_action *delegated;
 	bool schedule;
 
 	/* the caller held the subflow bh socket lock */
 	lockdep_assert_in_softirq();
 
-	/* The implied barrier pairs with mptcp_subflow_delegated_done(), and
-	 * ensures the below list check sees list updates done prior to status
-	 * bit changes
+	/* The implied barrier pairs with tcp_release_cb_override()
+	 * mptcp_napi_poll(), and ensures the below list check sees list
+	 * updates done prior to delegated status bits changes
 	 */
-	if (!test_and_set_bit(action, &subflow->delegated_status)) {
-		/* still on delegated list from previous scheduling */
-		if (!list_empty(&subflow->delegated_node))
+	old = set_mask_bits(&subflow->delegated_status, 0, set_bits);
+	if (!(old & BIT(MPTCP_DELEGATE_SCHEDULED))) {
+		if (WARN_ON_ONCE(!list_empty(&subflow->delegated_node)))
 			return;
 
 		delegated = this_cpu_ptr(&mptcp_delegated_actions);
@@ -605,20 +608,6 @@ mptcp_subflow_delegated_next(struct mptcp_delegated_action *delegated)
 	return ret;
 }
 
-static inline bool mptcp_subflow_has_delegated_action(const struct mptcp_subflow_context *subflow)
-{
-	return !!READ_ONCE(subflow->delegated_status);
-}
-
-static inline void mptcp_subflow_delegated_done(struct mptcp_subflow_context *subflow, int action)
-{
-	/* pairs with mptcp_subflow_delegate, ensures delegate_node is updated before
-	 * touching the status bit
-	 */
-	smp_wmb();
-	clear_bit(action, &subflow->delegated_status);
-}
-
 int mptcp_is_enabled(const struct net *net);
 unsigned int mptcp_get_add_addr_timeout(const struct net *net);
 int mptcp_is_checksum_enabled(const struct net *net);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 918c1a235790..9c1f8d1d63d2 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1956,9 +1956,15 @@ static void subflow_ulp_clone(const struct request_sock *req,
 static void tcp_release_cb_override(struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+	long status;
 
-	if (mptcp_subflow_has_delegated_action(subflow))
-		mptcp_subflow_process_delegated(ssk);
+	/* process and clear all the pending actions, but leave the subflow into
+	 * the napi queue. To respect locking, only the same CPU that originated
+	 * the action can touch the list. mptcp_napi_poll will take care of it.
+	 */
+	status = set_mask_bits(&subflow->delegated_status, MPTCP_DELEGATE_ACTIONS_MASK, 0);
+	if (status)
+		mptcp_subflow_process_delegated(ssk, status);
 
 	tcp_release_cb(ssk);
 }
-- 
2.41.0

From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH v5 mptcp-next 2/4] mptcp: consolidate sockopt synchronization
Date: Fri, 22 Sep 2023 09:43:00 +0200
Message-ID: <74d0d00a3c5b485289a51cf9e45473cb5d62ed67.1695368456.git.pabeni@redhat.com>

Move the socket option synchronization for active subflows to subflow
creation time. This allows removing the now unused unlocked variant of
that helper.

While at it, clean up the mptcp_subflow_create_socket() error path a
bit.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau
---
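[Editor's note, not part of the patch] The error-path consolidation below
follows the usual goto-unwind shape. A tiny standalone sketch under
made-up names (create_resource(), err_free), only to illustrate the
ordering the patch enforces - release the lock first, then drop the
object - on every failure taken after the lock is acquired:

#include <stdlib.h>

struct resource {
	int locked;
};

/* stand-in for a fallible configuration step */
static int setup_step(int fail)
{
	return fail ? -1 : 0;
}

static int create_resource(struct resource **out, int failing_step)
{
	struct resource *res = calloc(1, sizeof(*res));
	int err = -1;

	if (!res)
		return err;

	res->locked = 1;			/* think lock_sock() */
	if (setup_step(failing_step == 1))
		goto err_free;
	if (setup_step(failing_step == 2))
		goto err_free;

	res->locked = 0;			/* think release_sock() */
	*out = res;
	return 0;

err_free:
	res->locked = 0;	/* release the lock first... */
	free(res);		/* ...then drop the object, on any failure */
	return err;
}

int main(void)
{
	struct resource *res = NULL;

	if (!create_resource(&res, 0))
		free(res);
	return create_resource(&res, 2) ? 0 : 1;
}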
 net/mptcp/protocol.c |  2 --
 net/mptcp/sockopt.c  | 22 ----------------------
 net/mptcp/subflow.c  | 18 +++++++++---------
 3 files changed, 9 insertions(+), 33 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 04eda1b8f7a4..f727a7ee662d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -121,8 +121,6 @@ struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk)
 		ret = __mptcp_socket_create(msk);
 		if (ret)
 			return ERR_PTR(ret);
-
-		mptcp_sockopt_sync(msk, msk->first);
 	}
 
 	return msk->first;
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 8260202c0066..f44b364b0055 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -1444,28 +1444,6 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
 	inet_assign_bit(FREEBIND, ssk, inet_test_bit(FREEBIND, sk));
 }
 
-static void __mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
-{
-	bool slow = lock_sock_fast(ssk);
-
-	sync_socket_options(msk, ssk);
-
-	unlock_sock_fast(ssk, slow);
-}
-
-void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
-{
-	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
-
-	msk_owned_by_me(msk);
-
-	if (READ_ONCE(subflow->setsockopt_seq) != msk->setsockopt_seq) {
-		__mptcp_sockopt_sync(msk, ssk);
-
-		subflow->setsockopt_seq = msk->setsockopt_seq;
-	}
-}
-
 void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 9c1f8d1d63d2..7b98bec4c25e 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1525,8 +1525,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	if (addr.ss_family == AF_INET6)
 		addrlen = sizeof(struct sockaddr_in6);
 #endif
-	mptcp_sockopt_sync(msk, ssk);
-
 	ssk->sk_bound_dev_if = ifindex;
 	err = kernel_bind(sf, (struct sockaddr *)&addr, addrlen);
 	if (err)
@@ -1637,7 +1635,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
 
 	err = security_mptcp_add_subflow(sk, sf->sk);
 	if (err)
-		goto release_ssk;
+		goto err_free;
 
 	/* the newly created socket has to be in the same cgroup as its parent */
 	mptcp_attach_cgroup(sk, sf->sk);
@@ -1651,15 +1649,12 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
 	get_net_track(net, &sf->sk->ns_tracker, GFP_KERNEL);
 	sock_inuse_add(net, 1);
 	err = tcp_set_ulp(sf->sk, "mptcp");
+	if (err)
+		goto err_free;
 
-release_ssk:
+	mptcp_sockopt_sync_locked(mptcp_sk(sk), sf->sk);
 	release_sock(sf->sk);
 
-	if (err) {
-		sock_release(sf);
-		return err;
-	}
-
 	/* the newly created socket really belongs to the owning MPTCP master
 	 * socket, even if for additional subflows the allocation is performed
 	 * by a kernel workqueue. Adjust inode references, so that the
@@ -1679,6 +1674,11 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
 	mptcp_subflow_ops_override(sf->sk);
 
 	return 0;
+
+err_free:
+	release_sock(sf->sk);
+	sock_release(sf);
+	return err;
 }
 
 static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
-- 
2.41.0

From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH v5 mptcp-next 3/4] mptcp: ignore notsent_lowat setting at the subflow level.
Date: Fri, 22 Sep 2023 09:43:01 +0200

Any latency-related tuning applied at the subflow level does not really
affect user space, as only the main MPTCP socket is relevant. Worse, any
such limit may foul the MPTCP scheduler, which then cannot fully use the
subflow-level congestion window, leading to very poor bandwidth usage.

Enforce notsent_lowat to be a no-op on every subflow.

Note that TCP_NOTSENT_LOWAT is currently not supported at the MPTCP
level, and properly dealing with that will require more invasive
changes.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau
---
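[Editor's note, not part of the patch] For context, this is how an
application normally bounds the unsent queue on a plain TCP socket with
the standard TCP_NOTSENT_LOWAT socket option (illustrative snippet only).
The change below pins the equivalent per-subflow value to UINT_MAX, so
only MPTCP-level tuning can take effect:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int lowat = 16 * 1024;	/* wake the writer when unsent data drops below 16 KiB */
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
		       &lowat, sizeof(lowat)) < 0)
		perror("setsockopt(TCP_NOTSENT_LOWAT)");
	close(fd);
	return 0;
}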
 net/mptcp/sockopt.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index f44b364b0055..453d6c78c25c 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -1450,6 +1450,12 @@ void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk)
 
 	msk_owned_by_me(msk);
 
+	/* subflows must ignore any latency-related settings: will not affect
+	 * the user-space - only the msk is relevant - but will foul the
+	 * mptcp scheduler
+	 */
+	tcp_sk(ssk)->notsent_lowat = UINT_MAX;
+
 	if (READ_ONCE(subflow->setsockopt_seq) != msk->setsockopt_seq) {
 		sync_socket_options(msk, ssk);
 
-- 
2.41.0

From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH v5 mptcp-next 4/4] mptcp: refactor sndbuf auto-tuning.
Date: Fri, 22 Sep 2023 09:43:02 +0200
Message-ID: <6e31fe305493173f6c2e33f6351da074d9068a77.1695368456.git.pabeni@redhat.com>

The MPTCP protocol accounts the data enqueued on all the subflows
against the main socket send buffer, while the send buffer auto-tuning
algorithm sets the main socket send buffer size to the maximum size
among the subflows. That causes poor performance when at least one
subflow is sndbuf-limited, e.g. due to very high latency, as the MPTCP
scheduler can't even fill such a buffer.

Change the send-buffer auto-tuning algorithm to compute the main socket
send buffer size as the sum of all the subflows' buffer sizes.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau
---
v4 -> v5:
 - '-' -> '!=' (Mat)
 - update cached_sndbuf when setting SOCK_SNDBUF_LOCK, to avoid later
   triggering a non-effective __mptcp_sync_sndbuf() potentially forever
v2 -> v3:
 - avoid incremental updates, always recompute sum(ssk->sndbuf) to avoid
   drift on memory pressure/decrease
---
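[Editor's note, not part of the patch] The new auto-tuning rule in a
nutshell - a toy, standalone calculation (made-up helper name) of the msk
send buffer as wmem[0] plus the sum of the per-subflow buffers, mirroring
the __mptcp_sync_sndbuf() added below; the old rule only picked the
largest subflow buffer:

#include <stdio.h>

/* Sum-based auto-tuning: start from the floor (tcp wmem[0]) and add the
 * current send buffer of every subflow.
 */
static int msk_sndbuf(const int *ssk_sndbuf, int nr_subflows, int wmem0)
{
	int new_sndbuf = wmem0;

	for (int i = 0; i < nr_subflows; i++)
		new_sndbuf += ssk_sndbuf[i];
	return new_sndbuf;
}

int main(void)
{
	int subflows[] = { 64 * 1024, 256 * 1024 };	/* two auto-tuned subflows */

	/* the old max()-based rule would have used 262144 for the msk */
	printf("msk sndbuf: %d\n", msk_sndbuf(subflows, 2, 4096));
	return 0;
}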
 net/mptcp/protocol.c | 18 +++++++++++++--
 net/mptcp/protocol.h | 54 ++++++++++++++++++++++++++++++++++++++++----
 net/mptcp/sockopt.c  |  5 +++-
 net/mptcp/subflow.c  |  3 +--
 4 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f727a7ee662d..0a9d00e794d4 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -891,6 +891,7 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 	mptcp_sockopt_sync_locked(msk, ssk);
 	mptcp_subflow_joined(msk, ssk);
 	mptcp_stop_tout_timer(sk);
+	__mptcp_propagate_sndbuf(sk, ssk);
 	return true;
 }
 
@@ -1077,15 +1078,16 @@ static void mptcp_enter_memory_pressure(struct sock *sk)
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	bool first = true;
 
-	sk_stream_moderate_sndbuf(sk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
 		if (first)
 			tcp_enter_memory_pressure(ssk);
 		sk_stream_moderate_sndbuf(ssk);
+
 		first = false;
 	}
+	__mptcp_sync_sndbuf(sk);
 }
 
 /* ensure we get enough memory for the frag hdr, beyond some minimal amount of
@@ -2436,6 +2438,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		WRITE_ONCE(msk->first, NULL);
 
 out:
+	__mptcp_sync_sndbuf(sk);
 	if (need_push)
 		__mptcp_push_pending(sk, 0);
 
@@ -3214,7 +3217,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 	 * uses the correct data
 	 */
 	mptcp_copy_inaddrs(nsk, ssk);
-	mptcp_propagate_sndbuf(nsk, ssk);
+	__mptcp_propagate_sndbuf(nsk, ssk);
 
 	mptcp_rcv_space_init(msk, ssk);
 	bh_unlock_sock(nsk);
@@ -3392,6 +3395,8 @@ static void mptcp_release_cb(struct sock *sk)
 			__mptcp_set_connected(sk);
 		if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
 			__mptcp_error_report(sk);
+		if (__test_and_clear_bit(MPTCP_SYNC_SNDBUF, &msk->cb_flags))
+			__mptcp_sync_sndbuf(sk);
 	}
 
 	__mptcp_update_rmem(sk);
@@ -3436,6 +3441,14 @@ void mptcp_subflow_process_delegated(struct sock *ssk, long status)
 			__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
 		mptcp_data_unlock(sk);
 	}
+	if (status & BIT(MPTCP_DELEGATE_SNDBUF)) {
+		mptcp_data_lock(sk);
+		if (!sock_owned_by_user(sk))
+			__mptcp_sync_sndbuf(sk);
+		else
+			__set_bit(MPTCP_SYNC_SNDBUF, &mptcp_sk(sk)->cb_flags);
+		mptcp_data_unlock(sk);
+	}
 	if (status & BIT(MPTCP_DELEGATE_ACK))
 		schedule_3rdack_retransmission(ssk);
 }
@@ -3520,6 +3533,7 @@ bool mptcp_finish_join(struct sock *ssk)
 	/* active subflow, already present inside the conn_list */
 	if (!list_empty(&subflow->node)) {
 		mptcp_subflow_joined(msk, ssk);
+		mptcp_propagate_sndbuf(parent, ssk);
 		return true;
 	}
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 0fe767a3fb9c..fa27236f58cd 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -123,6 +123,7 @@
 #define MPTCP_RETRANSMIT		4
 #define MPTCP_FLUSH_JOIN_LIST		5
 #define MPTCP_CONNECTED			6
+#define MPTCP_SYNC_SNDBUF		7
 
 struct mptcp_skb_cb {
 	u64 map_seq;
@@ -447,6 +448,7 @@ DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
 #define MPTCP_DELEGATE_SCHEDULED	0
 #define MPTCP_DELEGATE_SEND		1
 #define MPTCP_DELEGATE_ACK		2
+#define MPTCP_DELEGATE_SNDBUF		3
 
 #define MPTCP_DELEGATE_ACTIONS_MASK	(~BIT(MPTCP_DELEGATE_SCHEDULED))
 /* MPTCP subflow context */
@@ -520,6 +522,9 @@ struct mptcp_subflow_context {
 
 	u32	setsockopt_seq;
 	u32	stale_rcv_tstamp;
+	int	cached_sndbuf;	    /* sndbuf size when last synced with the msk sndbuf,
+				     * protected by the msk socket lock
+				     */
 
 	struct	sock *tcp_sock;	    /* tcp sk backpointer */
 	struct	sock *conn;	    /* parent mptcp_sock */
@@ -768,13 +773,52 @@ static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
 	       READ_ONCE(msk->write_seq) == READ_ONCE(msk->snd_nxt);
 }
 
-static inline bool mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+static inline void __mptcp_sync_sndbuf(struct sock *sk)
 {
-	if ((sk->sk_userlocks & SOCK_SNDBUF_LOCK) || ssk->sk_sndbuf <= READ_ONCE(sk->sk_sndbuf))
-		return false;
+	struct mptcp_subflow_context *subflow;
+	int ssk_sndbuf, new_sndbuf;
+
+	if (sk->sk_userlocks & SOCK_SNDBUF_LOCK)
+		return;
+
+	new_sndbuf = sock_net(sk)->ipv4.sysctl_tcp_wmem[0];
+	mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
+		ssk_sndbuf = READ_ONCE(mptcp_subflow_tcp_sock(subflow)->sk_sndbuf);
+
+		subflow->cached_sndbuf = ssk_sndbuf;
+		new_sndbuf += ssk_sndbuf;
+	}
+
+	/* the msk max wmem limit is <nr_subflows> * tcp wmem[2] */
+	WRITE_ONCE(sk->sk_sndbuf, new_sndbuf);
+}
+
+/* The called held both the msk socket and the subflow socket locks,
+ * possibly under BH
+ */
+static inline void __mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+{
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+
+	if (READ_ONCE(ssk->sk_sndbuf) != subflow->cached_sndbuf)
+		__mptcp_sync_sndbuf(sk);
+}
+
+/* the caller held only the subflow socket lock, either in process or
+ * BH context. Additionally this can be called under the msk data lock,
+ * so we can't acquire such lock here: let the delegate action acquires
+ * the needed locks in suitable order.
+ */
+static inline void mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+{
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+
+	if (likely(READ_ONCE(ssk->sk_sndbuf) == subflow->cached_sndbuf))
+		return;
 
-	WRITE_ONCE(sk->sk_sndbuf, ssk->sk_sndbuf);
-	return true;
+	local_bh_disable();
+	mptcp_subflow_delegate(subflow, MPTCP_DELEGATE_SNDBUF);
+	local_bh_enable();
 }
 
 static inline void mptcp_write_space(struct sock *sk)
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 453d6c78c25c..7478254801e9 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -95,6 +95,7 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
 	case SO_SNDBUFFORCE:
 		ssk->sk_userlocks |= SOCK_SNDBUF_LOCK;
 		WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+		mptcp_subflow_ctx(ssk)->cached_sndbuf = sk->sk_sndbuf;
 		break;
 	case SO_RCVBUF:
 	case SO_RCVBUFFORCE:
@@ -1415,8 +1416,10 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
 
 	if (sk->sk_userlocks & tx_rx_locks) {
 		ssk->sk_userlocks |= sk->sk_userlocks & tx_rx_locks;
-		if (sk->sk_userlocks & SOCK_SNDBUF_LOCK)
+		if (sk->sk_userlocks & SOCK_SNDBUF_LOCK) {
 			WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+			mptcp_subflow_ctx(ssk)->cached_sndbuf = sk->sk_sndbuf;
+		}
 		if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
 			WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
 	}
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 7b98bec4c25e..a72e710b8332 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -421,6 +421,7 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
 
 void __mptcp_set_connected(struct sock *sk)
 {
+	__mptcp_propagate_sndbuf(sk, mptcp_sk(sk)->first);
 	if (sk->sk_state == TCP_SYN_SENT) {
 		inet_sk_state_store(sk, TCP_ESTABLISHED);
 		sk->sk_state_change(sk);
@@ -472,7 +473,6 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 		return;
 
 	msk = mptcp_sk(parent);
-	mptcp_propagate_sndbuf(parent, sk);
 	subflow->rel_write_seq = 1;
 	subflow->conn_finished = 1;
 	subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
@@ -1728,7 +1728,6 @@ static void subflow_state_change(struct sock *sk)
 
 	msk = mptcp_sk(parent);
 	if (subflow_simultaneous_connect(sk)) {
-		mptcp_propagate_sndbuf(parent, sk);
 		mptcp_do_fallback(sk);
 		mptcp_rcv_space_init(msk, sk);
 		pr_fallback(msk);
-- 
2.41.0