From: Mat Martineau
To: netdev@vger.kernel.org
Cc: Paolo Abeni, davem@davemloft.net, kuba@kernel.org, matthieu.baerts@tessares.net, mptcp@lists.linux.dev, Mat Martineau
Subject: [PATCH net-next 12/13] mptcp: cleanup MPJ subflow list handling
Date: Thu, 6 Jan 2022 16:20:25 -0800
Message-Id: <20220107002026.375427-13-mathew.j.martineau@linux.intel.com>
In-Reply-To: <20220107002026.375427-1-mathew.j.martineau@linux.intel.com>
References: <20220107002026.375427-1-mathew.j.martineau@linux.intel.com>
X-Mailing-List: mptcp@lists.linux.dev

From: Paolo Abeni

We can simplify the join list handling by leveraging mptcp_release_cb():
if we can acquire the msk socket lock at mptcp_finish_join() time, move
the new subflow directly into the conn_list; otherwise place it on the
join_list and let the release callback process that list.

Since pending MPJ connections are now always processed in a timely way,
we can avoid flushing the join list every time we have to process all
the current subflows.
Additionally, we can now use the mptcp data lock to protect the
join_list, removing the additional spin lock.

Finally, since the MPJ handshake is now always finalized under the msk
socket lock, we can drop the additional synchronization between
mptcp_finish_join() and mptcp_close().

Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
---
 net/mptcp/pm_netlink.c |   3 --
 net/mptcp/protocol.c   | 117 ++++++++++++++++++-----------------------
 net/mptcp/protocol.h   |  15 +----
 net/mptcp/sockopt.c    |  24 +++------
 net/mptcp/subflow.c    |   5 +-
 5 files changed, 60 insertions(+), 104 deletions(-)

diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 5efb63ab1fa3..75af1f701e1d 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -165,7 +165,6 @@ select_local_address(const struct pm_nl_pernet *pernet,
 	msk_owned_by_me(msk);
 
 	rcu_read_lock();
-	__mptcp_flush_join_list(msk);
 	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
 		if (!(entry->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW))
 			continue;
@@ -595,7 +594,6 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
 	subflows_max = mptcp_pm_get_subflows_max(msk);
 
 	rcu_read_lock();
-	__mptcp_flush_join_list(msk);
 	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
 		if (!(entry->flags & MPTCP_PM_ADDR_FLAG_FULLMESH))
 			continue;
@@ -684,7 +682,6 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
 	    !mptcp_pm_should_rm_signal(msk))
 		return;
 
-	__mptcp_flush_join_list(msk);
 	subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
 	if (subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3e8cfaed00b5..c5f64fb0474d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -808,47 +808,38 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 	mptcp_data_unlock(sk);
 }
 
-static bool mptcp_do_flush_join_list(struct mptcp_sock *msk)
+static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 {
-	struct mptcp_subflow_context *subflow;
-	bool ret = false;
+	struct sock *sk = (struct sock *)msk;
 
-	if (likely(list_empty(&msk->join_list)))
+	if (sk->sk_state != TCP_ESTABLISHED)
 		return false;
 
-	spin_lock_bh(&msk->join_list_lock);
-	list_for_each_entry(subflow, &msk->join_list, node) {
-		u32 sseq = READ_ONCE(subflow->setsockopt_seq);
-
-		mptcp_propagate_sndbuf((struct sock *)msk, mptcp_subflow_tcp_sock(subflow));
-		if (READ_ONCE(msk->setsockopt_seq) != sseq)
-			ret = true;
-	}
-	list_splice_tail_init(&msk->join_list, &msk->conn_list);
-	spin_unlock_bh(&msk->join_list_lock);
-
-	return ret;
-}
-
-void __mptcp_flush_join_list(struct mptcp_sock *msk)
-{
-	if (likely(!mptcp_do_flush_join_list(msk)))
-		return;
+	/* attach to msk socket only after we are sure we will deal with it
+	 * at close time
+	 */
+	if (sk->sk_socket && !ssk->sk_socket)
+		mptcp_sock_graft(ssk, sk->sk_socket);
 
-	if (!test_and_set_bit(MPTCP_WORK_SYNC_SETSOCKOPT, &msk->flags))
-		mptcp_schedule_work((struct sock *)msk);
+	mptcp_propagate_sndbuf((struct sock *)msk, ssk);
+	mptcp_sockopt_sync_locked(msk, ssk);
+	return true;
 }
 
-static void mptcp_flush_join_list(struct mptcp_sock *msk)
+static void __mptcp_flush_join_list(struct sock *sk)
 {
-	bool sync_needed = test_and_clear_bit(MPTCP_WORK_SYNC_SETSOCKOPT, &msk->flags);
-
-	might_sleep();
+	struct mptcp_subflow_context *tmp, *subflow;
+	struct mptcp_sock *msk = mptcp_sk(sk);
 
-	if (!mptcp_do_flush_join_list(msk) && !sync_needed)
-		return;
+	list_for_each_entry_safe(subflow, tmp, &msk->join_list, node) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		bool slow = lock_sock_fast(ssk);
 
-	mptcp_sockopt_sync_all(msk);
+		list_move_tail(&subflow->node, &msk->conn_list);
+		if (!__mptcp_finish_join(msk, ssk))
+			mptcp_subflow_reset(ssk);
+		unlock_sock_fast(ssk, slow);
+	}
 }
 
 static bool mptcp_timer_pending(struct sock *sk)
@@ -1549,7 +1540,6 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
 			int ret = 0;
 
 			prev_ssk = ssk;
-			__mptcp_flush_join_list(msk);
 			ssk = mptcp_subflow_get_send(msk);
 
 			/* First check. If the ssk has changed since
@@ -1954,7 +1944,6 @@ static bool __mptcp_move_skbs(struct mptcp_sock *msk)
 	unsigned int moved = 0;
 	bool ret, done;
 
-	mptcp_flush_join_list(msk);
 	do {
 		struct sock *ssk = mptcp_subflow_recv_lookup(msk);
 		bool slowpath;
@@ -2490,7 +2479,6 @@ static void mptcp_worker(struct work_struct *work)
 		goto unlock;
 
 	mptcp_check_data_fin_ack(sk);
-	mptcp_flush_join_list(msk);
 
 	mptcp_check_fastclose(msk);
 
@@ -2528,8 +2516,6 @@ static int __mptcp_init_sock(struct sock *sk)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
-	spin_lock_init(&msk->join_list_lock);
-
 	INIT_LIST_HEAD(&msk->conn_list);
 	INIT_LIST_HEAD(&msk->join_list);
 	INIT_LIST_HEAD(&msk->rtx_queue);
@@ -2703,7 +2689,6 @@ static void __mptcp_check_send_data_fin(struct sock *sk)
 		}
 	}
 
-	mptcp_flush_join_list(msk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *tcp_sk = mptcp_subflow_tcp_sock(subflow);
 
@@ -2736,12 +2721,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
 
 	might_sleep();
 
-	/* be sure to always acquire the join list lock, to sync vs
-	 * mptcp_finish_join().
-	 */
-	spin_lock_bh(&msk->join_list_lock);
-	list_splice_tail_init(&msk->join_list, &msk->conn_list);
-	spin_unlock_bh(&msk->join_list_lock);
+	/* join list will be eventually flushed (with rst) at sock lock release time*/
 	list_splice_init(&msk->conn_list, &conn_list);
 
 	sk_stop_timer(sk, &msk->sk.icsk_retransmit_timer);
@@ -2844,8 +2824,6 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	struct mptcp_subflow_context *subflow;
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
-	mptcp_do_flush_join_list(msk);
-
 	inet_sk_state_store(sk, TCP_CLOSE);
 
 	mptcp_for_each_subflow(msk, subflow) {
@@ -3076,6 +3054,8 @@ static void mptcp_release_cb(struct sock *sk)
 			flags |= BIT(MPTCP_PUSH_PENDING);
 		if (test_and_clear_bit(MPTCP_RETRANSMIT, &mptcp_sk(sk)->flags))
 			flags |= BIT(MPTCP_RETRANSMIT);
+		if (test_and_clear_bit(MPTCP_FLUSH_JOIN_LIST, &mptcp_sk(sk)->flags))
+			flags |= BIT(MPTCP_FLUSH_JOIN_LIST);
 		if (!flags)
 			break;
 
@@ -3088,6 +3068,8 @@ static void mptcp_release_cb(struct sock *sk)
 		 */
 
 		spin_unlock_bh(&sk->sk_lock.slock);
+		if (flags & BIT(MPTCP_FLUSH_JOIN_LIST))
+			__mptcp_flush_join_list(sk);
 		if (flags & BIT(MPTCP_PUSH_PENDING))
 			__mptcp_push_pending(sk, 0);
 		if (flags & BIT(MPTCP_RETRANSMIT))
@@ -3232,8 +3214,7 @@ bool mptcp_finish_join(struct sock *ssk)
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
 	struct sock *parent = (void *)msk;
-	struct socket *parent_sock;
-	bool ret;
+	bool ret = true;
 
 	pr_debug("msk=%p, subflow=%p", msk, subflow);
 
@@ -3246,35 +3227,38 @@ bool mptcp_finish_join(struct sock *ssk)
 	if (!msk->pm.server_side)
 		goto out;
 
-	if (!mptcp_pm_allow_new_subflow(msk)) {
-		subflow->reset_reason = MPTCP_RST_EPROHIBIT;
-		return false;
-	}
+	if (!mptcp_pm_allow_new_subflow(msk))
+		goto err_prohibited;
 
-	/* active connections are already on conn_list, and we can't acquire
-	 * msk lock here.
-	 * use the join list lock as synchronization point and double-check
-	 * msk status to avoid racing with __mptcp_destroy_sock()
+	if (WARN_ON_ONCE(!list_empty(&subflow->node)))
+		goto err_prohibited;
+
+	/* active connections are already on conn_list.
+	 * If we can't acquire msk socket lock here, let the release callback
+	 * handle it
 	 */
-	spin_lock_bh(&msk->join_list_lock);
-	ret = inet_sk_state_load(parent) == TCP_ESTABLISHED;
-	if (ret && !WARN_ON_ONCE(!list_empty(&subflow->node))) {
-		list_add_tail(&subflow->node, &msk->join_list);
+	mptcp_data_lock(parent);
+	if (!sock_owned_by_user(parent)) {
+		ret = __mptcp_finish_join(msk, ssk);
+		if (ret) {
+			sock_hold(ssk);
+			list_add_tail(&subflow->node, &msk->conn_list);
+		}
+	} else {
 		sock_hold(ssk);
+		list_add_tail(&subflow->node, &msk->join_list);
+		set_bit(MPTCP_FLUSH_JOIN_LIST, &msk->flags);
 	}
-	spin_unlock_bh(&msk->join_list_lock);
+	mptcp_data_unlock(parent);
+
 	if (!ret) {
+err_prohibited:
 		subflow->reset_reason = MPTCP_RST_EPROHIBIT;
 		return false;
 	}
 
-	/* attach to msk socket only after we are sure he will deal with us
-	 * at close time
-	 */
-	parent_sock = READ_ONCE(parent->sk_socket);
-	if (parent_sock && !ssk->sk_socket)
-		mptcp_sock_graft(ssk, parent_sock);
 	subflow->map_seq = READ_ONCE(msk->ack_seq);
+
 out:
 	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
 	return true;
@@ -3539,7 +3523,6 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
 		/* set ssk->sk_socket of accept()ed flows to mptcp socket.
 		 * This is needed so NOSPACE flag can be set from tcp stack.
 		 */
-		mptcp_flush_join_list(msk);
 		mptcp_for_each_subflow(msk, subflow) {
 			struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a8eb32e29215..962f3b6b6a1d 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -120,7 +120,7 @@
 #define MPTCP_CLEAN_UNA		7
 #define MPTCP_ERROR_REPORT	8
 #define MPTCP_RETRANSMIT	9
-#define MPTCP_WORK_SYNC_SETSOCKOPT 10
+#define MPTCP_FLUSH_JOIN_LIST	10
 #define MPTCP_CONNECTED		11
 
 static inline bool before64(__u64 seq1, __u64 seq2)
@@ -261,7 +261,6 @@ struct mptcp_sock {
 	u8		recvmsg_inq:1,
 			cork:1,
 			nodelay:1;
-	spinlock_t	join_list_lock;
 	struct work_struct work;
 	struct sk_buff	*ooo_last_skb;
 	struct rb_root	out_of_order_queue;
@@ -509,15 +508,6 @@ mptcp_subflow_get_mapped_dsn(const struct mptcp_subflow_context *subflow)
 	return subflow->map_seq + mptcp_subflow_get_map_offset(subflow);
 }
 
-static inline void mptcp_add_pending_subflow(struct mptcp_sock *msk,
-					     struct mptcp_subflow_context *subflow)
-{
-	sock_hold(mptcp_subflow_tcp_sock(subflow));
-	spin_lock_bh(&msk->join_list_lock);
-	list_add_tail(&subflow->node, &msk->join_list);
-	spin_unlock_bh(&msk->join_list_lock);
-}
-
 void mptcp_subflow_process_delegated(struct sock *ssk);
 
 static inline void mptcp_subflow_delegate(struct mptcp_subflow_context *subflow, int action)
@@ -682,7 +672,6 @@ void __mptcp_data_acked(struct sock *sk);
 void __mptcp_error_report(struct sock *sk);
 void mptcp_subflow_eof(struct sock *sk);
 bool mptcp_update_rcv_data_fin(struct mptcp_sock *msk, u64 data_fin_seq, bool use_64bit);
-void __mptcp_flush_join_list(struct mptcp_sock *msk);
 static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
 {
 	return READ_ONCE(msk->snd_data_fin_enable) &&
@@ -842,7 +831,7 @@ unsigned int mptcp_pm_get_subflows_max(struct mptcp_sock *msk);
 unsigned int mptcp_pm_get_local_addr_max(struct mptcp_sock *msk);
 
 void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk);
-void mptcp_sockopt_sync_all(struct mptcp_sock *msk);
+void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk);
 
 static inline struct mptcp_ext *mptcp_get_ext(const struct sk_buff *skb)
 {
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index aa3fcd86dbe2..dacf3cee0027 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -1285,27 +1285,15 @@ void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
 	}
 }
 
-void mptcp_sockopt_sync_all(struct mptcp_sock *msk)
+void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk)
 {
-	struct mptcp_subflow_context *subflow;
-	struct sock *sk = (struct sock *)msk;
-	u32 seq;
-
-	seq = sockopt_seq_reset(sk);
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 
-	mptcp_for_each_subflow(msk, subflow) {
-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		u32 sseq = READ_ONCE(subflow->setsockopt_seq);
+	msk_owned_by_me(msk);
 
-		if (sseq != msk->setsockopt_seq) {
-			__mptcp_sockopt_sync(msk, ssk);
-			WRITE_ONCE(subflow->setsockopt_seq, seq);
-		} else if (sseq != seq) {
-			WRITE_ONCE(subflow->setsockopt_seq, seq);
-		}
+	if (READ_ONCE(subflow->setsockopt_seq) != msk->setsockopt_seq) {
+		sync_socket_options(msk, ssk);
 
-		cond_resched();
+		subflow->setsockopt_seq = msk->setsockopt_seq;
 	}
-
-	msk->setsockopt_seq = seq;
 }
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index d861307f7efe..a1cd39f97659 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1441,7 +1441,8 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP);
 	mptcp_info2sockaddr(remote, &addr, ssk->sk_family);
 
-	mptcp_add_pending_subflow(msk, subflow);
+	sock_hold(ssk);
+	list_add_tail(&subflow->node, &msk->conn_list);
 	err = kernel_connect(sf, (struct sockaddr *)&addr, addrlen, O_NONBLOCK);
 	if (err && err != -EINPROGRESS)
 		goto failed_unlink;
@@ -1452,9 +1453,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	return err;
 
 failed_unlink:
-	spin_lock_bh(&msk->join_list_lock);
 	list_del(&subflow->node);
-	spin_unlock_bh(&msk->join_list_lock);
 	sock_put(mptcp_subflow_tcp_sock(subflow));
 
 failed:
-- 
2.34.1