The MPTCP protocol maintains an additional struct socket per connection, mainly to be able to easily use tcp-level struct socket operations. This leads to several side effects, beyond the quite unfortunate / confusing 'subflow' field name: - active and passive sockets behaviour is inconsistent: only active ones have a non-NULL msk->subflow, leading to different error handling and different error codes returned to the user-space in several places. - active sockets use an unneeded, larger amount of memory - passive sockets can't successfully go through the accept(), disconnect(), accept() sequence, see [1] for more details. The first 13 patches of this series are from Paolo and address all the above, finally getting rid of the blamed field: - The first patch is a minor clean-up. - In the next 11 patches, msk->subflow usage is systematically removed from the MPTCP protocol, replacing it with direct msk->first usage, eventually introducing new core helpers when needed. - The 13th patch finally disposes of the field, and it's the only patch in the series intended to produce functional changes. The last and 14th patch is from Kuniyuki and is not linked to the previous ones: it is a small clean-up to get rid of an unnecessary check in mptcp_init_sock(). [1] https://github.com/multipath-tcp/mptcp_net-next/issues/290 Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- Kuniyuki Iwashima (1): mptcp: Remove unnecessary test for __mptcp_init_sock() Paolo Abeni (13): mptcp: avoid unneeded mptcp_token_destroy() calls mptcp: avoid additional __inet_stream_connect() call mptcp: avoid subflow socket usage in mptcp_get_port() net: factor out inet{,6}_bind_sk helpers mptcp: mptcp: avoid additional indirection in mptcp_bind() net: factor out __inet_listen_sk() helper mptcp: avoid additional indirection in mptcp_listen() mptcp: avoid additional indirection in mptcp_poll() mptcp: avoid unneeded indirection in mptcp_stream_accept() mptcp: avoid additional indirection in sockopt mptcp: avoid ssock usage in mptcp_pm_nl_create_listen_socket() mptcp: change the mpc check helper to return a sk mptcp: get rid of msk->subflow include/net/inet_common.h | 2 + include/net/ipv6.h | 1 + net/ipv4/af_inet.c | 46 ++++++----- net/ipv6/af_inet6.c | 10 ++- net/mptcp/pm_netlink.c | 30 +++---- net/mptcp/protocol.c | 194 ++++++++++++++++++++++------------------------ net/mptcp/protocol.h | 15 ++-- net/mptcp/sockopt.c | 65 ++++++++-------- 8 files changed, 186 insertions(+), 177 deletions(-) --- base-commit: 80f9ad046052509d0eee9b72e11d0e8ae31b665f change-id: 20230811-upstream-net-next-20230811-mptcp-get-rid-of-msk-subflow-9ad15cd9cdcb Best regards, -- Matthieu Baerts <matthieu.baerts@tessares.net>
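[Note: the user-space sketch below is not part of the series; it only illustrates the passive-socket accept()/disconnect()/connect() sequence that the last patch and [1] refer to. The loopback setup, ports 10001/10002 and the reduced error handling are illustration choices, and it assumes a kernel built with CONFIG_MPTCP and net.mptcp.enabled=1; before this series a passive MPTCP socket could not complete this sequence successfully.]

/* Hypothetical illustration only: reuse a passively accepted MPTCP socket
 * through disconnect (connect(AF_UNSPEC)) and a new connect().
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* value from <linux/in.h>, for older userspace headers */
#endif

static void die(const char *msg)
{
	perror(msg);
	exit(1);
}

static int mptcp_listener(uint16_t port)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(port),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	if (fd < 0)
		die("socket");		/* no MPTCP support in this kernel? */
	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) || listen(fd, 1))
		die("bind/listen");
	return fd;
}

int main(void)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(10001),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	struct sockaddr unspec = { .sa_family = AF_UNSPEC };
	int srv1 = mptcp_listener(10001);
	int srv2 = mptcp_listener(10002);
	int client, accepted;

	/* on loopback the handshake completes against the listen backlog,
	 * so a blocking connect() returns before accept() is called
	 */
	client = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
	if (client < 0 || connect(client, (struct sockaddr *)&dst, sizeof(dst)))
		die("client connect");

	accepted = accept(srv1, NULL, NULL);	/* the passive MPTCP socket */
	if (accepted < 0)
		die("accept");

	/* disconnect the passive socket, then connect() it out again */
	if (connect(accepted, &unspec, sizeof(unspec)))
		die("disconnect");
	dst.sin_port = htons(10002);
	if (connect(accepted, (struct sockaddr *)&dst, sizeof(dst)))
		die("reconnect");

	printf("passive socket successfully reused after disconnect()\n");
	close(accepted);
	close(client);
	close(srv1);
	close(srv2);
	return 0;
}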
From: Paolo Abeni <pabeni@redhat.com> The MPTCP protocol currently clears the msk token both at connect() and listen() time. That is needed to deal with failing connect() calls that can create a new token while leaving the sk in TCP_CLOSE, SS_UNCONNECTED status and thus allowing later connect() and/or listen() calls. Let's deal with such failures explicitly, cleaning the token in a timely manner and avoiding the confusing early mptcp_token_destroy(). Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) if (IS_ERR(ssock)) return PTR_ERR(ssock); - mptcp_token_destroy(msk); inet_sk_state_store(sk, TCP_SYN_SENT); subflow = mptcp_subflow_ctx(ssock->sk); #ifdef CONFIG_TCP_MD5SIG @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) * subflow_finish_connect() */ if (unlikely(err && err != -EINPROGRESS)) { + /* avoid leaving a dangling token in an unconnected socket */ + mptcp_token_destroy(msk); inet_sk_state_store(sk, inet_sk_state_load(ssock->sk)); return err; } @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) goto unlock; } - mptcp_token_destroy(msk); inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); -- 2.40.1
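[Note: for context, the failing-connect() scenario described above corresponds to an ordinary user-space pattern: a connect() attempt fails, the socket falls back to TCP_CLOSE/SS_UNCONNECTED, and the application then reuses the very same socket for listen() or another connect(). The hypothetical sketch below is not part of the patch; it assumes nothing listens on the chosen loopback port, so the first attempt is refused, and a kernel with MPTCP enabled.]

/* Hypothetical sketch of the failing-connect() scenario above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* from <linux/in.h>, for older userspace headers */
#endif

int main(void)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(10003),	/* placeholder: assumed to be closed */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	if (fd < 0)
		return 1;

	/* the refused attempt leaves the socket in TCP_CLOSE/SS_UNCONNECTED;
	 * any MPTCP token created for it must not survive the failure
	 */
	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
		return 1;	/* unexpected: something was listening there */
	printf("connect() failed as expected: %s\n", strerror(errno));

	/* ...and the very same socket may legitimately become a listener
	 * (or try another connect()) afterwards
	 */
	if (listen(fd, 1))
		return 1;
	printf("listen() on the same socket succeeded\n");
	return 0;
}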
From: Paolo Abeni <pabeni@redhat.com> The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is __inet_stream_connect(). We are going to remove the first subflow socket soon, so avoid the additional indirection at connect time, calling directly into the sock-level connect() ops. The sk-level connect never returns -EINPROGRESS, so clean up the error path accordingly. Additionally, the ssk status on error is always TCP_CLOSE. Avoid unneeded access to the subflow sk state. No functional change intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 49 +++++++++++++++++++++++++++++++++---------------- 1 file changed, 33 insertions(+), 16 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) struct mptcp_sock *msk = mptcp_sk(sk); struct socket *ssock; int err = -EINVAL; + struct sock *ssk; ssock = __mptcp_nmpc_socket(msk); if (IS_ERR(ssock)) return PTR_ERR(ssock); inet_sk_state_store(sk, TCP_SYN_SENT); - subflow = mptcp_subflow_ctx(ssock->sk); + ssk = msk->first; + subflow = mptcp_subflow_ctx(ssk); #ifdef CONFIG_TCP_MD5SIG /* no MPTCP if MD5SIG is enabled on this socket or we may run out of * TCP option space. */ - if (rcu_access_pointer(tcp_sk(ssock->sk)->md5sig_info)) + if (rcu_access_pointer(tcp_sk(ssk)->md5sig_info)) mptcp_subflow_early_fallback(msk, subflow); #endif - if (subflow->request_mptcp && mptcp_token_new_connect(ssock->sk)) { - MPTCP_INC_STATS(sock_net(ssock->sk), MPTCP_MIB_TOKENFALLBACKINIT); + if (subflow->request_mptcp && mptcp_token_new_connect(ssk)) { + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_TOKENFALLBACKINIT); mptcp_subflow_early_fallback(msk, subflow); } if (likely(!__mptcp_check_fallback(msk))) @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) /* if reaching here via the fastopen/sendmsg path, the caller already * acquired the subflow socket lock, too.
*/ - if (msk->fastopening) - err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1); - else - err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK); - inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect; + if (!msk->fastopening) + lock_sock(ssk); + + /* the following mirrors closely a very small chunk of code from + * __inet_stream_connect() + */ + if (ssk->sk_state != TCP_CLOSE) + goto out; + + if (BPF_CGROUP_PRE_CONNECT_ENABLED(ssk)) { + err = ssk->sk_prot->pre_connect(ssk, uaddr, addr_len); + if (err) + goto out; + } + + err = ssk->sk_prot->connect(ssk, uaddr, addr_len); + if (err < 0) + goto out; + + inet_sk(sk)->defer_connect = inet_sk(ssk)->defer_connect; + +out: + if (!msk->fastopening) + release_sock(ssk); /* on successful connect, the msk state will be moved to established by * subflow_finish_connect() */ - if (unlikely(err && err != -EINPROGRESS)) { + if (unlikely(err)) { /* avoid leaving a dangling token in an unconnected socket */ mptcp_token_destroy(msk); - inet_sk_state_store(sk, inet_sk_state_load(ssock->sk)); + inet_sk_state_store(sk, TCP_CLOSE); return err; } - mptcp_copy_inaddrs(sk, ssock->sk); - - /* silence EINPROGRESS and let the caller inet_stream_connect - * handle the connection in progress - */ + mptcp_copy_inaddrs(sk, ssk); return 0; } -- 2.40.1
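[Note: as the comment removed in the hunk above says, the -EINPROGRESS that a non-blocking application sees is produced by the caller, inet_stream_connect(), on the MPTCP socket itself; this patch only stops the subflow-level call from generating (and then having to silence) its own -EINPROGRESS. The usual user-space pattern is therefore unchanged; a hypothetical sketch with a placeholder destination follows.]

/* Hypothetical sketch of the unchanged non-blocking connect() pattern on an
 * MPTCP socket. The destination (TEST-NET-1) and port are placeholders.
 */
#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),	/* placeholder */
	};
	struct pollfd pfd = { .events = POLLOUT };
	socklen_t len = sizeof(int);
	int err = 0, fd;

	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

	fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, IPPROTO_MPTCP);
	if (fd < 0)
		return 1;

	/* EINPROGRESS comes from inet_stream_connect() on the MPTCP socket;
	 * the first subflow is driven entirely inside the kernel
	 */
	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0) {
		printf("connected immediately\n");
		return 0;
	}
	if (errno != EINPROGRESS)
		return 1;

	pfd.fd = fd;
	if (poll(&pfd, 1, 5000) <= 0) {
		printf("connect timed out\n");
		return 1;
	}

	getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
	printf("connect completed: %s\n", err ? strerror(err) : "success");
	return 0;
}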
From: Paolo Abeni <pabeni@redhat.com> We are going to remove the first subflow socket soon, so avoid accessing it in mptcp_get_port(). Instead, access directly the first subflow sock. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static void mptcp_unhash(struct sock *sk) static int mptcp_get_port(struct sock *sk, unsigned short snum) { struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; - ssock = msk->subflow; - pr_debug("msk=%p, subflow=%p", msk, ssock); - if (WARN_ON_ONCE(!ssock)) + pr_debug("msk=%p, ssk=%p", msk, msk->first); + if (WARN_ON_ONCE(!msk->first)) return -EINVAL; - return inet_csk_get_port(ssock->sk, snum); + return inet_csk_get_port(msk->first, snum); } void mptcp_finish_connect(struct sock *ssk) -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is bind(). Factor out the helpers operating directly on the struct sock, to allow get rid of the above dependency in the next patch without duplicating the existing code. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- include/net/inet_common.h | 1 + include/net/ipv6.h | 1 + net/ipv4/af_inet.c | 8 ++++++-- net/ipv6/af_inet6.c | 10 +++++++--- 4 files changed, 15 insertions(+), 5 deletions(-) diff --git a/include/net/inet_common.h b/include/net/inet_common.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -XXX,XX +XXX,XX @@ int inet_shutdown(struct socket *sock, int how); int inet_listen(struct socket *sock, int backlog); void inet_sock_destruct(struct sock *sk); int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); +int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); /* Don't allocate port at this moment, defer to connect. */ #define BIND_FORCE_ADDRESS_NO_PORT (1 << 0) /* Grab and release socket lock. */ diff --git a/include/net/ipv6.h b/include/net/ipv6.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -XXX,XX +XXX,XX @@ void inet6_cleanup_sock(struct sock *sk); void inet6_sock_destruct(struct sock *sk); int inet6_release(struct socket *sock); int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); +int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); int inet6_getname(struct socket *sock, struct sockaddr *uaddr, int peer); int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -XXX,XX +XXX,XX @@ int inet_release(struct socket *sock) } EXPORT_SYMBOL(inet_release); -int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len) { - struct sock *sk = sock->sk; u32 flags = BIND_WITH_LOCK; int err; @@ -XXX,XX +XXX,XX @@ int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) return __inet_bind(sk, uaddr, addr_len, flags); } + +int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +{ + return inet_bind_sk(sock->sk, uaddr, addr_len); +} EXPORT_SYMBOL(inet_bind); int __inet_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len, diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -XXX,XX +XXX,XX @@ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len, goto out; } -/* bind for INET6 API */ -int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len) { - struct sock *sk = sock->sk; u32 flags = BIND_WITH_LOCK; const struct proto *prot; int err = 0; @@ -XXX,XX +XXX,XX @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) return __inet6_bind(sk, uaddr, addr_len, flags); } + +/* bind for INET6 API */ +int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +{ + return inet6_bind_sk(sock->sk, uaddr, addr_len); +} 
EXPORT_SYMBOL(inet6_bind); int inet6_release(struct socket *sock) -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> We are going to remove the first subflow socket soon, so avoid the additional indirection at bind() time. Instead, call the recently introduced helpers directly on the first subflow sock. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static struct proto mptcp_prot = { static int mptcp_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) { struct mptcp_sock *msk = mptcp_sk(sock->sk); + struct sock *ssk, *sk = sock->sk; struct socket *ssock; - int err; + int err = -EINVAL; - lock_sock(sock->sk); + lock_sock(sk); ssock = __mptcp_nmpc_socket(msk); if (IS_ERR(ssock)) { err = PTR_ERR(ssock); goto unlock; } - err = READ_ONCE(ssock->ops)->bind(ssock, uaddr, addr_len); + ssk = msk->first; + if (sk->sk_family == AF_INET) + err = inet_bind_sk(ssk, uaddr, addr_len); +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + else if (sk->sk_family == AF_INET6) + err = inet6_bind_sk(ssk, uaddr, addr_len); +#endif if (!err) - mptcp_copy_inaddrs(sock->sk, ssock->sk); + mptcp_copy_inaddrs(sk, ssk); unlock: - release_sock(sock->sk); + release_sock(sk); return err; } -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is inet_listen(). Factor out an helper operating directly on the (locked) struct sock, to allow get rid of the above dependency in the next patch without duplicating the existing code. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- include/net/inet_common.h | 1 + net/ipv4/af_inet.c | 38 ++++++++++++++++++++++---------------- 2 files changed, 23 insertions(+), 16 deletions(-) diff --git a/include/net/inet_common.h b/include/net/inet_common.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -XXX,XX +XXX,XX @@ int inet_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, int flags); int inet_shutdown(struct socket *sock, int how); int inet_listen(struct socket *sock, int backlog); +int __inet_listen_sk(struct sock *sk, int backlog); void inet_sock_destruct(struct sock *sk); int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -XXX,XX +XXX,XX @@ static int inet_autobind(struct sock *sk) return 0; } -/* - * Move a socket into listening state. - */ -int inet_listen(struct socket *sock, int backlog) +int __inet_listen_sk(struct sock *sk, int backlog) { - struct sock *sk = sock->sk; - unsigned char old_state; + unsigned char old_state = sk->sk_state; int err, tcp_fastopen; - lock_sock(sk); - - err = -EINVAL; - if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) - goto out; - - old_state = sk->sk_state; if (!((1 << old_state) & (TCPF_CLOSE | TCPF_LISTEN))) - goto out; + return -EINVAL; WRITE_ONCE(sk->sk_max_ack_backlog, backlog); /* Really, if the socket is already in listen state @@ -XXX,XX +XXX,XX @@ int inet_listen(struct socket *sock, int backlog) err = inet_csk_listen_start(sk); if (err) - goto out; + return err; + tcp_call_bpf(sk, BPF_SOCK_OPS_TCP_LISTEN_CB, 0, NULL); } - err = 0; + return 0; +} + +/* + * Move a socket into listening state. + */ +int inet_listen(struct socket *sock, int backlog) +{ + struct sock *sk = sock->sk; + int err = -EINVAL; + + lock_sock(sk); + + if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) + goto out; + + err = __inet_listen_sk(sk, backlog); out: release_sock(sk); -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> We are going to remove the first subflow socket soon, so avoid the additional indirection at listen() time. Instead, call the recently introduced helper directly on the first subflow sock. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *sk = sock->sk; struct socket *ssock; + struct sock *ssk; int err; pr_debug("msk=%p", msk); @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) goto unlock; } + ssk = msk->first; inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); - err = READ_ONCE(ssock->ops)->listen(ssock, backlog); - inet_sk_state_store(sk, inet_sk_state_load(ssock->sk)); + lock_sock(ssk); + err = __inet_listen_sk(ssk, backlog); + release_sock(ssk); + inet_sk_state_store(sk, inet_sk_state_load(ssk)); + if (!err) { sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); - mptcp_copy_inaddrs(sk, ssock->sk); - mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); + mptcp_copy_inaddrs(sk, ssk); + mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CREATED); } unlock: -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> We are going to remove the first subflow socket soon, so avoid the additional indirection at poll() time. Instead access directly the first subflow sock. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock, state = inet_sk_state_load(sk); pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags); if (state == TCP_LISTEN) { - struct socket *ssock = READ_ONCE(msk->subflow); + struct sock *ssk = READ_ONCE(msk->first); - if (WARN_ON_ONCE(!ssock || !ssock->sk)) + if (WARN_ON_ONCE(!ssk)) return 0; - return inet_csk_listen_poll(ssock->sk); + return inet_csk_listen_poll(ssk); } shutdown = READ_ONCE(sk->sk_shutdown); -- 2.40.1
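[Note: for context, the TCP_LISTEN branch above is the one exercised when an application polls a listening MPTCP socket for incoming connections; inet_csk_listen_poll() now runs on msk->first rather than going through the extra struct socket. A hypothetical user-space sketch of that pattern follows (placeholder port, minimal error handling); point any TCP or MPTCP client at the port to see POLLIN reported before the timeout.]

/* Hypothetical sketch: waiting for incoming connections on a listening
 * MPTCP socket with poll(). Port 10001 is a placeholder.
 */
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(10001),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	struct pollfd pfd = { .events = POLLIN };
	int one = 1;

	pfd.fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
	if (pfd.fd < 0)
		return 1;
	setsockopt(pfd.fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	if (bind(pfd.fd, (struct sockaddr *)&addr, sizeof(addr)) ||
	    listen(pfd.fd, 8))
		return 1;

	/* POLLIN on a listener means "a connection is ready to accept()" */
	if (poll(&pfd, 1, 10000) > 0 && (pfd.revents & POLLIN)) {
		int conn = accept(pfd.fd, NULL, NULL);

		if (conn >= 0)
			printf("accepted connection, fd %d\n", conn);
	} else {
		printf("no incoming connection within 10s\n");
	}
	return 0;
}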
From: Paolo Abeni <pabeni@redhat.com> We are going to remove the first subflow socket soon, so avoid the additional indirection at accept() time. Instead access directly the first subflow sock, and update mptcp_accept() to operate on it. This allows dropping a duplicated check in mptcp_accept(). No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 29 ++++++++++------------------- 1 file changed, 10 insertions(+), 19 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk) WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd); } -static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, +static struct sock *mptcp_accept(struct sock *ssk, int flags, int *err, bool kern) { - struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *listener; struct sock *newsk; - listener = READ_ONCE(msk->subflow); - if (WARN_ON_ONCE(!listener)) { - *err = -EINVAL; - return NULL; - } - - pr_debug("msk=%p, listener=%p", msk, mptcp_subflow_ctx(listener->sk)); - newsk = inet_csk_accept(listener->sk, flags, err, kern); + pr_debug("ssk=%p, listener=%p", ssk, mptcp_subflow_ctx(ssk)); + newsk = inet_csk_accept(ssk, flags, err, kern); if (!newsk) return NULL; - pr_debug("msk=%p, subflow is mptcp=%d", msk, sk_is_mptcp(newsk)); + pr_debug("newsk=%p, subflow is mptcp=%d", newsk, sk_is_mptcp(newsk)); if (sk_is_mptcp(newsk)) { struct mptcp_subflow_context *subflow; struct sock *new_mptcp_sock; @@ -XXX,XX +XXX,XX @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, } newsk = new_mptcp_sock; - MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEPASSIVEACK); + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEPASSIVEACK); } else { - MPTCP_INC_STATS(sock_net(sk), + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK); } @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, int flags, bool kern) { struct mptcp_sock *msk = mptcp_sk(sock->sk); - struct socket *ssock; - struct sock *newsk; + struct sock *ssk, *newsk; int err; pr_debug("msk=%p", msk); @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, /* Buggy applications can call accept on socket states other then LISTEN * but no need to allocate the first subflow just to error out. */ - ssock = READ_ONCE(msk->subflow); - if (!ssock) + ssk = READ_ONCE(msk->first); + if (!ssk) return -EINVAL; - newsk = mptcp_accept(sock->sk, flags, &err, kern); + newsk = mptcp_accept(ssk, flags, &err, kern); if (!newsk) return err; -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> The mptcp sockopt infrastructure unneedly uses the first subflow socket struct in a few spots. We are going to remove such field soon, so use directly the first subflow sock instead. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/sockopt.c | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, { struct sock *sk = (struct sock *)msk; struct socket *ssock; + struct sock *ssk; int ret; switch (optname) { @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, return PTR_ERR(ssock); } - ret = sock_setsockopt(ssock, SOL_SOCKET, optname, optval, optlen); + ssk = msk->first; + ret = sk_setsockopt(ssk, SOL_SOCKET, optname, optval, optlen); if (ret == 0) { if (optname == SO_REUSEPORT) - sk->sk_reuseport = ssock->sk->sk_reuseport; + sk->sk_reuseport = ssk->sk_reuseport; else if (optname == SO_REUSEADDR) - sk->sk_reuse = ssock->sk->sk_reuse; + sk->sk_reuse = ssk->sk_reuse; else if (optname == SO_BINDTODEVICE) - sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if; + sk->sk_bound_dev_if = ssk->sk_bound_dev_if; else if (optname == SO_BINDTOIFINDEX) - sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if; + sk->sk_bound_dev_if = ssk->sk_bound_dev_if; } release_sock(sk); return ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, struct sock *sk = (struct sock *)msk; int ret = -EOPNOTSUPP; struct socket *ssock; + struct sock *ssk; switch (optname) { case IPV6_V6ONLY: @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, return PTR_ERR(ssock); } - ret = tcp_setsockopt(ssock->sk, SOL_IPV6, optname, optval, optlen); + ssk = msk->first; + ret = tcp_setsockopt(ssk, SOL_IPV6, optname, optval, optlen); if (ret != 0) { release_sock(sk); return ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, switch (optname) { case IPV6_V6ONLY: - sk->sk_ipv6only = ssock->sk->sk_ipv6only; + sk->sk_ipv6only = ssk->sk_ipv6only; break; case IPV6_TRANSPARENT: - inet_sk(sk)->transparent = inet_sk(ssock->sk)->transparent; + inet_sk(sk)->transparent = inet_sk(ssk)->transparent; break; case IPV6_FREEBIND: - inet_sk(sk)->freebind = inet_sk(ssock->sk)->freebind; + inet_sk(sk)->freebind = inet_sk(ssk)->freebind; break; } @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o return PTR_ERR(ssock); } - issk = inet_sk(ssock->sk); + issk = inet_sk(msk->first); switch (optname) { case IP_FREEBIND: @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int { struct sock *sk = (struct sock *)msk; struct socket *ssock; - int ret; struct sock *ssk; + int ret; lock_sock(sk); ssk = msk->first; @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int goto out; } - ret = tcp_getsockopt(ssock->sk, level, optname, optval, optlen); + ret = tcp_getsockopt(ssk, level, optname, optval, optlen); out: release_sock(sk); -- 2.40.1
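[Note: the options handled by the first-subflow paths above are the ones an application typically sets on an MPTCP socket before bind()/listen()/connect(); they are applied to the not-yet-connected first subflow and mirrored back into the MPTCP-level socket. A hypothetical user-space sketch follows (placeholder port, minimal error handling, IPv6 chosen so that IPV6_V6ONLY is meaningful).]

/* Hypothetical sketch: socket options set on the MPTCP socket before the
 * connection exists, i.e. the path handled by the code above.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
	struct sockaddr_in6 addr = {
		.sin6_family = AF_INET6,
		.sin6_port = htons(10001),	/* placeholder port; address defaults to in6addr_any */
	};
	int one = 1, fd;

	fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_MPTCP);
	if (fd < 0)
		return 1;

	/* all three are forwarded to the first subflow and mirrored back
	 * into the MPTCP-level socket
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) ||
	    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) ||
	    setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one)))
		return 1;

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) || listen(fd, 8))
		return 1;

	printf("IPv6-only MPTCP listener ready on port 10001\n");
	/* ... accept() loop would go here ... */
	return 0;
}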
From: Paolo Abeni <pabeni@redhat.com> This is one of the few remaining spots actually manipulating the first subflow socket. We can leverage the recently introduced inet helpers to get rid of ssock there. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/pm_netlink.c | 24 +++++++++++++++--------- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -XXX,XX +XXX,XX @@ #include <linux/inet.h> #include <linux/kernel.h> #include <net/tcp.h> +#include <net/inet_common.h> #include <net/netns/generic.h> #include <net/mptcp.h> #include <net/genetlink.h> @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, bool is_ipv6 = sk->sk_family == AF_INET6; int addrlen = sizeof(struct sockaddr_in); struct sockaddr_storage addr; + struct sock *newsk, *ssk; struct socket *ssock; - struct sock *newsk; int backlog = 1024; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, if (entry->addr.family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); #endif - err = kernel_bind(ssock, (struct sockaddr *)&addr, addrlen); + ssk = mptcp_sk(newsk)->first; + if (ssk->sk_family == AF_INET) + err = inet_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + else if (ssk->sk_family == AF_INET6) + err = inet6_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); +#endif if (err) return err; inet_sk_state_store(newsk, TCP_LISTEN); - err = kernel_listen(ssock, backlog); - if (err) - return err; - - mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); - - return 0; + lock_sock(ssk); + err = __inet_listen_sk(ssk, backlog); + if (!err) + mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CREATED); + release_sock(ssk); + return err; } int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc) -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> After the previous patch the __mptcp_nmpc_socket helper is used only to ensure that the MPTCP socket is a suitable status - that is, the mptcp capable handshake is not started yet. Change the return value to the relevant subflow sock, to finally remove the last references to first subflow socket in the MPTCP stack. As a bonus, we can get rid of a few local variables in different functions. No functional change intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/pm_netlink.c | 8 +++----- net/mptcp/protocol.c | 40 +++++++++++++++------------------------- net/mptcp/protocol.h | 2 +- net/mptcp/sockopt.c | 43 +++++++++++++++++++------------------------ 4 files changed, 38 insertions(+), 55 deletions(-) diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, int addrlen = sizeof(struct sockaddr_in); struct sockaddr_storage addr; struct sock *newsk, *ssk; - struct socket *ssock; int backlog = 1024; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, &mptcp_keys[is_ipv6]); lock_sock(newsk); - ssock = __mptcp_nmpc_socket(mptcp_sk(newsk)); + ssk = __mptcp_nmpc_sk(mptcp_sk(newsk)); release_sock(newsk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); mptcp_info2sockaddr(&entry->addr, &addr, entry->addr.family); #if IS_ENABLED(CONFIG_MPTCP_IPV6) if (entry->addr.family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); #endif - ssk = mptcp_sk(newsk)->first; if (ssk->sk_family == AF_INET) err = inet_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); #if IS_ENABLED(CONFIG_MPTCP_IPV6) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) /* If the MPC handshake is not started, returns the first subflow, * eventually allocating it. */ -struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) +struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk) { struct sock *sk = (struct sock *)msk; int ret; @@ -XXX,XX +XXX,XX @@ struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) return ERR_PTR(-EINVAL); - if (!msk->subflow) { - if (msk->first) - return ERR_PTR(-EINVAL); - + if (!msk->first) { ret = __mptcp_socket_create(msk); if (ret) return ERR_PTR(ret); @@ -XXX,XX +XXX,XX @@ struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) mptcp_sockopt_sync(msk, msk->first); } - return msk->subflow; + return msk->first; } static void mptcp_drop(struct sock *sk, struct sk_buff *skb) @@ -XXX,XX +XXX,XX @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, { unsigned int saved_flags = msg->msg_flags; struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, * fastopen attempt, no need to check for additional subflow status. 
*/ if (msg->msg_flags & MSG_FASTOPEN) { - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); } if (!msk->first) return -EINVAL; @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) { struct mptcp_subflow_context *subflow; struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; int err = -EINVAL; struct sock *ssk; - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); inet_sk_state_store(sk, TCP_SYN_SENT); - ssk = msk->first; subflow = mptcp_subflow_ctx(ssk); #ifdef CONFIG_TCP_MD5SIG /* no MPTCP if MD5SIG is enabled on this socket or we may run out of @@ -XXX,XX +XXX,XX @@ static int mptcp_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) { struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *ssk, *sk = sock->sk; - struct socket *ssock; int err = -EINVAL; lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - err = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + err = PTR_ERR(ssk); goto unlock; } - ssk = msk->first; if (sk->sk_family == AF_INET) err = inet_bind_sk(ssk, uaddr, addr_len); #if IS_ENABLED(CONFIG_MPTCP_IPV6) @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) { struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *sk = sock->sk; - struct socket *ssock; struct sock *ssk; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) goto unlock; - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - err = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + err = PTR_ERR(ssk); goto unlock; } - ssk = msk->first; inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -XXX,XX +XXX,XX @@ void __mptcp_subflow_send_ack(struct sock *ssk); void mptcp_subflow_reset(struct sock *ssk); void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk); void mptcp_sock_graft(struct sock *sk, struct socket *parent); -struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk); +struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk); bool __mptcp_close(struct sock *sk, long timeout); void mptcp_cancel_work(struct sock *sk); void __mptcp_unaccepted_force_close(struct sock *sk); diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, sockptr_t optval, unsigned int optlen) { struct sock *sk = (struct sock *)msk; - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, case SO_BINDTODEVICE: case SO_BINDTOIFINDEX: lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - ssk = msk->first; ret = sk_setsockopt(ssk, SOL_SOCKET, optname, optval, optlen); if (ret == 0) { if (optname == SO_REUSEPORT) @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, { struct 
sock *sk = (struct sock *)msk; int ret = -EOPNOTSUPP; - struct socket *ssock; struct sock *ssk; switch (optname) { @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, case IPV6_TRANSPARENT: case IPV6_FREEBIND: lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - ssk = msk->first; ret = tcp_setsockopt(ssk, SOL_IPV6, optname, optval, optlen); if (ret != 0) { release_sock(sk); @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o { struct sock *sk = (struct sock *)msk; struct inet_sock *issk; - struct socket *ssock; + struct sock *ssk; int err; err = ip_setsockopt(sk, SOL_IP, optname, optval, optlen); @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - issk = inet_sk(msk->first); + issk = inet_sk(ssk); switch (optname) { case IP_FREEBIND: @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int sockptr_t optval, unsigned int optlen) { struct sock *sk = (struct sock *)msk; - struct socket *sock; + struct sock *ssk; int ret; /* Limit to first subflow, before the connection establishment */ lock_sock(sk); - sock = __mptcp_nmpc_socket(msk); - if (IS_ERR(sock)) { - ret = PTR_ERR(sock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + ret = PTR_ERR(ssk); goto unlock; } - ret = tcp_setsockopt(sock->sk, level, optname, optval, optlen); + ret = tcp_setsockopt(ssk, level, optname, optval, optlen); unlock: release_sock(sk); @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int char __user *optval, int __user *optlen) { struct sock *sk = (struct sock *)msk; - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int goto out; } - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - ret = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + ret = PTR_ERR(ssk); goto out; } -- 2.40.1
From: Paolo Abeni <pabeni@redhat.com> Such field is now unused just as a flag to control the first subflow deletion at close() time. Introduce a new bit flag for that and finally drop the mentioned field. As an intended side effect, now the first subflow sock is not freed before close() even for passive sockets. The msk has no open/active subflows if the first one is closed and the subflow list is singular, update accordingly the state check in mptcp_stream_accept(). Among other benefits, the subflow removal, reduces the amount of memory used on the client side for each mptcp connection, allows passive sockets to go through successful accept()/disconnect()/connect() and makes return error code consistent for failing both passive and active sockets. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/290 Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 25 ++++++------------------- net/mptcp/protocol.h | 13 ++++++------- 2 files changed, 12 insertions(+), 26 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) msk->scaling_ratio = tcp_sk(ssock->sk)->scaling_ratio; WRITE_ONCE(msk->first, ssock->sk); - WRITE_ONCE(msk->subflow, ssock); subflow = mptcp_subflow_ctx(ssock->sk); list_add(&subflow->node, &msk->conn_list); sock_hold(ssock->sk); @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) /* This is the first subflow, always with id 0 */ subflow->local_id_valid = 1; mptcp_sock_graft(msk->first, sk->sk_socket); + iput(SOCK_INODE(ssock)); return 0; } @@ -XXX,XX +XXX,XX @@ static struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk) return min_stale_count > 1 ? 
backup : NULL; } -static void mptcp_dispose_initial_subflow(struct mptcp_sock *msk) -{ - if (msk->subflow) { - iput(SOCK_INODE(msk->subflow)); - WRITE_ONCE(msk->subflow, NULL); - } -} - bool __mptcp_retransmit_pending_data(struct sock *sk) { struct mptcp_data_frag *cur, *rtx_head; @@ -XXX,XX +XXX,XX @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk, goto out_release; } - dispose_it = !msk->subflow || ssk != msk->subflow->sk; + dispose_it = msk->free_first || ssk != msk->first; if (dispose_it) list_del(&subflow->node); @@ -XXX,XX +XXX,XX @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk, * disconnect should never fail */ WARN_ON_ONCE(tcp_disconnect(ssk, 0)); - msk->subflow->state = SS_UNCONNECTED; mptcp_subflow_ctx_reset(subflow); release_sock(ssk); @@ -XXX,XX +XXX,XX @@ struct sock *mptcp_sk_clone_init(const struct sock *sk, msk = mptcp_sk(nsk); msk->local_key = subflow_req->local_key; msk->token = subflow_req->token; - WRITE_ONCE(msk->subflow, NULL); msk->in_accept_queue = 1; WRITE_ONCE(msk->fully_established, false); if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD) @@ -XXX,XX +XXX,XX @@ static void mptcp_destroy(struct sock *sk) { struct mptcp_sock *msk = mptcp_sk(sk); - /* clears msk->subflow, allowing the following to close - * even the initial subflow - */ - mptcp_dispose_initial_subflow(msk); + /* allow the following to close even the initial subflow */ + msk->free_first = 1; mptcp_destroy_common(msk, 0); sk_sockets_allocated_dec(sk); } @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, /* Do late cleanup for the first subflow as necessary. Also * deal with bad peers not doing a complete shutdown. */ - if (msk->first && - unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) { + if (unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) { __mptcp_close_ssk(newsk, msk->first, mptcp_subflow_ctx(msk->first), 0); - if (unlikely(list_empty(&msk->conn_list))) + if (unlikely(list_is_singular(&msk->conn_list))) inet_sk_state_store(newsk, TCP_CLOSE); } } diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -XXX,XX +XXX,XX @@ struct mptcp_sock { cork:1, nodelay:1, fastopening:1, - in_accept_queue:1; + in_accept_queue:1, + free_first:1; struct work_struct work; struct sk_buff *ooo_last_skb; struct rb_root out_of_order_queue; @@ -XXX,XX +XXX,XX @@ struct mptcp_sock { struct list_head rtx_queue; struct mptcp_data_frag *first_pending; struct list_head join_list; - struct socket *subflow; /* outgoing connect/listener/!mp_capable - * The mptcp ops can safely dereference, using suitable - * ONCE annotation, the subflow outside the socket - * lock as such sock is freed after close(). - */ - struct sock *first; + struct sock *first; /* The mptcp ops can safely dereference, using suitable + * ONCE annotation, the subflow outside the socket + * lock as such sock is freed after close(). + */ struct mptcp_pm_data pm; struct { u32 space; /* bytes copied in last measurement window */ -- 2.40.1
From: Kuniyuki Iwashima <kuniyu@amazon.com> __mptcp_init_sock() always returns 0 because mptcp_init_sock() used to return the value directly. But after commit 18b683bff89d ("mptcp: queue data for mptcp level retransmission"), __mptcp_init_sock() need not return value anymore. Let's remove the unnecessary test for __mptcp_init_sock() and make it return void. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> --- net/mptcp/protocol.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static void mptcp_worker(struct work_struct *work) sock_put(sk); } -static int __mptcp_init_sock(struct sock *sk) +static void __mptcp_init_sock(struct sock *sk) { struct mptcp_sock *msk = mptcp_sk(sk); @@ -XXX,XX +XXX,XX @@ static int __mptcp_init_sock(struct sock *sk) /* re-use the csk retrans timer for MPTCP-level retrans */ timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0); timer_setup(&sk->sk_timer, mptcp_timeout_timer, 0); - - return 0; } static void mptcp_ca_reset(struct sock *sk) @@ -XXX,XX +XXX,XX @@ static void mptcp_ca_reset(struct sock *sk) static int mptcp_init_sock(struct sock *sk) { struct net *net = sock_net(sk); - int ret; - ret = __mptcp_init_sock(sk); - if (ret) - return ret; + __mptcp_init_sock(sk); if (!mptcp_is_enabled(net)) return -ENOPROTOOPT; -- 2.40.1
The mptcp protocol maintains an additional struct socket per connection, mainly to be able to easily use tcp-level struct socket operations. That leads to several ill side effects, beyond the quite unfortunate/confusing field name: - active and passive sockets behavior is inconsistent, as only active ones have a non-NULL msk->subflow, leading to different error handling (and different error codes returned to the user-space) in several places. - active sockets use an unneeded, larger amount of memory - passive sockets can't successfully go through accept()/disconnect()/accept() This series addresses all the above, finally getting rid of the blamed field. The first patch is a minor clean-up; in the next 11 patches msk->subflow usage is systematically removed from the mptcp protocol, replacing it with direct msk->first usage, eventually introducing new core helpers as needed. The final patch disposes of the field, and it's the only patch in the series intended to produce functional changes. v1 -> v2: - dropped first patch (already applied) - a bunch of typos and comments, the only patches with code changes are 7/13, 9/13, 13/13, see the individual changelog for the details. Paolo Abeni (13): mptcp: avoid unneeded mptcp_token_destroy() calls mptcp: avoid additional __inet_stream_connect() call mptcp: avoid subflow socket usage in mptcp_get_port() net: factor out inet{,6}_bind_sk helpers mptcp: mptcp: avoid additional indirection in mptcp_bind() net: factor out __inet_listen_sk() helper mptcp: avoid additional indirection in mptcp_listen() mptcp: avoid additional indirection in mptcp_poll() mptcp: avoid unneeded indirection in mptcp_stream_accept() mptcp: avoid additional indirection in sockopt mptcp: avoid ssock usage in mptcp_pm_nl_create_listen_socket() mptcp: change the mpc check helper to return a sk mptcp: get rid of msk->subflow include/net/inet_common.h | 2 + include/net/ipv6.h | 1 + net/ipv4/af_inet.c | 46 ++++++---- net/ipv6/af_inet6.c | 10 ++- net/mptcp/pm_netlink.c | 30 ++--- net/mptcp/protocol.c | 177 +++++++++++++++++++------------------- net/mptcp/protocol.h | 15 ++-- net/mptcp/sockopt.c | 65 +++++++------- 8 files changed, 181 insertions(+), 165 deletions(-) -- 2.41.0
The mptcp protocol currently clears the msk token both at connect() and listen() time. That was necessary before the mptcp protocol gained a full disconnect implementation, but after commit b29fcfb54cd7 ("mptcp: full disconnect implementation") such calls are no more necessary and a bit confusing. Just drop them. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/protocol.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) if (IS_ERR(ssock)) return PTR_ERR(ssock); - mptcp_token_destroy(msk); inet_sk_state_store(sk, TCP_SYN_SENT); subflow = mptcp_subflow_ctx(ssock->sk); #ifdef CONFIG_TCP_MD5SIG @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) goto unlock; } - mptcp_token_destroy(msk); inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); -- 2.41.0
The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is __inet_stream_connect(). We are going to remove the first subflow socket soon, so avoid the additional indirection via at connect time, calling directly into the sock-level connect() ops. No functional change intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- v1 -> v2: - added comment pointing to __inet_stream_connect() similar code (Mattbe) - fixed typo in the commit message (Mat) --- net/mptcp/protocol.c | 43 ++++++++++++++++++++++++++++++++----------- 1 file changed, 32 insertions(+), 11 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) struct mptcp_sock *msk = mptcp_sk(sk); struct socket *ssock; int err = -EINVAL; + struct sock *ssk; ssock = __mptcp_nmpc_socket(msk); if (IS_ERR(ssock)) return PTR_ERR(ssock); inet_sk_state_store(sk, TCP_SYN_SENT); - subflow = mptcp_subflow_ctx(ssock->sk); + ssk = msk->first; + subflow = mptcp_subflow_ctx(ssk); #ifdef CONFIG_TCP_MD5SIG /* no MPTCP if MD5SIG is enabled on this socket or we may run out of * TCP option space. */ - if (rcu_access_pointer(tcp_sk(ssock->sk)->md5sig_info)) + if (rcu_access_pointer(tcp_sk(ssk)->md5sig_info)) mptcp_subflow_early_fallback(msk, subflow); #endif - if (subflow->request_mptcp && mptcp_token_new_connect(ssock->sk)) { - MPTCP_INC_STATS(sock_net(ssock->sk), MPTCP_MIB_TOKENFALLBACKINIT); + if (subflow->request_mptcp && mptcp_token_new_connect(ssk)) { + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_TOKENFALLBACKINIT); mptcp_subflow_early_fallback(msk, subflow); } if (likely(!__mptcp_check_fallback(msk))) @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) /* if reaching here via the fastopen/sendmsg path, the caller already * acquired the subflow socket lock, too. */ - if (msk->fastopening) - err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1); - else - err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK); - inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect; + if (!msk->fastopening) + lock_sock(ssk); + + /* the following mirrors closely a very small chunk of code from + * __inet_stream_connect() + */ + if (ssk->sk_state != TCP_CLOSE) + goto out; + + if (BPF_CGROUP_PRE_CONNECT_ENABLED(ssk)) { + err = ssk->sk_prot->pre_connect(ssk, uaddr, addr_len); + if (err) + goto out; + } + + err = ssk->sk_prot->connect(ssk, uaddr, addr_len); + if (err < 0) + goto out; + + inet_sk(sk)->defer_connect = inet_sk(ssk)->defer_connect; + +out: + if (!msk->fastopening) + release_sock(ssk); /* on successful connect, the msk state will be moved to established by * subflow_finish_connect() */ if (unlikely(err && err != -EINPROGRESS)) { - inet_sk_state_store(sk, inet_sk_state_load(ssock->sk)); + inet_sk_state_store(sk, inet_sk_state_load(ssk)); return err; } - mptcp_copy_inaddrs(sk, ssock->sk); + mptcp_copy_inaddrs(sk, ssk); /* silence EINPROGRESS and let the caller inet_stream_connect * handle the connection in progress -- 2.41.0
We are going to remove the first subflow socket soon, so avoid accessing it in mptcp_get_port(). Instead, access directly the first subflow sock. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/protocol.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static void mptcp_unhash(struct sock *sk) static int mptcp_get_port(struct sock *sk, unsigned short snum) { struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; - ssock = msk->subflow; - pr_debug("msk=%p, subflow=%p", msk, ssock); - if (WARN_ON_ONCE(!ssock)) + pr_debug("msk=%p, ssk=%p", msk, msk->first); + if (WARN_ON_ONCE(!msk->first)) return -EINVAL; - return inet_csk_get_port(ssock->sk, snum); + return inet_csk_get_port(msk->first, snum); } void mptcp_finish_connect(struct sock *ssk) -- 2.41.0
The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is bind(). Factor out the helpers operating directly on the struct sock, to allow get rid of the above dependency in the next patch without duplicating the existing code. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- include/net/inet_common.h | 1 + include/net/ipv6.h | 1 + net/ipv4/af_inet.c | 8 ++++++-- net/ipv6/af_inet6.c | 10 +++++++--- 4 files changed, 15 insertions(+), 5 deletions(-) diff --git a/include/net/inet_common.h b/include/net/inet_common.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -XXX,XX +XXX,XX @@ int inet_shutdown(struct socket *sock, int how); int inet_listen(struct socket *sock, int backlog); void inet_sock_destruct(struct sock *sk); int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); +int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); /* Don't allocate port at this moment, defer to connect. */ #define BIND_FORCE_ADDRESS_NO_PORT (1 << 0) /* Grab and release socket lock. */ diff --git a/include/net/ipv6.h b/include/net/ipv6.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -XXX,XX +XXX,XX @@ void inet6_cleanup_sock(struct sock *sk); void inet6_sock_destruct(struct sock *sk); int inet6_release(struct socket *sock); int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); +int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); int inet6_getname(struct socket *sock, struct sockaddr *uaddr, int peer); int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -XXX,XX +XXX,XX @@ int inet_release(struct socket *sock) } EXPORT_SYMBOL(inet_release); -int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len) { - struct sock *sk = sock->sk; u32 flags = BIND_WITH_LOCK; int err; @@ -XXX,XX +XXX,XX @@ int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) return __inet_bind(sk, uaddr, addr_len, flags); } + +int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +{ + return inet_bind_sk(sock->sk, uaddr, addr_len); +} EXPORT_SYMBOL(inet_bind); int __inet_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len, diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -XXX,XX +XXX,XX @@ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len, goto out; } -/* bind for INET6 API */ -int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len) { - struct sock *sk = sock->sk; u32 flags = BIND_WITH_LOCK; const struct proto *prot; int err = 0; @@ -XXX,XX +XXX,XX @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) return __inet6_bind(sk, uaddr, addr_len, flags); } + +/* bind for INET6 API */ +int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +{ + return inet6_bind_sk(sock->sk, uaddr, addr_len); +} EXPORT_SYMBOL(inet6_bind); int inet6_release(struct socket *sock) -- 2.41.0
We are going to remove the first subflow socket soon, so avoid the additional indirection via at bind() time. Instead call directly the recently introduced helpers on the first subflow sock. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/protocol.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static struct proto mptcp_prot = { static int mptcp_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) { struct mptcp_sock *msk = mptcp_sk(sock->sk); + struct sock *ssk, *sk = sock->sk; struct socket *ssock; - int err; + int err = -EINVAL; - lock_sock(sock->sk); + lock_sock(sk); ssock = __mptcp_nmpc_socket(msk); if (IS_ERR(ssock)) { err = PTR_ERR(ssock); goto unlock; } - err = ssock->ops->bind(ssock, uaddr, addr_len); + ssk = msk->first; + if (sk->sk_family == AF_INET) + err = inet_bind_sk(ssk, uaddr, addr_len); +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + else if (sk->sk_family == AF_INET6) + err = inet6_bind_sk(ssk, uaddr, addr_len); +#endif if (!err) - mptcp_copy_inaddrs(sock->sk, ssock->sk); + mptcp_copy_inaddrs(sk, ssk); unlock: - release_sock(sock->sk); + release_sock(sk); return err; } -- 2.41.0
The mptcp protocol maintains an additional socket just to easily invoke a few stream operations on the first subflow. One of them is inet_listen(). Factor out an helper operating directly on the (locked) struct sock, to allow get rid of the above dependency in the next patch without duplicating the existing code. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> -- v1 -> v2: - initialize 'err' at definition time (Mat) --- include/net/inet_common.h | 1 + net/ipv4/af_inet.c | 38 ++++++++++++++++++++++---------------- 2 files changed, 23 insertions(+), 16 deletions(-) diff --git a/include/net/inet_common.h b/include/net/inet_common.h index XXXXXXX..XXXXXXX 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -XXX,XX +XXX,XX @@ int inet_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, int flags); int inet_shutdown(struct socket *sock, int how); int inet_listen(struct socket *sock, int backlog); +int __inet_listen_sk(struct sock *sk, int backlog); void inet_sock_destruct(struct sock *sk); int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len); int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len); diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index XXXXXXX..XXXXXXX 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -XXX,XX +XXX,XX @@ static int inet_autobind(struct sock *sk) return 0; } -/* - * Move a socket into listening state. - */ -int inet_listen(struct socket *sock, int backlog) +int __inet_listen_sk(struct sock *sk, int backlog) { - struct sock *sk = sock->sk; - unsigned char old_state; + unsigned char old_state = sk->sk_state; int err, tcp_fastopen; - lock_sock(sk); - - err = -EINVAL; - if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) - goto out; - - old_state = sk->sk_state; if (!((1 << old_state) & (TCPF_CLOSE | TCPF_LISTEN))) - goto out; + return -EINVAL; WRITE_ONCE(sk->sk_max_ack_backlog, backlog); /* Really, if the socket is already in listen state @@ -XXX,XX +XXX,XX @@ int inet_listen(struct socket *sock, int backlog) err = inet_csk_listen_start(sk); if (err) - goto out; + return err; + tcp_call_bpf(sk, BPF_SOCK_OPS_TCP_LISTEN_CB, 0, NULL); } - err = 0; + return 0; +} + +/* + * Move a socket into listening state. + */ +int inet_listen(struct socket *sock, int backlog) +{ + struct sock *sk = sock->sk; + int err = -EINVAL; + + lock_sock(sk); + + if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) + goto out; + + err = __inet_listen_sk(sk, backlog); out: release_sock(sk); -- 2.41.0
We are going to remove the first subflow socket soon, so avoid the additional indirection at listen() time. Instead, call the recently introduced helper directly on the first subflow sock. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- v1 -> v2: - dropped msk backlog init chunk (Mat) - fixed commit message typo (Mat) --- net/mptcp/protocol.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *sk = sock->sk; struct socket *ssock; + struct sock *ssk; int err; pr_debug("msk=%p", msk); @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) goto unlock; } + ssk = msk->first; inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); - err = ssock->ops->listen(ssock, backlog); - inet_sk_state_store(sk, inet_sk_state_load(ssock->sk)); + lock_sock(ssk); + err = __inet_listen_sk(ssk, backlog); + release_sock(ssk); + inet_sk_state_store(sk, inet_sk_state_load(ssk)); + if (!err) { sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); - mptcp_copy_inaddrs(sk, ssock->sk); - mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); + mptcp_copy_inaddrs(sk, ssk); + mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CREATED); } unlock: -- 2.41.0
We are going to remove the first subflow socket soon, so avoid the additional indirection at poll() time. Instead, directly access the first subflow sock. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/protocol.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock, state = inet_sk_state_load(sk); pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags); if (state == TCP_LISTEN) { - struct socket *ssock = READ_ONCE(msk->subflow); + struct sock *ssk = READ_ONCE(msk->first); - if (WARN_ON_ONCE(!ssock || !ssock->sk)) + if (WARN_ON_ONCE(!ssk)) return 0; - return inet_csk_listen_poll(ssock->sk); + return inet_csk_listen_poll(ssk); } shutdown = READ_ONCE(sk->sk_shutdown); -- 2.41.0
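Since poll() runs without the msk socket lock, the lockless READ_ONCE(msk->first) above pairs with the WRITE_ONCE() performed when the first subflow is created. A condensed sketch of the listener branch, with a hypothetical function name:

	/* Sketch of the listener branch in mptcp_poll() after this patch:
	 * msk->first is read locklessly, hence the READ_ONCE() pairing with
	 * the WRITE_ONCE() done at first-subflow creation time.
	 */
	static __poll_t example_listen_poll(struct mptcp_sock *msk)
	{
		struct sock *ssk = READ_ONCE(msk->first);

		if (WARN_ON_ONCE(!ssk))
			return 0;
		return inet_csk_listen_poll(ssk);
	}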
We are going to remove the first subflow socket soon, so avoid the additional indirection at accept() time. Instead, directly access the first subflow sock, and update mptcp_accept() to operate on it. This allows dropping a duplicated check in mptcp_accept(). No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- v1 -> v2: - changed mptcp_accept() to better clarify why we could remove the 2nd msk->first check there. --- net/mptcp/protocol.c | 29 ++++++++++------------------- 1 file changed, 10 insertions(+), 19 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk) WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd); } -static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, +static struct sock *mptcp_accept(struct sock *ssk, int flags, int *err, bool kern) { - struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *listener; struct sock *newsk; - listener = READ_ONCE(msk->subflow); - if (WARN_ON_ONCE(!listener)) { - *err = -EINVAL; - return NULL; - } - - pr_debug("msk=%p, listener=%p", msk, mptcp_subflow_ctx(listener->sk)); - newsk = inet_csk_accept(listener->sk, flags, err, kern); + pr_debug("ssk=%p, listener=%p", ssk, mptcp_subflow_ctx(ssk)); + newsk = inet_csk_accept(ssk, flags, err, kern); if (!newsk) return NULL; - pr_debug("msk=%p, subflow is mptcp=%d", msk, sk_is_mptcp(newsk)); + pr_debug("newsk=%p, subflow is mptcp=%d", newsk, sk_is_mptcp(newsk)); if (sk_is_mptcp(newsk)) { struct mptcp_subflow_context *subflow; struct sock *new_mptcp_sock; @@ -XXX,XX +XXX,XX @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, } newsk = new_mptcp_sock; - MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEPASSIVEACK); + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEPASSIVEACK); } else { - MPTCP_INC_STATS(sock_net(sk), + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK); } @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, int flags, bool kern) { struct mptcp_sock *msk = mptcp_sk(sock->sk); - struct socket *ssock; - struct sock *newsk; + struct sock *ssk, *newsk; int err; pr_debug("msk=%p", msk); @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, /* Buggy applications can call accept on socket states other then LISTEN * but no need to allocate the first subflow just to error out. */ - ssock = READ_ONCE(msk->subflow); - if (!ssock) + ssk = READ_ONCE(msk->first); + if (!ssk) return -EINVAL; - newsk = mptcp_accept(sock->sk, flags, &err, kern); + newsk = mptcp_accept(ssk, flags, &err, kern); if (!newsk) return err; -- 2.41.0
The mptcp sockopt infrastructure needlessly uses the first subflow socket struct in a few spots. We are going to remove that field soon, so directly use the first subflow sock instead. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/sockopt.c | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, { struct sock *sk = (struct sock *)msk; struct socket *ssock; + struct sock *ssk; int ret; switch (optname) { @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, return PTR_ERR(ssock); } - ret = sock_setsockopt(ssock, SOL_SOCKET, optname, optval, optlen); + ssk = msk->first; + ret = sk_setsockopt(ssk, SOL_SOCKET, optname, optval, optlen); if (ret == 0) { if (optname == SO_REUSEPORT) - sk->sk_reuseport = ssock->sk->sk_reuseport; + sk->sk_reuseport = ssk->sk_reuseport; else if (optname == SO_REUSEADDR) - sk->sk_reuse = ssock->sk->sk_reuse; + sk->sk_reuse = ssk->sk_reuse; else if (optname == SO_BINDTODEVICE) - sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if; + sk->sk_bound_dev_if = ssk->sk_bound_dev_if; else if (optname == SO_BINDTOIFINDEX) - sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if; + sk->sk_bound_dev_if = ssk->sk_bound_dev_if; } release_sock(sk); return ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, struct sock *sk = (struct sock *)msk; int ret = -EOPNOTSUPP; struct socket *ssock; + struct sock *ssk; switch (optname) { case IPV6_V6ONLY: @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, return PTR_ERR(ssock); } - ret = tcp_setsockopt(ssock->sk, SOL_IPV6, optname, optval, optlen); + ssk = msk->first; + ret = tcp_setsockopt(ssk, SOL_IPV6, optname, optval, optlen); if (ret != 0) { release_sock(sk); return ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, switch (optname) { case IPV6_V6ONLY: - sk->sk_ipv6only = ssock->sk->sk_ipv6only; + sk->sk_ipv6only = ssk->sk_ipv6only; break; case IPV6_TRANSPARENT: - inet_sk(sk)->transparent = inet_sk(ssock->sk)->transparent; + inet_sk(sk)->transparent = inet_sk(ssk)->transparent; break; case IPV6_FREEBIND: - inet_sk(sk)->freebind = inet_sk(ssock->sk)->freebind; + inet_sk(sk)->freebind = inet_sk(ssk)->freebind; break; } @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o return PTR_ERR(ssock); } - issk = inet_sk(ssock->sk); + issk = inet_sk(msk->first); switch (optname) { case IP_FREEBIND: @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int { struct sock *sk = (struct sock *)msk; struct socket *ssock; - int ret; struct sock *ssk; + int ret; lock_sock(sk); ssk = msk->first; @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int goto out; } - ret = tcp_getsockopt(ssock->sk, level, optname, optval, optlen); + ret = tcp_getsockopt(ssk, level, optname, optval, optlen); out: release_sock(sk); -- 2.41.0
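The recurring pattern in this patch is: apply the option directly on the first subflow sock, then mirror the resulting value back into the mptcp-level sock so later subflows inherit it. A condensed sketch for the SO_REUSEADDR case, with a hypothetical helper name and the msk socket lock assumed held:

	/* Illustrative only: apply a SOL_SOCKET option on the first subflow and
	 * propagate the result to the mptcp-level sock. 'ssk' is msk->first and
	 * the msk socket lock is held by the caller.
	 */
	static int example_set_reuseaddr(struct sock *sk, struct sock *ssk,
					 sockptr_t optval, unsigned int optlen)
	{
		int ret;

		ret = sk_setsockopt(ssk, SOL_SOCKET, SO_REUSEADDR, optval, optlen);
		if (ret == 0)
			sk->sk_reuse = ssk->sk_reuse;
		return ret;
	}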
This is one of the few remaining spots actually manipulating the first subflow socket. We can leverage the recently introduced inet helpers to get rid of ssock there. No functional changes intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/pm_netlink.c | 24 +++++++++++++++--------- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -XXX,XX +XXX,XX @@ #include <linux/inet.h> #include <linux/kernel.h> #include <net/tcp.h> +#include <net/inet_common.h> #include <net/netns/generic.h> #include <net/mptcp.h> #include <net/genetlink.h> @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, bool is_ipv6 = sk->sk_family == AF_INET6; int addrlen = sizeof(struct sockaddr_in); struct sockaddr_storage addr; + struct sock *newsk, *ssk; struct socket *ssock; - struct sock *newsk; int backlog = 1024; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, if (entry->addr.family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); #endif - err = kernel_bind(ssock, (struct sockaddr *)&addr, addrlen); + ssk = mptcp_sk(newsk)->first; + if (ssk->sk_family == AF_INET) + err = inet_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + else if (ssk->sk_family == AF_INET6) + err = inet6_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); +#endif if (err) return err; inet_sk_state_store(newsk, TCP_LISTEN); - err = kernel_listen(ssock, backlog); - if (err) - return err; - - mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); - - return 0; + lock_sock(ssk); + err = __inet_listen_sk(ssk, backlog); + if (!err) + mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CREATED); + release_sock(ssk); + return err; } int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc) -- 2.41.0
After the previous patch, the __mptcp_nmpc_socket helper is used only to ensure that the MPTCP socket is in a suitable status - that is, that the mptcp capable handshake has not started yet. Change the return value to the relevant subflow sock, finally removing the last references to the first subflow socket in the MPTCP stack. As a bonus, we can get rid of a few local variables in different functions. No functional change intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- net/mptcp/pm_netlink.c | 8 +++----- net/mptcp/protocol.c | 40 +++++++++++++++------------------------- net/mptcp/protocol.h | 2 +- net/mptcp/sockopt.c | 43 +++++++++++++++++++------------------------ 4 files changed, 38 insertions(+), 55 deletions(-) diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, int addrlen = sizeof(struct sockaddr_in); struct sockaddr_storage addr; struct sock *newsk, *ssk; - struct socket *ssock; int backlog = 1024; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, &mptcp_keys[is_ipv6]); lock_sock(newsk); - ssock = __mptcp_nmpc_socket(mptcp_sk(newsk)); + ssk = __mptcp_nmpc_sk(mptcp_sk(newsk)); release_sock(newsk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); mptcp_info2sockaddr(&entry->addr, &addr, entry->addr.family); #if IS_ENABLED(CONFIG_MPTCP_IPV6) if (entry->addr.family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); #endif - ssk = mptcp_sk(newsk)->first; if (ssk->sk_family == AF_INET) err = inet_bind_sk(ssk, (struct sockaddr *)&addr, addrlen); #if IS_ENABLED(CONFIG_MPTCP_IPV6) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) /* If the MPC handshake is not started, returns the first subflow, * eventually allocating it. */ -struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) +struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk) { struct sock *sk = (struct sock *)msk; int ret; @@ -XXX,XX +XXX,XX @@ struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) return ERR_PTR(-EINVAL); - if (!msk->subflow) { - if (msk->first) - return ERR_PTR(-EINVAL); - + if (!msk->first) { ret = __mptcp_socket_create(msk); if (ret) return ERR_PTR(ret); @@ -XXX,XX +XXX,XX @@ struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk) mptcp_sockopt_sync(msk, msk->first); } - return msk->subflow; + return msk->first; } static void mptcp_drop(struct sock *sk, struct sk_buff *skb) @@ -XXX,XX +XXX,XX @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, { unsigned int saved_flags = msg->msg_flags; struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, * fastopen attempt, no need to check for additional subflow status.
*/ if (msg->msg_flags & MSG_FASTOPEN) { - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); } if (!msk->first) return -EINVAL; @@ -XXX,XX +XXX,XX @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) { struct mptcp_subflow_context *subflow; struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; int err = -EINVAL; struct sock *ssk; - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) - return PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) + return PTR_ERR(ssk); inet_sk_state_store(sk, TCP_SYN_SENT); - ssk = msk->first; subflow = mptcp_subflow_ctx(ssk); #ifdef CONFIG_TCP_MD5SIG /* no MPTCP if MD5SIG is enabled on this socket or we may run out of @@ -XXX,XX +XXX,XX @@ static int mptcp_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) { struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *ssk, *sk = sock->sk; - struct socket *ssock; int err = -EINVAL; lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - err = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + err = PTR_ERR(ssk); goto unlock; } - ssk = msk->first; if (sk->sk_family == AF_INET) err = inet_bind_sk(ssk, uaddr, addr_len); #if IS_ENABLED(CONFIG_MPTCP_IPV6) @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) { struct mptcp_sock *msk = mptcp_sk(sock->sk); struct sock *sk = sock->sk; - struct socket *ssock; struct sock *ssk; int err; @@ -XXX,XX +XXX,XX @@ static int mptcp_listen(struct socket *sock, int backlog) if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM) goto unlock; - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - err = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + err = PTR_ERR(ssk); goto unlock; } - ssk = msk->first; inet_sk_state_store(sk, TCP_LISTEN); sock_set_flag(sk, SOCK_RCU_FREE); diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -XXX,XX +XXX,XX @@ void __mptcp_subflow_send_ack(struct sock *ssk); void mptcp_subflow_reset(struct sock *ssk); void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk); void mptcp_sock_graft(struct sock *sk, struct socket *parent); -struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk); +struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk); bool __mptcp_close(struct sock *sk, long timeout); void mptcp_cancel_work(struct sock *sk); void __mptcp_unaccepted_force_close(struct sock *sk); diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, sockptr_t optval, unsigned int optlen) { struct sock *sk = (struct sock *)msk; - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname, case SO_BINDTODEVICE: case SO_BINDTOIFINDEX: lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - ssk = msk->first; ret = sk_setsockopt(ssk, SOL_SOCKET, optname, optval, optlen); if (ret == 0) { if (optname == SO_REUSEPORT) @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, { struct 
sock *sk = (struct sock *)msk; int ret = -EOPNOTSUPP; - struct socket *ssock; struct sock *ssk; switch (optname) { @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname, case IPV6_TRANSPARENT: case IPV6_FREEBIND: lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - ssk = msk->first; ret = tcp_setsockopt(ssk, SOL_IPV6, optname, optval, optlen); if (ret != 0) { release_sock(sk); @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o { struct sock *sk = (struct sock *)msk; struct inet_sock *issk; - struct socket *ssock; + struct sock *ssk; int err; err = ip_setsockopt(sk, SOL_IP, optname, optval, optlen); @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_sol_ip_set_transparent(struct mptcp_sock *msk, int o lock_sock(sk); - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { release_sock(sk); - return PTR_ERR(ssock); + return PTR_ERR(ssk); } - issk = inet_sk(msk->first); + issk = inet_sk(ssk); switch (optname) { case IP_FREEBIND: @@ -XXX,XX +XXX,XX @@ static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int sockptr_t optval, unsigned int optlen) { struct sock *sk = (struct sock *)msk; - struct socket *sock; + struct sock *ssk; int ret; /* Limit to first subflow, before the connection establishment */ lock_sock(sk); - sock = __mptcp_nmpc_socket(msk); - if (IS_ERR(sock)) { - ret = PTR_ERR(sock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + ret = PTR_ERR(ssk); goto unlock; } - ret = tcp_setsockopt(sock->sk, level, optname, optval, optlen); + ret = tcp_setsockopt(ssk, level, optname, optval, optlen); unlock: release_sock(sk); @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int char __user *optval, int __user *optlen) { struct sock *sk = (struct sock *)msk; - struct socket *ssock; struct sock *ssk; int ret; @@ -XXX,XX +XXX,XX @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int goto out; } - ssock = __mptcp_nmpc_socket(msk); - if (IS_ERR(ssock)) { - ret = PTR_ERR(ssock); + ssk = __mptcp_nmpc_sk(msk); + if (IS_ERR(ssk)) { + ret = PTR_ERR(ssk); goto out; } -- 2.41.0
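The calling convention of the renamed helper, as used by all the touched call sites: lock the msk-level sock, obtain the first subflow sock (allocating it if needed) or an ERR_PTR(), then operate on it directly. A condensed sketch with a hypothetical function name:

	/* Illustrative only: __mptcp_nmpc_sk() returns the first subflow sock,
	 * allocating it if needed, or an ERR_PTR() if the MPC handshake has
	 * already started. The msk socket lock must be held by the caller.
	 */
	static int example_on_first_subflow(struct mptcp_sock *msk)
	{
		struct sock *ssk;

		ssk = __mptcp_nmpc_sk(msk);
		if (IS_ERR(ssk))
			return PTR_ERR(ssk);

		/* ... operate directly on ssk, e.g. tcp_setsockopt(ssk, ...) ... */
		return 0;
	}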
The msk->subflow field is now used only as a flag to control the first subflow deletion at close() time. Introduce a new bit flag for that and finally drop the mentioned field. As an intended side effect, the first subflow sock is now not freed before close() even for passive sockets. Since the msk has no open/active subflows when the first one is closed and the subflow list is singular, update the state check in mptcp_stream_accept() accordingly. Among other benefits, removing the subflow field reduces the amount of memory used on the client side for each mptcp connection, allows passive sockets to go through a successful accept()/disconnect()/connect() sequence, and makes the error codes returned to user-space consistent for failing passive and active sockets. Signed-off-by: Paolo Abeni <pabeni@redhat.com> --- v1 -> v2: - drop unneeded msk->first check in mptcp_accept(), only data corruption could clear that Side notes: - syzkaller will likely be happy about the new code path to possibly exploit - we could possibly avoid allocating the 'socket' struct at __mptcp_subflow_connect() time, but that will require more invasive helper creation in the inet core. --- net/mptcp/protocol.c | 25 ++++++------------------- net/mptcp/protocol.h | 13 ++++++------- 2 files changed, 12 insertions(+), 26 deletions(-) diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) return err; WRITE_ONCE(msk->first, ssock->sk); - WRITE_ONCE(msk->subflow, ssock); subflow = mptcp_subflow_ctx(ssock->sk); list_add(&subflow->node, &msk->conn_list); sock_hold(ssock->sk); @@ -XXX,XX +XXX,XX @@ static int __mptcp_socket_create(struct mptcp_sock *msk) /* This is the first subflow, always with id 0 */ subflow->local_id_valid = 1; mptcp_sock_graft(msk->first, sk->sk_socket); + iput(SOCK_INODE(ssock)); return 0; } @@ -XXX,XX +XXX,XX @@ struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk) return min_stale_count > 1 ?
backup : NULL; } -static void mptcp_dispose_initial_subflow(struct mptcp_sock *msk) -{ - if (msk->subflow) { - iput(SOCK_INODE(msk->subflow)); - WRITE_ONCE(msk->subflow, NULL); - } -} - bool __mptcp_retransmit_pending_data(struct sock *sk) { struct mptcp_data_frag *cur, *rtx_head; @@ -XXX,XX +XXX,XX @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk, goto out_release; } - dispose_it = !msk->subflow || ssk != msk->subflow->sk; + dispose_it = msk->free_first || ssk != msk->first; if (dispose_it) list_del(&subflow->node); @@ -XXX,XX +XXX,XX @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk, * disconnect should never fail */ WARN_ON_ONCE(tcp_disconnect(ssk, 0)); - msk->subflow->state = SS_UNCONNECTED; mptcp_subflow_ctx_reset(subflow); release_sock(ssk); @@ -XXX,XX +XXX,XX @@ struct sock *mptcp_sk_clone_init(const struct sock *sk, msk = mptcp_sk(nsk); msk->local_key = subflow_req->local_key; msk->token = subflow_req->token; - WRITE_ONCE(msk->subflow, NULL); msk->in_accept_queue = 1; WRITE_ONCE(msk->fully_established, false); if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD) @@ -XXX,XX +XXX,XX @@ static void mptcp_destroy(struct sock *sk) { struct mptcp_sock *msk = mptcp_sk(sk); - /* clears msk->subflow, allowing the following to close - * even the initial subflow - */ - mptcp_dispose_initial_subflow(msk); + /* allow the following to close even the initial subflow */ + msk->free_first = 1; mptcp_destroy_common(msk, 0); sk_sockets_allocated_dec(sk); } @@ -XXX,XX +XXX,XX @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock, /* Do late cleanup for the first subflow as necessary. Also * deal with bad peers not doing a complete shutdown. */ - if (msk->first && - unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) { + if (unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) { __mptcp_close_ssk(newsk, msk->first, mptcp_subflow_ctx(msk->first), 0); - if (unlikely(list_empty(&msk->conn_list))) + if (unlikely(list_is_singular(&msk->conn_list))) inet_sk_state_store(newsk, TCP_CLOSE); } } diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index XXXXXXX..XXXXXXX 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -XXX,XX +XXX,XX @@ struct mptcp_sock { cork:1, nodelay:1, fastopening:1, - in_accept_queue:1; + in_accept_queue:1, + free_first:1; struct work_struct work; struct sk_buff *ooo_last_skb; struct rb_root out_of_order_queue; @@ -XXX,XX +XXX,XX @@ struct mptcp_sock { struct list_head rtx_queue; struct mptcp_data_frag *first_pending; struct list_head join_list; - struct socket *subflow; /* outgoing connect/listener/!mp_capable - * The mptcp ops can safely dereference, using suitable - * ONCE annotation, the subflow outside the socket - * lock as such sock is freed after close(). - */ - struct sock *first; + struct sock *first; /* The mptcp ops can safely dereference, using suitable + * ONCE annotation, the subflow outside the socket + * lock as such sock is freed after close(). + */ struct mptcp_pm_data pm; struct mptcp_sched_ops *sched; struct { -- 2.41.0
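The new bit replaces the old "is this the cached first subflow socket?" test in __mptcp_close_ssk() with an explicit policy flag armed by mptcp_destroy(). A condensed sketch of the resulting decision, with a hypothetical helper name:

	/* Illustrative only: after this patch the first subflow is preserved on
	 * close unless the owning msk is being destroyed (msk->free_first set)
	 * or the subflow being closed is not the first one.
	 */
	static bool example_should_dispose(const struct mptcp_sock *msk,
					   const struct sock *ssk)
	{
		return msk->free_first || ssk != msk->first;
	}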