The current transition to TCP_CLOSE in __mptcp_close_ssk() is rather
obscure, especially since it's done unconditionally for all subflows.

This action is needed only when disposing of the given socket, that is,
for MPJ subflows or at MPTCP close time.

Move the statement to a better place and explain its role for future
reference.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index cd5a19ab3ba1..61066eb81412 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2533,9 +2533,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
if (dispose_it)
list_del(&subflow->node);
- if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
- tcp_set_state(ssk, TCP_CLOSE);
-
need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
if (!dispose_it) {
__mptcp_subflow_disconnect(ssk, subflow, flags);
@@ -2554,6 +2551,10 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
WARN_ON_ONCE(!sock_flag(ssk, SOCK_DEAD));
kfree_rcu(subflow, rcu);
} else {
+ /* Prevent __tcp_close() from sending additional resets */
+ if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
+ tcp_set_state(ssk, TCP_CLOSE);
+
/* otherwise tcp will dispose of the ssk and subflow ctx */
__tcp_close(ssk, 0);
--
2.52.0
Hi Paolo,
On 03/12/2025 19:25, Paolo Abeni wrote:
> The current transition to TCP_CLOSE in __mptcp_close_ssk() is rather
> obscure, especially since it's done unconditionally for all subflows.
>
> This action is needed only when disposing of the given socket, that is,
> for MPJ subflows or at MPTCP close time.
>
> Move the statement to a better place and explain its role for future
> reference.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> net/mptcp/protocol.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index cd5a19ab3ba1..61066eb81412 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -2533,9 +2533,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
> if (dispose_it)
> list_del(&subflow->node);
>
> - if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
> - tcp_set_state(ssk, TCP_CLOSE);
> -
> need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
> if (!dispose_it) {
> __mptcp_subflow_disconnect(ssk, subflow, flags);
Just to be sure, is it not an issue to transition to TCP_CLOSE after
having called __mptcp_subflow_disconnect()? I guess not, because the
transition above was only done if "subflow->send_fastclose", which is
also checked there. Also, tcp_disconnect() will transition to TCP_CLOSE,
but the old state might cause an extra RST; I think that's fine, right?

If everything is OK, while at it, do you think it is worth dropping the
"dispose_it" variable?
    need_push = (...)

    if (msk->free_first || ssk != msk->first) {
        list_del(&subflow->node);
    } else {
        __mptcp_subflow_disconnect(ssk, subflow, flags);
        (...)
    }
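
For reference, a rough expansion of that idea, using only the
identifiers visible in the quoted hunks, could look like the fragment
below (a non-compilable sketch of the suggestion, not the full upstream
function):

    need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);

    if (msk->free_first || ssk != msk->first) {
        /* dispose path: detach the subflow; the code below eventually
         * calls __tcp_close()
         */
        list_del(&subflow->node);
    } else {
        /* msk->first is kept around: only disconnect the subflow and
         * return early
         */
        __mptcp_subflow_disconnect(ssk, subflow, flags);
        release_sock(ssk);
        goto out;
    }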
> @@ -2554,6 +2551,10 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
> WARN_ON_ONCE(!sock_flag(ssk, SOCK_DEAD));
> kfree_rcu(subflow, rcu);
> } else {
> + /* Prevent __tcp_close() from sending additional resets */
> + if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
> + tcp_set_state(ssk, TCP_CLOSE);
> +
> /* otherwise tcp will dispose of the ssk and subflow ctx */
> __tcp_close(ssk, 0);
>
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
On 12/4/25 6:22 PM, Matthieu Baerts wrote:
> On 03/12/2025 19:25, Paolo Abeni wrote:
>> The current transition to TCP_CLOSE in __mptcp_close_ssk() is rather
>> obscure, especially since it's done unconditionally for all subflows.
>>
>> This action is needed only when disposing of the given socket, that is,
>> for MPJ subflows or at MPTCP close time.
>>
>> Move the statement to a better place and explain its role for future
>> reference.
>>
>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>> ---
>> net/mptcp/protocol.c | 7 ++++---
>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
>> index cd5a19ab3ba1..61066eb81412 100644
>> --- a/net/mptcp/protocol.c
>> +++ b/net/mptcp/protocol.c
>> @@ -2533,9 +2533,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
>> if (dispose_it)
>> list_del(&subflow->node);
>>
>> - if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
>> - tcp_set_state(ssk, TCP_CLOSE);
>> -
>> need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
>> if (!dispose_it) {
>> __mptcp_subflow_disconnect(ssk, subflow, flags);
>
> Just to be sure, is it not an issue to transition to TCP_CLOSE after
> having called __mptcp_subflow_disconnect()?
That would indeed be an issue, but the two lines following
__mptcp_subflow_disconnect() are:

    release_sock(ssk);
    goto out;

that is, tcp_set_state(.., TCP_CLOSE) should never be called after
__mptcp_subflow_disconnect().
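
To make that concrete, here is the relevant control flow, abridged from
the hunks quoted above (elided parts marked, not the complete function):

    need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
    if (!dispose_it) {
        __mptcp_subflow_disconnect(ssk, subflow, flags);
        release_sock(ssk);
        goto out;    /* the dispose path below is never reached */
    }

    /* ... dispose path ... */

    /* Prevent __tcp_close() from sending additional resets */
    if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
        tcp_set_state(ssk, TCP_CLOSE);

    /* otherwise tcp will dispose of the ssk and subflow ctx */
    __tcp_close(ssk, 0);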
> I guess no because the
> transition above was done if "subflow->send_fastclose", which is also
> check there. Also, tcp_disconnect() will transition to TCP_CLOSE but the
> old state might cause extra RST, but I think that's fine, right?
Good point! I cooked the patch too hastily and did not consider that
path (I stumbled upon it while preparing the previous fixes, which do
not do the tcp_disconnect() thing).

I think you can simply drop this patch, or, possibly even better, just
add a suitable comment to the set_state() transition.
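
For illustration only (the wording is entirely hypothetical, just
summarising the discussion above), such a comment could look like:

    /* Move the ssk to TCP_CLOSE before tcp_disconnect() or __tcp_close()
     * runs, so that a fastclosed subflow does not emit additional resets
     * from its old state.
     */
    if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE)
        tcp_set_state(ssk, TCP_CLOSE);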
/P
Hi Paolo,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join): Unstable: 1 failed test(s): selftest_simult_flows 🔴
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Unstable: 1 failed test(s): packetdrill_dss 🔴
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/19904924250
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/1f0e1c8a3ee8
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1030202
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)