This variable counts how many MPTCP endpoints have the 'fullmesh' flag
set. After all MPTCP endpoints have been flushed, this counter needs to
be reset as well.

Without this reset, the counter exposed to userspace is wrong, and
non-fullmesh endpoints added after the flush will not be taken into
account to create subflows in reaction to ADD_ADDRs.
Fixes: f88191c7f361 ("mptcp: pm: in-kernel: record fullmesh endp nb")
Reported-by: Sashiko <sashiko-bot@kernel.org>
Closes: https://sashiko.dev/#/patchset/20260422-mptcp-inc-limits-v6-0-903181771530%40kernel.org?part=15
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
---
net/mptcp/pm_kernel.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
index 7895fb5f982e..382893f22f6a 100644
--- a/net/mptcp/pm_kernel.c
+++ b/net/mptcp/pm_kernel.c
@@ -1279,6 +1279,7 @@ static void __reset_counters(struct pm_nl_pernet *pernet)
WRITE_ONCE(pernet->endp_signal_max, 0);
WRITE_ONCE(pernet->endp_subflow_max, 0);
WRITE_ONCE(pernet->endp_laminar_max, 0);
+ WRITE_ONCE(pernet->endp_fullmesh_max, 0);
pernet->endpoints = 0;
}
---
base-commit: 94ca4b024f14c6f7ce32ca0c274f5b19c82f634a
change-id: 20260423-mptcp-pm-reset-endp_fullmesh_max-e1e56a51bf01
Best regards,
--
Matthieu Baerts (NGI0) <matttbe@kernel.org>
On Thu, 23 Apr 2026, Matthieu Baerts (NGI0) wrote:
> This variable counts how many MPTCP endpoints have the 'fullmesh' flag
> set. After all MPTCP endpoints have been flushed, this counter needs to
> be reset as well.
>
> Without this reset, the counter exposed to userspace is wrong, and
> non-fullmesh endpoints added after the flush will not be taken into
> account to create subflows in reaction to ADD_ADDRs.
>
> Fixes: f88191c7f361 ("mptcp: pm: in-kernel: record fullmesh endp nb")
> Reported-by: Sashiko <sashiko-bot@kernel.org>
> Closes: https://sashiko.dev/#/patchset/20260422-mptcp-inc-limits-v6-0-903181771530%40kernel.org?part=15
> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
> ---
> net/mptcp/pm_kernel.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
> index 7895fb5f982e..382893f22f6a 100644
> --- a/net/mptcp/pm_kernel.c
> +++ b/net/mptcp/pm_kernel.c
> @@ -1279,6 +1279,7 @@ static void __reset_counters(struct pm_nl_pernet *pernet)
> WRITE_ONCE(pernet->endp_signal_max, 0);
> WRITE_ONCE(pernet->endp_subflow_max, 0);
> WRITE_ONCE(pernet->endp_laminar_max, 0);
> + WRITE_ONCE(pernet->endp_fullmesh_max, 0);
LGTM
Reviewed-by: Mat Martineau <martineau@kernel.org>
> pernet->endpoints = 0;
> }
>
>
> ---
> base-commit: 94ca4b024f14c6f7ce32ca0c274f5b19c82f634a
> change-id: 20260423-mptcp-pm-reset-endp_fullmesh_max-e1e56a51bf01
>
> Best regards,
> --
> Matthieu Baerts (NGI0) <matttbe@kernel.org>
>
>
>
Hi Mat,
On 24/04/2026 01:48, Mat Martineau wrote:
> On Thu, 23 Apr 2026, Matthieu Baerts (NGI0) wrote:
>
>> This variable counts how many MPTCP endpoints have the 'fullmesh' flag
>> set. After all MPTCP endpoints have been flushed, this counter needs to
>> be reset as well.
>>
>> Without this reset, the counter exposed to userspace is wrong, and
>> non-fullmesh endpoints added after the flush will not be taken into
>> account to create subflows in reaction to ADD_ADDRs.
>>
>> Fixes: f88191c7f361 ("mptcp: pm: in-kernel: record fullmesh endp nb")
>> Reported-by: Sashiko <sashiko-bot@kernel.org>
>> Closes: https://sashiko.dev/#/patchset/20260422-mptcp-inc-limits-v6-0-903181771530%40kernel.org?part=15
>> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
>> ---
>> net/mptcp/pm_kernel.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
>> index 7895fb5f982e..382893f22f6a 100644
>> --- a/net/mptcp/pm_kernel.c
>> +++ b/net/mptcp/pm_kernel.c
>> @@ -1279,6 +1279,7 @@ static void __reset_counters(struct pm_nl_pernet *pernet)
>> WRITE_ONCE(pernet->endp_signal_max, 0);
>> WRITE_ONCE(pernet->endp_subflow_max, 0);
>> WRITE_ONCE(pernet->endp_laminar_max, 0);
>> + WRITE_ONCE(pernet->endp_fullmesh_max, 0);
>
> LGTM
>
> Reviewed-by: Mat Martineau <martineau@kernel.org>
Thank you for the review! I will wait for the other PM fixes before
sending them all upstream.
Now in our tree:
New patches for t/upstream-net and t/upstream:
- 8d936865e442: mptcp: pm: kernel: reset fullmesh counter after flush
- Results: 2795355c5cb6..152ab3edc9a1 (export-net)
- Results: c4b8cb69b9fa..2b52586ab290 (export)
Tests are now in progress:
- export-net:
https://github.com/multipath-tcp/mptcp_net-next/commit/3ec57500f66f2d20368b844f6aeeb7da4d5f6e43/checks
- export:
https://github.com/multipath-tcp/mptcp_net-next/commit/e598d5bb47cfbecb2dd9c98b5030894463445267/checks
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
Hi Matthieu,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Unstable: 1 failed test(s): packetdrill_dss ⚠️
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/24846434526
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f3c6c03e4687
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1084790
If there are some issues, you can reproduce them in the same environment
as the one used by the CI, thanks to a Docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts already made to keep the test
suite stable when run on a public CI like this one, some reported issues
may not be due to your modifications. Still, do not hesitate to help us
improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
© 2016 - 2026 Red Hat, Inc.