[PATCH mptcp-next] mptcp: increase default max additional subflows to 2

Paolo Abeni posted 1 patch 2 years, 6 months ago
Patches applied successfully
git fetch https://github.com/multipath-tcp/mptcp_net-next tags/patchew/b7e5c73d59113afe32eec831b620997c86b56460.1633712415.git.pabeni@redhat.com
Maintainers: Shuah Khan <shuah@kernel.org>, Matthieu Baerts <matthieu.baerts@tessares.net>, Mat Martineau <mathew.j.martineau@linux.intel.com>, Jakub Kicinski <kuba@kernel.org>, "David S. Miller" <davem@davemloft.net>
net/mptcp/pm_netlink.c                          | 3 +++
tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
3 files changed, 10 insertions(+), 4 deletions(-)
[PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Paolo Abeni 2 years, 6 months ago
The current default does not allow additional subflows; this is mostly
a safety restriction to avoid uncontrolled resource consumption
on busy servers.

Still, the system admin and/or the application have to opt in to
MPTCP explicitly; on top of that, they also need to raise the
default maximum number of additional subflows.

Let's set that to a reasonable default and make end users' lives easier.

Additionally, we need to update some self-tests accordingly.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/pm_netlink.c                          | 3 +++
 tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
 tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 050eea231528..f7d33a9abd57 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
 	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
 
 	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
+
+	/* Cit. 2 subflows ought to be enough for anybody. */
+	pernet->subflows_max = 2;
 	pernet->next_id = 1;
 	pernet->stale_loss_cnt = 4;
 	spin_lock_init(&pernet->lock);
diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index 255793c5ac4f..293d349e21fe 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -945,12 +945,15 @@ subflows_tests()
 
 	# subflow limited by client
 	reset
+	ip netns exec $ns1 ./pm_nl_ctl limits 0 0
+	ip netns exec $ns2 ./pm_nl_ctl limits 0 0
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
 	run_tests $ns1 $ns2 10.0.1.1
 	chk_join_nr "single subflow, limited by client" 0 0 0
 
 	# subflow limited by server
 	reset
+	ip netns exec $ns1 ./pm_nl_ctl limits 0 0
 	ip netns exec $ns2 ./pm_nl_ctl limits 0 1
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
 	run_tests $ns1 $ns2 10.0.1.1
@@ -973,7 +976,7 @@ subflows_tests()
 	run_tests $ns1 $ns2 10.0.1.1
 	chk_join_nr "multiple subflows" 2 2 2
 
-	# multiple subflows limited by serverf
+	# multiple subflows limited by server
 	reset
 	ip netns exec $ns1 ./pm_nl_ctl limits 0 1
 	ip netns exec $ns2 ./pm_nl_ctl limits 0 2
diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
index 3c741abe034e..cbacf9f6538b 100755
--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
+++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
@@ -70,7 +70,7 @@ check()
 
 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list"
 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
-subflows 0" "defaults limits"
+subflows 2" "defaults limits"
 
 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1
 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo
@@ -118,11 +118,11 @@ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 9 1
 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
-subflows 0" "rcv addrs above hard limit"
+subflows 2" "rcv addrs above hard limit"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 1 9
 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
-subflows 0" "subflows above hard limit"
+subflows 2" "subflows above hard limit"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 8 8
 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
-- 
2.26.3


Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Mat Martineau 2 years, 6 months ago
On Fri, 8 Oct 2021, Paolo Abeni wrote:

> The current default does not allow additional subflows; this is mostly
> a safety restriction to avoid uncontrolled resource consumption
> on busy servers.
>
> Still, the system admin and/or the application have to opt in to
> MPTCP explicitly; on top of that, they also need to raise the
> default maximum number of additional subflows.
>
> Let's set that to a reasonable default and make end users' lives easier.
>
> Additionally, we need to update some self-tests accordingly.
>

Seems like a sensible default to me, one less step for end users to opt
in to MPTCP.
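
For reference, the extra step being dropped looks roughly like this,
assuming the iproute2 'ip mptcp' interface (commands are illustrative,
not part of the patch):

    # the application still opts in explicitly, e.g. by creating its
    # socket with IPPROTO_MPTCP; before this change the admin also had
    # to raise the per-connection additional-subflow limit by hand:
    ip mptcp limits set subflow 2    # no longer needed with the new default
    ip mptcp limits show             # should now report "subflows 2" out of the box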

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> net/mptcp/pm_netlink.c                          | 3 +++
> tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
> tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
> 3 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 050eea231528..f7d33a9abd57 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
> 	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
>
> 	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
> +
> +	/* Cit. 2 subflows ought to be enough for anybody. */

I had to google 'Cit.', and now know one more Italian word :)
(no need to change it!)

> +	pernet->subflows_max = 2;
> 	pernet->next_id = 1;
> 	pernet->stale_loss_cnt = 4;
> 	spin_lock_init(&pernet->lock);
> diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
> index 255793c5ac4f..293d349e21fe 100755
> --- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
> +++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
> @@ -945,12 +945,15 @@ subflows_tests()
>
> 	# subflow limited by client
> 	reset
> +	ip netns exec $ns1 ./pm_nl_ctl limits 0 0
> +	ip netns exec $ns2 ./pm_nl_ctl limits 0 0
> 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
> 	run_tests $ns1 $ns2 10.0.1.1
> 	chk_join_nr "single subflow, limited by client" 0 0 0
>
> 	# subflow limited by server
> 	reset
> +	ip netns exec $ns1 ./pm_nl_ctl limits 0 0
> 	ip netns exec $ns2 ./pm_nl_ctl limits 0 1
> 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
> 	run_tests $ns1 $ns2 10.0.1.1
> @@ -973,7 +976,7 @@ subflows_tests()
> 	run_tests $ns1 $ns2 10.0.1.1
> 	chk_join_nr "multiple subflows" 2 2 2
>
> -	# multiple subflows limited by serverf
> +	# multiple subflows limited by server
> 	reset
> 	ip netns exec $ns1 ./pm_nl_ctl limits 0 1
> 	ip netns exec $ns2 ./pm_nl_ctl limits 0 2
> diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
> index 3c741abe034e..cbacf9f6538b 100755
> --- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
> +++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
> @@ -70,7 +70,7 @@ check()
>
> check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list"
> check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
> -subflows 0" "defaults limits"
> +subflows 2" "defaults limits"
>
> ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1
> ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo
> @@ -118,11 +118,11 @@ check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs"
>
> ip netns exec $ns1 ./pm_nl_ctl limits 9 1
> check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
> -subflows 0" "rcv addrs above hard limit"
> +subflows 2" "rcv addrs above hard limit"
>
> ip netns exec $ns1 ./pm_nl_ctl limits 1 9
> check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
> -subflows 0" "subflows above hard limit"
> +subflows 2" "subflows above hard limit"
>
> ip netns exec $ns1 ./pm_nl_ctl limits 8 8
> check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
> -- 
> 2.26.3
>
>
>

--
Mat Martineau
Intel

Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Matthieu Baerts 2 years, 6 months ago
Hi Paolo,

On 08/10/2021 19:27, Paolo Abeni wrote:
> The current default does not allow additional subflows; this is mostly
> a safety restriction to avoid uncontrolled resource consumption
> on busy servers.
> 
> Still, the system admin and/or the application have to opt in to
> MPTCP explicitly; on top of that, they also need to raise the
> default maximum number of additional subflows.
> 
> Let's set that to a reasonable default and make end users' lives easier.
> 
> Additionally, we need to update some self-tests accordingly.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  net/mptcp/pm_netlink.c                          | 3 +++
>  tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
>  tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
>  3 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 050eea231528..f7d33a9abd57 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
>  	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
>  
>  	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
> +
> +	/* Cit. 2 subflows ought to be enough for anybody. */
> +	pernet->subflows_max = 2;

Sorry for the late reply, I just thought about something when looking at
applying this: let's say that in a typical setup we would have 2 subflows
per connection because the client has 2 interfaces and connects to a
server not announcing other addresses. With this setup, could we end up
in situations where we would (internally) have 3 subflows?

I mean, could we end up in a situation where 1 (or 0) subflow is OK, 1
is being deleted (network reconnection?) and the PM is already trying to
create a new one? In this case and because of the limit, the PM might
not try to create this 3rd subflow and might not retry when the dying
subflow is actually deleted? Or maybe the PM already waits for the
full deletion before retrying to establish a new subflow?

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Paolo Abeni 2 years, 6 months ago
On Mon, 2021-10-11 at 18:09 +0200, Matthieu Baerts wrote:
> Hi Paolo,
> 
> On 08/10/2021 19:27, Paolo Abeni wrote:
> > The current default does not allow additional subflows; this is mostly
> > a safety restriction to avoid uncontrolled resource consumption
> > on busy servers.
> > 
> > Still, the system admin and/or the application have to opt in to
> > MPTCP explicitly; on top of that, they also need to raise the
> > default maximum number of additional subflows.
> > 
> > Let's set that to a reasonable default and make end users' lives easier.
> > 
> > Additionally, we need to update some self-tests accordingly.
> > 
> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > ---
> >  net/mptcp/pm_netlink.c                          | 3 +++
> >  tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
> >  tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
> >  3 files changed, 10 insertions(+), 4 deletions(-)
> > 
> > diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> > index 050eea231528..f7d33a9abd57 100644
> > --- a/net/mptcp/pm_netlink.c
> > +++ b/net/mptcp/pm_netlink.c
> > @@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
> >  	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
> >  
> >  	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
> > +
> > +	/* Cit. 2 subflows ought to be enough for anybody. */
> > +	pernet->subflows_max = 2;
> 
> Sorry for the late reply, I just thought about something when looking at
> applying this: let's say that in a typical setup we would have 2 subflows
> per connection because the client has 2 interfaces and connects to a
> server not announcing other addresses. With this setup, could we end up
> in situations where we would (internally) have 3 subflows?
> 
> I mean, could we end up in a situation where 1 (or 0) subflow is OK, 1
> is being deleted (network reconnection?) and the PM is already trying to
> create a new one? In this case and because of the limit, the PM might
> not try to create this 3rd subflow and might not retry when the dying
> subflow is actually deleted? Or maybe the PM already waits for the
> full deletion before retrying to establish a new subflow?

If I understand correctly, your fear is that we could end up with 2
additional subflows even if there are only 2 different local IP
addresses/interfaces, is that correct?

Anyhow the above scenario is not possible with the in-kernel path
manager: it will create new subflows only for local addresses not
already used by the mptcp connection.

Unless the 'fullmesh' flag is set, but in that case the user really wants
multiple subflows per local address.
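
As a rough illustration of that exception (the endpoint address and
device below are made up, and this assumes an iproute2 recent enough to
know the 'fullmesh' keyword):

    # with 'fullmesh', the PM may open subflows from this local address
    # towards every known remote address, so the per-connection subflow
    # limit matters even with a single extra interface:
    ip mptcp endpoint add 10.0.2.2 dev eth1 subflow fullmesh
    ip mptcp limits set subflow 4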

Please let me know if the above solves your doubts.

Side note: the in-kernel path manager takes no additional action when a
subflow is closed.

Cheers,

Paolo


Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Matthieu Baerts 2 years, 6 months ago
Hi Paolo,

On 13/10/2021 16:54, Paolo Abeni wrote:
> On Mon, 2021-10-11 at 18:09 +0200, Matthieu Baerts wrote:
>> Hi Paolo,
>>
>> On 08/10/2021 19:27, Paolo Abeni wrote:
>>> The current default does not allow additional subflows; this is mostly
>>> a safety restriction to avoid uncontrolled resource consumption
>>> on busy servers.
>>>
>>> Still, the system admin and/or the application have to opt in to
>>> MPTCP explicitly; on top of that, they also need to raise the
>>> default maximum number of additional subflows.
>>>
>>> Let's set that to a reasonable default and make end users' lives easier.
>>>
>>> Additionally, we need to update some self-tests accordingly.
>>>
>>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>>> ---
>>>  net/mptcp/pm_netlink.c                          | 3 +++
>>>  tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
>>>  tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
>>>  3 files changed, 10 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
>>> index 050eea231528..f7d33a9abd57 100644
>>> --- a/net/mptcp/pm_netlink.c
>>> +++ b/net/mptcp/pm_netlink.c
>>> @@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
>>>  	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
>>>  
>>>  	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
>>> +
>>> +	/* Cit. 2 subflows ought to be enough for anybody. */
>>> +	pernet->subflows_max = 2;
>>
>> Sorry for the late reply, I just thought about something when looking at
>> applying this: let's say that in a typical setup we would have 2 subflows
>> per connection because the client has 2 interfaces and connects to a
>> server not announcing other addresses. With this setup, could we end up
>> in situations where we would (internally) have 3 subflows?
>>
>> I mean, could we end up in a situation where 1 (or 0) subflow is OK, 1
>> is being deleted (network reconnection?) and the PM is already trying to
>> create a new one? In this case and because of the limit, the PM might
>> not try to create this 3rd subflow and might not retry when the dying
>> subflow is actually deleted? Or maybe the PM already waits for the
>> full deletion before retrying to establish a new subflow?
> 
> If I understand correctly, your fear is that we could end up with 2
> additional subflows even if there are only 2 different local IP
> addresses/interfaces, is that correct?

Correct: typically in a "mobility use-case" where we move from one
network to another. Or we simply get a new IP, etc.

> Anyhow the above scenario is not possible with the in-kernel path
> manager: it will create new subflows only for local addresses not
> already used by the mptcp connection.
> 
> Unless the 'fullmesh' flag is set, but in that case the user really wants
> multiple subflows per local address.

Yes sorry, I was thinking about the 'fullmesh' case. But if the client
has 2 interfaces -- each of them with one IP -- and the server only has
one IP, we are in a "typical" UC I think. The behaviour with and without
the "fullmesh" flag is the same.

> Please let me know if the above solves your doubts.

I guess here, it is userspace's responsibility to first remove the
"broken" local address and then add the new one. If the opposite is
done, we will have 3 local addresses: we will not create a new subflow,
and we will also not create one when the 2nd subflow dies even if there
is another available local address. But again, that would be userspace's
fault, no? :)
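
To make the intended order concrete, a quick sketch with iproute2
(address, device and endpoint id are hypothetical):

    # remove the endpoint whose network went away first...
    ip mptcp endpoint delete id 2
    # ...then add the replacement; done in this order there are never
    # more than 2 local endpoints in use, so the PM can still create
    # the new subflow within the default limit
    ip mptcp endpoint add 10.0.4.2 dev eth2 subflow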

> Side note: the in-kernel path manager takes no additional action when a
> subflow is closed.

Should I create a ticket to check whether another local address is
available after a subflow is closed and whether we are still under the
limit? Maybe the PM could also re-establish the subflow using the same
5-tuple depending on the error, e.g. if the subflow got disconnected
after a NAT timeout, etc.

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Paolo Abeni 2 years, 6 months ago
On Wed, 2021-10-13 at 17:29 +0200, Matthieu Baerts wrote:
> On 13/10/2021 16:54, Paolo Abeni wrote:
> > On Mon, 2021-10-11 at 18:09 +0200, Matthieu Baerts wrote:
> > > Hi Paolo,
> > > 
> > > On 08/10/2021 19:27, Paolo Abeni wrote:
> > > > The current default does not allow additional subflows; this is mostly
> > > > a safety restriction to avoid uncontrolled resource consumption
> > > > on busy servers.
> > > > 
> > > > Still, the system admin and/or the application have to opt in to
> > > > MPTCP explicitly; on top of that, they also need to raise the
> > > > default maximum number of additional subflows.
> > > > 
> > > > Let's set that to a reasonable default and make end users' lives easier.
> > > > 
> > > > Additionally, we need to update some self-tests accordingly.
> > > > 
> > > > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > > > ---
> > > >  net/mptcp/pm_netlink.c                          | 3 +++
> > > >  tools/testing/selftests/net/mptcp/mptcp_join.sh | 5 ++++-
> > > >  tools/testing/selftests/net/mptcp/pm_netlink.sh | 6 +++---
> > > >  3 files changed, 10 insertions(+), 4 deletions(-)
> > > > 
> > > > diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> > > > index 050eea231528..f7d33a9abd57 100644
> > > > --- a/net/mptcp/pm_netlink.c
> > > > +++ b/net/mptcp/pm_netlink.c
> > > > @@ -2052,6 +2052,9 @@ static int __net_init pm_nl_init_net(struct net *net)
> > > >  	struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
> > > >  
> > > >  	INIT_LIST_HEAD_RCU(&pernet->local_addr_list);
> > > > +
> > > > +	/* Cit. 2 subflows ought to be enough for anybody. */
> > > > +	pernet->subflows_max = 2;
> > > 
> > > Sorry for the late reply, I just thought about something when looking at
> > > applying this: let's say that in a typical setup we would have 2 subflows
> > > per connection because the client has 2 interfaces and connects to a
> > > server not announcing other addresses. With this setup, could we end up
> > > in situations where we would (internally) have 3 subflows?
> > > 
> > > I mean, could we end up in a situation where 1 (or 0) subflow is OK, 1
> > > is being deleted (network reconnection?) and the PM is already trying to
> > > create a new one? In this case and because of the limit, the PM might
> > > not try to create this 3rd subflow and might not retry when the dying
> > > subflow is actually deleted? Or maybe the PM already waits for the
> > > full deletion before retrying to establish a new subflow?
> > 
> > If I understand correctly, your fear is that we could end up with 2
> > additional subflows even if there are only 2 different local IP
> > addresses/interfaces, is that correct?
> 
> Correct: typically in a "mobility use-case" where we move from one
> network to another. Or we simply get a new IP, etc.
> 
> > Anyhow the above scenario is not possible with the in-kernel path
> > manager: it will create new subflows only for local addresses not
> > already used by the mptcp connection.
> > 
> > Unless the 'fullmesh' flag is set, but in that case the user really wants
> > multiple subflows per local address.
> 
> Yes sorry, I was thinking about the 'fullmesh' case. But if the client
> has 2 interfaces -- each of them with one IP -- and the server only has
> one IP, we are in a "typical" UC I think. The behaviour with and without
> the "fullmesh" flag is the same.
> 
> > Please let me know if the above solves your doubts.
> 
> I guess here, it is userspace's responsibility to first remove the
> "broken" local address and then add the new one. If the opposite is
> done, we will have 3 local addresses: we will not create a new subflow,
> and we will also not create one when the 2nd subflow dies even if there
> is another available local address. But again, that would be userspace's
> fault, no? :)
> 
> > Side note: the in-kernel path manager takes no additional action when a
> > subflow is closed.
> 
> Should I create a ticket to check whether another local address is
> available after a subflow is closed and whether we are still under the
> limit? Maybe the PM could also re-establish the subflow using the same
> 5-tuple depending on the error, e.g. if the subflow got disconnected
> after a NAT timeout, etc.

I guess that will also cover the scenario you mentioned in the previous
point. +1 to create a new ticket ;)

/P


Re: [PATCH mptcp-next] mptcp: increase default max additional subflows to 2
Posted by Matthieu Baerts 2 years, 6 months ago
Hi Paolo, Mat,

On 08/10/2021 19:27, Paolo Abeni wrote:
> The current default does not allow additional subflows; this is mostly
> a safety restriction to avoid uncontrolled resource consumption
> on busy servers.
> 
> Still, the system admin and/or the application have to opt in to
> MPTCP explicitly; on top of that, they also need to raise the
> default maximum number of additional subflows.
> 
> Let's set that to a reasonable default and make end users' lives easier.
> 
> Additionally, we need to update some self-tests accordingly.

Thank you for the patch and the review!
Now in our tree (for net-next).

- e941154465d9: mptcp: increase default max additional subflows to 2
- Results: 2f5f8088e247..0c525f1e0da1

Builds and tests are now in progress:

https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20211014T121105

https://github.com/multipath-tcp/mptcp_net-next/actions/workflows/build-validation.yml?query=branch:export/20211014T121105

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net