[PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Geliang Tang 10 months, 2 weeks ago
From: Geliang Tang <tanggeliang@kylinos.cn>

v6:
 - squash the accept_new_subflow patches into one.
 - change "pm->subflows == subflows_max - 1" to
   "pm->subflows + 1 == subflows_max".
 - no longer call accept_new_subflow under the PM lock.
 - add mptcp_pm_accept_subflow helpers.
 - drop READ_ONCE in mptcp_pm_worker.
 - clear all the pm status flags once in mptcp_pm_worker (see the sketch
   after this list).
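For reference, a rough sketch of the worker flow the bullets above
describe: the status flags are copied and cleared once under the PM lock,
and the handlers then run without it. This is only an illustration based
on this changelog, not the patches themselves; the exact set of handlers
dispatched here is an assumption.

/* Sketch only, not the actual patch. */
void mptcp_pm_worker(struct mptcp_sock *msk)
{
        struct mptcp_pm_data *pm = &msk->pm;
        u8 status;

        spin_lock_bh(&pm->lock);
        status = pm->status;    /* plain read: the PM lock is held */
        pm->status = 0;         /* clear all pending flags once */
        spin_unlock_bh(&pm->lock);

        /* handlers now run without the PM lock held */
        if (status & BIT(MPTCP_PM_ADD_ADDR_RECEIVED))
                mptcp_pm_nl_add_addr_received(msk);
        if (status & BIT(MPTCP_PM_ESTABLISHED))
                mptcp_pm_nl_fully_established(msk);
        if (status & BIT(MPTCP_PM_SUBFLOW_ESTABLISHED))
                mptcp_pm_nl_subflow_established(msk);
}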

v5:
 - add the comment "call from the subflow/msk context" to mptcp_sched_ops.
 - add a new helper, mptcp_pm_accept_new_subflow.
 - add a "bool allow" parameter to mptcp_pm_accept_new_subflow, and drop
   the .allow_new_subflow interface (see the sketch after this list).
 - use a copy of pm->status in mptcp_pm_worker.
 - rename mptcp_pm_create_subflow_or_signal_addr with a "__" prefix.
 - drop "!update_subflows" in mptcp_pm_subflow_check_next.
 - the add_addr_received/rm_addr_received interfaces will be added in the
   next series.
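A hedged sketch of how the accept path hinted at above could look. The
"allow" parameter and the "+ 1 ==" test come from these notes; the
pm->accept_subflow handling and the locking layout are assumptions
borrowed from the existing in-tree code, so the real helper may differ.

static bool mptcp_pm_accept_new_subflow(struct mptcp_sock *msk, bool allow)
{
        unsigned int subflows_max = mptcp_pm_get_subflows_max(msk);
        struct mptcp_pm_data *pm = &msk->pm;
        bool ret;

        /* the PM decision ("allow") is taken without holding the PM
         * lock; only the accounting below still needs it
         */
        if (!allow)
                return false;

        spin_lock_bh(&pm->lock);
        ret = pm->subflows < subflows_max;
        /* v6 spells the "last slot" test as "pm->subflows + 1 ==
         * subflows_max" instead of "pm->subflows == subflows_max - 1"
         */
        if (ret && pm->subflows + 1 == subflows_max)
                WRITE_ONCE(pm->accept_subflow, false);
        if (ret)
                pm->subflows++;
        spin_unlock_bh(&pm->lock);

        return ret;
}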

v4:
 - address Matt's comments on v3.
 - update the PM locking in mptcp_pm_worker.
 - move the lock inside mptcp_pm_create_subflow_or_signal_addr.
 - move the lock inside mptcp_pm_nl_add_addr_received (see the locking
   sketch after this list).
 - invoke the add_addr_received interface from mptcp_pm_worker.
 - invoke the rm_addr_received interface from mptcp_pm_rm_addr_or_subflow.
 - simply call mptcp_pm_close_subflow() in mptcp_pm_subflow_check_next.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742804266.git.tanggeliang@kylinos.cn/
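The "move the lock inside" items follow a common kernel pattern: a locked
wrapper around a "__"-prefixed worker, which also matches the v5 rename
note. A minimal sketch, with the body elided; this is an assumed shape,
not the actual patch:

static void __mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
{
        lockdep_assert_held(&msk->pm.lock);
        /* do the real work with the PM lock held */
}

static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
{
        spin_lock_bh(&msk->pm.lock);
        __mptcp_pm_create_subflow_or_signal_addr(msk);
        spin_unlock_bh(&msk->pm.lock);
}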

v3:
 - merge 'bugfixes for "BPF path manager, part 6, v2"' into this set.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742521397.git.tanggeliang@kylinos.cn/

v2:
 - address Matt's comments on v1.
 - add add_addr_received and rm_addr_received interfaces.
 - drop the subflow_check_next interface.
 - add a "required" or "optional" comment for each group of interfaces
   in struct mptcp_pm_ops.

v1:
- https://patchwork.kernel.org/project/mptcp/cover/cover.1741685260.git.tanggeliang@kylinos.cn/

This series adds new interfaces to struct mptcp_pm_ops; a rough sketch of
how they might look is below.
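The member names mirror the patch titles that follow, and the
required/optional grouping from the v2 notes is illustrative only; the
exact signatures in include/net/mptcp.h may differ:

struct mptcp_pm_ops {
        /* required */
        bool (*accept_new_subflow)(const struct mptcp_sock *msk);

        /* optional */
        void (*established)(struct mptcp_sock *msk);
        void (*subflow_established)(struct mptcp_sock *msk);

        char                    name[MPTCP_PM_NAME_MAX];
        struct module           *owner;
        struct list_head        list;
};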

Geliang Tang (5):
  mptcp: pm: call pm worker handler without pm lock
  mptcp: pm: add accept_new_subflow() interface
  mptcp: pm: add established() interface
  mptcp: pm: add subflow_established() interface
  mptcp: pm: drop is_userspace in subflow_check_next

 include/net/mptcp.h      |   5 ++
 net/mptcp/pm.c           | 100 +++++++++++++++++++--------------------
 net/mptcp/pm_kernel.c    |  47 ++++++++++++------
 net/mptcp/pm_userspace.c |   9 +++-
 net/mptcp/protocol.h     |  27 ++++++++++-
 net/mptcp/subflow.c      |   6 +--
 6 files changed, 122 insertions(+), 72 deletions(-)

-- 
2.43.0
Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Matthieu Baerts 10 months ago
Hi Geliang,

On 28/03/2025 05:01, Geliang Tang wrote:
> From: Geliang Tang <tanggeliang@kylinos.cn>
> 
> v6:
>  - squash the accept_new_subflow patches into one.
>  - change "pm->subflows == subflows_max - 1" to
>    "pm->subflows + 1 == subflows_max".
>  - no longer call accept_new_subflow under the PM lock.
>  - add mptcp_pm_accept_subflow helpers.
>  - drop READ_ONCE in mptcp_pm_worker.
>  - clear all the pm status flags once in mptcp_pm_worker.

Thank you for the new version.

FYI, I'm still thinking about how the PM lock should be handled. I
will come back to you later. I might suggest a new version with some
modifications if that's OK; that might be easier.

Cheers,
Matt
-- 
Sponsored by the NGI0 Core fund.
Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Geliang Tang 9 months, 4 weeks ago
Hi Matt,

On Tue, 2025-04-08 at 20:13 +0200, Matthieu Baerts wrote:
> Hi Geliang,
> 
> On 28/03/2025 05:01, Geliang Tang wrote:
> > From: Geliang Tang <tanggeliang@kylinos.cn>
> > 
> > v6:
> >  - squash the accept_new_subflow patches into one.
> >  - change "pm->subflows == subflows_max - 1" to
> >    "pm->subflows + 1 == subflows_max".
> >  - no longer call accept_new_subflow under the PM lock.
> >  - add mptcp_pm_accept_subflow helpers.
> >  - drop READ_ONCE in mptcp_pm_worker.
> >  - clear all the pm status flags once in mptcp_pm_worker.
> 
> Thank you for the new version.
> 
> FYI, I'm still thinking about how the PM lock should be handled. I
> will come back to you later. I might suggest a new version with some
> modifications if that's OK; that might be easier.

Thanks for helping me think through this part around the PM lock.
Looking forward to your feedback.

-Geliang

> 
> Cheers,
> Matt

Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Geliang Tang 9 months, 3 weeks ago
Hi Matt,

On Tue, 2025-04-15 at 11:16 +0800, Geliang Tang wrote:
> Hi Matt,
> 
> On Tue, 2025-04-08 at 20:13 +0200, Matthieu Baerts wrote:
> > Hi Geliang,
> > 
> > On 28/03/2025 05:01, Geliang Tang wrote:
> > > From: Geliang Tang <tanggeliang@kylinos.cn>
> > > 
> > > v6:
> > >  - squash the accept_new_subflow patches into one.
> > >  - change "pm->subflows == subflows_max - 1" to
> > >    "pm->subflows + 1 == subflows_max".
> > >  - no longer call accept_new_subflow under the PM lock.
> > >  - add mptcp_pm_accept_subflow helpers.
> > >  - drop READ_ONCE in mptcp_pm_worker.
> > >  - clear all the pm status flags once in mptcp_pm_worker.
> > 
> > Thank you for the new version.
> > 
> > FYI, I'm still thinking about how the PM lock should be handled. I
> > will come back to you later. I might suggest a new version with
> > some modifications if that's OK; that might be easier.
> 
> Thanks for helping me think through this part around the PM lock.
> Looking forward to your feedback.

If you have time to take over this part around the PM lock, that would
be even better and I would be very grateful.

Thanks,
-Geliang

> 
> -Geliang
> 
> > 
> > Cheers,
> > Matt
> 
> 

Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Matthieu Baerts 9 months, 3 weeks ago
Hi Geliang,

On 16/04/2025 09:29, Geliang Tang wrote:
> Hi Matt,
> 
> On Tue, 2025-04-15 at 11:16 +0800, Geliang Tang wrote:
>> Hi Matt,
>>
>> On Tue, 2025-04-08 at 20:13 +0200, Matthieu Baerts wrote:
>>> Hi Geliang,
>>>
>>> On 28/03/2025 05:01, Geliang Tang wrote:
>>>> From: Geliang Tang <tanggeliang@kylinos.cn>
>>>>
>>>> v6:
>>>>  - squash the accept_new_subflow patches into one.
>>>>  - change "pm->subflows == subflows_max - 1" to
>>>>    "pm->subflows + 1 == subflows_max".
>>>>  - no longer call accept_new_subflow under the PM lock.
>>>>  - add mptcp_pm_accept_subflow helpers.
>>>>  - drop READ_ONCE in mptcp_pm_worker.
>>>>  - clear all the pm status flags once in mptcp_pm_worker.
>>>
>>> Thank you for the new version.
>>>
>>> FYI, I'm still thinking about how the PM lock should be handled. I
>>> will come back to you later. I might suggest a new version with
>>> some modifications if that's OK; that might be easier.
>>
>> Thanks for helping me think through this part around the PM lock.
>> Looking forward to your feedback.
> 
> If you have time to take over this part around the PM lock, that would
> be even better and I would be very grateful.

Sure, no problem, I will have a look. But sadly, for the moment, it is
hard for me to dedicate a long time to some tasks. It might take a bit
of time for me to look at it, but I think that's fine; we are not in a
rush.

Cheers,
Matt
-- 
Sponsored by the NGI0 Core fund.

Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by MPTCP CI 10 months, 2 weeks ago
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/14121889205

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c19b176f03f3
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=947940


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like here, it is possible that
some reported issues are not due to your modifications. Still, do not
hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)