[PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by Geliang Tang 1 week ago
From: Geliang Tang <tanggeliang@kylinos.cn>

v6:
 - squash accept_new_subflow patches into one.
 - change "pm->subflows == subflows_max - 1" to
	pm->subflows + 1 == subflows_max.
 - do not call accept_new_subflow under the PM lock.
 - add mptcp_pm_accept_subflow helpers.
 - drop READ_ONCE in mptcp_pm_worker.
 - clear all the pm status flags once in mptcp_pm_worker.
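
The last two items describe a snapshot pattern: read pm->status under the
PM lock, clear every flag at once, then dispatch without the lock. Below is
a minimal sketch of that pattern, not the series' actual code; handle_event()
is a hypothetical stand-in for the per-flag handlers:

    static void mptcp_pm_worker(struct mptcp_sock *msk)
    {
            struct mptcp_pm_data *pm = &msk->pm;
            u8 status;

            spin_lock_bh(&pm->lock);
            status = pm->status;    /* lock held: plain read, no READ_ONCE */
            pm->status = 0;         /* clear all the pm status flags once */
            spin_unlock_bh(&pm->lock);

            /* handlers run without the PM lock held */
            if (status & BIT(MPTCP_PM_ESTABLISHED))
                    handle_event(msk, MPTCP_PM_ESTABLISHED);
    }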

v5:
 - add comment "call from the subflow/msk context" for mptcp_sched_ops.
 - add new helper mptcp_pm_accept_new_subflow.
 - add "bool allow" parameter for mptcp_pm_accept_new_subflow, and drop
   .allow_new_subflow interface.
 - use a copy of pm->status in mptcp_pm_worker.
 - rename mptcp_pm_create_subflow_or_signal_addr with a "__" prefix.
 - drop "!update_subflows" in mptcp_pm_subflow_check_next.
 - add_addr_received/rm_addr_received interfaces will be added in the
   next series.
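
For context, a minimal sketch of what the mptcp_pm_accept_new_subflow()
helper mentioned above could look like, assuming the path manager's verdict
is passed in as "allow" so the callback itself never runs under the PM lock;
the helper in the actual series may differ:

    static bool mptcp_pm_accept_new_subflow(struct mptcp_sock *msk, bool allow)
    {
            struct mptcp_pm_data *pm = &msk->pm;
            unsigned int subflows_max = mptcp_pm_get_subflows_max(msk);
            bool ret = false;

            spin_lock_bh(&pm->lock);
            if (allow && pm->subflows < subflows_max) {
                    pm->subflows++;
                    ret = true;
            }
            spin_unlock_bh(&pm->lock);

            return ret;
    }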

v4:
 - address Matt's comments on v3.
 - update pm locks in mptcp_pm_worker.
 - move the lock inside mptcp_pm_create_subflow_or_signal_addr.
 - move the lock inside mptcp_pm_nl_add_addr_received.
 - invoke add_addr_received interface from mptcp_pm_worker.
 - invoke rm_addr_received interface from mptcp_pm_rm_addr_or_subflow.
 - simply call mptcp_pm_close_subflow() in mptcp_pm_subflow_check_next.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742804266.git.tanggeliang@kylinos.cn/

v3:
 - merge 'bugfixes for "BPF path manager, part 6, v2"' into this set.
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1742521397.git.tanggeliang@kylinos.cn/

v2:
 - address Matt's comments on v1.
 - add add_addr_received and rm_addr_received interfaces.
 - drop subflow_check_next interface.
 - add a "required" or "optional" comment for a group of interfaces in
   struct mptcp_pm_ops.

v1:
- https://patchwork.kernel.org/project/mptcp/cover/cover.1741685260.git.tanggeliang@kylinos.cn/

This series adds new interfaces to struct mptcp_pm_ops; a rough sketch of
their shape follows the patch list below.

Geliang Tang (5):
  mptcp: pm: call pm worker handler without pm lock
  mptcp: pm: add accept_new_subflow() interface
  mptcp: pm: add established() interface
  mptcp: pm: add subflow_established() interface
  mptcp: pm: drop is_userspace in subflow_check_next
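
A rough sketch of the resulting struct mptcp_pm_ops, inferred from the
patch titles only: the exact prototypes may differ, and callbacks added by
earlier parts of the series are omitted. The "optional" markers follow the
v2 note about commenting groups of interfaces:

    struct mptcp_pm_ops {
            /* optional: accept or reject a new incoming subflow */
            bool (*accept_new_subflow)(const struct mptcp_sock *msk);
            /* optional: the MPTCP connection is fully established */
            void (*established)(struct mptcp_sock *msk);
            /* optional: a new subflow is fully established */
            void (*subflow_established)(struct mptcp_sock *msk);

            char                    name[MPTCP_PM_NAME_MAX];
            struct module           *owner;
            struct list_head        list;
    };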

 include/net/mptcp.h      |   5 ++
 net/mptcp/pm.c           | 100 +++++++++++++++++++--------------------
 net/mptcp/pm_kernel.c    |  47 ++++++++++++------
 net/mptcp/pm_userspace.c |   9 +++-
 net/mptcp/protocol.h     |  27 ++++++++++-
 net/mptcp/subflow.c      |   6 +--
 6 files changed, 122 insertions(+), 72 deletions(-)

-- 
2.43.0
Re: [PATCH mptcp-next v6 0/5] BPF path manager, part 6
Posted by MPTCP CI 1 week ago
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/14121889205

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c19b176f03f3
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=947940


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test suite
stable when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)