[PATCH mptcp-next v3 0/3] BPF path manager, part 4

Posted by Geliang Tang 5 days, 6 hours ago
From: Geliang Tang <tanggeliang@kylinos.cn>

v3:
 - rename the 2nd parameter of get_local_id() from 'local' to 'skc'.
 - keep the 'msk_sport' check in mptcp_userspace_pm_get_local_id().
 - return 'err' instead of '0' in userspace_pm_subflow_create().
 - drop 'ret' variable in mptcp_pm_data_reset().
 - fix typos in commit log.

Depends on: "BPF path manager, part 3" v4
Based-on: <cover.1737012165.git.tanggeliang@kylinos.cn>

v2:
 - update get_local_id interface in patch 2.

The get_addr() and dump_addr() interfaces of the BPF userspace pm have
been dropped, as Matt suggested.

In order to implement a BPF userspace path manager, it is necessary to
unify the path manager interfaces. This set contains some cleanups and
refactoring to unify these interfaces in kernel space. Finally, it
defines a struct mptcp_pm_ops for a userspace path manager like this:

struct mptcp_pm_ops {
	int (*address_announce)(struct mptcp_sock *msk,
				struct mptcp_pm_addr_entry *local);
	int (*address_remove)(struct mptcp_sock *msk, u8 id);
	int (*subflow_create)(struct mptcp_sock *msk,
			      struct mptcp_pm_addr_entry *local,
			      struct mptcp_addr_info *remote);
	int (*subflow_destroy)(struct mptcp_sock *msk,
			       struct mptcp_pm_addr_entry *local,
			       struct mptcp_addr_info *remote);
	int (*get_local_id)(struct mptcp_sock *msk,
			    struct mptcp_pm_addr_entry *skc);
	u8 (*get_flags)(struct mptcp_sock *msk,
			struct mptcp_addr_info *skc);
	int (*set_flags)(struct mptcp_sock *msk,
			 struct mptcp_pm_addr_entry *local,
			 struct mptcp_addr_info *remote);

	u8			type;
	struct module		*owner;
	struct list_head	list;

	void (*init)(struct mptcp_sock *msk);
	void (*release)(struct mptcp_sock *msk);
} ____cacheline_aligned_in_smp;
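
As a rough sketch of how an implementation would use this, the stubs
below fill in a few of the callbacks. The stub bodies and the
mptcp_pm_register() helper are illustrative assumptions, not
necessarily the exact API added by this series:

static int example_address_announce(struct mptcp_sock *msk,
				    struct mptcp_pm_addr_entry *local)
{
	/* Announce 'local' to the peer (ADD_ADDR); stub for illustration. */
	return 0;
}

static void example_init(struct mptcp_sock *msk)
{
	/* Per-connection setup when this PM is attached to 'msk'. */
}

static void example_release(struct mptcp_sock *msk)
{
	/* Per-connection teardown. */
}

static struct mptcp_pm_ops example_pm_ops = {
	.address_announce	= example_address_announce,
	.init			= example_init,
	.release		= example_release,
	.type			= MPTCP_PM_TYPE_USERSPACE, /* uapi enum mptcp_pm_type */
	.owner			= THIS_MODULE,
};

/* Hypothetical registration, in the spirit of
 * tcp_register_congestion_control():
 *
 *	mptcp_pm_register(&example_pm_ops);
 */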

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/74

Geliang Tang (3):
  mptcp: define struct mptcp_pm_ops
  mptcp: register default userspace pm
  mptcp: init and release mptcp_pm_ops

 include/net/mptcp.h      |  27 +++
 net/mptcp/pm.c           |   5 +
 net/mptcp/pm_userspace.c | 374 ++++++++++++++++++++++++++++-----------
 net/mptcp/protocol.c     |   1 +
 net/mptcp/protocol.h     |   9 +
 5 files changed, 313 insertions(+), 103 deletions(-)

-- 
2.43.0
Re: [PATCH mptcp-next v3 0/3] BPF path manager, part 4
Posted by MPTCP CI 5 days, 5 hours ago
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Unstable: 1 failed test(s): selftest_mptcp_join - Critical: 1 Call Trace(s) ❌
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/12804456330

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/50059925b7f3
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=925962


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal
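
The debug variant that failed above can presumably be reproduced by
swapping the last argument; 'auto-debug' is assumed here to be the
matching mode name (see the repository below for the exact list):

    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug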

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)