[PATCH mptcp-next v3 0/9] BPF path manager, part 1
Posted by Geliang Tang 2 weeks ago
From: Geliang Tang <tanggeliang@kylinos.cn>

v3:
 - address Matt's comments in v2 (thanks)
 - only include cleanups and refactoring patches in this set.

v2:
 - add BPF-related code in this set (32-36).

In order to implement a BPF userspace path manager, it is necessary to
unify the path manager interfaces. This set contains cleanups and
refactoring patches that unify those interfaces in kernel space.
Finally, it defines a struct mptcp_pm_ops for a userspace path manager
like this:

struct mptcp_pm_ops {
        int (*address_announce)(struct mptcp_sock *msk,
                                struct mptcp_pm_addr_entry *local);
        int (*address_remove)(struct mptcp_sock *msk, u8 id);
        int (*subflow_create)(struct mptcp_sock *msk,
                              struct mptcp_pm_addr_entry *local,
                              struct mptcp_addr_info *remote);
        int (*subflow_destroy)(struct mptcp_sock *msk,
                               struct mptcp_pm_addr_entry *local,
                               struct mptcp_addr_info *remote);
        int (*get_local_id)(struct mptcp_sock *msk,
                            struct mptcp_pm_addr_entry *local);
        u8 (*get_flags)(struct mptcp_sock *msk,
                        struct mptcp_addr_info *skc);
        struct mptcp_pm_addr_entry *(*get_addr)(struct mptcp_sock *msk,
                                                u8 id);
        int (*dump_addr)(struct mptcp_sock *msk,
                         mptcp_pm_addr_id_bitmap_t *bitmap);
        int (*set_flags)(struct mptcp_sock *msk,
                         struct mptcp_pm_addr_entry *local,
                         struct mptcp_addr_info *remote);

        u8                      type;
        struct module           *owner;
        struct list_head        list;

        void (*init)(struct mptcp_sock *msk);
        void (*release)(struct mptcp_sock *msk);
} ____cacheline_aligned_in_smp;
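
For illustration only, here is a minimal sketch of how a path manager
could plug into this struct once the registration side lands:
mptcp_pm_register() / mptcp_pm_unregister() are hypothetical helpers
expected from a later part, not from this series, while
MPTCP_PM_TYPE_USERSPACE is the existing uapi enum value:

#include <linux/module.h>
#include "protocol.h" /* struct mptcp_sock, struct mptcp_pm_addr_entry */

static int example_address_announce(struct mptcp_sock *msk,
                                    struct mptcp_pm_addr_entry *local)
{
        /* Tell the stack to announce 'local' with an ADD_ADDR option. */
        return 0;
}

static struct mptcp_pm_ops example_pm_ops = {
        .address_announce = example_address_announce,
        .type             = MPTCP_PM_TYPE_USERSPACE,
        .owner            = THIS_MODULE,
};

static int __init example_pm_init(void)
{
        /* Hypothetical registration helper (not part of this set). */
        return mptcp_pm_register(&example_pm_ops);
}

static void __exit example_pm_exit(void)
{
        mptcp_pm_unregister(&example_pm_ops);
}

module_init(example_pm_init);
module_exit(example_pm_exit);
MODULE_LICENSE("GPL");

The owner/list/type fields mirror struct tcp_congestion_ops, which
suggests a simple registry where each msk picks the ops matching its
configured PM type.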

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/74

Geliang Tang (9):
  mptcp: add mptcp_userspace_pm_lookup_addr helper
  mptcp: add mptcp_for_each_userspace_pm_addr macro
  mptcp: add mptcp_userspace_pm_get_sock helper
  mptcp: move mptcp_pm_remove_addrs into pm_userspace
  mptcp: drop free_list for deleting entries
  mptcp: use mptcp_pm_local in pm_netlink only
  mptcp: drop struct mptcp_pm_add_entry
  mptcp: change local addr type of subflow_destroy
  mptcp: drop useless "err = 0" in subflow_destroy

 net/mptcp/pm_netlink.c   |  97 +++++--------
 net/mptcp/pm_userspace.c | 306 +++++++++++++++++----------------------
 net/mptcp/protocol.h     |  35 +++--
 net/mptcp/subflow.c      |   2 +-
 4 files changed, 198 insertions(+), 242 deletions(-)

-- 
2.45.2
Re: [PATCH mptcp-next v3 0/9] BPF path manager, part 1
Posted by MPTCP CI 2 weeks ago
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Unstable: 1 failed test(s): selftest_mptcp_connect 🔴
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Unstable: 1 failed test(s): bpftest_test_progs-cpuv4_mptcp 🔴
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11718116131

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c83e8073fbae
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=907218


If there are some issues, you can reproduce them using the same
environment as the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal
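
To target the debug validation instead, the last argument selects the
mode; assuming the image also accepts an 'auto-debug' mode (see the
repository below for the exact list of modes), that would be:

    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug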

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)