From: Geliang Tang <tanggeliang@kylinos.cn>

v13:
 - use the '__ign' suffix to ignore the argument type checks of
   bpf_mptcp_subflow_ctx() and bpf_sk_stream_memory_free(), instead of
   adding a new helper bpf_mptcp_send_info_to_ssk().
 - use 'bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk)' instead
   of 'bpf_for_each(mptcp_subflow, subflow, msk)'.
 - keep struct mptcp_sched_data for future use.

Depends on:
 - Squash to "Add mptcp_subflow bpf_iter support", v2

Based-on: <cover.1738470660.git.tanggeliang@kylinos.cn>

v12:
 - drop struct mptcp_sched_data.
 - rebased on "split get_subflow interface into two" v2.

v11:
If another squash-to patchset (Squash to "Add mptcp_subflow bpf_iter
support") under review is merged before this set, v10 will fail to run.
v11 fixes this issue and can run regardless of whether it is merged
before or after that squash-to patchset.

Compared with v10, only patches 3, 5, and 8 have been modified:
 - use mptcp_subflow_tcp_sock instead of bpf_mptcp_subflow_tcp_sock in
   patch 3 and patch 5.
 - drop bpf_mptcp_sched_kfunc_set, use bpf_mptcp_common_kfunc_set instead
   in patch 8.

v10:
 - drop the mptcp_subflow_set_scheduled() helper and WRITE_ONCE() in BPF.
 - add a new BPF helper bpf_mptcp_send_info_to_ssk() for the burst
   scheduler.

v9:
 - merge 'Fixes for "use bpf_iter in bpf schedulers" v8' into this set.
 - rebased on "add netns helpers" v4

v8:
 - address Mat's comments in v7.
 - move the sk_stream_memory_free check inside the bpf_for_each() loop.
 - implement the mptcp_subflow_set_scheduled helper in BPF.
 - add cleanup patches into this set again.

v7:
 - move cleanup patches out of this set.
 - rebased.

v6:
 - rebased to "add mptcp_subflow bpf_iter" v10.

v5:
...

With the newly added mptcp_subflow bpf_iter, we can get rid of the
subflows array "contexts" in struct mptcp_sched_data. This set uses the
bpf_for_each(mptcp_subflow) helper to update all the bpf schedulers:

	bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk) {
		... ...
		mptcp_subflow_set_scheduled(subflow, true);
	}
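
For reference, a complete get_send hook built around this loop can be as
small as the sketch below. It is modelled on the bpf_red and bpf_burst
selftest schedulers; the bpf_for_each() macro and the kfunc declarations
are assumed to come from the selftests' mptcp_bpf.h, so treat it as an
illustration of the pattern rather than the exact code in this series:

	SEC("struct_ops")
	int BPF_PROG(bpf_example_get_send, struct mptcp_sock *msk,
		     struct mptcp_sched_data *data)
	{
		struct mptcp_subflow_context *subflow;

		/* schedule every currently active subflow on the conn_list */
		bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk) {
			if (!mptcp_subflow_active(subflow))
				continue;
			mptcp_subflow_set_scheduled(subflow, true);
		}

		return 0;
	}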

Geliang Tang (9):
  Squash to "bpf: Register mptcp common kfunc set"
  Revert "mptcp: add sched_data helpers"
  Squash to "bpf: Export mptcp packet scheduler helpers"
  Squash to "selftests/bpf: Add bpf_first scheduler & test"
  Squash to "selftests/bpf: Add bpf_bkup scheduler & test"
  Squash to "selftests/bpf: Add bpf_rr scheduler & test"
  Squash to "selftests/bpf: Add bpf_red scheduler & test"
  Squash to "selftests/bpf: Add bpf_burst scheduler & test"
  mptcp: drop subflow contexts in mptcp_sched_data

 include/net/mptcp.h                           |  3 -
 net/mptcp/bpf.c                               | 52 +++++++------
 net/mptcp/protocol.h                          |  2 -
 net/mptcp/sched.c                             | 22 ------
 tools/testing/selftests/bpf/progs/mptcp_bpf.h |  3 -
 .../selftests/bpf/progs/mptcp_bpf_bkup.c      | 16 +---
 .../selftests/bpf/progs/mptcp_bpf_burst.c     | 78 +++++++------------
 .../selftests/bpf/progs/mptcp_bpf_first.c     |  8 +-
 .../selftests/bpf/progs/mptcp_bpf_red.c       |  8 +-
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 31 ++++----
 10 files changed, 80 insertions(+), 143 deletions(-)

--
2.43.0
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Please update the subject to
3
Instead of adding a new BPF function bpf_mptcp_send_info_to_ssk() in
4
4
v12, this patch uses a much simpler approach: adding an '__ign'
5
    bpf: Add mptcp packet scheduler struct_ops
5
suffix to the argument of bpf_mptcp_subflow_ctx() to let BPF
6
6
ignore the type check of this argument.
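
As an illustration (not part of the diff below), this is what the
annotation allows on the BPF side: the verifier no longer requires a
trusted socket argument, so a scheduler can hand over e.g. msk->first
directly and rely on the runtime checks done inside the kfunc. The
extern declaration is assumed to live in the selftests' mptcp_bpf.h:

	extern struct mptcp_subflow_context *
	bpf_mptcp_subflow_ctx(const struct sock *sk) __ksym;

	/* in a struct_ops hook, where msk is the MPTCP socket */
	subflow = bpf_mptcp_subflow_ctx(msk->first);
	if (!subflow)
		return -1;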
7
Drop mptcp_sock_type and mptcp_subflow_type.
8
7
9
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
8
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
10
---
9
---
11
net/mptcp/bpf.c | 15 ++++++---------
10
net/mptcp/bpf.c | 8 ++++----
12
1 file changed, 6 insertions(+), 9 deletions(-)
11
1 file changed, 4 insertions(+), 4 deletions(-)
13
12
14
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
13
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/net/mptcp/bpf.c
15
--- a/net/mptcp/bpf.c
17
+++ b/net/mptcp/bpf.c
16
+++ b/net/mptcp/bpf.c
18
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ struct bpf_iter_mptcp_subflow_kern {
19
18
__bpf_kfunc_start_defs();
20
#ifdef CONFIG_BPF_JIT
19
21
static struct bpf_struct_ops bpf_mptcp_sched_ops;
20
__bpf_kfunc static struct mptcp_subflow_context *
22
-static const struct btf_type *mptcp_sock_type, *mptcp_subflow_type __read_mostly;
21
-bpf_mptcp_subflow_ctx(const struct sock *sk)
23
static u32 mptcp_sock_id, mptcp_subflow_id;
22
+bpf_mptcp_subflow_ctx(const struct sock *sk__ign)
24
25
+/* MPTCP BPF packet scheduler */
26
+
27
static const struct bpf_func_proto *
28
bpf_mptcp_sched_get_func_proto(enum bpf_func_id func_id,
29
             const struct bpf_prog *prog)
30
@@ -XXX,XX +XXX,XX @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
31
                     const struct bpf_reg_state *reg,
32
                     int off, int size)
33
{
23
{
34
-    const struct btf_type *t;
24
-    if (sk && sk_fullsock(sk) &&
35
+    u32 id = reg->btf_id;
25
-     sk->sk_protocol == IPPROTO_TCP && sk_is_mptcp(sk))
36
    size_t end;
26
-        return mptcp_subflow_ctx(sk);
37
27
+    if (sk__ign && sk_fullsock(sk__ign) &&
38
-    t = btf_type_by_id(reg->btf, reg->btf_id);
28
+     sk__ign->sk_protocol == IPPROTO_TCP && sk_is_mptcp(sk__ign))
39
-
29
+        return mptcp_subflow_ctx(sk__ign);
40
-    if (t == mptcp_sock_type) {
30
41
+    if (id == mptcp_sock_id) {
31
    return NULL;
42
        switch (off) {
43
        case offsetof(struct mptcp_sock, snd_burst):
44
            end = offsetofend(struct mptcp_sock, snd_burst);
45
@@ -XXX,XX +XXX,XX @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
46
                off);
47
            return -EACCES;
48
        }
49
-    } else if (t == mptcp_subflow_type) {
50
+    } else if (id == mptcp_subflow_id) {
51
        switch (off) {
52
        case offsetof(struct mptcp_subflow_context, avg_pacing_rate):
53
            end = offsetofend(struct mptcp_subflow_context, avg_pacing_rate);
54
@@ -XXX,XX +XXX,XX @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
55
56
    if (off + size > end) {
57
        bpf_log(log, "access beyond %s at off %u size %u ended at %zu",
58
-            t == mptcp_sock_type ? "mptcp_sock" : "mptcp_subflow_context",
59
+            id == mptcp_sock_id ? "mptcp_sock" : "mptcp_subflow_context",
60
            off, size, end);
61
        return -EACCES;
62
    }
63
@@ -XXX,XX +XXX,XX @@ static int bpf_mptcp_sched_init(struct btf *btf)
64
    if (type_id < 0)
65
        return -EINVAL;
66
    mptcp_sock_id = type_id;
67
-    mptcp_sock_type = btf_type_by_id(btf, mptcp_sock_id);
68
69
    type_id = btf_find_by_name_kind(btf, "mptcp_subflow_context",
70
                    BTF_KIND_STRUCT);
71
    if (type_id < 0)
72
        return -EINVAL;
73
    mptcp_subflow_id = type_id;
74
-    mptcp_subflow_type = btf_type_by_id(btf, mptcp_subflow_id);
75
76
    return 0;
77
}
32
}
78
--
33
--
79
2.43.0
34
2.43.0
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Drop this patch.
3
Drop this patch. bpf_mptcp_subflow_ctx_by_pos and
4
mptcp_sched_data_set_contexts are useless now.
4
5
5
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
---
7
---
7
net/mptcp/bpf.c | 8 --------
8
net/mptcp/bpf.c | 8 --------
8
net/mptcp/protocol.h | 2 --
9
net/mptcp/protocol.h | 2 --
...
...
11
12
12
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
13
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/net/mptcp/bpf.c
15
--- a/net/mptcp/bpf.c
15
+++ b/net/mptcp/bpf.c
16
+++ b/net/mptcp/bpf.c
16
@@ -XXX,XX +XXX,XX @@ __bpf_kfunc static void bpf_mptcp_sock_release(struct mptcp_sock *msk)
17
@@ -XXX,XX +XXX,XX @@ bpf_iter_mptcp_subflow_destroy(struct bpf_iter_mptcp_subflow *it)
17
    WARN_ON_ONCE(!sk || !refcount_dec_not_one(&sk->sk_refcnt));
18
{
18
}
19
}
19
20
20
-__bpf_kfunc struct mptcp_subflow_context *
21
-__bpf_kfunc struct mptcp_subflow_context *
21
-bpf_mptcp_subflow_ctx_by_pos(const struct mptcp_sched_data *data, unsigned int pos)
22
-bpf_mptcp_subflow_ctx_by_pos(const struct mptcp_sched_data *data, unsigned int pos)
22
-{
23
-{
...
...
71
-
72
-
72
int mptcp_sched_get_send(struct mptcp_sock *msk)
73
int mptcp_sched_get_send(struct mptcp_sock *msk)
73
{
74
{
74
    struct mptcp_subflow_context *subflow;
75
    struct mptcp_subflow_context *subflow;
75
@@ -XXX,XX +XXX,XX @@ int mptcp_sched_get_send(struct mptcp_sock *msk)
76
@@ -XXX,XX +XXX,XX @@ int mptcp_sched_get_send(struct mptcp_sock *msk)
76
    data.reinject = false;
77
77
    if (msk->sched == &mptcp_sched_default || !msk->sched)
78
    if (msk->sched == &mptcp_sched_default || !msk->sched)
78
        return mptcp_sched_default_get_subflow(msk, &data);
79
        return mptcp_sched_default_get_send(msk, &data);
79
-    mptcp_sched_data_set_contexts(msk, &data);
80
-    mptcp_sched_data_set_contexts(msk, &data);
80
    return msk->sched->get_subflow(msk, &data);
81
    return msk->sched->get_send(msk, &data);
81
}
82
}
82
83
83
@@ -XXX,XX +XXX,XX @@ int mptcp_sched_get_retrans(struct mptcp_sock *msk)
84
@@ -XXX,XX +XXX,XX @@ int mptcp_sched_get_retrans(struct mptcp_sock *msk)
84
    data.reinject = true;
85
    if (msk->sched == &mptcp_sched_default || !msk->sched)
85
    if (msk->sched == &mptcp_sched_default || !msk->sched)
86
        return mptcp_sched_default_get_subflow(msk, &data);
86
        return mptcp_sched_default_get_retrans(msk, &data);
87
87
-    mptcp_sched_data_set_contexts(msk, &data);
88
-    mptcp_sched_data_set_contexts(msk, &data);
88
    return msk->sched->get_subflow(msk, &data);
89
    if (msk->sched->get_retrans)
89
}
90
        return msk->sched->get_retrans(msk, &data);
91
    return msk->sched->get_send(msk, &data);
90
--
92
--
91
2.43.0
93
2.43.0
diff view generated by jsdifflib
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Please update the subject to
4
5
    "bpf: Export mptcp packet scheduler helpers"
6
7
Remove bpf_mptcp_subflow_ctx_by_pos from BPF kfunc set.
3
Remove bpf_mptcp_subflow_ctx_by_pos from BPF kfunc set.
4
Drop bpf_mptcp_sched_kfunc_set, use bpf_mptcp_common_kfunc_set instead.
5
Add new helpers bpf_mptcp_subflow_tcp_sock() and
6
bpf_sk_stream_memory_free().
8
7
9
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
8
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
10
---
9
---
11
net/mptcp/bpf.c | 1 -
10
net/mptcp/bpf.c | 38 ++++++++++++++++++++++++--------------
12
1 file changed, 1 deletion(-)
11
1 file changed, 24 insertions(+), 14 deletions(-)
13
12
14
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
13
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/net/mptcp/bpf.c
15
--- a/net/mptcp/bpf.c
17
+++ b/net/mptcp/bpf.c
16
+++ b/net/mptcp/bpf.c
18
@@ -XXX,XX +XXX,XX @@ static const struct btf_kfunc_id_set bpf_mptcp_common_kfunc_set = {
17
@@ -XXX,XX +XXX,XX @@ bpf_mptcp_subflow_ctx(const struct sock *sk__ign)
19
18
    return NULL;
20
BTF_KFUNCS_START(bpf_mptcp_sched_kfunc_ids)
19
}
20
21
+__bpf_kfunc static struct sock *
22
+bpf_mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow)
23
+{
24
+    if (!subflow)
25
+        return NULL;
26
+
27
+    return mptcp_subflow_tcp_sock(subflow);
28
+}
29
+
30
__bpf_kfunc static int
31
bpf_iter_mptcp_subflow_new(struct bpf_iter_mptcp_subflow *it,
32
             struct sock *sk)
33
@@ -XXX,XX +XXX,XX @@ __bpf_kfunc static bool bpf_mptcp_subflow_queues_empty(struct sock *sk)
34
    return tcp_rtx_queue_empty(sk);
35
}
36
37
+__bpf_kfunc static bool bpf_sk_stream_memory_free(const struct sock *sk__ign)
38
+{
39
+    if (sk__ign && sk_fullsock(sk__ign) &&
40
+     sk__ign->sk_protocol == IPPROTO_TCP && sk_is_mptcp(sk__ign))
41
+        return sk_stream_memory_free(sk__ign);
42
+
43
+    return false;
44
+}
45
+
46
__bpf_kfunc_end_defs();
47
48
BTF_KFUNCS_START(bpf_mptcp_common_kfunc_ids)
49
BTF_ID_FLAGS(func, bpf_mptcp_subflow_ctx, KF_RET_NULL)
50
+BTF_ID_FLAGS(func, bpf_mptcp_subflow_tcp_sock, KF_RET_NULL)
51
BTF_ID_FLAGS(func, bpf_iter_mptcp_subflow_new, KF_ITER_NEW | KF_TRUSTED_ARGS)
52
BTF_ID_FLAGS(func, bpf_iter_mptcp_subflow_next, KF_ITER_NEXT | KF_RET_NULL)
53
BTF_ID_FLAGS(func, bpf_iter_mptcp_subflow_destroy, KF_ITER_DESTROY)
54
-BTF_KFUNCS_END(bpf_mptcp_common_kfunc_ids)
55
-
56
-static const struct btf_kfunc_id_set bpf_mptcp_common_kfunc_set = {
57
-    .owner    = THIS_MODULE,
58
-    .set    = &bpf_mptcp_common_kfunc_ids,
59
-};
60
-
61
-BTF_KFUNCS_START(bpf_mptcp_sched_kfunc_ids)
21
BTF_ID_FLAGS(func, mptcp_subflow_set_scheduled)
62
BTF_ID_FLAGS(func, mptcp_subflow_set_scheduled)
22
-BTF_ID_FLAGS(func, bpf_mptcp_subflow_ctx_by_pos)
63
-BTF_ID_FLAGS(func, bpf_mptcp_subflow_ctx_by_pos)
23
BTF_ID_FLAGS(func, mptcp_subflow_active)
64
BTF_ID_FLAGS(func, mptcp_subflow_active)
24
BTF_ID_FLAGS(func, mptcp_set_timeout)
65
BTF_ID_FLAGS(func, mptcp_set_timeout)
25
BTF_ID_FLAGS(func, mptcp_wnd_end)
66
BTF_ID_FLAGS(func, mptcp_wnd_end)
67
-BTF_ID_FLAGS(func, tcp_stream_memory_free)
68
+BTF_ID_FLAGS(func, bpf_sk_stream_memory_free, KF_RET_NULL)
69
BTF_ID_FLAGS(func, bpf_mptcp_subflow_queues_empty)
70
BTF_ID_FLAGS(func, mptcp_pm_subflow_chk_stale, KF_SLEEPABLE)
71
-BTF_KFUNCS_END(bpf_mptcp_sched_kfunc_ids)
72
+BTF_KFUNCS_END(bpf_mptcp_common_kfunc_ids)
73
74
-static const struct btf_kfunc_id_set bpf_mptcp_sched_kfunc_set = {
75
+static const struct btf_kfunc_id_set bpf_mptcp_common_kfunc_set = {
76
    .owner    = THIS_MODULE,
77
-    .set    = &bpf_mptcp_sched_kfunc_ids,
78
+    .set    = &bpf_mptcp_common_kfunc_ids,
79
};
80
81
static int __init bpf_mptcp_kfunc_init(void)
82
@@ -XXX,XX +XXX,XX @@ static int __init bpf_mptcp_kfunc_init(void)
83
    ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCKOPT,
84
                     &bpf_mptcp_common_kfunc_set);
85
    ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
86
-                     &bpf_mptcp_sched_kfunc_set);
87
+                     &bpf_mptcp_common_kfunc_set);
88
#ifdef CONFIG_BPF_JIT
89
    ret = ret ?: register_bpf_struct_ops(&bpf_mptcp_sched_ops, mptcp_sched_ops);
90
#endif
26
--
91
--
27
2.43.0
92
2.43.0
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Use the newly added bpf_for_each() helper to walk the conn_list.
3
Use the newly added bpf_for_each() helper to walk the conn_list.
4
Drop bpf_mptcp_subflow_ctx_by_pos declaration.
4
5
5
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
---
7
---
7
tools/testing/selftests/bpf/progs/mptcp_bpf.h | 3 ---
8
tools/testing/selftests/bpf/progs/mptcp_bpf.h | 3 ---
8
tools/testing/selftests/bpf/progs/mptcp_bpf_first.c | 8 +++++++-
9
tools/testing/selftests/bpf/progs/mptcp_bpf_first.c | 8 +++++++-
...
...
23
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
24
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
24
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
25
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
26
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
26
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
27
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
27
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
28
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
28
int BPF_PROG(bpf_first_get_subflow, struct mptcp_sock *msk,
29
int BPF_PROG(bpf_first_get_send, struct mptcp_sock *msk,
29
     struct mptcp_sched_data *data)
30
     struct mptcp_sched_data *data)
30
{
31
{
31
-    mptcp_subflow_set_scheduled(bpf_mptcp_subflow_ctx_by_pos(data, 0), true);
32
-    mptcp_subflow_set_scheduled(bpf_mptcp_subflow_ctx_by_pos(data, 0), true);
32
+    struct mptcp_subflow_context *subflow;
33
+    struct mptcp_subflow_context *subflow;
33
+
34
+
34
+    bpf_for_each(mptcp_subflow, subflow, msk) {
35
+    subflow = bpf_mptcp_subflow_ctx(msk->first);
35
+        mptcp_subflow_set_scheduled(subflow, true);
36
+    if (!subflow)
36
+        break;
37
+        return -1;
37
+    }
38
+
38
+
39
+    mptcp_subflow_set_scheduled(subflow, true);
39
    return 0;
40
    return 0;
40
}
41
}
41
42
42
--
43
--
43
2.43.0
44
2.43.0
...
...
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
11
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
15
int BPF_PROG(bpf_bkup_get_subflow, struct mptcp_sock *msk,
15
int BPF_PROG(bpf_bkup_get_send, struct mptcp_sock *msk,
16
     struct mptcp_sched_data *data)
16
     struct mptcp_sched_data *data)
17
{
17
{
18
-    int nr = -1;
18
-    int nr = -1;
19
-
19
-
20
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
20
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
...
...
23
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
23
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
24
-        if (!subflow)
24
-        if (!subflow)
25
-            break;
25
-            break;
26
+    struct mptcp_subflow_context *subflow;
26
+    struct mptcp_subflow_context *subflow;
27
27
28
+    bpf_for_each(mptcp_subflow, subflow, msk) {
28
+    bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk) {
29
        if (!BPF_CORE_READ_BITFIELD_PROBED(subflow, backup) ||
29
        if (!BPF_CORE_READ_BITFIELD_PROBED(subflow, backup) ||
30
         !BPF_CORE_READ_BITFIELD_PROBED(subflow, request_bkup)) {
30
         !BPF_CORE_READ_BITFIELD_PROBED(subflow, request_bkup)) {
31
-            nr = i;
31
-            nr = i;
32
+            mptcp_subflow_set_scheduled(subflow, true);
32
+            mptcp_subflow_set_scheduled(subflow, true);
33
            break;
33
            break;
...
...
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Use the newly added bpf_for_each() helper to walk the conn_list.
3
Use the newly added bpf_for_each() helper to walk the conn_list.
4
4
5
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
5
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
---
6
---
7
.../selftests/bpf/progs/mptcp_bpf_rr.c | 24 ++++++++-----------
7
.../selftests/bpf/progs/mptcp_bpf_rr.c | 31 +++++++++----------
8
1 file changed, 10 insertions(+), 14 deletions(-)
8
1 file changed, 14 insertions(+), 17 deletions(-)
9
9
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
11
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
15
int BPF_PROG(bpf_rr_get_subflow, struct mptcp_sock *msk,
15
int BPF_PROG(bpf_rr_get_send, struct mptcp_sock *msk,
16
     struct mptcp_sched_data *data)
16
     struct mptcp_sched_data *data)
17
{
17
{
18
-    struct mptcp_subflow_context *subflow;
18
-    struct mptcp_subflow_context *subflow;
19
+    struct mptcp_subflow_context *subflow, *next;
19
+    struct mptcp_subflow_context *subflow, *next;
20
    struct mptcp_rr_storage *ptr;
20
    struct mptcp_rr_storage *ptr;
21
    struct sock *last_snd = NULL;
21
-    struct sock *last_snd = NULL;
22
-    int nr = 0;
22
-    int nr = 0;
23
23
24
    ptr = bpf_sk_storage_get(&mptcp_rr_map, msk, 0,
24
    ptr = bpf_sk_storage_get(&mptcp_rr_map, msk, 0,
25
                 BPF_LOCAL_STORAGE_GET_F_CREATE);
25
                 BPF_LOCAL_STORAGE_GET_F_CREATE);
26
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_rr_get_subflow, struct mptcp_sock *msk,
26
    if (!ptr)
27
        return -1;
27
        return -1;
28
28
29
    last_snd = ptr->last_snd;
29
-    last_snd = ptr->last_snd;
30
+    next = bpf_mptcp_subflow_ctx(msk->first);
30
+    next = bpf_mptcp_subflow_ctx(msk->first);
31
+    if (!next)
32
+        return -1;
31
33
32
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
34
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
33
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
35
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
34
-        if (!last_snd || !subflow)
36
-        if (!last_snd || !subflow)
35
+    bpf_for_each(mptcp_subflow, subflow, msk) {
37
-            break;
36
+        if (!last_snd)
38
+    if (!ptr->last_snd)
37
            break;
39
+        goto out;
38
40
39
-        if (mptcp_subflow_tcp_sock(subflow) == last_snd) {
41
-        if (mptcp_subflow_tcp_sock(subflow) == last_snd) {
40
-            if (i + 1 == MPTCP_SUBFLOWS_MAX ||
42
-            if (i + 1 == MPTCP_SUBFLOWS_MAX ||
41
-             !bpf_mptcp_subflow_ctx_by_pos(data, i + 1))
43
-             !bpf_mptcp_subflow_ctx_by_pos(data, i + 1))
42
+        if (bpf_mptcp_subflow_tcp_sock(subflow) == last_snd) {
44
+    bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk) {
45
+        if (mptcp_subflow_tcp_sock(subflow) == ptr->last_snd) {
43
+            subflow = bpf_iter_mptcp_subflow_next(&___it);
46
+            subflow = bpf_iter_mptcp_subflow_next(&___it);
44
+            if (!subflow)
47
+            if (!subflow)
45
                break;
48
                break;
46
49
47
-            nr = i + 1;
50
-            nr = i + 1;
...
...
53
-    subflow = bpf_mptcp_subflow_ctx_by_pos(data, nr);
56
-    subflow = bpf_mptcp_subflow_ctx_by_pos(data, nr);
54
-    if (!subflow)
57
-    if (!subflow)
55
-        return -1;
58
-        return -1;
56
-    mptcp_subflow_set_scheduled(subflow, true);
59
-    mptcp_subflow_set_scheduled(subflow, true);
57
-    ptr->last_snd = mptcp_subflow_tcp_sock(subflow);
60
-    ptr->last_snd = mptcp_subflow_tcp_sock(subflow);
61
+out:
58
+    mptcp_subflow_set_scheduled(next, true);
62
+    mptcp_subflow_set_scheduled(next, true);
59
+    ptr->last_snd = bpf_mptcp_subflow_tcp_sock(next);
63
+    ptr->last_snd = mptcp_subflow_tcp_sock(next);
60
    return 0;
64
    return 0;
61
}
65
}
62
66
63
--
67
--
64
2.43.0
68
2.43.0
...
...
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
10
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
11
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
12
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
13
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
14
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
15
int BPF_PROG(bpf_red_get_subflow, struct mptcp_sock *msk,
15
int BPF_PROG(bpf_red_get_send, struct mptcp_sock *msk,
16
     struct mptcp_sched_data *data)
16
     struct mptcp_sched_data *data)
17
{
17
{
18
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
18
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
19
-        if (!bpf_mptcp_subflow_ctx_by_pos(data, i))
19
-        if (!bpf_mptcp_subflow_ctx_by_pos(data, i))
20
-            break;
20
-            break;
21
+    struct mptcp_subflow_context *subflow;
21
+    struct mptcp_subflow_context *subflow;
22
22
23
-        mptcp_subflow_set_scheduled(bpf_mptcp_subflow_ctx_by_pos(data, i), true);
23
-        mptcp_subflow_set_scheduled(bpf_mptcp_subflow_ctx_by_pos(data, i), true);
24
-    }
24
-    }
25
+    bpf_for_each(mptcp_subflow, subflow, msk)
25
+    bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk)
26
+        mptcp_subflow_set_scheduled(subflow, true);
26
+        mptcp_subflow_set_scheduled(subflow, true);
27
27
28
    return 0;
28
    return 0;
29
}
29
}
30
--
30
--
31
2.43.0
31
2.43.0
1
From: Geliang Tang <tanggeliang@kylinos.cn>
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
2
3
Use the newly added bpf_for_each() helper to walk the conn_list.
3
Use the newly added bpf_for_each() helper to walk the conn_list.
4
4
Drop bpf_subflow_send_info, use subflow_send_info instead.
5
Drop mptcp_subflow_active declaration.
6
5
7
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
6
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
8
---
7
---
9
.../selftests/bpf/progs/mptcp_bpf_burst.c | 77 ++++++++++---------
8
.../selftests/bpf/progs/mptcp_bpf_burst.c | 78 +++++++------------
10
1 file changed, 39 insertions(+), 38 deletions(-)
9
1 file changed, 26 insertions(+), 52 deletions(-)
11
10
12
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
11
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
13
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
14
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
13
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
15
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
14
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c
16
@@ -XXX,XX +XXX,XX @@ char _license[] SEC("license") = "GPL";
15
@@ -XXX,XX +XXX,XX @@ char _license[] SEC("license") = "GPL";
17
16
18
#define min(a, b) ((a) < (b) ? (a) : (b))
17
#define min(a, b) ((a) < (b) ? (a) : (b))
19
18
20
+#define SSK_MODE_ACTIVE    0
19
-struct bpf_subflow_send_info {
21
+#define SSK_MODE_BACKUP    1
20
-    __u8 subflow_id;
22
+#define SSK_MODE_MAX    2
21
-    __u64 linger_time;
23
+
22
-};
24
struct bpf_subflow_send_info {
23
-
25
    __u8 subflow_id;
24
extern bool mptcp_subflow_active(struct mptcp_subflow_context *subflow) __ksym;
26
    __u64 linger_time;
25
extern void mptcp_set_timeout(struct sock *sk) __ksym;
27
@@ -XXX,XX +XXX,XX @@ extern bool tcp_stream_memory_free(const struct sock *sk, int wake) __ksym;
26
extern __u64 mptcp_wnd_end(const struct mptcp_sock *msk) __ksym;
27
-extern bool tcp_stream_memory_free(const struct sock *sk, int wake) __ksym;
28
+extern bool bpf_sk_stream_memory_free(const struct sock *sk) __ksym;
28
extern bool bpf_mptcp_subflow_queues_empty(struct sock *sk) __ksym;
29
extern bool bpf_mptcp_subflow_queues_empty(struct sock *sk) __ksym;
29
extern void mptcp_pm_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk) __ksym;
30
extern void mptcp_pm_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk) __ksym;
30
31
31
-#define SSK_MODE_ACTIVE    0
32
@@ -XXX,XX +XXX,XX @@ static __always_inline bool tcp_rtx_and_write_queues_empty(struct sock *sk)
32
-#define SSK_MODE_BACKUP    1
33
    return bpf_mptcp_subflow_queues_empty(sk) && tcp_write_queue_empty(sk);
33
-#define SSK_MODE_MAX    2
34
}
35
36
-static __always_inline bool __sk_stream_memory_free(const struct sock *sk, int wake)
37
-{
38
-    if (sk->sk_wmem_queued >= sk->sk_sndbuf)
39
-        return false;
34
-
40
-
35
static __always_inline __u64 div_u64(__u64 dividend, __u32 divisor)
41
-    return tcp_stream_memory_free(sk, wake);
36
{
42
-}
37
    return dividend / divisor;
43
-
38
@@ -XXX,XX +XXX,XX @@ static __always_inline bool sk_stream_memory_free(const struct sock *sk)
44
-static __always_inline bool sk_stream_memory_free(const struct sock *sk)
39
    return __sk_stream_memory_free(sk, 0);
45
-{
40
}
46
-    return __sk_stream_memory_free(sk, 0);
41
47
-}
42
+static struct mptcp_subflow_context *
48
-
43
+mptcp_lookup_subflow_by_id(struct mptcp_sock *msk, unsigned int id)
44
+{
45
+    struct mptcp_subflow_context *subflow;
46
+
47
+    bpf_for_each(mptcp_subflow, subflow, msk) {
48
+        if (subflow->subflow_id == id)
49
+            return subflow;
50
+    }
51
+
52
+    return NULL;
53
+}
54
+
55
SEC("struct_ops")
49
SEC("struct_ops")
56
void BPF_PROG(mptcp_sched_burst_init, struct mptcp_sock *msk)
50
void BPF_PROG(mptcp_sched_burst_init, struct mptcp_sock *msk)
57
{
51
{
58
@@ -XXX,XX +XXX,XX @@ void BPF_PROG(mptcp_sched_burst_release, struct mptcp_sock *msk)
52
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
53
int BPF_PROG(bpf_burst_get_send, struct mptcp_sock *msk,
54
     struct mptcp_sched_data *data)
59
{
55
{
60
}
56
-    struct bpf_subflow_send_info send_info[SSK_MODE_MAX];
61
57
+    struct subflow_send_info send_info[SSK_MODE_MAX];
62
-static int bpf_burst_get_send(struct mptcp_sock *msk,
63
-             struct mptcp_sched_data *data)
64
+static int bpf_burst_get_send(struct mptcp_sock *msk)
65
{
66
    struct bpf_subflow_send_info send_info[SSK_MODE_MAX];
67
    struct mptcp_subflow_context *subflow;
58
    struct mptcp_subflow_context *subflow;
68
@@ -XXX,XX +XXX,XX @@ static int bpf_burst_get_send(struct mptcp_sock *msk,
59
    struct sock *sk = (struct sock *)msk;
60
    __u32 pace, burst, wmem;
61
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_burst_get_send, struct mptcp_sock *msk,
62
63
    /* pick the subflow with the lower wmem/wspace ratio */
64
    for (i = 0; i < SSK_MODE_MAX; ++i) {
65
-        send_info[i].subflow_id = MPTCP_SUBFLOWS_MAX;
66
+        send_info[i].ssk = NULL;
69
        send_info[i].linger_time = -1;
67
        send_info[i].linger_time = -1;
70
    }
68
    }
71
69
72
-    for (i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
70
-    for (i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
73
-        bool backup;
71
-        bool backup;
74
+    bpf_for_each(mptcp_subflow, subflow, msk) {
72
-
75
+        bool backup = subflow->backup || subflow->request_bkup;
76
77
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
73
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
78
-        if (!subflow)
74
-        if (!subflow)
79
-            break;
75
-            break;
80
-
76
-
81
-        backup = subflow->backup || subflow->request_bkup;
77
-        backup = subflow->backup || subflow->request_bkup;
82
-
78
+    bpf_for_each(mptcp_subflow, subflow, sk) {
83
-        ssk = mptcp_subflow_tcp_sock(subflow);
79
+        bool backup = subflow->backup || subflow->request_bkup;
84
+        ssk = bpf_mptcp_subflow_tcp_sock(subflow);
80
81
        ssk = mptcp_subflow_tcp_sock(subflow);
85
        if (!mptcp_subflow_active(subflow))
82
        if (!mptcp_subflow_active(subflow))
86
            continue;
83
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_burst_get_send, struct mptcp_sock *msk,
87
88
@@ -XXX,XX +XXX,XX @@ static int bpf_burst_get_send(struct mptcp_sock *msk,
89
84
90
        linger_time = div_u64((__u64)ssk->sk_wmem_queued << 32, pace);
85
        linger_time = div_u64((__u64)ssk->sk_wmem_queued << 32, pace);
91
        if (linger_time < send_info[backup].linger_time) {
86
        if (linger_time < send_info[backup].linger_time) {
92
-            send_info[backup].subflow_id = i;
87
-            send_info[backup].subflow_id = i;
93
+            send_info[backup].subflow_id = subflow->subflow_id;
88
+            send_info[backup].ssk = ssk;
94
            send_info[backup].linger_time = linger_time;
89
            send_info[backup].linger_time = linger_time;
95
        }
90
        }
96
    }
91
    }
97
@@ -XXX,XX +XXX,XX @@ static int bpf_burst_get_send(struct mptcp_sock *msk,
92
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_burst_get_send, struct mptcp_sock *msk,
93
94
    /* pick the best backup if no other subflow is active */
98
    if (!nr_active)
95
    if (!nr_active)
99
        send_info[SSK_MODE_ACTIVE].subflow_id = send_info[SSK_MODE_BACKUP].subflow_id;
96
-        send_info[SSK_MODE_ACTIVE].subflow_id = send_info[SSK_MODE_BACKUP].subflow_id;
97
+        send_info[SSK_MODE_ACTIVE].ssk = send_info[SSK_MODE_BACKUP].ssk;
100
98
101
-    subflow = bpf_mptcp_subflow_ctx_by_pos(data, send_info[SSK_MODE_ACTIVE].subflow_id);
99
-    subflow = bpf_mptcp_subflow_ctx_by_pos(data, send_info[SSK_MODE_ACTIVE].subflow_id);
102
+    subflow = mptcp_lookup_subflow_by_id(msk, send_info[SSK_MODE_ACTIVE].subflow_id);
100
-    if (!subflow)
103
    if (!subflow)
101
+    ssk = send_info[SSK_MODE_ACTIVE].ssk;
102
+    if (!ssk || !bpf_sk_stream_memory_free(ssk))
104
        return -1;
103
        return -1;
105
-    ssk = mptcp_subflow_tcp_sock(subflow);
104
-    ssk = mptcp_subflow_tcp_sock(subflow);
106
+    ssk = bpf_mptcp_subflow_tcp_sock(subflow);
105
-    if (!ssk || !sk_stream_memory_free(ssk))
107
    if (!ssk || !sk_stream_memory_free(ssk))
106
+
107
+    subflow = bpf_mptcp_subflow_ctx(ssk);
108
+    if (!subflow)
108
        return -1;
109
        return -1;
109
110
110
@@ -XXX,XX +XXX,XX @@ static int bpf_burst_get_send(struct mptcp_sock *msk,
111
    burst = min(MPTCP_SEND_BURST_SIZE, mptcp_wnd_end(msk) - msk->snd_nxt);
111
    return 0;
112
+    ssk = bpf_core_cast(ssk, struct sock);
112
}
113
    wmem = ssk->sk_wmem_queued;
113
114
    if (!burst)
114
-static int bpf_burst_get_retrans(struct mptcp_sock *msk,
115
        goto out;
115
-                 struct mptcp_sched_data *data)
116
@@ -XXX,XX +XXX,XX @@ SEC("struct_ops")
116
+static int bpf_burst_get_retrans(struct mptcp_sock *msk)
117
int BPF_PROG(bpf_burst_get_retrans, struct mptcp_sock *msk,
118
     struct mptcp_sched_data *data)
117
{
119
{
118
-    int backup = MPTCP_SUBFLOWS_MAX, pick = MPTCP_SUBFLOWS_MAX, subflow_id;
120
-    int backup = MPTCP_SUBFLOWS_MAX, pick = MPTCP_SUBFLOWS_MAX, subflow_id;
119
+    struct sock *backup = NULL, *pick = NULL;
121
+    struct sock *backup = NULL, *pick = NULL;
120
    struct mptcp_subflow_context *subflow;
122
    struct mptcp_subflow_context *subflow;
121
    int min_stale_count = INT_MAX;
123
    int min_stale_count = INT_MAX;
122
-    struct sock *ssk;
124
-    struct sock *ssk;
123
125
124
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
126
-    for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
125
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
127
-        subflow = bpf_mptcp_subflow_ctx_by_pos(data, i);
126
-        if (!subflow)
128
-        if (!subflow)
127
-            break;
129
-            break;
128
+    bpf_for_each(mptcp_subflow, subflow, msk) {
130
+    bpf_for_each(mptcp_subflow, subflow, (struct sock *)msk) {
129
+        struct sock *ssk = bpf_mptcp_subflow_tcp_sock(subflow);
131
+        struct sock *ssk = bpf_mptcp_subflow_tcp_sock(subflow);
130
132
131
        if (!mptcp_subflow_active(subflow))
133
-        if (!mptcp_subflow_active(subflow))
134
+        if (!ssk || !mptcp_subflow_active(subflow))
132
            continue;
135
            continue;
133
136
134
-        ssk = mptcp_subflow_tcp_sock(subflow);
137
-        ssk = mptcp_subflow_tcp_sock(subflow);
135
        /* still data outstanding at TCP level? skip this */
138
        /* still data outstanding at TCP level? skip this */
136
        if (!tcp_rtx_and_write_queues_empty(ssk)) {
139
        if (!tcp_rtx_and_write_queues_empty(ssk)) {
137
            mptcp_pm_subflow_chk_stale(msk, ssk);
140
            mptcp_pm_subflow_chk_stale(msk, ssk);
138
@@ -XXX,XX +XXX,XX @@ static int bpf_burst_get_retrans(struct mptcp_sock *msk,
141
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_burst_get_retrans, struct mptcp_sock *msk,
139
        }
142
        }
140
143
141
        if (subflow->backup || subflow->request_bkup) {
144
        if (subflow->backup || subflow->request_bkup) {
142
-            if (backup == MPTCP_SUBFLOWS_MAX)
145
-            if (backup == MPTCP_SUBFLOWS_MAX)
143
-                backup = i;
146
-                backup = i;
...
...
166
+        return -1;
169
+        return -1;
167
+    subflow = bpf_mptcp_subflow_ctx(pick);
170
+    subflow = bpf_mptcp_subflow_ctx(pick);
168
    if (!subflow)
171
    if (!subflow)
169
        return -1;
172
        return -1;
170
    mptcp_subflow_set_scheduled(subflow, true);
173
    mptcp_subflow_set_scheduled(subflow, true);
171
@@ -XXX,XX +XXX,XX @@ int BPF_PROG(bpf_burst_get_subflow, struct mptcp_sock *msk,
172
     struct mptcp_sched_data *data)
173
{
174
    if (data->reinject)
175
-        return bpf_burst_get_retrans(msk, data);
176
-    return bpf_burst_get_send(msk, data);
177
+        return bpf_burst_get_retrans(msk);
178
+    return bpf_burst_get_send(msk);
179
}
180
181
SEC(".struct_ops")
182
--
174
--
183
2.43.0
175
2.43.0
...
...
3
The mptcp_subflow bpf_iter is added now, it's better to use the helper
3
The mptcp_subflow bpf_iter is added now, it's better to use the helper
4
bpf_for_each(mptcp_subflow) to traverse all subflows on the conn_list of
4
bpf_for_each(mptcp_subflow) to traverse all subflows on the conn_list of
5
an MPTCP socket and then call kfunc to modify the fields of each subflow
5
an MPTCP socket and then call kfunc to modify the fields of each subflow
6
in the WIP MPTCP BPF packet scheduler examples, instead of converting them
6
in the WIP MPTCP BPF packet scheduler examples, instead of converting them
7
to a fixed array. With this helper, we can get rid of this subflow array
7
to a fixed array. With this helper, we can get rid of this subflow array
8
"contexts" and the size of it "subflows" in struct mptcp_sched_data.
8
"contexts" in struct mptcp_sched_data.
9
9
10
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
10
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
11
---
11
---
12
include/net/mptcp.h | 2 --
12
include/net/mptcp.h | 3 ---
13
1 file changed, 2 deletions(-)
13
1 file changed, 3 deletions(-)
14
14
15
diff --git a/include/net/mptcp.h b/include/net/mptcp.h
15
diff --git a/include/net/mptcp.h b/include/net/mptcp.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/net/mptcp.h
17
--- a/include/net/mptcp.h
18
+++ b/include/net/mptcp.h
18
+++ b/include/net/mptcp.h
19
@@ -XXX,XX +XXX,XX @@ struct mptcp_out_options {
19
@@ -XXX,XX +XXX,XX @@ struct mptcp_out_options {
20
20
#define MPTCP_SCHED_MAX        128
21
#define MPTCP_SCHED_BUF_MAX    (MPTCP_SCHED_NAME_MAX * MPTCP_SCHED_MAX)
22
23
-#define MPTCP_SUBFLOWS_MAX    8
24
-
21
struct mptcp_sched_data {
25
struct mptcp_sched_data {
22
    bool    reinject;
26
    u8    subflows;
23
-    u8    subflows;
24
-    struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
27
-    struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
25
};
28
};
26
29
27
struct mptcp_sched_ops {
30
struct mptcp_sched_ops {
28
--
31
--
29
2.43.0
32
2.43.0
The following patch was dropped from this revision of the series:
1
From: Geliang Tang <tanggeliang@kylinos.cn>
2
1
3
Drop ss_search() and has_bytes_sent(), add a new bpf program to check
4
the bytes_sent.
5
6
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
7
---
8
.../testing/selftests/bpf/prog_tests/mptcp.c | 48 ++++++++++---------
9
.../selftests/bpf/progs/mptcp_bpf_bytes.c | 39 +++++++++++++++
10
2 files changed, 65 insertions(+), 22 deletions(-)
11
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bytes.c
12
13
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
16
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
17
@@ -XXX,XX +XXX,XX @@
18
#include "mptcpify.skel.h"
19
#include "mptcp_subflow.skel.h"
20
#include "mptcp_bpf_iters.skel.h"
21
+#include "mptcp_bpf_bytes.skel.h"
22
#include "mptcp_bpf_first.skel.h"
23
#include "mptcp_bpf_bkup.skel.h"
24
#include "mptcp_bpf_rr.skel.h"
25
@@ -XXX,XX +XXX,XX @@ static struct nstoken *sched_init(char *flags, char *sched)
26
    return NULL;
27
}
28
29
-static int ss_search(char *src, char *dst, char *port, char *keyword)
30
-{
31
-    return SYS_NOFAIL("ip netns exec %s ss -enita src %s dst %s %s %d | grep -q '%s'",
32
-             NS_TEST, src, dst, port, PORT_1, keyword);
33
-}
34
-
35
-static int has_bytes_sent(char *dst)
36
-{
37
-    return ss_search(ADDR_1, dst, "sport", "bytes_sent:");
38
-}
39
-
40
static void send_data_and_verify(char *sched, bool addr1, bool addr2)
41
{
42
+    int server_fd, client_fd, err;
43
+    struct mptcp_bpf_bytes *skel;
44
    struct timespec start, end;
45
-    int server_fd, client_fd;
46
    unsigned int delta_ms;
47
48
+    skel = mptcp_bpf_bytes__open_and_load();
49
+    if (!ASSERT_OK_PTR(skel, "open_and_load: bytes"))
50
+        return;
51
+
52
+    skel->bss->pid = getpid();
53
+
54
+    err = mptcp_bpf_bytes__attach(skel);
55
+    if (!ASSERT_OK(err, "skel_attach: bytes"))
56
+        goto skel_destroy;
57
+
58
    server_fd = start_mptcp_server(AF_INET, ADDR_1, PORT_1, 0);
59
    if (!ASSERT_OK_FD(server_fd, "start_mptcp_server"))
60
-        return;
61
+        goto skel_destroy;
62
63
    client_fd = connect_to_fd(server_fd, 0);
64
    if (!ASSERT_OK_FD(client_fd, "connect_to_fd"))
65
-        goto fail;
66
+        goto close_server;
67
68
    if (clock_gettime(CLOCK_MONOTONIC, &start) < 0)
69
-        goto fail;
70
+        goto close_client;
71
72
    if (!ASSERT_OK(send_recv_data(server_fd, client_fd, total_bytes),
73
         "send_recv_data"))
74
-        goto fail;
75
+        goto close_client;
76
77
    if (clock_gettime(CLOCK_MONOTONIC, &end) < 0)
78
-        goto fail;
79
+        goto close_client;
80
81
    delta_ms = (end.tv_sec - start.tv_sec) * 1000 + (end.tv_nsec - start.tv_nsec) / 1000000;
82
    printf("%s: %u ms\n", sched, delta_ms);
83
84
    if (addr1)
85
-        CHECK(has_bytes_sent(ADDR_1), sched, "should have bytes_sent on addr1\n");
86
+        ASSERT_GT(skel->bss->bytes_sent_1, 0, "should have bytes_sent on addr1");
87
    else
88
-        CHECK(!has_bytes_sent(ADDR_1), sched, "shouldn't have bytes_sent on addr1\n");
89
+        ASSERT_EQ(skel->bss->bytes_sent_1, 0, "shouldn't have bytes_sent on addr1");
90
    if (addr2)
91
-        CHECK(has_bytes_sent(ADDR_2), sched, "should have bytes_sent on addr2\n");
92
+        ASSERT_GT(skel->bss->bytes_sent_2, 0, "should have bytes_sent on addr2");
93
    else
94
-        CHECK(!has_bytes_sent(ADDR_2), sched, "shouldn't have bytes_sent on addr2\n");
95
+        ASSERT_EQ(skel->bss->bytes_sent_2, 0, "shouldn't have bytes_sent on addr2");
96
97
+close_client:
98
    close(client_fd);
99
-fail:
100
+close_server:
101
    close(server_fd);
102
+skel_destroy:
103
+    mptcp_bpf_bytes__destroy(skel);
104
}
105
106
static void test_default(void)
107
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bytes.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bytes.c
108
new file mode 100644
109
index XXXXXXX..XXXXXXX
110
--- /dev/null
111
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bytes.c
112
@@ -XXX,XX +XXX,XX @@
113
+// SPDX-License-Identifier: GPL-2.0
114
+/* Copyright (c) 2024, Kylin Software */
115
+
116
+/* vmlinux.h, bpf_helpers.h and other 'define' */
117
+#include "bpf_tracing_net.h"
118
+#include "mptcp_bpf.h"
119
+
120
+char _license[] SEC("license") = "GPL";
121
+u64 bytes_sent_1 = 0;
122
+u64 bytes_sent_2 = 0;
123
+int pid;
124
+
125
+SEC("fexit/mptcp_sched_get_send")
126
+int BPF_PROG(trace_mptcp_sched_get_send, struct mptcp_sock *msk)
127
+{
128
+    struct mptcp_subflow_context *subflow;
129
+
130
+    if (bpf_get_current_pid_tgid() >> 32 != pid)
131
+        return 0;
132
+
133
+    if (!msk->pm.server_side)
134
+        return 0;
135
+
136
+    mptcp_for_each_subflow(msk, subflow) {
137
+        struct tcp_sock *tp;
138
+        struct sock *ssk;
139
+
140
+        subflow = bpf_core_cast(subflow, struct mptcp_subflow_context);
141
+        ssk = mptcp_subflow_tcp_sock(subflow);
142
+        tp = bpf_core_cast(ssk, struct tcp_sock);
143
+
144
+        if (subflow->subflow_id == 1)
145
+            bytes_sent_1 = tp->bytes_sent;
146
+        else if (subflow->subflow_id == 2)
147
+            bytes_sent_2 = tp->bytes_sent;
148
+    }
149
+
150
+    return 0;
151
+}
152
--
153
2.43.0