The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
is called all the way from the graph tracer, which disables preemption in
function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is
no need to call migrate_disable(). As a result, some overhead may be
reduced.
Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
kernel/trace/bpf_trace.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
Change list:
v1 -> v2:
- s/called the way/called all the way/. (Jiri)
v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84ce9c@linux.dev
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3ae52978cae..5701791e3cb 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
 		goto out;
 	}
 
-	migrate_disable();
+	/*
+	 * The BPF program should run with migration disabled.
+	 * kprobe_multi_link_prog_run() is called all the way from the graph
+	 * tracer, which disables preemption in function_graph_enter_regs(),
+	 * so there is no need to call migrate_disable(). Accessing the percpu
+	 * variable bpf_prog_active above is safe for the same reason.
+	 */
 	rcu_read_lock();
 	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
 	err = bpf_prog_run(link->link.prog, regs);
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
-	migrate_enable();
 
 out:
 	__this_cpu_dec(bpf_prog_active);
--
2.48.1
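
The reasoning above leans on one invariant: the function-graph tracer runs
its entry handlers with preemption disabled, and a task that cannot be
preempted cannot migrate. Below is a minimal sketch of that invariant; it
is not the actual fgraph code, and run_entry_handlers() is a hypothetical
stand-in for the fgraph ops chain that eventually reaches
kprobe_multi_link_prog_run().

#include <linux/preempt.h>

/* Hypothetical stand-in for the fgraph ops chain that ends up calling
 * kprobe_multi_link_prog_run(); not a real kernel symbol. */
static int run_entry_handlers(void);

/* Simplified sketch: the entry path wraps its callbacks in a
 * preempt-disabled section, so everything called from here, including
 * the BPF program, stays on the current CPU. */
static int sketch_function_graph_enter(void)
{
	int ret;

	preempt_disable_notrace();
	ret = run_entry_handlers();
	preempt_enable_notrace();
	return ret;
}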
On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>
> The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
> is called all the way from the graph tracer, which disables preemption in
> function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is
> no need to call migrate_disable(). As a result, some overhead may be
> reduced.
>
> [...]
>
> @@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,

even though bpf_prog_run() eventually calls cant_migrate(), we should
add it before that __this_cpu_inc_return() call as well, because that
one is relying on that non-migration independently from bpf_prog_run()

>  		goto out;
>  	}
>
> -	migrate_disable();
> +	/*
> +	 * The BPF program should run with migration disabled.
> +	 * kprobe_multi_link_prog_run() is called all the way from the graph
> +	 * tracer, which disables preemption in function_graph_enter_regs(),
> +	 * so there is no need to call migrate_disable(). Accessing the percpu
> +	 * variable bpf_prog_active above is safe for the same reason.
> +	 */

let's shorten this a bit to something like:

/* graph tracer framework ensures we won't migrate */
cant_migrate();

all the other stuff in the comment can become outdated way too easily
and/or is sort of general BPF implementation knowledge

pw-bot: cr

>  	rcu_read_lock();
>  	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>  	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>  	err = bpf_prog_run(link->link.prog, regs);
>  	bpf_reset_run_ctx(old_run_ctx);
>  	rcu_read_unlock();
> -	migrate_enable();
>
>  out:
>  	__this_cpu_dec(bpf_prog_active);
> --
> 2.48.1
>
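
To make the suggested placement concrete, here is how the top of
kprobe_multi_link_prog_run() might look with the assertion hoisted above
the per-CPU increment. Arguments are abbreviated and the error path
follows the existing code, so treat this as a sketch of the suggestion
rather than the final v3 patch:

static int
kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
			   struct ftrace_regs *fregs /* other args omitted */)
{
	int err;

	/* graph tracer framework ensures we won't migrate */
	cant_migrate();

	/* The raw per-CPU increment below is only correct because this
	 * task cannot change CPUs, so assert that before touching it. */
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		bpf_prog_inc_misses_counter(link->link.prog);
		err = 0;
		goto out;
	}

	/* rcu_read_lock(); bpf_prog_run(); rcu_read_unlock(); as in the
	 * diff above, minus the migrate_disable()/migrate_enable() pair. */
out:
	__this_cpu_dec(bpf_prog_active);
	return err;
}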
On 2025/8/13 06:05, Andrii Nakryiko wrote:
> On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>>
>> [...]
>
> even though bpf_prog_run() eventually calls cant_migrate(), we should
> add it before that __this_cpu_inc_return() call as well, because that
> one is relying on that non-migration independently from bpf_prog_run()
>

Maybe cant_sleep() is better, like in trace_call_bpf(); cant_sleep() does
not check migration_disabled again, which is already done in
__this_cpu_preempt_check(). I will add it in v3.

> [...]

--
Best Regards
Tao Chen
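
For context on the two assertions being weighed here: both are debug-only
annotations whose exact definitions depend on kernel config, so the
summary in this sketch is illustrative rather than authoritative.

/* Roughly:
 *   cant_sleep()   - warns if the current context could legally sleep,
 *                    i.e. neither preemption nor IRQs are disabled.
 *   cant_migrate() - warns if the current task could change CPUs; it is
 *                    also satisfied by migrate_disable() alone, which
 *                    matters on PREEMPT_RT where a migration-disabled
 *                    section may still sleep.
 */
static void sketch_percpu_section(void)
{
	cant_migrate();		/* we rely on staying on this CPU */
	__this_cpu_inc(bpf_prog_active);
	/* ... per-CPU work ... */
	__this_cpu_dec(bpf_prog_active);
}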
On 2025/8/13 06:05, Andrii Nakryiko wrote:
> On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>>
>> [...]
>
> even though bpf_prog_run() eventually calls cant_migrate(), we should
> add it before that __this_cpu_inc_return() call as well, because that
> one is relying on that non-migration independently from bpf_prog_run()
>

Hi Andrii,

There is a __this_cpu_preempt_check() in __this_cpu_inc_return(), and its
judgment criteria are similar to cant_migrate()'s; I'm not sure whether
that is enough.

> let's shorten this a bit to something like:
>
> /* graph tracer framework ensures we won't migrate */

Will change it in v3.

> cant_migrate();
>
> [...]

--
Best Regards
Tao Chen
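
The check Tao refers to is check_preemption_disabled() in
lib/smp_processor_id.c, which backs __this_cpu_preempt_check() when
CONFIG_DEBUG_PREEMPT is enabled. Below is an illustrative paraphrase of
the conditions under which it stays silent; it is not a verbatim copy,
and the real function also exempts tasks pinned to a single CPU.

#include <linux/preempt.h>
#include <linux/sched.h>

/* Illustrative paraphrase of check_preemption_disabled()'s early-out
 * conditions; see lib/smp_processor_id.c for the real thing. */
static bool sketch_percpu_access_is_safe(void)
{
	if (preempt_count())			/* preemption disabled */
		return true;
	if (irqs_disabled())			/* hard IRQs off */
		return true;
	if (is_migration_disabled(current))	/* migrate_disable()d */
		return true;
	return false;				/* would trigger the warning */
}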