hi,
while poking the multi-tracing interface I ended up with just one ftrace_ops
object to attach all trampolines.
This change allows us to use fewer direct API calls during attachment changes
in future code, in effect speeding up the attachment.
In the current code we already get a speedup from using just a single ftrace_ops object.
- with current code:
Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
6,364,157,902 cycles:k
828,728,902 cycles:u
1,064,803,824 instructions:u # 1.28 insn per cycle
23,797,500,067 instructions:k # 3.74 insn per cycle
4.416004987 seconds time elapsed
0.164121000 seconds user
1.289550000 seconds sys
- with the fix:
Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
6,535,857,905 cycles:k
810,809,429 cycles:u
1,064,594,027 instructions:u # 1.31 insn per cycle
23,962,552,894 instructions:k # 3.67 insn per cycle
1.666961239 seconds time elapsed
0.157412000 seconds user
1.283396000 seconds sys
The speedup seems to be related to the fact that with a single ftrace_ops object
we no longer call ftrace_shutdown (we use ftrace_update_ops instead) and
we skip the synchronize_rcu calls (~100ms each) at the end of that function.
rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
v6 changes:
- rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
- factor hash_add/hash_sub [Steven]
- add kerneldoc header for update_ftrace_direct_* functions [Steven]
- few assorted smaller fixes [Steven]
- added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
case [Steven]
v5 changes:
- do not export ftrace_hash object [Steven]
- fix update_ftrace_direct_add new_filter_hash leak [ci]
v4 changes:
- rebased on top of bpf-next/master (with jmp attach changes)
added patch 1 to deal with that
- added extra checks for update_ftrace_direct_del/mod to address
the ci bot review
v3 changes:
- rebased on top of bpf-next/master
- fixed update_ftrace_direct_del cleanup path
- added missing inline to update_ftrace_direct_* stubs
v2 changes:
- rebased on top of bpf-next/master plus Song's livepatch fixes [1]
- renamed the API functions [2] [Steven]
- do not export the new api [Steven]
- kept the original direct interface:
I'm not sure if we want to merge both *_ftrace_direct and the new interface
into a single one. The new interface is a bit different semantically (hence
the name change Steven suggested [2]), and the changes are not that big, so
we could easily keep both APIs.
v1 changes:
- make the change x86 specific, after discussing with Mark options for
arm64 [Mark]
thanks,
jirka
[1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
[2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
---
Jiri Olsa (9):
ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
ftrace: Make alloc_and_copy_ftrace_hash direct friendly
ftrace: Export some of hash related functions
ftrace: Add update_ftrace_direct_add function
ftrace: Add update_ftrace_direct_del function
ftrace: Add update_ftrace_direct_mod function
bpf: Add trampoline ip hash table
ftrace: Factor ftrace_ops ops_func interface
bpf,x86: Use single ftrace_ops for direct calls
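The "trampoline ip hash table" patch addresses one consequence of sharing a single ftrace_ops: the common handler must map the traced function's ip back to the right trampoline. A minimal userspace sketch of that idea (all names, sizes, and the hash function are hypothetical, not the kernel's):

```c
#include <stdlib.h>

#define NR_BUCKETS 64

/* one entry per attached trampoline, chained on collision */
struct tramp_entry {
	unsigned long ip;		/* traced function address */
	void *tramp;			/* trampoline serving that ip */
	struct tramp_entry *next;
};

static struct tramp_entry *buckets[NR_BUCKETS];

static unsigned long hash_ip(unsigned long ip)
{
	/* drop low bits: function entry addresses are aligned */
	return (ip >> 4) % NR_BUCKETS;
}

/* record ip -> trampoline; returns 0 on success, -1 on allocation failure */
static int tramp_add(unsigned long ip, void *tramp)
{
	struct tramp_entry *e = malloc(sizeof(*e));

	if (!e)
		return -1;
	e->ip = ip;
	e->tramp = tramp;
	e->next = buckets[hash_ip(ip)];
	buckets[hash_ip(ip)] = e;
	return 0;
}

/* what the shared handler would do: resolve ip to its trampoline */
static void *tramp_lookup(unsigned long ip)
{
	for (struct tramp_entry *e = buckets[hash_ip(ip)]; e; e = e->next)
		if (e->ip == ip)
			return e->tramp;
	return NULL;
}
```

The kernel version naturally differs (locking, RCU, hlist helpers), but the lookup-by-ip shape is the core of letting one ftrace_ops dispatch to many trampolines.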
arch/x86/Kconfig | 1 +
include/linux/bpf.h | 7 ++-
include/linux/ftrace.h | 31 +++++++++-
kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
kernel/trace/Kconfig | 3 +
kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
6 files changed, 632 insertions(+), 75 deletions(-)
On Tue, Dec 30, 2025 at 6:50 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> hi,
> while poking the multi-tracing interface I ended up with just one ftrace_ops
> object to attach all trampolines.
>
> This change allows us to use fewer direct API calls during attachment changes
> in future code, in effect speeding up the attachment.
>
> In the current code we already get a speedup from using just a single ftrace_ops object.
>
> - with current code:
>
> Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
>
> 6,364,157,902 cycles:k
> 828,728,902 cycles:u
> 1,064,803,824 instructions:u # 1.28 insn per cycle
> 23,797,500,067 instructions:k # 3.74 insn per cycle
>
> 4.416004987 seconds time elapsed
>
> 0.164121000 seconds user
> 1.289550000 seconds sys
>
>
> - with the fix:
>
> Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
>
> 6,535,857,905 cycles:k
> 810,809,429 cycles:u
> 1,064,594,027 instructions:u # 1.31 insn per cycle
> 23,962,552,894 instructions:k # 3.67 insn per cycle
>
> 1.666961239 seconds time elapsed
>
> 0.157412000 seconds user
> 1.283396000 seconds sys
>
>
>
> The speedup seems to be related to the fact that with a single ftrace_ops object
> we no longer call ftrace_shutdown (we use ftrace_update_ops instead) and
> we skip the synchronize_rcu calls (~100ms each) at the end of that function.
>
> rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
> v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
> v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
> v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
> v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
> v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
>
> v6 changes:
> - rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
> - factor hash_add/hash_sub [Steven]
> - add kerneldoc header for update_ftrace_direct_* functions [Steven]
> - few assorted smaller fixes [Steven]
> - added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> case [Steven]
>
This looks good from the BPF side, I think. Steven, if you don't mind
giving this patch set another look and, if everything is to your liking,
your ack, we can then apply it to bpf-next. Thanks!
> v5 changes:
> - do not export ftrace_hash object [Steven]
> - fix update_ftrace_direct_add new_filter_hash leak [ci]
>
> v4 changes:
> - rebased on top of bpf-next/master (with jmp attach changes)
> added patch 1 to deal with that
> - added extra checks for update_ftrace_direct_del/mod to address
> the ci bot review
>
> v3 changes:
> - rebased on top of bpf-next/master
> - fixed update_ftrace_direct_del cleanup path
> - added missing inline to update_ftrace_direct_* stubs
>
> v2 changes:
> - rebased on top of bpf-next/master plus Song's livepatch fixes [1]
> - renamed the API functions [2] [Steven]
> - do not export the new api [Steven]
> - kept the original direct interface:
>
> I'm not sure if we want to merge both *_ftrace_direct and the new interface
> into a single one. The new interface is a bit different semantically (hence
> the name change Steven suggested [2]), and the changes are not that big, so
> we could easily keep both APIs.
>
> v1 changes:
> - make the change x86 specific, after discussing with Mark options for
> arm64 [Mark]
>
> thanks,
> jirka
>
>
> [1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
> [2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
> ---
> Jiri Olsa (9):
> ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
> ftrace: Make alloc_and_copy_ftrace_hash direct friendly
> ftrace: Export some of hash related functions
> ftrace: Add update_ftrace_direct_add function
> ftrace: Add update_ftrace_direct_del function
> ftrace: Add update_ftrace_direct_mod function
> bpf: Add trampoline ip hash table
> ftrace: Factor ftrace_ops ops_func interface
> bpf,x86: Use single ftrace_ops for direct calls
>
> arch/x86/Kconfig | 1 +
> include/linux/bpf.h | 7 ++-
> include/linux/ftrace.h | 31 +++++++++-
> kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
> kernel/trace/Kconfig | 3 +
> kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
> 6 files changed, 632 insertions(+), 75 deletions(-)
>
hi,
gentle ping, thanks
jirka
On Thu, Jan 15, 2026 at 10:54:09AM -0800, Andrii Nakryiko wrote:
> On Tue, Dec 30, 2025 at 6:50 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > hi,
> > while poking the multi-tracing interface I ended up with just one ftrace_ops
> > object to attach all trampolines.
> >
> > This change allows us to use fewer direct API calls during attachment changes
> > in future code, in effect speeding up the attachment.
> >
> > In the current code we already get a speedup from using just a single ftrace_ops object.
> >
> > - with current code:
> >
> > Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
> >
> > 6,364,157,902 cycles:k
> > 828,728,902 cycles:u
> > 1,064,803,824 instructions:u # 1.28 insn per cycle
> > 23,797,500,067 instructions:k # 3.74 insn per cycle
> >
> > 4.416004987 seconds time elapsed
> >
> > 0.164121000 seconds user
> > 1.289550000 seconds sys
> >
> >
> > - with the fix:
> >
> > Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
> >
> > 6,535,857,905 cycles:k
> > 810,809,429 cycles:u
> > 1,064,594,027 instructions:u # 1.31 insn per cycle
> > 23,962,552,894 instructions:k # 3.67 insn per cycle
> >
> > 1.666961239 seconds time elapsed
> >
> > 0.157412000 seconds user
> > 1.283396000 seconds sys
> >
> >
> >
> > The speedup seems to be related to the fact that with a single ftrace_ops object
> > we no longer call ftrace_shutdown (we use ftrace_update_ops instead) and
> > we skip the synchronize_rcu calls (~100ms each) at the end of that function.
> >
> > rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
> > v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
> > v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
> > v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
> > v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
> > v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
> >
> > v6 changes:
> > - rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
> > - factor hash_add/hash_sub [Steven]
> > - add kerneldoc header for update_ftrace_direct_* functions [Steven]
> > - few assorted smaller fixes [Steven]
> > - added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> > case [Steven]
> >
>
> So this looks good from BPF side, I think. Steven, if you don't mind
> giving this patch set another look and if everything is to your liking
> giving your ack, we can then apply it to bpf-next. Thanks!
>
> > v5 changes:
> > - do not export ftrace_hash object [Steven]
> > - fix update_ftrace_direct_add new_filter_hash leak [ci]
> >
> > v4 changes:
> > - rebased on top of bpf-next/master (with jmp attach changes)
> > added patch 1 to deal with that
> > - added extra checks for update_ftrace_direct_del/mod to address
> > the ci bot review
> >
> > v3 changes:
> > - rebased on top of bpf-next/master
> > - fixed update_ftrace_direct_del cleanup path
> > - added missing inline to update_ftrace_direct_* stubs
> >
> > v2 changes:
> > - rebased on top of bpf-next/master plus Song's livepatch fixes [1]
> > - renamed the API functions [2] [Steven]
> > - do not export the new api [Steven]
> > - kept the original direct interface:
> >
> > I'm not sure if we want to merge both *_ftrace_direct and the new interface
> > into a single one. The new interface is a bit different semantically (hence
> > the name change Steven suggested [2]), and the changes are not that big, so
> > we could easily keep both APIs.
> >
> > v1 changes:
> > - make the change x86 specific, after discussing with Mark options for
> > arm64 [Mark]
> >
> > thanks,
> > jirka
> >
> >
> > [1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
> > [2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
> > ---
> > Jiri Olsa (9):
> > ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
> > ftrace: Make alloc_and_copy_ftrace_hash direct friendly
> > ftrace: Export some of hash related functions
> > ftrace: Add update_ftrace_direct_add function
> > ftrace: Add update_ftrace_direct_del function
> > ftrace: Add update_ftrace_direct_mod function
> > bpf: Add trampoline ip hash table
> > ftrace: Factor ftrace_ops ops_func interface
> > bpf,x86: Use single ftrace_ops for direct calls
> >
> > arch/x86/Kconfig | 1 +
> > include/linux/bpf.h | 7 ++-
> > include/linux/ftrace.h | 31 +++++++++-
> > kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
> > kernel/trace/Kconfig | 3 +
> > kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
> > 6 files changed, 632 insertions(+), 75 deletions(-)
> >
On Tue, 30 Dec 2025 15:50:01 +0100 Jiri Olsa <jolsa@kernel.org> wrote:

> Jiri Olsa (9):
>   ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
>   ftrace: Make alloc_and_copy_ftrace_hash direct friendly
>   ftrace: Export some of hash related functions
>   ftrace: Add update_ftrace_direct_add function
>   ftrace: Add update_ftrace_direct_del function
>   ftrace: Add update_ftrace_direct_mod function
>   bpf: Add trampoline ip hash table
>   ftrace: Factor ftrace_ops ops_func interface
>   bpf,x86: Use single ftrace_ops for direct calls

I reviewed all the above patches with the exception of patch 7 (which was
BPF only). I even ran the entire set through my internal tests and they
passed.

I don't have anything for this merge window that will conflict with this
series, so if you want to push it through the BPF tree, feel free to do so.

For patches 1-6,8,9:

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve