Hi,
Here is the 9th version of the series to re-implement the fprobe on
function-graph tracer. The previous version is:
https://lore.kernel.org/all/170887410337.564249.6360118840946697039.stgit@devnote2/
This version is ported onto the latest kernel (v6.9-rc3 + probes/for-next),
fixes some bugs, and adds a performance optimization patch [36/36].
- [12/36] Fix to clear the fgraph_array entry on registration failure;
also return -ENOSPC when fgraph_array is full.
- [28/36] Add new store_fprobe_entry_data() for fprobe.
- [31/36] Remove DIV_ROUND_UP() and fix entry data address calculation.
- [36/36] Add new flag to skip timestamp recording.
Overview
--------
This series makes 2 major changes: it enables multiple function-graph
tracers on ftrace (e.g. allows function-graph on sub-instances), and it
rewrites fprobe on top of this function-graph support.
The former change was originally sent by Steven Rostedt 4 years ago (*);
it allows users to apply different function-graph tracer settings (and
settings of other tracers based on function-graph) in each trace
instance at the same time.
(*) https://lore.kernel.org/all/20190525031633.811342628@goodmis.org/
The purposes of the latter change are:
1) Remove fprobe's dependency on rethook so that we can reduce the
amount of return-hook code and shadow-stack usage.
2) Make 'ftrace_regs' the common trace interface for the function
boundary.
1) Currently we have 2 (or 3) different function-return hook
implementations: the function-graph tracer and rethook (and the legacy
kretprobe). Since this is redundant and doubles the maintenance cost,
I would like to unify them. From the user's viewpoint, the function-
graph tracer is very useful for grasping the execution path. For this
purpose, it is hard to use rethook in the function-graph tracer, but
the opposite is possible. (Strictly speaking, kretprobe cannot use it
because it requires 'pt_regs' for historical reasons.)
2) Currently fprobe provides 'pt_regs' to its handlers, but that is
wrong for function entry and exit. Moreover, depending on the
architecture, there is no way to accurately reproduce 'pt_regs'
outside of interrupt or exception handlers. This means fprobe should
not use 'pt_regs' because it does not use such exceptions.
(Conversely, kprobe should use 'pt_regs' because it is an abstract
interface to the software breakpoint exception.)
This series changes fprobe to use the function-graph tracer for tracing
function entry and exit, instead of a mixture of ftrace and rethook.
Unlike rethook, which is a per-task list of system-wide allocated
nodes, the function graph's ret_stack is a per-task shadow stack.
Thus it does not need 'nr_maxactive' (the number of pre-allocated
nodes) to be set.
Also, the handlers will get 'ftrace_regs' instead of 'pt_regs'.
Since eBPF multi_kprobe/multi_kretprobe events still use 'pt_regs' as
their register interface, this series converts 'ftrace_regs' to
'pt_regs' for them. Of course this conversion makes an incomplete
'pt_regs', so users must only access the registers used for function
parameters or the return value.
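For illustration, here is a minimal sketch of what an fprobe user looks
like after this change. This is not a verbatim excerpt from the series;
it assumes the ftrace_regs-based handler signatures and the
ftrace_regs_get_argument()/ftrace_regs_get_return_value() accessors
described above, and is loosely modeled on samples/fprobe/fprobe_example.c:

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/module.h>

/* Entry handler: receives ftrace_regs instead of pt_regs. */
static int sample_entry(struct fprobe *fp, unsigned long entry_ip,
			unsigned long ret_ip, struct ftrace_regs *fregs,
			void *entry_data)
{
	/* Only argument/return registers are guaranteed to be usable. */
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

	pr_info("enter %pS arg0=%lx\n", (void *)entry_ip, arg0);
	return 0;
}

/* Exit handler: hooked via the function-graph shadow stack, no rethook. */
static void sample_exit(struct fprobe *fp, unsigned long entry_ip,
			unsigned long ret_ip, struct ftrace_regs *fregs,
			void *entry_data)
{
	pr_info("exit %pS ret=%lx\n", (void *)entry_ip,
		ftrace_regs_get_return_value(fregs));
}

static struct fprobe sample_probe = {
	.entry_handler	= sample_entry,
	.exit_handler	= sample_exit,
	/* Note: no .nr_maxactive; the per-task shadow stack replaces it. */
};

static int __init sample_init(void)
{
	/* Trace vfs_read() as an example target. */
	return register_fprobe(&sample_probe, "vfs_read", NULL);
}

static void __exit sample_cleanup(void)
{
	unregister_fprobe(&sample_probe);
}

module_init(sample_init);
module_exit(sample_cleanup);
MODULE_LICENSE("GPL");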
Design
------
Instead of using ftrace's function entry hook directly, the new fprobe
is built on top of the function-graph's entry and return callbacks
with 'ftrace_regs'.
Since fprobe requires access to 'ftrace_regs', the architecture must
support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enable calling the function-graph
entry callback with 'ftrace_regs', and also
CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes the ftrace_regs to
return_to_handler.
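As a rough sketch, an architecture opts in with Kconfig selects along
these lines (illustrative only; the per-arch Kconfig hunks in this
series are authoritative):

config <ARCH>
	select HAVE_DYNAMIC_FTRACE_WITH_ARGS
	select HAVE_FTRACE_GRAPH_FUNC
	select HAVE_FUNCTION_GRAPH_FREGS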
All fprobes share a single function-graph ops (meaning they share a
common ftrace filter), similar to kprobe-on-ftrace. This requires
another layer to find the corresponding fprobe in the common
function-graph callbacks, but it scales much better, since the number
of registered function-graph ops is limited.
In the entry callback, the fprobe runs its entry_handler and saves the
address of 'fprobe' on the function-graph's shadow stack as data. The
return callback decodes the data to get the 'fprobe' address, and runs
the exit_handler.
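Conceptually, this entry/return data flow looks like the sketch below.
It assumes the fgraph_reserve_data()/fgraph_retrieve_data() helpers and
the ftrace_regs-based fgraph callback signatures added in this series;
lookup_fprobe(), fprobe_is_valid() and run_fprobe_exit() are
hypothetical stand-ins for the real fprobe internals:

/* Entry callback: stash which fprobe fired on the shadow stack. */
static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace,
			       struct fgraph_ops *gops,
			       struct ftrace_regs *fregs)
{
	struct fprobe *fp = lookup_fprobe(trace->func); /* hypothetical hash lookup */
	struct fprobe **slot;

	if (!fp)
		return 0;	/* no fprobe on this function: skip the return hook */

	slot = fgraph_reserve_data(gops->idx, sizeof(*slot));
	if (!slot)
		return 0;	/* shadow stack is full: give up on the exit handler */
	*slot = fp;		/* stored on this task's ret_stack */

	return 1;		/* ask fgraph to hook this function's return */
}

/* Return callback: decode the stashed pointer and run the exit handler. */
static void fprobe_fgraph_return(struct ftrace_graph_ret *trace,
				 struct fgraph_ops *gops,
				 struct ftrace_regs *fregs)
{
	struct fprobe **slot;
	int size;

	slot = fgraph_retrieve_data(gops->idx, &size);
	/* The fprobe may have been unregistered meanwhile: validate first. */
	if (slot && fprobe_is_valid(*slot))	/* hypothetical hash-table check */
		run_fprobe_exit(*slot, trace, fregs);	/* hypothetical dispatch */
}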
The fprobe introduces two hash tables: one for the entry callback,
which looks up the fprobes related to the function address passed to
the entry callback; the other for the return callback, which checks
whether the given 'fprobe' data structure pointer is still valid. Note
that it is possible to unregister an fprobe before the return callback
runs, so the address must be validated before the return callback
uses it.
This series can be applied against the probes/for-next branch, which
is based on v6.9-rc3.
This series can also be found in the branch below:
https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph
Thank you,
---
Masami Hiramatsu (Google) (21):
tracing: Add a comment about ftrace_regs definition
tracing: Rename ftrace_regs_return_value to ftrace_regs_get_return_value
x86: tracing: Add ftrace_regs definition in the header
function_graph: Use a simple LRU for fgraph_array index number
ftrace: Add multiple fgraph storage selftest
function_graph: Pass ftrace_regs to entryfunc
function_graph: Replace fgraph_ret_regs with ftrace_regs
function_graph: Pass ftrace_regs to retfunc
fprobe: Use ftrace_regs in fprobe entry handler
fprobe: Use ftrace_regs in fprobe exit handler
tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
tracing: Add ftrace_fill_perf_regs() for perf event
tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
fprobe: Rewrite fprobe on function-graph tracer
tracing/fprobe: Remove nr_maxactive from fprobe
selftests: ftrace: Remove obsolete maxactive syntax check
selftests/ftrace: Add a test case for repeating register/unregister fprobe
Documentation: probes: Update fprobe on function-graph tracer
fgraph: Skip recording calltime/rettime if it is not needed
Steven Rostedt (VMware) (15):
function_graph: Convert ret_stack to a series of longs
fgraph: Use BUILD_BUG_ON() to make sure we have structures divisible by long
function_graph: Add an array structure that will allow multiple callbacks
function_graph: Allow multiple users to attach to function graph
function_graph: Remove logic around ftrace_graph_entry and return
ftrace/function_graph: Pass fgraph_ops to function graph callbacks
ftrace: Allow function_graph tracer to be enabled in instances
ftrace: Allow ftrace startup flags to exist without dynamic ftrace
function_graph: Have the instances use their own ftrace_ops for filtering
function_graph: Add "task variables" per task for fgraph_ops
function_graph: Move set_graph_function tests to shadow stack global var
function_graph: Move graph depth stored data to shadow stack global var
function_graph: Move graph notrace bit to shadow stack global var
function_graph: Implement fgraph_reserve_data() and fgraph_retrieve_data()
function_graph: Add selftest for passing local variables
Documentation/trace/fprobe.rst | 42 +
arch/arm64/Kconfig | 3
arch/arm64/include/asm/ftrace.h | 47 +
arch/arm64/kernel/asm-offsets.c | 12
arch/arm64/kernel/entry-ftrace.S | 32 -
arch/arm64/kernel/ftrace.c | 21
arch/loongarch/Kconfig | 4
arch/loongarch/include/asm/ftrace.h | 32 -
arch/loongarch/kernel/asm-offsets.c | 12
arch/loongarch/kernel/ftrace_dyn.c | 15
arch/loongarch/kernel/mcount.S | 17
arch/loongarch/kernel/mcount_dyn.S | 14
arch/powerpc/Kconfig | 1
arch/powerpc/include/asm/ftrace.h | 15
arch/powerpc/kernel/trace/ftrace.c | 3
arch/powerpc/kernel/trace/ftrace_64_pg.c | 10
arch/riscv/Kconfig | 3
arch/riscv/include/asm/ftrace.h | 21
arch/riscv/kernel/ftrace.c | 15
arch/riscv/kernel/mcount.S | 24
arch/s390/Kconfig | 3
arch/s390/include/asm/ftrace.h | 39 -
arch/s390/kernel/asm-offsets.c | 6
arch/s390/kernel/mcount.S | 9
arch/x86/Kconfig | 4
arch/x86/include/asm/ftrace.h | 43 -
arch/x86/kernel/ftrace.c | 51 +
arch/x86/kernel/ftrace_32.S | 15
arch/x86/kernel/ftrace_64.S | 17
include/linux/fprobe.h | 57 +
include/linux/ftrace.h | 170 +++
include/linux/sched.h | 2
include/linux/trace_recursion.h | 39 -
kernel/trace/Kconfig | 23
kernel/trace/bpf_trace.c | 14
kernel/trace/fgraph.c | 1005 ++++++++++++++++----
kernel/trace/fprobe.c | 637 +++++++++----
kernel/trace/ftrace.c | 13
kernel/trace/ftrace_internal.h | 2
kernel/trace/trace.h | 96 ++
kernel/trace/trace_fprobe.c | 147 ++-
kernel/trace/trace_functions.c | 8
kernel/trace/trace_functions_graph.c | 98 +-
kernel/trace/trace_irqsoff.c | 12
kernel/trace/trace_probe_tmpl.h | 2
kernel/trace/trace_sched_wakeup.c | 12
kernel/trace/trace_selftest.c | 262 +++++
lib/test_fprobe.c | 51 -
samples/fprobe/fprobe_example.c | 4
.../test.d/dynevent/add_remove_fprobe_repeat.tc | 19
.../ftrace/test.d/dynevent/fprobe_syntax_errors.tc | 4
51 files changed, 2325 insertions(+), 882 deletions(-)
create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
Neat! :) I had a look at mostly the "high level" part (fprobe and arm64
specific bits) and this seems to be in a good state to me. Thanks for
all that work, that is quite a refactoring :)

On Mon, Apr 15, 2024 at 2:49 PM Masami Hiramatsu (Google)
<mhiramat@kernel.org> wrote:
> [...]
On Wed, 24 Apr 2024 15:35:15 +0200
Florent Revest <revest@chromium.org> wrote:

> Neat! :) I had a look at mostly the "high level" part (fprobe and
> arm64 specific bits) and this seems to be in a good state to me.

Thanks for the review of this long series!

> Thanks for all that work, that is quite a refactoring :)

-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
Hi Steve,

Can you review this series? Especially, [07/36] and [12/36] have been
changed a lot from your original patch.

Thank you,

On Mon, 15 Apr 2024 21:48:59 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:
> [...]

-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Fri, 19 Apr 2024 14:36:18 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> Hi Steve,
>
> Can you review this series? Especially, [07/36] and [12/36] have been
> changed a lot from your original patch.

I haven't forgotten (just been a bit hectic). Worse comes to worst,
I'll review it tomorrow.

-- Steve
On Mon, Apr 15, 2024 at 5:49 AM Masami Hiramatsu (Google)
<mhiramat@kernel.org> wrote:
> [...]

Hey Masami,

I can't really review most of that code as I'm completely unfamiliar
with all those inner workings of fprobe/ftrace/function_graph. I left
a few comments where there were somewhat more obvious BPF-related
pieces.

But I also did run our BPF benchmarks on probes/for-next as a baseline
and then with your series applied on top. Just to see if there are any
regressions. I think it will be a useful data point for you.

You should be already familiar with the bench tool we have in BPF
selftests (I used it on some other patches for your tree).

BASELINE
========
kprobe         : 24.634 ± 0.205M/s
kprobe-multi   : 28.898 ± 0.531M/s
kretprobe      : 10.478 ± 0.015M/s
kretprobe-multi: 11.012 ± 0.063M/s

THIS PATCH SET ON TOP
=====================
kprobe         : 25.144 ± 0.027M/s (+2%)
kprobe-multi   : 28.909 ± 0.074M/s
kretprobe      :  9.482 ± 0.008M/s (-9.5%)
kretprobe-multi: 13.688 ± 0.027M/s (+24%)

These numbers are pretty stable and look to be more or less
representative.

As you can see, kprobes got a bit faster; kprobe-multi seems to be
about the same, though.

Then (I suppose they are "legacy") kretprobes got quite noticeably
slower, almost by 10%. Not sure why, but it looks real after re-running
the benchmarks a bunch of times and getting stable results.

On the other hand, multi-kretprobes got significantly faster (+24%!).
Again, I don't know if it is expected or not, but it's a nice
improvement.

If you have any idea why kretprobes would get so much slower, it would
be nice to look into that and see if you can mitigate the regression
somehow. Thanks!
Hi Andrii,

On Thu, 25 Apr 2024 13:31:53 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> Hey Masami,
>
> I can't really review most of that code as I'm completely unfamiliar
> with all those inner workings of fprobe/ftrace/function_graph. I left
> a few comments where there were somewhat more obvious BPF-related
> pieces.
>
> But I also did run our BPF benchmarks on probes/for-next as a baseline
> and then with your series applied on top. Just to see if there are any
> regressions. I think it will be a useful data point for you.

Thanks for testing!

>
> You should be already familiar with the bench tool we have in BPF
> selftests (I used it on some other patches for your tree).

Which patches do we need?

>
> BASELINE
> ========
> kprobe : 24.634 ± 0.205M/s
> kprobe-multi : 28.898 ± 0.531M/s
> kretprobe : 10.478 ± 0.015M/s
> kretprobe-multi: 11.012 ± 0.063M/s
>
> THIS PATCH SET ON TOP
> =====================
> kprobe : 25.144 ± 0.027M/s (+2%)
> kprobe-multi : 28.909 ± 0.074M/s
> kretprobe : 9.482 ± 0.008M/s (-9.5%)
> kretprobe-multi: 13.688 ± 0.027M/s (+24%)

This looks good. Kretprobe should also use kretprobe-multi (fprobe)
eventually because it should be a single callback version of
kretprobe-multi.

>
> These numbers are pretty stable and look to be more or less representative.
>
> As you can see, kprobes got a bit faster, kprobe-multi seems to be
> about the same, though.
>
> Then (I suppose they are "legacy") kretprobes got quite noticeably
> slower, almost by 10%. Not sure why, but looks real after re-running
> benchmarks a bunch of times and getting stable results.

Hmm, kretprobe on x86 should use ftrace + rethook even with my series.
So nothing should be changed. Maybe cache access pattern has been
changed?
I'll check it with tracefs (to remove the effect from bpf related changes)

>
> On the other hand, multi-kretprobes got significantly faster (+24%!).
> Again, I don't know if it is expected or not, but it's a nice
> improvement.

Thanks!

>
> If you have any idea why kretprobes would get so much slower, it would
> be nice to look into that and see if you can mitigate the regression
> somehow. Thanks!

OK, let me check it.

Thank you!

>
>
> > [... diffstat trimmed ...]
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Hi Andrii,
>
> On Thu, 25 Apr 2024 13:31:53 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > Hey Masami,
> >
> > I can't really review most of that code as I'm completely unfamiliar
> > with all those inner workings of fprobe/ftrace/function_graph. I left
> > a few comments where there were somewhat more obvious BPF-related
> > pieces.
> >
> > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > and then with your series applied on top. Just to see if there are any
> > regressions. I think it will be a useful data point for you.
>
> Thanks for testing!
>
> >
> > You should be already familiar with the bench tool we have in BPF
> > selftests (I used it on some other patches for your tree).
>
> Which patches do we need?
>

You mean for this `bench` tool? They are part of BPF selftests (under
tools/testing/selftests/bpf), you can build them by running:

$ make RELEASE=1 -j$(nproc) bench

After that you'll get a self-contained `bench` binary, which has all
the self-contained benchmarks.

You might also find a small script (benchs/run_bench_trigger.sh inside
BPF selftests directory) helpful, it collects final summary of the
benchmark run and optionally accepts a specific set of benchmarks. So
you can use it like this:

$ benchs/run_bench_trigger.sh kprobe kprobe-multi
kprobe : 18.731 ± 0.639M/s
kprobe-multi : 23.938 ± 0.612M/s

By default it will run a wider set of benchmarks (no uprobes, but a
bunch of extra fentry/fexit tests and stuff like this).

> >
> > BASELINE
> > ========
> > kprobe : 24.634 ± 0.205M/s
> > kprobe-multi : 28.898 ± 0.531M/s
> > kretprobe : 10.478 ± 0.015M/s
> > kretprobe-multi: 11.012 ± 0.063M/s
> >
> > THIS PATCH SET ON TOP
> > =====================
> > kprobe : 25.144 ± 0.027M/s (+2%)
> > kprobe-multi : 28.909 ± 0.074M/s
> > kretprobe : 9.482 ± 0.008M/s (-9.5%)
> > kretprobe-multi: 13.688 ± 0.027M/s (+24%)
>
> This looks good. Kretprobe should also use kretprobe-multi (fprobe)
> eventually because it should be a single callback version of
> kretprobe-multi.
>
> >
> > These numbers are pretty stable and look to be more or less representative.
> >
> > As you can see, kprobes got a bit faster, kprobe-multi seems to be
> > about the same, though.
> >
> > Then (I suppose they are "legacy") kretprobes got quite noticeably
> > slower, almost by 10%. Not sure why, but looks real after re-running
> > benchmarks a bunch of times and getting stable results.
>
> Hmm, kretprobe on x86 should use ftrace + rethook even with my series.
> So nothing should be changed. Maybe cache access pattern has been
> changed?
> I'll check it with tracefs (to remove the effect from bpf related changes)
>
> >
> > On the other hand, multi-kretprobes got significantly faster (+24%!).
> > Again, I don't know if it is expected or not, but it's a nice
> > improvement.
>
> Thanks!
>
> >
> > If you have any idea why kretprobes would get so much slower, it would
> > be nice to look into that and see if you can mitigate the regression
> > somehow. Thanks!
>
> OK, let me check it.
>
> Thank you!
>
> >
> >
> > > [... diffstat trimmed ...]
> > >
> > > --
> > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Mon, 29 Apr 2024 13:25:04 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > Hi Andrii,
> >
> > On Thu, 25 Apr 2024 13:31:53 -0700
> > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > > Hey Masami,
> > >
> > > I can't really review most of that code as I'm completely unfamiliar
> > > with all those inner workings of fprobe/ftrace/function_graph. I left
> > > a few comments where there were somewhat more obvious BPF-related
> > > pieces.
> > >
> > > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > > and then with your series applied on top. Just to see if there are any
> > > regressions. I think it will be a useful data point for you.
> >
> > Thanks for testing!
> >
> > >
> > > You should be already familiar with the bench tool we have in BPF
> > > selftests (I used it on some other patches for your tree).
> >
> > Which patches do we need?
> >
>
> You mean for this `bench` tool? They are part of BPF selftests (under
> tools/testing/selftests/bpf), you can build them by running:
>
> $ make RELEASE=1 -j$(nproc) bench
>
> After that you'll get a self-contained `bench` binary, which has all
> the self-contained benchmarks.
>
> You might also find a small script (benchs/run_bench_trigger.sh inside
> BPF selftests directory) helpful, it collects final summary of the
> benchmark run and optionally accepts a specific set of benchmarks. So
> you can use it like this:
>
> $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> kprobe : 18.731 ± 0.639M/s
> kprobe-multi : 23.938 ± 0.612M/s
>
> By default it will run a wider set of benchmarks (no uprobes, but a
> bunch of extra fentry/fexit tests and stuff like this).
origin:
# benchs/run_bench_trigger.sh
kretprobe : 1.329 ± 0.007M/s
kretprobe-multi: 1.341 ± 0.004M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.288 ± 0.014M/s
kretprobe-multi: 1.365 ± 0.002M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.329 ± 0.002M/s
kretprobe-multi: 1.331 ± 0.011M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.311 ± 0.003M/s
kretprobe-multi: 1.318 ± 0.002M/s
patched:
# benchs/run_bench_trigger.sh
kretprobe : 1.274 ± 0.003M/s
kretprobe-multi: 1.397 ± 0.002M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.307 ± 0.002M/s
kretprobe-multi: 1.406 ± 0.004M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.279 ± 0.004M/s
kretprobe-multi: 1.330 ± 0.014M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.256 ± 0.010M/s
kretprobe-multi: 1.412 ± 0.003M/s
Hmm, in my case the differences seem smaller (~3%?).
I attached the perf report results for both runs, but I don't see a large
difference.
> > >
> > > BASELINE
> > > ========
> > > kprobe : 24.634 ± 0.205M/s
> > > kprobe-multi : 28.898 ± 0.531M/s
> > > kretprobe : 10.478 ± 0.015M/s
> > > kretprobe-multi: 11.012 ± 0.063M/s
> > >
> > > THIS PATCH SET ON TOP
> > > =====================
> > > kprobe : 25.144 ± 0.027M/s (+2%)
> > > kprobe-multi : 28.909 ± 0.074M/s
> > > kretprobe : 9.482 ± 0.008M/s (-9.5%)
> > > kretprobe-multi: 13.688 ± 0.027M/s (+24%)
> >
> > This looks good. Kretprobe should also use kretprobe-multi (fprobe)
> > eventually because it should be a single callback version of
> > kretprobe-multi.
I ran another benchmark (a prctl loop, attached). The origin kernel result is here;
# sh ./benchmark.sh
count = 10000000, took 6.748133 sec
And the patched kernel result;
# sh ./benchmark.sh
count = 10000000, took 6.644095 sec
I confirmed that the perf result shows no big difference.
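(For reference, a rough reconstruction of the kind of loop the attached
benchmark.sh drives -- the actual attachment is not reproduced here, and
the choice of prctl arguments and the timing code are assumptions based
on the output above:)

#include <stdio.h>
#include <sys/prctl.h>
#include <time.h>

int main(void)
{
        const long count = 10000000;
        struct timespec start, end;
        char name[16];
        long i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        /* hammer one traced syscall so the probe overhead dominates */
        for (i = 0; i < count; i++)
                prctl(PR_GET_NAME, name);
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("count = %ld, took %f sec\n", count,
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);
        return 0;
}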
Thank you,
> >
> > >
> > > These numbers are pretty stable and look to be more or less representative.
> > >
> > > As you can see, kprobes got a bit faster, kprobe-multi seems to be
> > > about the same, though.
> > >
> > > Then (I suppose they are "legacy") kretprobes got quite noticeably
> > > slower, almost by 10%. Not sure why, but looks real after re-running
> > > benchmarks a bunch of times and getting stable results.
> >
> > Hmm, kretprobe on x86 should use ftrace + rethook even with my series.
> > So nothing should be changed. Maybe cache access pattern has been
> > changed?
> > I'll check it with tracefs (to remove the effect from bpf related changes)
> >
> > >
> > > On the other hand, multi-kretprobes got significantly faster (+24%!).
> > > Again, I don't know if it is expected or not, but it's a nice
> > > improvement.
> >
> > Thanks!
> >
> > >
> > > If you have any idea why kretprobes would get so much slower, it would
> > > be nice to look into that and see if you can mitigate the regression
> > > somehow. Thanks!
> >
> > OK, let me check it.
> >
> > Thank you!
> >
> > >
> > >
> > > > 51 files changed, 2325 insertions(+), 882 deletions(-)
> > > > create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
> > > >
> > > > --
> > > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > >
> >
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'task-clock:ppp'
# Event count (approx.): 8035250000
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.56% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
|--97.95%--syscall
| |
| |--58.91%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.61%--__x64_sys_getpgid
| | | |
| | | |--11.69%--0xffffffffa02050de
| | | | kprobe_ftrace_handler
| | | | |
| | | | |--6.26%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.29%--objpool_pop
| | | | | |
| | | | | --1.97%--rethook_try_get
| | | | |
| | | | |--2.41%--rcu_is_watching
| | | | |
| | | | --0.93%--get_kprobe
| | | |
| | | --5.59%--do_getpgid
| | | |
| | | --4.85%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.42%--__radix_tree_lookup
| | |
| | |--14.68%--arch_rethook_trampoline
| | | |
| | | --12.96%--arch_rethook_trampoline_callback
| | | |
| | | --12.69%--rethook_trampoline_handler
| | | |
| | | |--10.89%--kretprobe_rethook_handler
| | | | |
| | | | --9.80%--kretprobe_dispatcher
| | | | |
| | | | --6.85%--kretprobe_perf_func
| | | | |
| | | | --6.57%--trace_call_bpf
| | | | |
| | | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.67%--migrate_disable
| | | |
| | | --0.88%--objpool_push
| | |
| | --0.56%--syscall_exit_to_user_mode
| |
| --4.50%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.50%--syscall@plt
98.00% 34.25% bench libc.so.6 [.] syscall
|
|--63.76%--syscall
| |
| |--58.97%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.61%--__x64_sys_getpgid
| | | |
| | | |--11.69%--0xffffffffa02050de
| | | | kprobe_ftrace_handler
| | | | |
| | | | |--6.26%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.29%--objpool_pop
| | | | | |
| | | | | --1.97%--rethook_try_get
| | | | |
| | | | |--2.41%--rcu_is_watching
| | | | |
| | | | --0.93%--get_kprobe
| | | |
| | | --5.59%--do_getpgid
| | | |
| | | --4.85%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.42%--__radix_tree_lookup
| | |
| | |--14.68%--arch_rethook_trampoline
| | | |
| | | --12.96%--arch_rethook_trampoline_callback
| | | |
| | | --12.69%--rethook_trampoline_handler
| | | |
| | | |--10.89%--kretprobe_rethook_handler
| | | | |
| | | | --9.80%--kretprobe_dispatcher
| | | | |
| | | | --6.85%--kretprobe_perf_func
| | | | |
| | | | --6.57%--trace_call_bpf
| | | | |
| | | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.67%--migrate_disable
| | | |
| | | --0.88%--objpool_push
| | |
| | --0.56%--syscall_exit_to_user_mode
| |
| --4.50%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--34.25%--start_thread
syscall
59.08% 0.00% bench [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
|
---entry_SYSCALL_64_after_hwframe
do_syscall_64
|
|--19.61%--__x64_sys_getpgid
| |
| |--11.69%--0xffffffffa02050de
| | kprobe_ftrace_handler
| | |
| | |--6.26%--pre_handler_kretprobe
| | | |
| | | |--3.29%--objpool_pop
| | | |
| | | --1.97%--rethook_try_get
| | |
| | |--2.41%--rcu_is_watching
| | |
| | --0.93%--get_kprobe
| |
| --5.59%--do_getpgid
| |
| --4.85%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.42%--__radix_tree_lookup
|
|--14.68%--arch_rethook_trampoline
| |
| --12.96%--arch_rethook_trampoline_callback
| |
| --12.69%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--0.56%--syscall_exit_to_user_mode
59.08% 24.07% bench [kernel.kallsyms] [k] do_syscall_64
|
|--35.01%--do_syscall_64
| |
| |--19.61%--__x64_sys_getpgid
| | |
| | |--11.69%--0xffffffffa02050de
| | | kprobe_ftrace_handler
| | | |
| | | |--6.26%--pre_handler_kretprobe
| | | | |
| | | | |--3.29%--objpool_pop
| | | | |
| | | | --1.97%--rethook_try_get
| | | |
| | | |--2.41%--rcu_is_watching
| | | |
| | | --0.93%--get_kprobe
| | |
| | --5.59%--do_getpgid
| | |
| | --4.85%--find_task_by_vpid
| | |
| | |--2.01%--idr_find
| | |
| | --1.42%--__radix_tree_lookup
| |
| |--14.68%--arch_rethook_trampoline
| | |
| | --12.96%--arch_rethook_trampoline_callback
| | |
| | --12.69%--rethook_trampoline_handler
| | |
| | |--10.89%--kretprobe_rethook_handler
| | | |
| | | --9.80%--kretprobe_dispatcher
| | | |
| | | --6.85%--kretprobe_perf_func
| | | |
| | | --6.57%--trace_call_bpf
| | | |
| | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --0.67%--migrate_disable
| | |
| | --0.88%--objpool_push
| |
| --0.56%--syscall_exit_to_user_mode
|
--24.06%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
19.66% 0.21% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
--19.44%--__x64_sys_getpgid
|
|--11.74%--0xffffffffa02050de
| kprobe_ftrace_handler
| |
| |--6.30%--pre_handler_kretprobe
| | |
| | |--3.29%--objpool_pop
| | |
| | --1.97%--rethook_try_get
| |
| |--2.41%--rcu_is_watching
| |
| --0.93%--get_kprobe
|
--5.59%--do_getpgid
|
--4.85%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.42%--__radix_tree_lookup
14.71% 1.75% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--12.96%--arch_rethook_trampoline
| |
| --12.96%--arch_rethook_trampoline_callback
| |
| --12.69%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--1.75%--start_thread
syscall
|
--1.71%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
12.96% 0.27% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--12.69%--arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--10.89%--kretprobe_rethook_handler
| |
| --9.80%--kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--0.88%--objpool_push
12.69% 0.88% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--11.81%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--0.88%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
11.74% 2.10% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--9.64%--kprobe_ftrace_handler
| |
| |--6.30%--pre_handler_kretprobe
| | |
| | |--3.29%--objpool_pop
| | |
| | --1.97%--rethook_try_get
| |
| |--2.41%--rcu_is_watching
| |
| --0.93%--get_kprobe
|
--2.10%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
11.74% 0.00% bench [unknown] [k] 0xffffffffa02050de
|
---0xffffffffa02050de
kprobe_ftrace_handler
|
|--6.30%--pre_handler_kretprobe
| |
| |--3.29%--objpool_pop
| |
| --1.97%--rethook_try_get
|
|--2.41%--rcu_is_watching
|
--0.93%--get_kprobe
10.89% 1.09% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--9.80%--kretprobe_rethook_handler
| kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--1.09%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
9.80% 2.94% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--6.86%--kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--2.94%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
6.94% 6.93% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--6.93%--start_thread
syscall
|
|--4.49%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.44%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
bpf_prog_21856463590f61f1_bench_trigger_kretprobe
6.85% 0.28% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
--6.57%--kretprobe_perf_func
trace_call_bpf
|
|--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.67%--migrate_disable
6.57% 2.91% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--3.67%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--2.91%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
6.30% 0.81% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--5.49%--pre_handler_kretprobe
| |
| |--3.29%--objpool_pop
| |
| --1.97%--rethook_try_get
|
--0.81%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
5.59% 0.27% bench [kernel.kallsyms] [k] do_getpgid
|
--5.32%--do_getpgid
|
--4.85%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.42%--__radix_tree_lookup
4.85% 1.39% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--3.46%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.42%--__radix_tree_lookup
|
--1.39%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
3.29% 3.29% bench [kernel.kallsyms] [k] objpool_pop
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
objpool_pop
2.55% 2.55% bench [kernel.kallsyms] [k] rcu_is_watching
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
|
--2.41%--rcu_is_watching
2.01% 2.01% bench [kernel.kallsyms] [k] idr_find
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
idr_find
1.97% 1.83% bench [kernel.kallsyms] [k] rethook_try_get
|
--1.83%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
rethook_try_get
1.50% 1.50% bench bench [.] syscall@plt
|
---start_thread
syscall@plt
1.42% 1.42% bench [kernel.kallsyms] [k] __radix_tree_lookup
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
__radix_tree_lookup
0.93% 0.93% bench [kernel.kallsyms] [k] get_kprobe
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
get_kprobe
0.88% 0.88% bench [kernel.kallsyms] [k] objpool_push
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
objpool_push
0.67% 0.67% bench [kernel.kallsyms] [k] migrate_disable
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
migrate_disable
0.56% 0.56% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
syscall_exit_to_user_mode
0.45% 0.45% bench [kernel.kallsyms] [k] __rcu_read_lock
0.44% 0.44% bench [unknown] [k] 0xffffffffa0205005
0.39% 0.39% bench [kernel.kallsyms] [k] migrate_enable
0.36% 0.36% bench [unknown] [k] 0xffffffffa020515d
0.30% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.30% 0.00% bench bench [.] main
0.30% 0.00% bench bench [.] setup_benchmark
0.30% 0.00% bench bench [.] trigger_kretprobe_setup
0.27% 0.27% bench [unknown] [k] 0xffffffffa0205011
0.27% 0.00% bench bench [.] trigger_bench__open_and_load
0.27% 0.00% bench bench [.] bpf_object__load_skeleton
0.27% 0.00% bench bench [.] bpf_object__load
0.27% 0.00% bench bench [.] bpf_object_load
0.23% 0.15% bench [kernel.kallsyms] [k] rethook_hook
0.22% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.22% 0.00% bench bench [.] libbpf_find_kernel_btf
0.22% 0.00% bench bench [.] btf__parse
0.22% 0.00% bench bench [.] btf_parse
0.22% 0.00% bench bench [.] btf_parse_raw
0.21% 0.21% bench [kernel.kallsyms] [k] __x86_indirect_thunk_array
0.20% 0.20% bench [unknown] [k] 0xffffffffa0205000
0.18% 0.18% bench [kernel.kallsyms] [k] __rcu_read_unlock
0.16% 0.16% bench [unknown] [k] 0xffffffffa020506c
0.16% 0.01% bench [kernel.kallsyms] [k] do_user_addr_fault
0.16% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.16% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.14% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.14% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.14% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.13% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.13% 0.02% bench libc.so.6 [.] __memmove_sse2_unaligned_erms
0.12% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.12% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.12% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.12% 0.12% bench [kernel.kallsyms] [k] clear_page_orig
0.11% 0.11% bench bench [.] trigger_producer
0.10% 0.00% bench bench [.] btf_new
0.08% 0.08% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.07% 0.00% bench [unknown] [k] 0000000000000000
0.07% 0.00% bench bench [.] btf_sanity_check
0.07% 0.07% bench [unknown] [k] 0xffffffffa020508e
0.07% 0.01% bench libc.so.6 [.] read
0.06% 0.02% bench bench [.] btf_validate_type
0.05% 0.05% bench [unknown] [k] 0xffffffffa02050e6
0.05% 0.05% bench [unknown] [k] 0xffffffffa020507f
0.05% 0.05% bench [unknown] [k] 0xffffffffa0205150
0.05% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.05% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.05% 0.00% bench [kernel.kallsyms] [k] kernfs_file_read_iter
0.04% 0.04% bench [unknown] [k] 0xffffffffa0205016
0.04% 0.04% bench [unknown] [k] 0xffffffffa020513c
0.04% 0.00% bench [kernel.kallsyms] [k] _copy_to_iter
0.04% 0.01% bench [kernel.kallsyms] [k] rep_movs_alternative
0.04% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.04% 0.04% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.04% 0.04% bench [unknown] [k] 0xffffffffa02050ad
0.04% 0.03% bench [kernel.kallsyms] [k] radix_tree_lookup
0.04% 0.04% bench [unknown] [k] 0xffffffffa0205116
0.04% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8108da38
0.04% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.04% 0.00% bench [kernel.kallsyms] [k] do_exit
0.03% 0.03% bench [kernel.kallsyms] [k] __do_softirq
0.03% 0.03% bench [unknown] [k] 0xffffffffa0205020
0.03% 0.03% bench [unknown] [k] 0xffffffffa02050cc
0.03% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_apic_timer_interrupt
0.03% 0.00% bench [kernel.kallsyms] [k] sysvec_apic_timer_interrupt
0.03% 0.00% bench [kernel.kallsyms] [k] irq_exit_rcu
0.03% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.03% 0.00% bench [kernel.kallsyms] [k] __fput
0.03% 0.03% bench [unknown] [k] 0xffffffffa020512b
0.03% 0.00% bench bench [.] feat_supported
0.03% 0.00% bench bench [.] sys_bpf_fd
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.03% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.03% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.03% 0.00% bench bench [.] btf_parse_type_sec
0.03% 0.03% bench [unknown] [k] 0xffffffffa020509e
0.03% 0.03% bench [unknown] [k] 0xffffffffa0205102
0.02% 0.02% bench [unknown] [k] 0xffffffffa020502a
0.02% 0.02% bench [unknown] [k] 0xffffffffa02050bc
0.02% 0.02% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.02% 0.00% bench bench [.] bpf_program__attach
0.02% 0.00% bench bench [.] attach_kprobe
0.02% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.02% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] perf_release
0.02% 0.01% bench bench [.] btf__type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.02% 0.00% bench [kernel.kallsyms] [k] perf_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] _free_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_select_runtime
0.02% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_int_jit_compile
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_jit_binary_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] __disarm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] alloc_new_pack
0.02% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.02% 0.01% bench [kernel.kallsyms] [k] ftrace_replace_code
0.02% 0.00% bench bench [.] bpf_object__probe_loading
0.02% 0.00% bench bench [.] bump_rlimit_memlock
0.02% 0.01% bench bench [.] btf_validate_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_reg
0.02% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __arm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function_nolock
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.02% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.02% 0.02% bench [kernel.kallsyms] [k] memset_orig
0.02% 0.00% bench bench [.] probe_memcg_account
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.02% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.02% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205004
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205038
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205050
0.02% 0.00% bench bench [.] bpf_object__load_progs
0.02% 0.00% bench bench [.] bpf_object_load_prog
0.02% 0.00% bench bench [.] btf_add_type_idx_entry
0.01% 0.01% bench bench [.] btf_kind
0.01% 0.01% bench [unknown] [k] 0xffffffffa020503b
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205058
0.01% 0.00% bench bench [.] sys_bpf_prog_load
0.01% 0.00% bench bench [.] btf_add_type_offs_mem
0.01% 0.00% bench bench [.] btf_validate_str
0.01% 0.01% bench bench [.] btf_type_size
0.01% 0.01% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.01% 0.00% bench bench [.] libbpf_prepare_prog_load
0.01% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.01% 0.00% bench bench [.] find_kernel_btf_id
0.01% 0.00% bench bench [.] find_attach_btf_id
0.01% 0.00% bench bench [.] find_btf_by_prefix_kind
0.01% 0.00% bench bench [.] btf__find_by_name_kind
0.01% 0.00% bench bench [.] btf_find_by_name_kind
0.01% 0.01% bench [unknown] [k] 0xffffffffa020502f
0.01% 0.00% bench bench [.] kernel_supports
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_map_object
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___open64_nocancel
0.01% 0.00% bench libc.so.6 [.] __memset_sse2_unaligned_erms
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_openat
0.01% 0.00% bench [kernel.kallsyms] [k] do_sys_openat2
0.01% 0.00% bench [kernel.kallsyms] [k] do_filp_open
0.01% 0.00% bench [kernel.kallsyms] [k] path_openat
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_rpc
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.01% bench [kernel.kallsyms] [k] kmem_cache_alloc
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205025
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205040
0.01% 0.00% bench bench [.] bpf_object__sanitize_maps
0.01% 0.00% bench [kernel.kallsyms] [k] _raw_spin_lock
0.01% 0.00% bench bench [.] probe_kern_array_mmap
0.01% 0.00% bench bench [.] bpf_map_create
0.01% 0.00% bench bench [.] probe_kern_prog_name
0.01% 0.01% bench bench [.] btf__str_by_offset
0.01% 0.01% bench bench [.] btf_vlen
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench [kernel.kallsyms] [k] lock_mm_and_find_vma
0.01% 0.00% bench [kernel.kallsyms] [k] zap_pte_range
0.01% 0.00% bench [kernel.kallsyms] [k] set_memory_rox
0.01% 0.00% bench [kernel.kallsyms] [k] change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] do_open
0.01% 0.00% bench [kernel.kallsyms] [k] do_dentry_open
0.01% 0.00% bench [kernel.kallsyms] [k] v9fs_file_open
0.01% 0.00% bench [kernel.kallsyms] [k] __change_page_attr
0.01% 0.00% bench [kernel.kallsyms] [k] __pte_offset_map_lock
0.01% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.01% 0.00% bench [kernel.kallsyms] [k] __vmalloc_area_node
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.01% 0.00% bench [kernel.kallsyms] [k] p9_virtio_request
0.00% 0.00% bench [kernel.kallsyms] [k] lookup_address_in_pgd
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_pages_pte_range
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.00% 0.00% bench [kernel.kallsyms] [k] __alloc_pages_bulk
0.00% 0.00% bench [kernel.kallsyms] [k] default_send_IPI_allbutself
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_lookup_ip
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_prefixes.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] rmqueue
0.00% 0.00% bench [kernel.kallsyms] [k] in_lock_functions
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_check_record
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_test_record
0.00% 0.00% bench [kernel.kallsyms] [k] mas_walk
0.00% 0.00% bench [kernel.kallsyms] [k] __mod_memcg_lruvec_state
0.00% 0.00% bench [kernel.kallsyms] [k] kmem_cache_alloc_lru
0.00% 0.00% bench [kernel.kallsyms] [k] perf_output_begin
0.00% 0.00% bench [kernel.kallsyms] [k] iput
0.00% 0.00% bench [kernel.kallsyms] [k] mas_alloc_nodes
0.00% 0.00% bench [kernel.kallsyms] [k] memcpy_orig
0.00% 0.00% bench [kernel.kallsyms] [k] tmigr_handle_remote
0.00% 0.00% bench bench [.] bpf_object__relocate
0.00% 0.00% bench bench [.] btf_strs_data
0.00% 0.00% bench bench [.] bpf_program_fixup_func_info
0.00% 0.00% bench bench [.] libbpf_add_mem
0.00% 0.00% bench bench [.] probe_kern_arg_ctx_tag
0.00% 0.00% bench [kernel.kallsyms] [k] zap_present_ptes
0.00% 0.00% bench [unknown] [k] 0xffffffffa0205089
0.00% 0.00% bench libc.so.6 [.] _int_realloc
0.00% 0.00% bench [unknown] [k] 0x31392e3033202820
0.00% 0.00% bench libc.so.6 [.] __GI___libc_write
0.00% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.00% 0.00% bench libc.so.6 [.] __munmap
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench libc.so.6 [.] __brk
0.00% 0.00% bench libc.so.6 [.] clone3
0.00% 0.00% bench libc.so.6 [.] __strcmp_sse2
0.00% 0.00% bench [kernel.kallsyms] [k] ksys_write
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_release
0.00% 0.00% bench [kernel.kallsyms] [k] check_kprobe_address_safe
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mm
0.00% 0.00% bench [kernel.kallsyms] [k] vfs_write
0.00% 0.00% bench [kernel.kallsyms] [k] __do_sys_brk
0.00% 0.00% bench [kernel.kallsyms] [k] __do_sys_clone3
0.00% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_alloc_no_stats
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_put_deferred
0.00% 0.00% bench [kernel.kallsyms] [k] cpa_process_alias
0.00% 0.00% bench [kernel.kallsyms] [k] file_tty_write.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] jump_label_text_reserved
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench [kernel.kallsyms] [k] open_last_lookups
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node
0.00% 0.00% bench [kernel.kallsyms] [k] arch_jump_entry_size
0.00% 0.00% bench [kernel.kallsyms] [k] do_brk_flags
0.00% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] idr_alloc_cyclic
0.00% 0.00% bench [kernel.kallsyms] [k] iterate_tty_write
0.00% 0.00% bench [kernel.kallsyms] [k] kernel_clone
0.00% 0.00% bench [kernel.kallsyms] [k] lookup_open.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] module_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_event
0.00% 0.00% bench [kernel.kallsyms] [k] v9fs_dir_release
0.00% 0.00% bench bench [.] collect_measurements
0.00% 0.00% bench libc.so.6 [.] __strnlen_ifunc
0.00% 0.00% bench [kernel.kallsyms] [k] copy_process
0.00% 0.00% bench [kernel.kallsyms] [k] d_alloc_parallel
0.00% 0.00% bench [kernel.kallsyms] [k] idr_alloc_u32
0.00% 0.00% bench [kernel.kallsyms] [k] insn_decode
0.00% 0.00% bench [kernel.kallsyms] [k] mas_store_gfp
0.00% 0.00% bench [kernel.kallsyms] [k] n_tty_write
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_clunk
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_open
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_walk
0.00% 0.00% bench [kernel.kallsyms] [k] perf_iterate_sb
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.00% 0.00% bench [unknown] [.] 0x0000000000000040
0.00% 0.00% bench libc.so.6 [.] __mpn_extract_double
0.00% 0.00% bench [kernel.kallsyms] [k] __split_large_page
0.00% 0.00% bench [kernel.kallsyms] [k] d_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] dput
0.00% 0.00% bench [kernel.kallsyms] [k] dup_task_struct
0.00% 0.00% bench [kernel.kallsyms] [k] find_vma
0.00% 0.00% bench [kernel.kallsyms] [k] idr_get_free
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_displacement
0.00% 0.00% bench [kernel.kallsyms] [k] mas_wr_bnode
0.00% 0.00% bench [kernel.kallsyms] [k] perf_iterate_ctx
0.00% 0.00% bench [kernel.kallsyms] [k] process_output_block
0.00% 0.00% bench [kernel.kallsyms] [k] wp_page_copy
0.00% 0.00% bench bench [.] bpf_object__open_skeleton
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_sysdep_start
0.00% 0.00% bench libc.so.6 [.] __restore_rt
0.00% 0.00% bench [unknown] [.] 0x000055e503ff7c50
0.00% 0.00% bench libc.so.6 [.] _IO_file_xsgetn
0.00% 0.00% bench [kernel.kallsyms] [k] __anon_vma_prepare
0.00% 0.00% bench [kernel.kallsyms] [k] __d_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] __dentry_kill
0.00% 0.00% bench [kernel.kallsyms] [k] __ftrace_hash_rec_update.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] __lruvec_stat_mod_folio
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_pages_bulk_array_mempolicy
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_thread_stack_node
0.00% 0.00% bench [kernel.kallsyms] [k] btf_vmlinux_read
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_modrm
0.00% 0.00% bench [kernel.kallsyms] [k] lock_vma_under_rcu
0.00% 0.00% bench [kernel.kallsyms] [k] mas_split.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] mt_find
0.00% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_output
0.00% 0.00% bench [kernel.kallsyms] [k] preempt_count_add
0.00% 0.00% bench [kernel.kallsyms] [k] prepare_to_wait_event
0.00% 0.00% bench [kernel.kallsyms] [k] radix_tree_node_alloc.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] smp_call_function
0.00% 0.00% bench [kernel.kallsyms] [k] uart_write
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_small_pages_range_noflush
0.00% 0.00% bench bench [.] populate_skeleton_progs
0.00% 0.00% bench bench [.] sigalarm_handler
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] dl_main
0.00% 0.00% bench libc.so.6 [.] __vfprintf_internal
#
# (Tip: Show individual samples with: perf script)
#
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'task-clock:ppp'
# Event count (approx.): 8042250000
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.52% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
|--97.57%--syscall
| |
| |--59.31%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.37%--__x64_sys_getpgid
| | | |
| | | |--12.79%--ftrace_trampoline
| | | | |
| | | | --10.73%--kprobe_ftrace_handler
| | | | |
| | | | |--6.03%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.10%--objpool_pop
| | | | | |
| | | | | --1.86%--rethook_try_get
| | | | |
| | | | |--2.00%--rcu_is_watching
| | | | |
| | | | --0.50%--get_kprobe
| | | |
| | | --6.29%--do_getpgid
| | | |
| | | --5.54%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.52%--__radix_tree_lookup
| | |
| | |--13.87%--arch_rethook_trampoline
| | | |
| | | --12.14%--arch_rethook_trampoline_callback
| | | |
| | | --11.91%--rethook_trampoline_handler
| | | |
| | | |--10.24%--kretprobe_rethook_handler
| | | | |
| | | | --9.28%--kretprobe_dispatcher
| | | | |
| | | | --6.35%--kretprobe_perf_func
| | | | |
| | | | --5.99%--trace_call_bpf
| | | | |
| | | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.95%--migrate_disable
| | | |
| | | --0.95%--objpool_push
| | |
| | --0.53%--syscall_exit_to_user_mode
| |
| --4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.75%--syscall@plt
97.63% 33.61% bench libc.so.6 [.] syscall
|
|--64.02%--syscall
| |
| |--59.37%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.37%--__x64_sys_getpgid
| | | |
| | | |--12.79%--ftrace_trampoline
| | | | |
| | | | --10.73%--kprobe_ftrace_handler
| | | | |
| | | | |--6.03%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.10%--objpool_pop
| | | | | |
| | | | | --1.86%--rethook_try_get
| | | | |
| | | | |--2.00%--rcu_is_watching
| | | | |
| | | | --0.50%--get_kprobe
| | | |
| | | --6.29%--do_getpgid
| | | |
| | | --5.54%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.52%--__radix_tree_lookup
| | |
| | |--13.87%--arch_rethook_trampoline
| | | |
| | | --12.14%--arch_rethook_trampoline_callback
| | | |
| | | --11.91%--rethook_trampoline_handler
| | | |
| | | |--10.24%--kretprobe_rethook_handler
| | | | |
| | | | --9.28%--kretprobe_dispatcher
| | | | |
| | | | --6.35%--kretprobe_perf_func
| | | | |
| | | | --5.99%--trace_call_bpf
| | | | |
| | | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.95%--migrate_disable
| | | |
| | | --0.95%--objpool_push
| | |
| | --0.53%--syscall_exit_to_user_mode
| |
| --4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--33.61%--start_thread
syscall
59.54% 0.00% bench [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
|
---entry_SYSCALL_64_after_hwframe
do_syscall_64
|
|--19.37%--__x64_sys_getpgid
| |
| |--12.79%--ftrace_trampoline
| | |
| | --10.73%--kprobe_ftrace_handler
| | |
| | |--6.03%--pre_handler_kretprobe
| | | |
| | | |--3.10%--objpool_pop
| | | |
| | | --1.86%--rethook_try_get
| | |
| | |--2.00%--rcu_is_watching
| | |
| | --0.50%--get_kprobe
| |
| --6.29%--do_getpgid
| |
| --5.54%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.52%--__radix_tree_lookup
|
|--13.87%--arch_rethook_trampoline
| |
| --12.14%--arch_rethook_trampoline_callback
| |
| --11.91%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--0.53%--syscall_exit_to_user_mode
59.54% 25.54% bench [kernel.kallsyms] [k] do_syscall_64
|
|--34.00%--do_syscall_64
| |
| |--19.37%--__x64_sys_getpgid
| | |
| | |--12.79%--ftrace_trampoline
| | | |
| | | --10.73%--kprobe_ftrace_handler
| | | |
| | | |--6.03%--pre_handler_kretprobe
| | | | |
| | | | |--3.10%--objpool_pop
| | | | |
| | | | --1.86%--rethook_try_get
| | | |
| | | |--2.00%--rcu_is_watching
| | | |
| | | --0.50%--get_kprobe
| | |
| | --6.29%--do_getpgid
| | |
| | --5.54%--find_task_by_vpid
| | |
| | |--2.01%--idr_find
| | |
| | --1.52%--__radix_tree_lookup
| |
| |--13.87%--arch_rethook_trampoline
| | |
| | --12.14%--arch_rethook_trampoline_callback
| | |
| | --11.91%--rethook_trampoline_handler
| | |
| | |--10.24%--kretprobe_rethook_handler
| | | |
| | | --9.28%--kretprobe_dispatcher
| | | |
| | | --6.35%--kretprobe_perf_func
| | | |
| | | --5.99%--trace_call_bpf
| | | |
| | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --0.95%--migrate_disable
| | |
| | --0.95%--objpool_push
| |
| --0.53%--syscall_exit_to_user_mode
|
--25.54%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
19.40% 0.29% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
--19.11%--__x64_sys_getpgid
|
|--12.82%--ftrace_trampoline
| |
| --10.76%--kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--6.29%--do_getpgid
|
--5.54%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.52%--__radix_tree_lookup
13.91% 1.77% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--12.14%--arch_rethook_trampoline
| arch_rethook_trampoline_callback
| |
| --11.91%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--1.77%--start_thread
syscall
|
--1.73%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
12.82% 2.06% bench ftrace_trampoline [k] ftrace_trampoline
|
|--10.76%--ftrace_trampoline
| kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--2.06%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
12.14% 0.23% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--11.91%--arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--10.24%--kretprobe_rethook_handler
| |
| --9.28%--kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--0.95%--objpool_push
11.91% 0.69% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--11.22%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--0.69%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
10.76% 2.20% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--8.55%--kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--2.20%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
10.24% 0.96% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--9.28%--kretprobe_rethook_handler
| kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--0.96%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
9.28% 2.85% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--6.43%--kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--2.85%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
6.35% 0.36% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
--5.99%--kretprobe_perf_func
trace_call_bpf
|
|--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.95%--migrate_disable
6.29% 0.27% bench [kernel.kallsyms] [k] do_getpgid
|
--6.02%--do_getpgid
|
--5.54%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.52%--__radix_tree_lookup
6.23% 6.23% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
---start_thread
syscall
|
|--4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.86%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
bpf_prog_21856463590f61f1_bench_trigger_kretprobe
6.06% 0.89% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--5.17%--pre_handler_kretprobe
| |
| |--3.10%--objpool_pop
| |
| --1.86%--rethook_try_get
|
--0.89%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
5.99% 2.67% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--3.32%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--2.67%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
5.54% 1.97% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--3.57%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.52%--__radix_tree_lookup
|
--1.97%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
3.10% 3.10% bench [kernel.kallsyms] [k] objpool_pop
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
objpool_pop
2.08% 2.08% bench [kernel.kallsyms] [k] rcu_is_watching
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
|
--2.00%--rcu_is_watching
2.01% 2.01% bench [kernel.kallsyms] [k] idr_find
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
idr_find
1.86% 1.78% bench [kernel.kallsyms] [k] rethook_try_get
|
--1.78%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
rethook_try_get
1.75% 1.75% bench bench [.] syscall@plt
|
---start_thread
syscall@plt
1.52% 1.52% bench [kernel.kallsyms] [k] __radix_tree_lookup
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
__radix_tree_lookup
0.95% 0.95% bench [kernel.kallsyms] [k] objpool_push
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
objpool_push
0.95% 0.95% bench [kernel.kallsyms] [k] migrate_disable
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
migrate_disable
0.53% 0.53% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
syscall_exit_to_user_mode
0.50% 0.50% bench [kernel.kallsyms] [k] get_kprobe
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
get_kprobe
0.44% 0.44% bench [kernel.kallsyms] [k] __rcu_read_lock
0.35% 0.35% bench [kernel.kallsyms] [k] migrate_enable
0.29% 0.29% bench [kernel.kallsyms] [k] __x86_indirect_thunk_array
0.28% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.28% 0.00% bench bench [.] main
0.28% 0.00% bench bench [.] setup_benchmark
0.27% 0.00% bench bench [.] trigger_kretprobe_setup
0.25% 0.00% bench bench [.] trigger_bench__open_and_load
0.25% 0.00% bench bench [.] bpf_object__load_skeleton
0.25% 0.00% bench bench [.] bpf_object__load
0.25% 0.00% bench bench [.] bpf_object_load
0.21% 0.21% bench [kernel.kallsyms] [k] __rcu_read_unlock
0.20% 0.14% bench [kernel.kallsyms] [k] rethook_hook
0.20% 0.20% bench bench [.] trigger_producer
0.14% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.14% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.14% 0.01% bench [kernel.kallsyms] [k] do_user_addr_fault
0.14% 0.00% bench bench [.] libbpf_find_kernel_btf
0.13% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.13% 0.00% bench bench [.] btf__parse
0.13% 0.00% bench bench [.] btf_parse
0.13% 0.00% bench bench [.] btf_parse_raw
0.13% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.13% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.11% 0.00% bench bench [.] btf_new
0.10% 0.00% bench [unknown] [k] 0000000000000000
0.10% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.10% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.10% 0.00% bench libc.so.6 [.] read
0.10% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.10% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.10% 0.00% bench [kernel.kallsyms] [k] rep_movs_alternative
0.10% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.09% 0.00% bench [kernel.kallsyms] [k] kernfs_file_read_iter
0.09% 0.00% bench [kernel.kallsyms] [k] _copy_to_iter
0.09% 0.09% bench [kernel.kallsyms] [k] clear_page_orig
0.09% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.08% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.07% 0.00% bench bench [.] bpf_object__load_progs
0.07% 0.00% bench bench [.] bpf_object_load_prog
0.07% 0.01% bench bench [.] btf_sanity_check
0.07% 0.00% bench bench [.] libbpf_prepare_prog_load
0.07% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.07% 0.00% bench bench [.] find_kernel_btf_id
0.07% 0.00% bench bench [.] find_attach_btf_id
0.07% 0.00% bench bench [.] find_btf_by_prefix_kind
0.07% 0.00% bench bench [.] btf__find_by_name_kind
0.07% 0.07% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.06% 0.01% bench bench [.] btf_validate_type
0.06% 0.02% bench bench [.] btf_find_by_name_kind
0.05% 0.05% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.04% 0.00% bench libc.so.6 [.] __GI___libc_write
0.04% 0.00% bench [kernel.kallsyms] [k] ksys_write
0.04% 0.00% bench [kernel.kallsyms] [k] vfs_write
0.04% 0.00% bench [kernel.kallsyms] [k] file_tty_write.constprop.0
0.04% 0.00% bench [kernel.kallsyms] [k] iterate_tty_write
0.04% 0.00% bench [kernel.kallsyms] [k] n_tty_write
0.04% 0.00% bench [kernel.kallsyms] [k] uart_write
0.04% 0.00% bench [kernel.kallsyms] [k] process_output_block
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.03% 0.02% bench bench [.] btf__type_by_id
0.03% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.03% 0.03% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.03% 0.00% bench bench [.] feat_supported
0.03% 0.00% bench bench [.] sys_bpf_fd
0.03% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.03% 0.02% bench bench [.] btf_parse_type_sec
0.03% 0.03% bench [kernel.kallsyms] [k] radix_tree_lookup
0.03% 0.00% bench bench [.] bpf_program__attach
0.03% 0.00% bench [kernel.kallsyms] [k] do_pte_missing
0.03% 0.02% bench libc.so.6 [.] __memmove_sse2_unaligned_erms
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.02% 0.00% bench bench [.] attach_kprobe
0.02% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.02% 0.00% bench [kernel.kallsyms] [k] do_read_fault
0.02% 0.00% bench [kernel.kallsyms] [k] __do_fault
0.02% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8108dbc8
0.02% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.02% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.02% 0.00% bench [kernel.kallsyms] [k] do_exit
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_replace_code
0.02% 0.00% bench [kernel.kallsyms] [k] perf_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] __fput
0.02% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.02% 0.02% bench bench [.] btf_kind
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_select_runtime
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_int_jit_compile
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_jit_binary_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] filemap_fault
0.02% 0.00% bench [kernel.kallsyms] [k] alloc_new_pack
0.02% 0.01% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.02% 0.00% bench bench [.] bpf_object__probe_loading
0.02% 0.00% bench bench [.] sys_bpf_prog_load
0.02% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.02% 0.01% bench bench [.] btf_validate_str
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_reg
0.02% 0.00% bench [kernel.kallsyms] [k] filemap_read_folio
0.02% 0.00% bench [kernel.kallsyms] [k] netfs_read_folio
0.02% 0.00% bench [kernel.kallsyms] [k] netfs_begin_read
0.02% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.02% 0.00% bench bench [.] kernel_supports
0.02% 0.01% bench bench [.] btf__str_by_offset
0.02% 0.01% bench libc.so.6 [.] __strcmp_sse2
0.02% 0.00% bench [kernel.kallsyms] [k] perf_release
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.02% 0.00% bench [kernel.kallsyms] [k] _free_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.02% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.02% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __arm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function_nolock
0.02% 0.00% bench [kernel.kallsyms] [k] __disarm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.02% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.02% 0.00% bench [kernel.kallsyms] [k] v9fs_issue_read
0.02% 0.00% bench [kernel.kallsyms] [k] p9_client_read
0.02% 0.00% bench [kernel.kallsyms] [k] p9_client_read_once
0.01% 0.01% bench [kernel.kallsyms] [k] memset_orig
0.01% 0.00% bench bench [.] bump_rlimit_memlock
0.01% 0.00% bench bench [.] probe_memcg_account
0.01% 0.01% bench bench [.] btf_vlen
0.01% 0.00% bench [kernel.kallsyms] [k] set_memory_rox
0.01% 0.00% bench [kernel.kallsyms] [k] change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_zc_rpc.constprop.0
0.01% 0.00% bench [kernel.kallsyms] [k] p9_virtio_zc_request
0.01% 0.00% bench bench [.] bpf_object__sanitize_maps
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench bench [.] probe_kern_array_mmap
0.01% 0.00% bench bench [.] bpf_map_create
0.01% 0.00% bench bench [.] probe_kern_prog_name
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_map_object
0.01% 0.00% bench bench [.] btf_validate_id
0.01% 0.01% bench [kernel.kallsyms] [k] default_send_IPI_allbutself
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_check_record
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.01% 0.01% bench [kernel.kallsyms] [k] default_send_IPI_self
0.01% 0.01% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.00% bench [kernel.kallsyms] [k] p9_get_mapped_pages.part.0.constprop.0
0.01% 0.01% bench bench [.] btf_strs_data
0.01% 0.01% bench [kernel.kallsyms] [k] mem_cgroup_commit_charge
0.01% 0.01% bench bench [.] btf_add_type_offs_mem
0.01% 0.00% bench bench [.] btf_type_size
0.01% 0.00% bench [unknown] [k] 0x000055fcb8980c50
0.01% 0.00% bench [unknown] [k] 0x32322e3239312820
0.01% 0.00% bench [unknown] [k] 0x35372e30312d2820
0.01% 0.00% bench [unknown] [k] 0x38392e3820202820
0.01% 0.00% bench [kernel.kallsyms] [k] zap_pte_range
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_release
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_put_deferred
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_event
0.01% 0.00% bench [kernel.kallsyms] [k] perf_iterate_sb
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_check
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap.constprop.0
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_output
0.01% 0.00% bench [kernel.kallsyms] [k] kmalloc_large
0.01% 0.00% bench [kernel.kallsyms] [k] perf_output_end
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.01% 0.00% bench [kernel.kallsyms] [k] __kmalloc_large_node
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_rpc
0.01% 0.00% bench [kernel.kallsyms] [k] perf_output_put_handle
0.01% 0.00% bench [kernel.kallsyms] [k] rmqueue
0.01% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.01% 0.00% bench [kernel.kallsyms] [k] cpa_flush
0.01% 0.00% bench [kernel.kallsyms] [k] do_output_char
0.01% 0.00% bench [kernel.kallsyms] [k] irq_work_queue
0.01% 0.00% bench [kernel.kallsyms] [k] schedule
0.01% 0.01% bench libc.so.6 [.] _IO_file_xsgetn
0.01% 0.00% bench [kernel.kallsyms] [k] __mem_cgroup_charge
0.01% 0.00% bench [kernel.kallsyms] [k] __schedule
0.01% 0.00% bench [kernel.kallsyms] [k] arch_irq_work_raise
0.01% 0.00% bench bench [.] btf__name_by_offset
0.00% 0.00% bench [kernel.kallsyms] [k] allocate_slab
0.00% 0.00% bench [kernel.kallsyms] [k] _raw_spin_trylock
0.00% 0.00% bench [kernel.kallsyms] [k] iter_xarray_get_pages
0.00% 0.00% bench [unknown] [k] 0x0000000000000040
0.00% 0.00% bench bench [.] bpf_object__create_maps
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_sysdep_start
0.00% 0.00% bench [kernel.kallsyms] [k] percpu_counter_add_batch
0.00% 0.00% bench [kernel.kallsyms] [k] within_kprobe_blacklist
0.00% 0.00% bench bench [.] bpf_object__populate_internal_map
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] dl_main
0.00% 0.00% bench libm.so.6 [.] __sqrt
0.00% 0.00% bench bench [.] bpf_map_update_elem
0.00% 0.00% bench bench [.] bpf_object__relocate
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.00% 0.00% bench [unknown] [k] 0x000000000000003f
0.00% 0.00% bench bench [.] bpf_program_fixup_func_info
0.00% 0.00% bench bench [.] bpf_program__attach_perf_event_opts
0.00% 0.00% bench [kernel.kallsyms] [k] folio_add_lru
0.00% 0.00% bench [kernel.kallsyms] [k] iov_iter_advance
0.00% 0.00% bench [kernel.kallsyms] [k] strncpy_from_user
0.00% 0.00% bench bench [.] probe_kern_arg_ctx_tag
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___read_nocancel
0.00% 0.00% bench libc.so.6 [.] __close
0.00% 0.00% bench libc.so.6 [.] __printf_fp
0.00% 0.00% bench [kernel.kallsyms] [k] folio_lruvec_lock_irqsave
0.00% 0.00% bench [kernel.kallsyms] [k] mas_walk
0.00% 0.00% bench [kernel.kallsyms] [k] map_update_elem
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___open64_nocancel
0.00% 0.00% bench [kernel.kallsyms] [k] folio_mark_accessed
0.00% 0.00% bench [kernel.kallsyms] [k] _copy_from_user
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] mmap64
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_close
0.00% 0.00% bench libc.so.6 [.] _int_realloc
0.00% 0.00% bench [unknown] [k] 0x2020207374696820
0.00% 0.00% bench [unknown] [k] 0x2d6769727427206b
0.00% 0.00% bench [unknown] [k] 0x31342e33312d2820
0.00% 0.00% bench [unknown] [k] 0x31382e3631202820
0.00% 0.00% bench [unknown] [k] 0x33372e34332d2820
0.00% 0.00% bench [unknown] [k] 0x33392e3531202820
0.00% 0.00% bench [unknown] [k] 0x38342e36312d2820
0.00% 0.00% bench [unknown] [k] 0x68636e6562207075
0.00% 0.00% bench [kernel.kallsyms] [k] xas_find
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_openat
0.00% 0.00% bench bench [.] _start
0.00% 0.00% bench [kernel.kallsyms] [k] do_sys_openat2
0.00% 0.00% bench [kernel.kallsyms] [k] ksys_mmap_pgoff
0.00% 0.00% bench [kernel.kallsyms] [k] module_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_unbuffered_read_iter
0.00% 0.00% bench libc.so.6 [.] __munmap
0.00% 0.00% bench [kernel.kallsyms] [k] __register_ftrace_function
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.00% 0.00% bench [kernel.kallsyms] [k] do_filp_open
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_unbuffered_read_iter_locked
0.00% 0.00% bench [kernel.kallsyms] [k] vm_mmap_pgoff
0.00% 0.00% bench libc.so.6 [.] _int_malloc
0.00% 0.00% bench libc.so.6 [.] __strlen_sse2
0.00% 0.00% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_area_node
0.00% 0.00% bench [kernel.kallsyms] [k] do_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mm
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] path_openat
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] arch_ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] cpa_process_alias
0.00% 0.00% bench [kernel.kallsyms] [k] do_open
0.00% 0.00% bench [kernel.kallsyms] [k] mmap_region
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_small_pages_range_noflush
0.00% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] create_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] do_dentry_open
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_pages_pte_range
0.00% 0.00% bench [kernel.kallsyms] [k] __change_page_attr
0.00% 0.00% bench [kernel.kallsyms] [k] __pte_alloc_kernel
0.00% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.00% 0.00% bench [kernel.kallsyms] [k] v9fs_file_open
0.00% 0.00% bench [kernel.kallsyms] [k] __filemap_get_folio
0.00% 0.00% bench [kernel.kallsyms] [k] __split_large_page
0.00% 0.00% bench [kernel.kallsyms] [k] _vm_unmap_aliases
0.00% 0.00% bench [kernel.kallsyms] [k] lru_add_drain
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_alloc_request
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_open
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_finish
0.00% 0.00% bench libc.so.6 [.] __GI___printf_fp_l
0.00% 0.00% bench [kernel.kallsyms] [k] __kmalloc
0.00% 0.00% bench [kernel.kallsyms] [k] __purge_vmap_area_lazy
0.00% 0.00% bench [kernel.kallsyms] [k] __rmqueue_pcplist
0.00% 0.00% bench [kernel.kallsyms] [k] filemap_map_pages
0.00% 0.00% bench [kernel.kallsyms] [k] lru_add_drain_cpu
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] shmem_fault
0.00% 0.00% bench bench [.] bpf_object__open_skeleton
0.00% 0.00% bench libc.so.6 [.] __unregister_atfork
0.00% 0.00% bench [kernel.kallsyms] [k] ___slab_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] check_kprobe_address_safe
0.00% 0.00% bench [kernel.kallsyms] [k] folio_batch_move_lru
0.00% 0.00% bench [kernel.kallsyms] [k] iov_iter_get_pages_alloc2
0.00% 0.00% bench [kernel.kallsyms] [k] lock_vma_under_rcu
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_rreq_prepare_read
0.00% 0.00% bench [kernel.kallsyms] [k] next_uptodate_folio
0.00% 0.00% bench [kernel.kallsyms] [k] p9_virtio_request
0.00% 0.00% bench [kernel.kallsyms] [k] pcpu_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] rmqueue_bulk
0.00% 0.00% bench [kernel.kallsyms] [k] shmem_get_folio_gfp
0.00% 0.00% bench [kernel.kallsyms] [k] zap_present_ptes
0.00% 0.00% bench bench [.] bpf_object__open_mem
0.00% 0.00% bench bench [.] btf_add_type_idx_entry
0.00% 0.00% bench libc.so.6 [.] __vfprintf_internal
0.00% 0.00% bench [unknown] [.] 0xdfac2c2953a319ce
#
# (Tip: To separate samples by time use perf report --sort time,overhead,sym)
#
On Tue, Apr 30, 2024 at 6:32 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Mon, 29 Apr 2024 13:25:04 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > >
> > > Hi Andrii,
> > >
> > > On Thu, 25 Apr 2024 13:31:53 -0700
> > > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> > >
> > > > Hey Masami,
> > > >
> > > > I can't really review most of that code as I'm completely unfamiliar
> > > > with all those inner workings of fprobe/ftrace/function_graph. I left
> > > > a few comments where there were somewhat more obvious BPF-related
> > > > pieces.
> > > >
> > > > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > > > and then with your series applied on top. Just to see if there are any
> > > > regressions. I think it will be a useful data point for you.
> > >
> > > Thanks for testing!
> > >
> > > >
> > > > You should be already familiar with the bench tool we have in BPF
> > > > selftests (I used it on some other patches for your tree).
> > >
> > > Which patches do we need?
> > >
> >
> > You mean for this `bench` tool? They are part of BPF selftests (under
> > tools/testing/selftests/bpf), you can build them by running:
> >
> > $ make RELEASE=1 -j$(nproc) bench
> >
> > After that you'll get a self-contained `bench` binary, which has all
> > the self-contained benchmarks.
> >
> > You might also find a small script (benchs/run_bench_trigger.sh inside
> > BPF selftests directory) helpful, it collects final summary of the
> > benchmark run and optionally accepts a specific set of benchmarks. So
> > you can use it like this:
> >
> > $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> > kprobe : 18.731 ± 0.639M/s
> > kprobe-multi : 23.938 ± 0.612M/s
> >
> > By default it will run a wider set of benchmarks (no uprobes, but a
> > bunch of extra fentry/fexit tests and stuff like this).
>
> origin:
> # benchs/run_bench_trigger.sh
> kretprobe : 1.329 ± 0.007M/s
> kretprobe-multi: 1.341 ± 0.004M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.288 ± 0.014M/s
> kretprobe-multi: 1.365 ± 0.002M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.329 ± 0.002M/s
> kretprobe-multi: 1.331 ± 0.011M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.311 ± 0.003M/s
> kretprobe-multi: 1.318 ± 0.002M/s
>
> patched:
>
> # benchs/run_bench_trigger.sh
> kretprobe : 1.274 ± 0.003M/s
> kretprobe-multi: 1.397 ± 0.002M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.307 ± 0.002M/s
> kretprobe-multi: 1.406 ± 0.004M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.279 ± 0.004M/s
> kretprobe-multi: 1.330 ± 0.014M/s
> # benchs/run_bench_trigger.sh
> kretprobe : 1.256 ± 0.010M/s
> kretprobe-multi: 1.412 ± 0.003M/s
>
> Hmm, in my case, the differences seem smaller (~3%?).
> I attached perf report results for those, but I don't see a large difference.

I ran my benchmarks on bare metal machine (and quite powerful at that,
you can see my numbers are almost 10x of yours), with mitigations
disabled, no retpolines, etc. If you have any of those mitigations it
might result in smaller differences, probably. If you are running
inside QEMU/VM, the results might differ significantly as well.

> >
> > >
> > > >
> > > > BASELINE
> > > > ========
> > > > kprobe : 24.634 ± 0.205M/s
> > > > kprobe-multi : 28.898 ± 0.531M/s
> > > > kretprobe : 10.478 ± 0.015M/s
> > > > kretprobe-multi: 11.012 ± 0.063M/s
> > > >
> > > > THIS PATCH SET ON TOP
> > > > =====================
> > > > kprobe : 25.144 ± 0.027M/s (+2%)
> > > > kprobe-multi : 28.909 ± 0.074M/s
> > > > kretprobe : 9.482 ± 0.008M/s (-9.5%)
> > > > kretprobe-multi: 13.688 ± 0.027M/s (+24%)
> > >
> > > This looks good. Kretprobe should also use kretprobe-multi (fprobe)
> > > eventually because it should be a single callback version of
> > > kretprobe-multi.
>
> I ran another benchmark (prctl loop, attached), the origin kernel result is here;
>
> # sh ./benchmark.sh
> count = 10000000, took 6.748133 sec
>
> And the patched kernel result;
>
> # sh ./benchmark.sh
> count = 10000000, took 6.644095 sec
>
> I confirmed that the perf result has no big difference.
>
> Thank you,
>
> > > >
> > > > These numbers are pretty stable and look to be more or less representative.
> > > >
> > > > As you can see, kprobes got a bit faster, kprobe-multi seems to be
> > > > about the same, though.
> > > >
> > > > Then (I suppose they are "legacy") kretprobes got quite noticeably
> > > > slower, almost by 10%. Not sure why, but looks real after re-running
> > > > benchmarks a bunch of times and getting stable results.
> > >
> > > Hmm, kretprobe on x86 should use ftrace + rethook even with my series.
> > > So nothing should be changed. Maybe cache access pattern has been
> > > changed?
> > > I'll check it with tracefs (to remove the effect from bpf related changes)
> > >
> > > >
> > > > On the other hand, multi-kretprobes got significantly faster (+24%!).
> > > > Again, I don't know if it is expected or not, but it's a nice
> > > > improvement.
> > >
> > > Thanks!
> > >
> > > >
> > > > If you have any idea why kretprobes would get so much slower, it would
> > > > be nice to look into that and see if you can mitigate the regression
> > > > somehow. Thanks!
> > >
> > > OK, let me check it.
> > >
> > > Thank you!
> > >
> > > > >
> > > > > 51 files changed, 2325 insertions(+), 882 deletions(-)
> > > > > create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
> > > > >
> > > > > --
> > > > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > >
> > > --
> > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Tue, 30 Apr 2024 09:29:40 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> On Tue, Apr 30, 2024 at 6:32 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > On Mon, 29 Apr 2024 13:25:04 -0700
> > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > > On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > >
> > > > Hi Andrii,
> > > >
> > > > On Thu, 25 Apr 2024 13:31:53 -0700
> > > > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > > Hey Masami,
> > > > >
> > > > > I can't really review most of that code as I'm completely unfamiliar
> > > > > with all those inner workings of fprobe/ftrace/function_graph. I left
> > > > > a few comments where there were somewhat more obvious BPF-related
> > > > > pieces.
> > > > >
> > > > > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > > > > and then with your series applied on top. Just to see if there are any
> > > > > regressions. I think it will be a useful data point for you.
> > > >
> > > > Thanks for testing!
> > > >
> > > > >
> > > > > You should be already familiar with the bench tool we have in BPF
> > > > > selftests (I used it on some other patches for your tree).
> > > >
> > > > Which patches do we need?
> > > >
> > >
> > > You mean for this `bench` tool? They are part of BPF selftests (under
> > > tools/testing/selftests/bpf), you can build them by running:
> > >
> > > $ make RELEASE=1 -j$(nproc) bench
> > >
> > > After that you'll get a self-contained `bench` binary, which has all
> > > the self-contained benchmarks.
> > >
> > > You might also find a small script (benchs/run_bench_trigger.sh inside
> > > BPF selftests directory) helpful, it collects final summary of the
> > > benchmark run and optionally accepts a specific set of benchmarks. So
> > > you can use it like this:
> > >
> > > $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> > > kprobe : 18.731 ± 0.639M/s
> > > kprobe-multi : 23.938 ± 0.612M/s
> > >
> > > By default it will run a wider set of benchmarks (no uprobes, but a
> > > bunch of extra fentry/fexit tests and stuff like this).
> >
> > origin:
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.329 ± 0.007M/s
> > kretprobe-multi: 1.341 ± 0.004M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.288 ± 0.014M/s
> > kretprobe-multi: 1.365 ± 0.002M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.329 ± 0.002M/s
> > kretprobe-multi: 1.331 ± 0.011M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.311 ± 0.003M/s
> > kretprobe-multi: 1.318 ± 0.002M/s
> >
> > patched:
> >
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.274 ± 0.003M/s
> > kretprobe-multi: 1.397 ± 0.002M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.307 ± 0.002M/s
> > kretprobe-multi: 1.406 ± 0.004M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.279 ± 0.004M/s
> > kretprobe-multi: 1.330 ± 0.014M/s
> > # benchs/run_bench_trigger.sh
> > kretprobe : 1.256 ± 0.010M/s
> > kretprobe-multi: 1.412 ± 0.003M/s
> >
> > Hmm, in my case, the differences seem smaller (~3%?).
> > I attached perf report results for those, but I don't see a large difference.
>
> I ran my benchmarks on bare metal machine (and quite powerful at that,
> you can see my numbers are almost 10x of yours), with mitigations
> disabled, no retpolines, etc. If you have any of those mitigations it
> might result in smaller differences, probably. If you are running
> inside QEMU/VM, the results might differ significantly as well.
I ran it on my bare metal machines again, but could not find any difference
between the two kernels. However, I think I have the Intel mitigations
enabled, which might explain the difference from your results.
Can you run the benchmark under perf record? If there is such a difference,
it should show up in the recording.
e.g.
# perf record -g -o perf.data-kretprobe-nopatch-raw-bpf -- bench -w2 -d5 -a trig-kretprobe
# perf report -G -i perf.data-kretprobe-nopatch-raw-bpf -k $VMLINUX --stdio > perf-out-kretprobe-nopatch-raw-bpf
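You can also compare the two profiles directly with perf diff, which prints
the per-symbol overhead delta between a baseline perf.data file and a second
one. A minimal sketch (the patched-run file name below is only illustrative):

# perf diff perf.data-kretprobe-nopatch-raw-bpf perf.data-kretprobe-patched-raw-bpf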
I have attached the results from my side.
The interesting point is that the functions in the result are not touched
by this series. Thus, if you are seeing the kretprobe regression, there may
be another cause.
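To rule out the BPF side entirely, the same return probe can be exercised
through tracefs alone; a minimal sketch (the event name is arbitrary, and
__x64_sys_getpgid is just the symbol the bench loop hits):

# cd /sys/kernel/tracing
# echo 'r:testret __x64_sys_getpgid' >> kprobe_events
# echo 1 > events/kprobes/testret/enable
# cat trace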
Thank you,
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'cycles:P'
# Event count (approx.): 34378649217
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.62% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
--99.54%--syscall
|
|--55.14%--entry_SYSCALL_64
| |
| |--36.05%--do_syscall_64
| | |
| | |--19.84%--x64_sys_call
| | | |
| | | |--14.45%--arch_rethook_trampoline
| | | | |
| | | | --14.42%--arch_rethook_trampoline_callback
| | | | |
| | | | |--11.64%--rethook_trampoline_handler
| | | | | |
| | | | | |--7.77%--kretprobe_rethook_handler
| | | | | | |
| | | | | | --7.63%--kretprobe_dispatcher
| | | | | | |
| | | | | | --6.15%--kretprobe_perf_func
| | | | | | |
| | | | | | |--3.14%--trace_call_bpf
| | | | | | | |
| | | | | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | |--1.71%--objpool_push.isra.0
| | | | | |
| | | | | --0.79%--kretprobe_dispatcher
| | | | |
| | | | --2.18%--kretprobe_rethook_handler
| | | |
| | | --4.36%--__x64_sys_getpgid
| | | |
| | | |--2.03%--do_getpgid
| | | | |
| | | | --0.95%--find_task_by_vpid
| | | |
| | | |--0.89%--ftrace_trampoline
| | | |
| | | --0.80%--__rcu_read_unlock
| | |
| | |--6.79%--__x64_sys_getpgid
| | | |
| | | --5.92%--ftrace_trampoline
| | | |
| | | |--5.13%--kprobe_ftrace_handler
| | | | |
| | | | --3.02%--pre_handler_kretprobe
| | | | |
| | | | --2.19%--rethook_try_get
| | | |
| | | --0.67%--pre_handler_kretprobe
| | |
| | |--3.31%--arch_rethook_trampoline
| | |
| | |--1.70%--ftrace_trampoline
| | |
| | --1.64%--syscall_exit_to_user_mode
| |
| --1.28%--x64_sys_call
|
|--23.38%--entry_SYSRETQ_unsafe_stack
|
|--17.30%--syscall_return_via_sysret
|
|--0.55%--do_syscall_64
|
--0.53%--arch_rethook_trampoline
99.61% 2.41% bench libc.so.6 [.] syscall
|
|--97.20%--syscall
| |
| |--53.42%--entry_SYSCALL_64
| | |
| | |--36.11%--do_syscall_64
| | | |
| | | |--19.90%--x64_sys_call
| | | | |
| | | | |--14.45%--arch_rethook_trampoline
| | | | | |
| | | | | --14.42%--arch_rethook_trampoline_callback
| | | | | |
| | | | | |--11.64%--rethook_trampoline_handler
| | | | | | |
| | | | | | |--7.77%--kretprobe_rethook_handler
| | | | | | | |
| | | | | | | --7.63%--kretprobe_dispatcher
| | | | | | | |
| | | | | | | --6.15%--kretprobe_perf_func
| | | | | | | |
| | | | | | | |--3.14%--trace_call_bpf
| | | | | | | | |
| | | | | | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | | |
| | | | | | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | |--1.71%--objpool_push.isra.0
| | | | | | |
| | | | | | --0.79%--kretprobe_dispatcher
| | | | | |
| | | | | --2.18%--kretprobe_rethook_handler
| | | | |
| | | | --4.36%--__x64_sys_getpgid
| | | | |
| | | | |--2.03%--do_getpgid
| | | | | |
| | | | | --0.95%--find_task_by_vpid
| | | | |
| | | | |--0.89%--ftrace_trampoline
| | | | |
| | | | --0.80%--__rcu_read_unlock
| | | |
| | | |--6.79%--__x64_sys_getpgid
| | | | |
| | | | --5.92%--ftrace_trampoline
| | | | |
| | | | |--5.13%--kprobe_ftrace_handler
| | | | | |
| | | | | --3.02%--pre_handler_kretprobe
| | | | | |
| | | | | --2.19%--rethook_try_get
| | | | |
| | | | --0.67%--pre_handler_kretprobe
| | | |
| | | |--3.31%--arch_rethook_trampoline
| | | |
| | | |--1.70%--ftrace_trampoline
| | | |
| | | --1.64%--syscall_exit_to_user_mode
| | |
| | --1.28%--x64_sys_call
| |
| |--23.38%--entry_SYSRETQ_unsafe_stack
| |
| |--17.30%--syscall_return_via_sysret
| |
| |--0.55%--do_syscall_64
| |
| --0.53%--arch_rethook_trampoline
|
--2.41%--start_thread
syscall
|
--1.78%--entry_SYSCALL_64
56.42% 16.71% bench [kernel.kallsyms] [k] entry_SYSCALL_64
|
|--39.70%--entry_SYSCALL_64
| |
| |--36.19%--do_syscall_64
| | |
| | |--19.98%--x64_sys_call
| | | |
| | | |--14.45%--arch_rethook_trampoline
| | | | |
| | | | --14.42%--arch_rethook_trampoline_callback
| | | | |
| | | | |--11.64%--rethook_trampoline_handler
| | | | | |
| | | | | |--7.77%--kretprobe_rethook_handler
| | | | | | |
| | | | | | --7.63%--kretprobe_dispatcher
| | | | | | |
| | | | | | --6.15%--kretprobe_perf_func
| | | | | | |
| | | | | | |--3.14%--trace_call_bpf
| | | | | | | |
| | | | | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | |--1.71%--objpool_push.isra.0
| | | | | |
| | | | | --0.79%--kretprobe_dispatcher
| | | | |
| | | | --2.18%--kretprobe_rethook_handler
| | | |
| | | --4.36%--__x64_sys_getpgid
| | | |
| | | |--2.03%--do_getpgid
| | | | |
| | | | --0.95%--find_task_by_vpid
| | | |
| | | |--0.89%--ftrace_trampoline
| | | |
| | | --0.80%--__rcu_read_unlock
| | |
| | |--6.79%--__x64_sys_getpgid
| | | |
| | | --5.92%--ftrace_trampoline
| | | |
| | | |--5.13%--kprobe_ftrace_handler
| | | | |
| | | | --3.02%--pre_handler_kretprobe
| | | | |
| | | | --2.19%--rethook_try_get
| | | |
| | | --0.67%--pre_handler_kretprobe
| | |
| | |--3.31%--arch_rethook_trampoline
| | |
| | |--1.70%--ftrace_trampoline
| | |
| | --1.64%--syscall_exit_to_user_mode
| |
| --1.28%--x64_sys_call
|
--16.71%--start_thread
syscall
|
|--15.58%--entry_SYSCALL_64
|
--0.84%--syscall_return_via_sysret
36.98% 2.62% bench [kernel.kallsyms] [k] do_syscall_64
|
|--34.35%--do_syscall_64
| |
| |--19.98%--x64_sys_call
| | |
| | |--14.45%--arch_rethook_trampoline
| | | |
| | | --14.42%--arch_rethook_trampoline_callback
| | | |
| | | |--11.64%--rethook_trampoline_handler
| | | | |
| | | | |--7.77%--kretprobe_rethook_handler
| | | | | |
| | | | | --7.63%--kretprobe_dispatcher
| | | | | |
| | | | | --6.15%--kretprobe_perf_func
| | | | | |
| | | | | |--3.14%--trace_call_bpf
| | | | | | |
| | | | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | |--1.71%--objpool_push.isra.0
| | | | |
| | | | --0.79%--kretprobe_dispatcher
| | | |
| | | --2.18%--kretprobe_rethook_handler
| | |
| | --4.36%--__x64_sys_getpgid
| | |
| | |--2.03%--do_getpgid
| | | |
| | | --0.95%--find_task_by_vpid
| | |
| | |--0.89%--ftrace_trampoline
| | |
| | --0.80%--__rcu_read_unlock
| |
| |--6.79%--__x64_sys_getpgid
| | |
| | --5.92%--ftrace_trampoline
| | |
| | |--5.13%--kprobe_ftrace_handler
| | | |
| | | --3.02%--pre_handler_kretprobe
| | | |
| | | --2.19%--rethook_try_get
| | |
| | --0.67%--pre_handler_kretprobe
| |
| |--3.31%--arch_rethook_trampoline
| |
| |--1.70%--ftrace_trampoline
| |
| --1.64%--syscall_exit_to_user_mode
|
--2.62%--start_thread
syscall
|
--2.37%--entry_SYSCALL_64
|
--2.13%--do_syscall_64
25.29% 25.13% bench [kernel.kallsyms] [k] entry_SYSRETQ_unsafe_stack
|
--25.12%--start_thread
syscall
|
--23.22%--entry_SYSRETQ_unsafe_stack
21.46% 1.81% bench [kernel.kallsyms] [k] x64_sys_call
|
|--19.65%--x64_sys_call
| |
| |--14.45%--arch_rethook_trampoline
| | |
| | --14.42%--arch_rethook_trampoline_callback
| | |
| | |--11.64%--rethook_trampoline_handler
| | | |
| | | |--7.77%--kretprobe_rethook_handler
| | | | |
| | | | --7.63%--kretprobe_dispatcher
| | | | |
| | | | --6.15%--kretprobe_perf_func
| | | | |
| | | | |--3.14%--trace_call_bpf
| | | | | |
| | | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | |--1.71%--objpool_push.isra.0
| | | |
| | | --0.79%--kretprobe_dispatcher
| | |
| | --2.18%--kretprobe_rethook_handler
| |
| --4.36%--__x64_sys_getpgid
| |
| |--2.03%--do_getpgid
| | |
| | --0.95%--find_task_by_vpid
| |
| |--0.89%--ftrace_trampoline
| |
| --0.80%--__rcu_read_unlock
|
--1.81%--start_thread
syscall
entry_SYSCALL_64
|
|--1.21%--x64_sys_call
|
--0.59%--do_syscall_64
18.62% 3.53% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--15.09%--arch_rethook_trampoline
| |
| --14.85%--arch_rethook_trampoline_callback
| |
| |--11.94%--rethook_trampoline_handler
| | |
| | |--7.77%--kretprobe_rethook_handler
| | | |
| | | --7.63%--kretprobe_dispatcher
| | | |
| | | --6.15%--kretprobe_perf_func
| | | |
| | | |--3.14%--trace_call_bpf
| | | | |
| | | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | |--1.71%--objpool_push.isra.0
| | |
| | --0.79%--kretprobe_dispatcher
| |
| --2.18%--kretprobe_rethook_handler
|
--3.53%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
|
--3.20%--arch_rethook_trampoline
17.47% 16.63% bench [kernel.kallsyms] [k] syscall_return_via_sysret
|
|--16.62%--start_thread
| syscall
| |
| --16.46%--syscall_return_via_sysret
|
--0.84%--syscall_return_via_sysret
15.11% 0.40% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--14.71%--arch_rethook_trampoline_callback
|
|--11.94%--rethook_trampoline_handler
| |
| |--7.77%--kretprobe_rethook_handler
| | |
| | --7.63%--kretprobe_dispatcher
| | |
| | --6.15%--kretprobe_perf_func
| | |
| | |--3.14%--trace_call_bpf
| | | |
| | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| |--1.71%--objpool_push.isra.0
| |
| --0.79%--kretprobe_dispatcher
|
--2.18%--kretprobe_rethook_handler
12.59% 2.12% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--10.47%--rethook_trampoline_handler
| |
| |--7.77%--kretprobe_rethook_handler
| | |
| | --7.63%--kretprobe_dispatcher
| | |
| | --6.15%--kretprobe_perf_func
| | |
| | |--3.14%--trace_call_bpf
| | | |
| | | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| |--1.71%--objpool_push.isra.0
| |
| --0.79%--kretprobe_dispatcher
|
--2.12%--start_thread
syscall
|
--1.76%--entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
|
--1.73%--arch_rethook_trampoline_callback
|
--1.22%--rethook_trampoline_handler
11.33% 0.61% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
|--10.72%--__x64_sys_getpgid
| |
| |--6.81%--ftrace_trampoline
| | |
| | |--5.13%--kprobe_ftrace_handler
| | | |
| | | --3.02%--pre_handler_kretprobe
| | | |
| | | --2.19%--rethook_try_get
| | |
| | --0.67%--pre_handler_kretprobe
| |
| |--2.03%--do_getpgid
| | |
| | --0.95%--find_task_by_vpid
| |
| --0.80%--__rcu_read_unlock
|
--0.61%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
10.00% 2.20% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--7.80%--kretprobe_rethook_handler
| |
| --7.63%--kretprobe_dispatcher
| |
| --6.15%--kretprobe_perf_func
| |
| |--3.14%--trace_call_bpf
| | |
| | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.20%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
|
--2.14%--kretprobe_rethook_handler
9.03% 3.01% bench ftrace_trampoline [k] ftrace_trampoline
|
|--6.02%--ftrace_trampoline
| |
| |--5.13%--kprobe_ftrace_handler
| | |
| | --3.02%--pre_handler_kretprobe
| | |
| | --2.19%--rethook_try_get
| |
| --0.67%--pre_handler_kretprobe
|
--3.01%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
|
|--1.60%--ftrace_trampoline
|
--1.01%--x64_sys_call
|
--0.89%--__x64_sys_getpgid
ftrace_trampoline
8.56% 2.20% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--6.37%--kretprobe_dispatcher
| |
| --6.15%--kretprobe_perf_func
| |
| |--3.14%--trace_call_bpf
| | |
| | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.20%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--1.45%--kretprobe_rethook_handler
| |
| --1.32%--kretprobe_dispatcher
|
--0.74%--kretprobe_dispatcher
6.32% 0.98% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
|--5.35%--kretprobe_perf_func
| |
| |--3.14%--trace_call_bpf
| | |
| | --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.13%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.98%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
|
--0.88%--kretprobe_perf_func
5.36% 2.00% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--3.37%--kprobe_ftrace_handler
| |
| --3.02%--pre_handler_kretprobe
| |
| --2.19%--rethook_try_get
|
--2.00%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
|
--1.89%--__x64_sys_getpgid
ftrace_trampoline
|
--1.82%--kprobe_ftrace_handler
3.74% 1.32% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--2.42%--pre_handler_kretprobe
| |
| --2.19%--rethook_try_get
|
--1.32%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
|
|--0.69%--kprobe_ftrace_handler
| |
| --0.63%--pre_handler_kretprobe
|
--0.63%--pre_handler_kretprobe
3.51% 1.63% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--1.88%--trace_call_bpf
| |
| --1.21%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.63%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
|
--1.57%--kretprobe_perf_func
|
--1.26%--trace_call_bpf
2.63% 1.00% bench [kernel.kallsyms] [k] do_getpgid
|
|--1.62%--do_getpgid
| |
| --0.95%--find_task_by_vpid
|
--1.00%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
|
--0.83%--__x64_sys_getpgid
2.34% 2.27% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.27%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
|
|--1.21%--trace_call_bpf
| bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.06%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
2.33% 2.28% bench [kernel.kallsyms] [k] rethook_try_get
|
--2.28%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
|
--2.19%--pre_handler_kretprobe
rethook_try_get
1.96% 1.61% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
--1.61%--start_thread
syscall
entry_SYSCALL_64
|
--1.32%--do_syscall_64
syscall_exit_to_user_mode
1.81% 1.75% bench [kernel.kallsyms] [k] objpool_push.isra.0
|
--1.75%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
|
--1.71%--objpool_push.isra.0
1.44% 0.79% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--0.79%--start_thread
| syscall
| entry_SYSCALL_64
| do_syscall_64
| x64_sys_call
| __x64_sys_getpgid
|
--0.65%--find_task_by_vpid
1.10% 0.81% bench [kernel.kallsyms] [k] __rcu_read_unlock
|
--0.81%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
|
--0.56%--__x64_sys_getpgid
|
--0.53%--__rcu_read_unlock
0.91% 0.36% bench [kernel.kallsyms] [k] idr_find
|
--0.54%--idr_find
0.66% 0.56% bench [kernel.kallsyms] [k] __rcu_read_lock
|
--0.56%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
0.60% 0.20% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.59% 0.55% bench [kernel.kallsyms] [k] migrate_enable
|
--0.55%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
0.48% 0.44% bench [kernel.kallsyms] [k] migrate_disable
0.46% 0.11% bench [kernel.kallsyms] [k] radix_tree_lookup
0.44% 0.40% bench [kernel.kallsyms] [k] __radix_tree_lookup
0.44% 0.44% bench [kernel.kallsyms] [k] fpregs_assert_state_consistent
0.28% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.28% 0.00% bench bench [.] main
0.28% 0.00% bench bench [.] setup_benchmark
0.28% 0.00% bench bench [.] trigger_kretprobe_setup
0.25% 0.20% bench [kernel.kallsyms] [k] rethook_hook
0.22% 0.00% bench bench [.] trigger_bench__open_and_load
0.22% 0.00% bench bench [.] bpf_object__load_skeleton
0.22% 0.00% bench bench [.] bpf_object__load
0.22% 0.00% bench bench [.] bpf_object_load
0.18% 0.15% bench [kernel.kallsyms] [k] get_kprobe
0.14% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.14% 0.00% bench bench [.] libbpf_find_kernel_btf
0.14% 0.00% bench bench [.] btf__parse
0.14% 0.00% bench bench [.] btf_parse
0.14% 0.00% bench bench [.] btf_parse_raw
0.13% 0.13% bench [kernel.kallsyms] [k] amd_clear_divider
0.13% 0.08% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.11% 0.00% bench bench [.] btf_new
0.09% 0.06% bench bench [.] syscall@plt
0.08% 0.01% bench bench [.] btf_sanity_check
0.08% 0.08% bench bench [.] trigger_producer
0.08% 0.00% bench bench [.] bpf_object__load_progs
0.08% 0.00% bench bench [.] bpf_object_load_prog
0.07% 0.00% bench bench [.] libbpf_prepare_prog_load
0.07% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.07% 0.00% bench bench [.] find_kernel_btf_id
0.07% 0.00% bench bench [.] find_attach_btf_id
0.07% 0.00% bench bench [.] find_btf_by_prefix_kind
0.07% 0.00% bench bench [.] btf__find_by_name_kind
0.06% 0.00% bench [kernel.kallsyms] [k] kprobe_register
0.06% 0.03% bench bench [.] btf_validate_type
0.06% 0.00% bench [kernel.kallsyms] [k] arch_ftrace_update_code
0.06% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.05% 0.00% bench bench [.] bpf_program__attach
0.05% 0.00% bench bench [.] attach_kprobe
0.05% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.05% 0.00% bench [kernel.kallsyms] [k] __x64_sys_perf_event_open
0.05% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.05% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.05% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.05% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.05% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.05% 0.04% bench [kernel.kallsyms] [k] ftrace_replace_code
0.05% 0.02% bench bench [.] btf_find_by_name_kind
0.04% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_apic_timer_interrupt
0.04% 0.00% bench [kernel.kallsyms] [k] sysvec_apic_timer_interrupt
0.04% 0.00% bench [unknown] [k] 0000000000000000
0.04% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.04% 0.00% bench [kernel.kallsyms] [k] do_exit
0.04% 0.00% bench libc.so.6 [.] read
0.04% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8fac2f2c
0.03% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.03% 0.00% bench [kernel.kallsyms] [k] ____fput
0.03% 0.00% bench [kernel.kallsyms] [k] __fput
0.03% 0.00% bench [kernel.kallsyms] [k] perf_release
0.03% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.03% 0.00% bench [kernel.kallsyms] [k] _free_event
0.03% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.03% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.03% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.03% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] arm_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.03% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disarm_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disarm_kprobe_ftrace
0.03% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.03% 0.01% bench bench [.] btf_parse_type_sec
0.03% 0.01% bench libc.so.6 [.] __memmove_avx_unaligned_erms
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_read
0.03% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.03% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.03% 0.00% bench [kernel.kallsyms] [k] kernfs_fop_read_iter
0.03% 0.01% bench bench [.] btf__type_by_id
0.03% 0.03% bench bench [.] btf_kind
0.02% 0.00% bench [kernel.kallsyms] [k] __sysvec_apic_timer_interrupt
0.02% 0.00% bench [kernel.kallsyms] [k] hrtimer_interrupt
0.02% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.02% 0.02% bench bench [.] btf_type_size
0.02% 0.02% bench libc.so.6 [.] __strcmp_avx2
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] __hrtimer_run_queues
0.02% 0.00% bench [kernel.kallsyms] [k] tick_nohz_handler
0.02% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_thermal
0.02% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.02% 0.01% bench [kernel.kallsyms] [k] ftrace_test_record
0.02% 0.01% bench bench [.] btf_validate_str
0.02% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.02% 0.00% bench [kernel.kallsyms] [k] do_user_addr_fault
0.02% 0.01% bench bench [.] btf__str_by_offset
0.02% 0.00% bench [kernel.kallsyms] [k] sysvec_thermal
0.02% 0.00% bench [kernel.kallsyms] [k] __sysvec_thermal
0.02% 0.00% bench [kernel.kallsyms] [k] intel_thermal_interrupt
0.01% 0.00% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.01% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.01% 0.01% bench [kernel.kallsyms] [k] __irqentry_text_end
0.01% 0.00% bench [kernel.kallsyms] [k] __queue_work
0.01% 0.00% bench [kernel.kallsyms] [k] irq_exit_rcu
0.01% 0.00% bench [kernel.kallsyms] [k] __irq_exit_rcu
0.01% 0.00% bench [kernel.kallsyms] [k] __do_softirq
0.01% 0.00% bench [kernel.kallsyms] [k] run_timer_softirq
0.01% 0.00% bench [kernel.kallsyms] [k] tmigr_handle_remote
0.01% 0.00% bench [kernel.kallsyms] [k] timer_expire_remote
0.01% 0.00% bench [kernel.kallsyms] [k] __run_timers
0.01% 0.00% bench [kernel.kallsyms] [k] call_timer_fn
0.01% 0.00% bench [kernel.kallsyms] [k] delayed_work_timer_fn
0.01% 0.01% bench [kernel.kallsyms] [k] native_read_msr
0.01% 0.01% bench [kernel.kallsyms] [k] count_mod_symbols
0.01% 0.00% bench [kernel.kallsyms] [k] module_kallsyms_on_each_symbol
0.01% 0.00% bench [kernel.kallsyms] [k] update_process_times
0.01% 0.00% bench [kernel.kallsyms] [k] scheduler_tick
0.01% 0.01% bench [kernel.kallsyms] [k] rep_movs_alternative
0.01% 0.01% bench [kernel.kallsyms] [k] native_irq_return_iret
0.01% 0.01% bench bench [.] btf_strs_data
0.01% 0.00% bench bench [.] btf_validate_id
0.01% 0.00% bench [kernel.kallsyms] [k] tick_do_update_jiffies64
0.01% 0.00% bench [kernel.kallsyms] [k] update_wall_time
0.01% 0.00% bench [kernel.kallsyms] [k] timekeeping_advance
0.01% 0.00% bench [kernel.kallsyms] [k] timekeeping_update
0.01% 0.01% bench [kernel.kallsyms] [k] native_write_msr
0.01% 0.00% bench [kernel.kallsyms] [k] folios_put_refs
0.01% 0.00% bench [kernel.kallsyms] [k] __tlb_batch_free_encoded_pages
0.01% 0.00% bench [kernel.kallsyms] [k] free_pages_and_swap_cache
0.01% 0.00% bench [x86_pkg_temp_thermal] [k] pkg_thermal_notify
0.01% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.01% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.01% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.01% 0.01% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.01% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.01% 0.00% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_check_record
0.01% 0.00% bench [unknown] [k] 0x00007f64de4eb08b
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_execve
0.01% 0.00% bench [kernel.kallsyms] [k] do_execveat_common.isra.0
0.01% 0.00% bench [kernel.kallsyms] [k] bprm_execve
0.01% 0.00% bench [kernel.kallsyms] [k] load_elf_binary
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench bench [.] sys_bpf_prog_load
0.01% 0.00% bench bench [.] sys_bpf_fd
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.01% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_check
0.01% 0.00% bench libc.so.6 [.] __munmap
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.01% 0.00% bench [kernel.kallsyms] [k] tlb_finish_mmu
0.01% 0.01% bench [kvm] [k] pvclock_gtod_notify
0.01% 0.00% bench [kernel.kallsyms] [k] raw_notifier_call_chain
0.01% 0.00% bench [kernel.kallsyms] [k] x86_pmu_enable
0.01% 0.00% bench [kernel.kallsyms] [k] intel_pmu_enable_all
0.01% 0.01% bench [kernel.kallsyms] [k] strcmp
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_task_tick
0.01% 0.00% bench [kernel.kallsyms] [k] perf_adjust_freq_unthr_context
0.01% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.01% 0.01% bench [kernel.kallsyms] [k] clear_page_erms
0.01% 0.00% bench [kernel.kallsyms] [k] rep_stos_alternative
0.01% 0.00% bench [kernel.kallsyms] [k] do_fault
0.01% 0.01% bench ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.01% 0.00% bench [unknown] [.] 0x0000000000000040
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_sysdep_start
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] dl_main
0.01% 0.01% bench bench [.] elf_sec_by_name
0.01% 0.00% bench bench [.] bpf_object__open_skeleton
0.01% 0.00% bench bench [.] bpf_object__open_mem
0.01% 0.00% bench bench [.] bpf_object_open
0.01% 0.00% bench bench [.] bpf_object__elf_collect
0.01% 0.00% bench bench [.] bpf_object__init_btf
0.00% 0.00% bench [kernel.kallsyms] [k] memset_orig
0.00% 0.00% bench [kernel.kallsyms] [k] kfree
0.00% 0.00% bench [kernel.kallsyms] [k] memcpy_orig
0.00% 0.00% bench [kernel.kallsyms] [k] sysfs_kf_bin_read
0.00% 0.00% bench [kernel.kallsyms] [k] __virt_addr_valid
0.00% 0.00% bench [kernel.kallsyms] [k] __rmqueue_pcplist
0.00% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.00% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.00% 0.00% bench [kernel.kallsyms] [k] down_read_trylock
0.00% 0.00% bench bench [.] btf_vlen
0.00% 0.00% bench bench [.] libbpf_add_mem
0.00% 0.00% bench bench [.] btf_add_type_idx_entry
0.00% 0.00% bench [kernel.kallsyms] [k] mas_destroy
0.00% 0.00% bench [unknown] [k] 0x000000280000001c
0.00% 0.00% bench libc.so.6 [.] __GI___mremap
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_mremap
0.00% 0.00% bench [kernel.kallsyms] [k] __do_sys_mremap
0.00% 0.00% bench [kernel.kallsyms] [k] vma_merge_extend
0.00% 0.00% bench [kernel.kallsyms] [k] vma_merge.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] mas_store_prealloc
0.00% 0.00% bench bench [.] btf__name_by_offset
0.00% 0.00% bench [kernel.kallsyms] [k] insert_vmap_area
0.00% 0.00% bench [kernel.kallsyms] [k] resolve_pseudo_ldimm64
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_calc_tag
0.00% 0.00% bench [kernel.kallsyms] [k] vmalloc
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.00% 0.00% bench [kernel.kallsyms] [k] __get_vm_area_node
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_vmap_area
0.00% 0.00% bench [kernel.kallsyms] [k] __mod_lruvec_state
0.00% 0.00% bench [kernel.kallsyms] [k] __page_cache_release
0.00% 0.00% bench [kernel.kallsyms] [k] __vunmap_range_noflush
0.00% 0.00% bench [kernel.kallsyms] [k] vfree
0.00% 0.00% bench [kernel.kallsyms] [k] vfree.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] remove_vm_area
0.00% 0.00% bench [kernel.kallsyms] [k] free_unref_folios
0.00% 0.00% bench [kernel.kallsyms] [k] pwq_tryinc_nr_active
0.00% 0.00% bench [kernel.kallsyms] [k] __register_ftrace_function
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] arch_ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] set_memory_rox
0.00% 0.00% bench [kernel.kallsyms] [k] change_page_attr_set_clr
0.00% 0.00% bench [kernel.kallsyms] [k] cpa_flush
0.00% 0.00% bench [kernel.kallsyms] [k] flush_tlb_all
0.00% 0.00% bench [kernel.kallsyms] [k] update_fast_timekeeper
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_prefixes.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] __register_trace_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] jump_label_text_reserved
0.00% 0.00% bench [kernel.kallsyms] [k] arch_jump_entry_size
0.00% 0.00% bench [kernel.kallsyms] [k] insn_decode
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_displacement
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_modrm
0.00% 0.00% bench [kernel.kallsyms] [k] hrtimer_active
0.00% 0.00% bench [kernel.kallsyms] [k] arch_scale_freq_tick
0.00% 0.00% bench [kernel.kallsyms] [k] sync_regs
0.00% 0.00% bench [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.00% 0.00% bench [kernel.kallsyms] [k] kick_pool
0.00% 0.00% bench [kernel.kallsyms] [k] wake_up_process
0.00% 0.00% bench [kernel.kallsyms] [k] try_to_wake_up
0.00% 0.00% bench [kernel.kallsyms] [k] acct_collect
0.00% 0.00% bench [kernel.kallsyms] [k] arch_do_signal_or_restart
0.00% 0.00% bench [kernel.kallsyms] [k] get_signal
0.00% 0.00% bench [kernel.kallsyms] [k] lapic_next_deadline
0.00% 0.00% bench [kernel.kallsyms] [k] tick_program_event
0.00% 0.00% bench [kernel.kallsyms] [k] clockevents_program_event
0.00% 0.00% bench [kernel.kallsyms] [k] lock_timer_base
0.00% 0.00% bench [kernel.kallsyms] [k] queue_delayed_work_on
0.00% 0.00% bench [kernel.kallsyms] [k] __queue_delayed_work
0.00% 0.00% bench [kernel.kallsyms] [k] error_entry
0.00% 0.00% bench [kernel.kallsyms] [k] native_apic_msr_eoi
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_finish
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.00% 0.00% bench [kernel.kallsyms] [k] sched_clock_cpu
0.00% 0.00% bench libc.so.6 [.] pthread_setaffinity_np@@GLIBC_2.34
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] __sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] __set_cpus_allowed_ptr
0.00% 0.00% bench [kernel.kallsyms] [k] __set_cpus_allowed_ptr_locked
0.00% 0.00% bench [kernel.kallsyms] [k] update_rq_clock
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench [kernel.kallsyms] [k] __mmput
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_single_vma
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.00% 0.00% bench [kernel.kallsyms] [k] tlb_flush_mmu
0.00% 0.00% bench libc.so.6 [.] pthread_cond_wait@@GLIBC_2.3.2
0.00% 0.00% bench [kernel.kallsyms] [k] pcpu_next_md_free_region
0.00% 0.00% bench [kernel.kallsyms] [k] free_percpu
0.00% 0.00% bench [kernel.kallsyms] [k] pcpu_free_area
0.00% 0.00% bench [kernel.kallsyms] [k] pcpu_chunk_refresh_hint
0.00% 0.00% bench [kernel.kallsyms] [k] chacha_block_generic
0.00% 0.00% bench [kernel.kallsyms] [k] setup_arg_pages
0.00% 0.00% bench [kernel.kallsyms] [k] arch_align_stack
0.00% 0.00% bench [kernel.kallsyms] [k] get_random_u16
0.00% 0.00% bench [kernel.kallsyms] [k] _get_random_bytes.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] crng_make_state
0.00% 0.00% bench [kernel.kallsyms] [k] crng_fast_key_erasure
0.00% 0.00% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] __perf_event_task_sched_in
0.00% 0.00% bench [kernel.kallsyms] [k] perf_ctx_enable
0.00% 0.00% bench libc.so.6 [.] clone3
0.00% 0.00% bench [kernel.kallsyms] [k] ret_from_fork_asm
0.00% 0.00% bench [kernel.kallsyms] [k] ret_from_fork
0.00% 0.00% bench [kernel.kallsyms] [k] schedule_tail
0.00% 0.00% bench [kernel.kallsyms] [k] nmi_restore
0.00% 0.00% bench [kernel.kallsyms] [k] schedule_timeout
0.00% 0.00% bench [kernel.kallsyms] [k] synchronize_rcu
0.00% 0.00% bench [kernel.kallsyms] [k] __wait_rcu_gp
0.00% 0.00% bench [kernel.kallsyms] [k] wait_for_completion
0.00% 0.00% bench [kernel.kallsyms] [k] __wait_for_common
0.00% 0.00% perf-ex [unknown] [k] 0x00007f64de4eb08b
0.00% 0.00% perf-ex [kernel.kallsyms] [k] entry_SYSCALL_64
0.00% 0.00% perf-ex [kernel.kallsyms] [k] do_syscall_64
0.00% 0.00% perf-ex [kernel.kallsyms] [k] x64_sys_call
0.00% 0.00% perf-ex [kernel.kallsyms] [k] __x64_sys_execve
0.00% 0.00% perf-ex [kernel.kallsyms] [k] perf_event_exec
0.00% 0.00% perf-ex [kernel.kallsyms] [k] do_execveat_common.isra.0
0.00% 0.00% perf-ex [kernel.kallsyms] [k] bprm_execve
0.00% 0.00% perf-ex [kernel.kallsyms] [k] load_elf_binary
0.00% 0.00% perf-ex [kernel.kallsyms] [k] begin_new_exec
0.00% 0.00% bench [kernel.kallsyms] [k] schedule
0.00% 0.00% bench [kernel.kallsyms] [k] __schedule
0.00% 0.00% perf-ex [kernel.kallsyms] [k] native_write_msr
0.00% 0.00% perf-ex [kernel.kallsyms] [k] ctx_resched
0.00% 0.00% perf-ex [kernel.kallsyms] [k] perf_ctx_enable
0.00% 0.00% perf-ex [kernel.kallsyms] [k] x86_pmu_enable
0.00% 0.00% perf-ex [kernel.kallsyms] [k] intel_pmu_enable_all
#
# (Tip: Generate a script for your data: perf script -g <lang>)
#
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'cycles:P'
# Event count (approx.): 34983230838
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.66% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
--99.54%--syscall
|
|--55.13%--entry_SYSCALL_64
| |
| |--36.05%--do_syscall_64
| | |
| | |--20.07%--x64_sys_call
| | | |
| | | |--14.40%--arch_rethook_trampoline
| | | | |
| | | | --14.36%--arch_rethook_trampoline_callback
| | | | |
| | | | |--11.51%--rethook_trampoline_handler
| | | | | |
| | | | | |--7.24%--kretprobe_rethook_handler
| | | | | | |
| | | | | | --7.13%--kretprobe_dispatcher
| | | | | | |
| | | | | | --5.74%--kretprobe_perf_func
| | | | | | |
| | | | | | |--2.78%--trace_call_bpf
| | | | | | | |
| | | | | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | |--1.73%--objpool_push.isra.0
| | | | | |
| | | | | --1.03%--kretprobe_dispatcher
| | | | |
| | | | --2.31%--kretprobe_rethook_handler
| | | |
| | | --4.56%--__x64_sys_getpgid
| | | |
| | | |--2.14%--do_getpgid
| | | | |
| | | | --0.99%--find_task_by_vpid
| | | |
| | | --0.82%--__rcu_read_unlock
| | |
| | |--6.88%--__x64_sys_getpgid
| | | |
| | | --6.04%--0xffffffffc1bc50f9
| | | |
| | | |--5.15%--kprobe_ftrace_handler
| | | | |
| | | | --3.03%--pre_handler_kretprobe
| | | | |
| | | | --2.30%--rethook_try_get
| | | |
| | | --0.77%--pre_handler_kretprobe
| | |
| | |--2.87%--arch_rethook_trampoline
| | |
| | |--1.56%--syscall_exit_to_user_mode
| | |
| | --0.67%--0xffffffffc1bc5178
| |
| --1.21%--x64_sys_call
|
|--23.48%--entry_SYSRETQ_unsafe_stack
|
|--17.38%--syscall_return_via_sysret
|
|--0.58%--do_syscall_64
|
--0.57%--arch_rethook_trampoline
99.60% 2.40% bench libc.so.6 [.] syscall
|
|--97.20%--syscall
| |
| |--53.41%--entry_SYSCALL_64
| | |
| | |--36.11%--do_syscall_64
| | | |
| | | |--20.13%--x64_sys_call
| | | | |
| | | | |--14.40%--arch_rethook_trampoline
| | | | | |
| | | | | --14.36%--arch_rethook_trampoline_callback
| | | | | |
| | | | | |--11.51%--rethook_trampoline_handler
| | | | | | |
| | | | | | |--7.24%--kretprobe_rethook_handler
| | | | | | | |
| | | | | | | --7.13%--kretprobe_dispatcher
| | | | | | | |
| | | | | | | --5.74%--kretprobe_perf_func
| | | | | | | |
| | | | | | | |--2.78%--trace_call_bpf
| | | | | | | | |
| | | | | | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | | |
| | | | | | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | |--1.73%--objpool_push.isra.0
| | | | | | |
| | | | | | --1.03%--kretprobe_dispatcher
| | | | | |
| | | | | --2.31%--kretprobe_rethook_handler
| | | | |
| | | | --4.56%--__x64_sys_getpgid
| | | | |
| | | | |--2.14%--do_getpgid
| | | | | |
| | | | | --0.99%--find_task_by_vpid
| | | | |
| | | | --0.82%--__rcu_read_unlock
| | | |
| | | |--6.88%--__x64_sys_getpgid
| | | | |
| | | | --6.04%--0xffffffffc1bc50f9
| | | | |
| | | | |--5.15%--kprobe_ftrace_handler
| | | | | |
| | | | | --3.03%--pre_handler_kretprobe
| | | | | |
| | | | | --2.30%--rethook_try_get
| | | | |
| | | | --0.77%--pre_handler_kretprobe
| | | |
| | | |--2.87%--arch_rethook_trampoline
| | | |
| | | |--1.56%--syscall_exit_to_user_mode
| | | |
| | | --0.67%--0xffffffffc1bc5178
| | |
| | --1.21%--x64_sys_call
| |
| |--23.48%--entry_SYSRETQ_unsafe_stack
| |
| |--17.38%--syscall_return_via_sysret
| |
| |--0.58%--do_syscall_64
| |
| --0.57%--arch_rethook_trampoline
|
--2.40%--start_thread
syscall
|
--1.79%--entry_SYSCALL_64
56.47% 16.77% bench [kernel.kallsyms] [k] entry_SYSCALL_64
|
|--39.70%--entry_SYSCALL_64
| |
| |--36.20%--do_syscall_64
| | |
| | |--20.22%--x64_sys_call
| | | |
| | | |--14.40%--arch_rethook_trampoline
| | | | |
| | | | --14.36%--arch_rethook_trampoline_callback
| | | | |
| | | | |--11.51%--rethook_trampoline_handler
| | | | | |
| | | | | |--7.24%--kretprobe_rethook_handler
| | | | | | |
| | | | | | --7.13%--kretprobe_dispatcher
| | | | | | |
| | | | | | --5.74%--kretprobe_perf_func
| | | | | | |
| | | | | | |--2.78%--trace_call_bpf
| | | | | | | |
| | | | | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | | |
| | | | | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | |--1.73%--objpool_push.isra.0
| | | | | |
| | | | | --1.03%--kretprobe_dispatcher
| | | | |
| | | | --2.31%--kretprobe_rethook_handler
| | | |
| | | --4.56%--__x64_sys_getpgid
| | | |
| | | |--2.14%--do_getpgid
| | | | |
| | | | --0.99%--find_task_by_vpid
| | | |
| | | --0.82%--__rcu_read_unlock
| | |
| | |--6.88%--__x64_sys_getpgid
| | | |
| | | --6.04%--0xffffffffc1bc50f9
| | | |
| | | |--5.15%--kprobe_ftrace_handler
| | | | |
| | | | --3.03%--pre_handler_kretprobe
| | | | |
| | | | --2.30%--rethook_try_get
| | | |
| | | --0.77%--pre_handler_kretprobe
| | |
| | |--2.87%--arch_rethook_trampoline
| | |
| | |--1.56%--syscall_exit_to_user_mode
| | |
| | --0.67%--0xffffffffc1bc5178
| |
| --1.21%--x64_sys_call
|
--16.77%--start_thread
syscall
|
|--15.59%--entry_SYSCALL_64
|
--0.90%--syscall_return_via_sysret
37.01% 2.71% bench [kernel.kallsyms] [k] do_syscall_64
|
|--34.30%--do_syscall_64
| |
| |--20.22%--x64_sys_call
| | |
| | |--14.40%--arch_rethook_trampoline
| | | |
| | | --14.36%--arch_rethook_trampoline_callback
| | | |
| | | |--11.51%--rethook_trampoline_handler
| | | | |
| | | | |--7.24%--kretprobe_rethook_handler
| | | | | |
| | | | | --7.13%--kretprobe_dispatcher
| | | | | |
| | | | | --5.74%--kretprobe_perf_func
| | | | | |
| | | | | |--2.78%--trace_call_bpf
| | | | | | |
| | | | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | | |
| | | | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | |--1.73%--objpool_push.isra.0
| | | | |
| | | | --1.03%--kretprobe_dispatcher
| | | |
| | | --2.31%--kretprobe_rethook_handler
| | |
| | --4.56%--__x64_sys_getpgid
| | |
| | |--2.14%--do_getpgid
| | | |
| | | --0.99%--find_task_by_vpid
| | |
| | --0.82%--__rcu_read_unlock
| |
| |--6.88%--__x64_sys_getpgid
| | |
| | --6.04%--0xffffffffc1bc50f9
| | |
| | |--5.15%--kprobe_ftrace_handler
| | | |
| | | --3.03%--pre_handler_kretprobe
| | | |
| | | --2.30%--rethook_try_get
| | |
| | --0.77%--pre_handler_kretprobe
| |
| |--2.87%--arch_rethook_trampoline
| |
| |--1.56%--syscall_exit_to_user_mode
| |
| --0.67%--0xffffffffc1bc5178
|
--2.71%--start_thread
syscall
|
--2.42%--entry_SYSCALL_64
|
--2.20%--do_syscall_64
25.19% 25.03% bench [kernel.kallsyms] [k] entry_SYSRETQ_unsafe_stack
|
--25.03%--start_thread
syscall
|
--23.32%--entry_SYSRETQ_unsafe_stack
21.68% 1.87% bench [kernel.kallsyms] [k] x64_sys_call
|
|--19.81%--x64_sys_call
| |
| |--14.40%--arch_rethook_trampoline
| | |
| | --14.36%--arch_rethook_trampoline_callback
| | |
| | |--11.51%--rethook_trampoline_handler
| | | |
| | | |--7.24%--kretprobe_rethook_handler
| | | | |
| | | | --7.13%--kretprobe_dispatcher
| | | | |
| | | | --5.74%--kretprobe_perf_func
| | | | |
| | | | |--2.78%--trace_call_bpf
| | | | | |
| | | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | |--1.73%--objpool_push.isra.0
| | | |
| | | --1.03%--kretprobe_dispatcher
| | |
| | --2.31%--kretprobe_rethook_handler
| |
| --4.56%--__x64_sys_getpgid
| |
| |--2.14%--do_getpgid
| | |
| | --0.99%--find_task_by_vpid
| |
| --0.82%--__rcu_read_unlock
|
--1.87%--start_thread
syscall
entry_SYSCALL_64
|
|--1.17%--x64_sys_call
|
--0.70%--do_syscall_64
18.25% 3.13% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--15.13%--arch_rethook_trampoline
| |
| --14.83%--arch_rethook_trampoline_callback
| |
| |--11.87%--rethook_trampoline_handler
| | |
| | |--7.24%--kretprobe_rethook_handler
| | | |
| | | --7.13%--kretprobe_dispatcher
| | | |
| | | --5.74%--kretprobe_perf_func
| | | |
| | | |--2.78%--trace_call_bpf
| | | | |
| | | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | |--1.73%--objpool_push.isra.0
| | |
| | --1.03%--kretprobe_dispatcher
| |
| --2.31%--kretprobe_rethook_handler
|
--3.13%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
|
--2.71%--arch_rethook_trampoline
17.54% 16.65% bench [kernel.kallsyms] [k] syscall_return_via_sysret
|
|--16.65%--start_thread
| syscall
| |
| --16.49%--syscall_return_via_sysret
|
--0.90%--syscall_return_via_sysret
15.15% 0.41% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--14.74%--arch_rethook_trampoline_callback
|
|--11.87%--rethook_trampoline_handler
| |
| |--7.24%--kretprobe_rethook_handler
| | |
| | --7.13%--kretprobe_dispatcher
| | |
| | --5.74%--kretprobe_perf_func
| | |
| | |--2.78%--trace_call_bpf
| | | |
| | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| |--1.73%--objpool_push.isra.0
| |
| --1.03%--kretprobe_dispatcher
|
--2.31%--kretprobe_rethook_handler
12.47% 2.33% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--10.14%--rethook_trampoline_handler
| |
| |--7.24%--kretprobe_rethook_handler
| | |
| | --7.13%--kretprobe_dispatcher
| | |
| | --5.74%--kretprobe_perf_func
| | |
| | |--2.78%--trace_call_bpf
| | | |
| | | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| |--1.73%--objpool_push.isra.0
| |
| --1.03%--kretprobe_dispatcher
|
--2.33%--start_thread
syscall
|
--1.92%--entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
|
--1.88%--arch_rethook_trampoline_callback
|
--1.41%--rethook_trampoline_handler
11.67% 0.59% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
|--11.08%--__x64_sys_getpgid
| |
| |--6.04%--0xffffffffc1bc50f9
| | |
| | |--5.15%--kprobe_ftrace_handler
| | | |
| | | --3.03%--pre_handler_kretprobe
| | | |
| | | --2.30%--rethook_try_get
| | |
| | --0.77%--pre_handler_kretprobe
| |
| |--2.14%--do_getpgid
| | |
| | --0.99%--find_task_by_vpid
| |
| --0.82%--__rcu_read_unlock
|
--0.59%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
9.61% 2.35% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--7.26%--kretprobe_rethook_handler
| |
| --7.13%--kretprobe_dispatcher
| |
| --5.74%--kretprobe_perf_func
| |
| |--2.78%--trace_call_bpf
| | |
| | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.35%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
|
--2.29%--kretprobe_rethook_handler
8.27% 2.34% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--5.93%--kretprobe_dispatcher
| |
| --5.74%--kretprobe_perf_func
| |
| |--2.78%--trace_call_bpf
| | |
| | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.34%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--1.37%--kretprobe_rethook_handler
| |
| --1.27%--kretprobe_dispatcher
|
--0.97%--kretprobe_dispatcher
6.07% 0.03% bench [unknown] [k] 0xffffffffc1bc50f9
|
--6.04%--0xffffffffc1bc50f9
|
|--5.15%--kprobe_ftrace_handler
| |
| --3.03%--pre_handler_kretprobe
| |
| --2.30%--rethook_try_get
|
--0.77%--pre_handler_kretprobe
5.86% 0.88% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
|--4.99%--kretprobe_perf_func
| |
| |--2.78%--trace_call_bpf
| | |
| | --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --1.15%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.88%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
|
--0.87%--kretprobe_dispatcher
|
--0.80%--kretprobe_perf_func
5.40% 2.01% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--3.38%--kprobe_ftrace_handler
| |
| --3.03%--pre_handler_kretprobe
| |
| --2.30%--rethook_try_get
|
--2.01%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
|
--1.90%--__x64_sys_getpgid
0xffffffffc1bc50f9
|
--1.83%--kprobe_ftrace_handler
3.88% 1.30% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--2.59%--pre_handler_kretprobe
| |
| --2.30%--rethook_try_get
|
--1.30%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
__x64_sys_getpgid
0xffffffffc1bc50f9
|
|--0.71%--pre_handler_kretprobe
|
--0.58%--kprobe_ftrace_handler
3.10% 1.49% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--1.61%--trace_call_bpf
| |
| --1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.49%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
|
--1.44%--kretprobe_perf_func
|
--1.17%--trace_call_bpf
2.78% 1.15% bench [kernel.kallsyms] [k] do_getpgid
|
|--1.64%--do_getpgid
| |
| --0.99%--find_task_by_vpid
|
--1.15%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
|
--0.95%--__x64_sys_getpgid
|
--0.54%--do_getpgid
2.45% 2.36% bench [kernel.kallsyms] [k] rethook_try_get
|
--2.36%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
__x64_sys_getpgid
0xffffffffc1bc50f9
kprobe_ftrace_handler
|
--2.30%--pre_handler_kretprobe
|
--2.30%--rethook_try_get
2.23% 2.16% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.16%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
|
|--1.08%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.08%--trace_call_bpf
bpf_prog_21856463590f61f1_bench_trigger_kretprobe
1.91% 1.55% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
--1.55%--start_thread
syscall
entry_SYSCALL_64
|
--1.24%--do_syscall_64
syscall_exit_to_user_mode
1.85% 1.77% bench [kernel.kallsyms] [k] objpool_push.isra.0
|
--1.77%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
|
--1.73%--objpool_push.isra.0
1.49% 0.80% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--0.80%--start_thread
| syscall
| entry_SYSCALL_64
| do_syscall_64
| x64_sys_call
| __x64_sys_getpgid
|
--0.69%--find_task_by_vpid
1.21% 0.89% bench [kernel.kallsyms] [k] __rcu_read_unlock
|
--0.89%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
x64_sys_call
|
--0.60%--__x64_sys_getpgid
|
--0.55%--__rcu_read_unlock
1.01% 0.33% bench [unknown] [k] 0xffffffffc1bc5178
|
--0.67%--0xffffffffc1bc5178
0.88% 0.32% bench [kernel.kallsyms] [k] idr_find
|
--0.55%--idr_find
0.71% 0.67% bench [unknown] [k] 0xffffffffc1bc5177
|
--0.67%--start_thread
syscall
entry_SYSCALL_64
do_syscall_64
0xffffffffc1bc5178
0.56% 0.45% bench [kernel.kallsyms] [k] __rcu_read_lock
0.53% 0.50% bench [kernel.kallsyms] [k] migrate_enable
0.51% 0.16% bench [kernel.kallsyms] [k] radix_tree_lookup
0.48% 0.11% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.47% 0.42% bench [kernel.kallsyms] [k] migrate_disable
0.47% 0.47% bench [kernel.kallsyms] [k] fpregs_assert_state_consistent
0.43% 0.39% bench [kernel.kallsyms] [k] __radix_tree_lookup
0.36% 0.15% bench [unknown] [k] 0xffffffffc1bc500a
0.29% 0.24% bench [kernel.kallsyms] [k] rethook_hook
0.24% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.24% 0.00% bench bench [.] main
0.24% 0.00% bench bench [.] setup_benchmark
0.24% 0.00% bench bench [.] trigger_kretprobe_setup
0.21% 0.21% bench [unknown] [k] 0xffffffffc1bc5006
0.19% 0.00% bench bench [.] trigger_bench__open_and_load
0.19% 0.00% bench bench [.] bpf_object__load_skeleton
0.19% 0.00% bench bench [.] bpf_object__load
0.19% 0.00% bench bench [.] bpf_object_load
0.18% 0.15% bench [kernel.kallsyms] [k] get_kprobe
0.15% 0.00% bench [unknown] [k] 0xffffffffc1bc500b
0.15% 0.08% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.14% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.14% 0.00% bench bench [.] libbpf_find_kernel_btf
0.14% 0.00% bench bench [.] btf__parse
0.14% 0.00% bench bench [.] btf_parse
0.14% 0.00% bench bench [.] btf_parse_raw
0.14% 0.14% bench [kernel.kallsyms] [k] amd_clear_divider
0.13% 0.06% bench [unknown] [k] 0xffffffffc1bc509a
0.12% 0.06% bench [unknown] [k] 0xffffffffc1bc509f
0.12% 0.07% bench [unknown] [k] 0xffffffffc1bc5095
0.12% 0.12% bench bench [.] trigger_producer
0.12% 0.00% bench bench [.] btf_new
0.11% 0.11% bench [unknown] [k] 0xffffffffc1bc5004
0.11% 0.00% bench [unknown] [k] 0xffffffffc1bc5005
0.11% 0.08% bench [unknown] [k] 0xffffffffc1bc5101
0.11% 0.05% bench [unknown] [k] 0xffffffffc1bc5090
0.11% 0.05% bench [unknown] [k] 0xffffffffc1bc5086
0.11% 0.06% bench [unknown] [k] 0xffffffffc1bc508b
0.09% 0.04% bench bench [.] syscall@plt
0.09% 0.08% bench [unknown] [k] 0xffffffffc1bc5022
0.08% 0.00% bench [unknown] [k] 0xffffffffc1bc5000
0.08% 0.00% bench [unknown] [k] 0xffffffffc1bc5027
0.08% 0.01% bench [unknown] [k] 0xffffffffc1bc5012
0.08% 0.02% bench [unknown] [k] 0xffffffffc1bc5161
0.08% 0.00% bench bench [.] btf_sanity_check
0.08% 0.00% bench [unknown] [k] 0xffffffffc1bc5109
0.08% 0.08% bench [unknown] [k] 0xffffffffc1bc500e
0.07% 0.07% bench [unknown] [k] 0xffffffffc1bc514d
0.07% 0.00% bench [unknown] [k] 0xffffffffc1bc5152
0.07% 0.01% bench bench [.] btf_validate_type
0.07% 0.07% bench [unknown] [k] 0xffffffffc1bc5066
0.07% 0.00% bench [unknown] [k] 0xffffffffc1bc506e
0.06% 0.06% bench [unknown] [k] 0xffffffffc1bc513b
0.06% 0.00% bench [unknown] [k] 0xffffffffc1bc5143
0.06% 0.06% bench [unknown] [k] 0xffffffffc1bc515c
0.06% 0.06% bench [unknown] [k] 0xffffffffc1bc5082
0.06% 0.00% bench [unknown] [k] 0xffffffffc1bc50a4
0.06% 0.06% bench [unknown] [k] 0xffffffffc1bc5111
0.06% 0.00% bench [unknown] [k] 0xffffffffc1bc5119
0.06% 0.00% bench [kernel.kallsyms] [k] kprobe_register
0.06% 0.00% bench [kernel.kallsyms] [k] arch_ftrace_update_code
0.06% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.06% 0.06% bench [unknown] [k] 0xffffffffc1bc50f4
0.05% 0.05% bench [unknown] [k] 0xffffffffc1bc50ca
0.05% 0.00% bench [unknown] [k] 0xffffffffc1bc50d2
0.05% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_apic_timer_interrupt
0.05% 0.05% bench [unknown] [k] 0xffffffffc1bc5051
0.05% 0.00% bench [unknown] [k] 0xffffffffc1bc5056
0.05% 0.00% bench bench [.] bpf_program__attach
0.05% 0.00% bench bench [.] attach_kprobe
0.05% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.05% 0.00% bench [kernel.kallsyms] [k] __x64_sys_perf_event_open
0.05% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.05% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.05% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.05% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.05% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.05% 0.05% bench [unknown] [k] 0xffffffffc1bc50bb
0.05% 0.00% bench [unknown] [k] 0xffffffffc1bc50c3
0.05% 0.00% bench bench [.] bpf_object__load_progs
0.05% 0.00% bench bench [.] bpf_object_load_prog
0.05% 0.04% bench [unknown] [k] 0xffffffffc1bc5040
0.05% 0.05% bench [unknown] [k] 0xffffffffc1bc50da
0.05% 0.00% bench [unknown] [k] 0xffffffffc1bc50e2
0.05% 0.03% bench [unknown] [k] 0xffffffffc1bc5036
0.04% 0.00% bench [unknown] [.] 0000000000000000
0.04% 0.00% bench [kernel.kallsyms] [k] sysvec_apic_timer_interrupt
0.04% 0.04% bench [unknown] [k] 0xffffffffc1bc5127
0.04% 0.00% bench [unknown] [k] 0xffffffffc1bc512c
0.04% 0.02% bench bench [.] btf__type_by_id
0.04% 0.00% bench bench [.] libbpf_prepare_prog_load
0.04% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.04% 0.00% bench bench [.] find_kernel_btf_id
0.04% 0.00% bench bench [.] find_attach_btf_id
0.04% 0.00% bench bench [.] find_btf_by_prefix_kind
0.04% 0.00% bench bench [.] btf__find_by_name_kind
0.04% 0.01% bench [unknown] [k] 0xffffffffc1bc503b
0.04% 0.02% bench [kernel.kallsyms] [k] ftrace_replace_code
0.04% 0.00% bench [kernel.kallsyms] [k] __sysvec_apic_timer_interrupt
0.04% 0.00% bench [kernel.kallsyms] [k] hrtimer_interrupt
0.04% 0.00% bench [unknown] [k] 0xffffffffc1bc5049
0.04% 0.00% bench bench [.] btf_find_by_name_kind
0.04% 0.00% bench [unknown] [k] 0xffffffffc1bc501d
0.04% 0.02% bench [unknown] [k] 0xffffffffc1bc5031
0.04% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8d2c3f2c
0.04% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.04% 0.00% bench [kernel.kallsyms] [k] do_exit
0.04% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.03% 0.01% bench bench [.] btf_validate_id
0.03% 0.00% bench [kernel.kallsyms] [k] __hrtimer_run_queues
0.03% 0.00% bench libc.so.6 [.] read
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_read
0.03% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.03% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.03% 0.00% bench [kernel.kallsyms] [k] kernfs_fop_read_iter
0.03% 0.03% bench [unknown] [k] 0xffffffffc1bc5016
0.03% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.03% 0.00% bench [kernel.kallsyms] [k] ____fput
0.03% 0.00% bench [kernel.kallsyms] [k] __fput
0.03% 0.00% bench [kernel.kallsyms] [k] tick_nohz_handler
0.03% 0.03% bench [unknown] [k] 0xffffffffc1bc5170
0.03% 0.03% bench [unknown] [k] 0xffffffffc1bc50ac
0.03% 0.00% bench [unknown] [k] 0xffffffffc1bc50b4
0.03% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_thermal
0.03% 0.00% bench [kernel.kallsyms] [k] sysvec_thermal
0.03% 0.00% bench [kernel.kallsyms] [k] __sysvec_thermal
0.03% 0.00% bench [kernel.kallsyms] [k] intel_thermal_interrupt
0.03% 0.00% bench [kernel.kallsyms] [k] perf_release
0.03% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.03% 0.00% bench [kernel.kallsyms] [k] _free_event
0.03% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.03% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.03% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disarm_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] disarm_kprobe_ftrace
0.03% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.03% 0.00% bench bench [.] btf_parse_type_sec
0.03% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.03% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] arm_kprobe
0.03% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.03% 0.02% bench libc.so.6 [.] __memmove_avx_unaligned_erms
0.03% 0.00% bench [kernel.kallsyms] [k] rep_movs_alternative
0.03% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.02% 0.02% bench [kernel.kallsyms] [k] native_read_msr
0.02% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.02% 0.00% bench [kernel.kallsyms] [k] do_user_addr_fault
0.02% 0.00% bench [unknown] [k] 0xffffffffc1bc5166
0.02% 0.02% bench [unknown] [k] 0xffffffffc1bc502c
0.02% 0.02% bench [kernel.kallsyms] [k] ftrace_check_record
0.02% 0.02% bench bench [.] btf__str_by_offset
0.02% 0.02% bench [kernel.kallsyms] [k] ftrace_test_record
0.02% 0.02% bench bench [.] btf_kind
0.02% 0.00% bench [kernel.kallsyms] [k] update_process_times
0.02% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.02% 0.01% bench bench [.] btf_add_type_idx_entry
0.02% 0.01% bench bench [.] btf_type_size
0.02% 0.01% bench bench [.] btf_validate_str
0.01% 0.00% bench [kernel.kallsyms] [k] timekeeping_advance
0.01% 0.00% bench [kernel.kallsyms] [k] tick_do_update_jiffies64
0.01% 0.00% bench [kernel.kallsyms] [k] update_wall_time
0.01% 0.01% bench [kernel.kallsyms] [k] count_mod_symbols
0.01% 0.00% bench [kernel.kallsyms] [k] scheduler_tick
0.01% 0.00% bench bench [.] sys_bpf_fd
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.01% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_check
0.01% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.01% 0.01% bench [kernel.kallsyms] [k] native_write_msr
0.01% 0.00% bench bench [.] btf__name_by_offset
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.01% 0.01% bench [kernel.kallsyms] [k] __irqentry_text_end
0.01% 0.01% bench [kernel.kallsyms] [k] native_irq_return_iret
0.01% 0.01% bench [kernel.kallsyms] [k] memcpy_orig
0.01% 0.00% bench [kernel.kallsyms] [k] sysfs_kf_bin_read
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_single_vma
0.01% 0.00% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.01% 0.00% bench [unknown] [k] 0x00007fb2140eb08b
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_execve
0.01% 0.00% bench [kernel.kallsyms] [k] do_execveat_common.isra.0
0.01% 0.00% bench [kernel.kallsyms] [k] bprm_execve
0.01% 0.00% bench [kernel.kallsyms] [k] load_elf_binary
0.01% 0.00% bench bench [.] btf_add_type_offs_mem
0.01% 0.01% bench [kernel.kallsyms] [k] sync_regs
0.01% 0.00% bench [kernel.kallsyms] [k] perf_adjust_freq_unthr_context
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_task_tick
0.01% 0.00% bench [kernel.kallsyms] [k] module_kallsyms_on_each_symbol
0.01% 0.00% bench libc.so.6 [.] __munmap
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.01% 0.01% bench [kernel.kallsyms] [k] update_fast_timekeeper
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench bench [.] sys_bpf_prog_load
0.01% 0.00% bench bench [.] btf_strs_data
0.01% 0.00% bench [kernel.kallsyms] [k] arch_scale_freq_tick
0.01% 0.00% bench [kernel.kallsyms] [k] irq_exit_rcu
0.01% 0.00% bench [kernel.kallsyms] [k] __irq_exit_rcu
0.01% 0.00% bench [kernel.kallsyms] [k] __do_softirq
0.01% 0.00% bench [unknown] [k] 0xffffffffc1bc5013
0.01% 0.01% bench [kernel.kallsyms] [k] vm_area_alloc
0.01% 0.00% bench [kernel.kallsyms] [k] elf_load
0.01% 0.01% bench ld-linux-x86-64.so.2 [.] do_lookup_x
0.01% 0.00% bench [kernel.kallsyms] [k] vm_brk_flags
0.01% 0.01% bench [kernel.kallsyms] [k] alloc_vmap_area
0.01% 0.00% bench bench [.] bpf_object__probe_loading
0.01% 0.00% bench bench [.] bump_rlimit_memlock
0.01% 0.00% bench bench [.] feat_supported
0.01% 0.00% bench bench [.] probe_memcg_account
0.01% 0.00% bench [kernel.kallsyms] [k] vzalloc
0.01% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.01% 0.00% bench [kernel.kallsyms] [k] __get_vm_area_node
0.00% 0.00% bench [kernel.kallsyms] [k] __mem_cgroup_charge
0.00% 0.00% bench [kernel.kallsyms] [k] _compound_head
0.00% 0.00% bench [kernel.kallsyms] [k] __set_task_blocked
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench bench [.] sigalarm_handler
0.00% 0.00% bench [kernel.kallsyms] [k] __mmput
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] arch_do_signal_or_restart
0.00% 0.00% bench [kernel.kallsyms] [k] pfn_pte
0.00% 0.00% bench [kernel.kallsyms] [k] irqentry_enter
0.00% 0.00% bench [kernel.kallsyms] [k] pte_offset_map_nolock
0.00% 0.00% bench [kernel.kallsyms] [k] clear_page_erms
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.00% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.00% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.00% 0.00% bench bench [.] libbpf_add_mem
0.00% 0.00% bench libc.so.6 [.] __memset_avx2_unaligned_erms
0.00% 0.00% bench [kernel.kallsyms] [k] x86_pmu_enable
0.00% 0.00% bench [kernel.kallsyms] [k] intel_pmu_enable_all
0.00% 0.00% bench [kernel.kallsyms] [k] error_entry
0.00% 0.00% bench [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.00% 0.00% bench [kernel.kallsyms] [k] timekeeping_update
0.00% 0.00% bench [kernel.kallsyms] [k] convert_ctx_accesses
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_prefixes.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] __register_trace_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] jump_label_text_reserved
0.00% 0.00% bench [kernel.kallsyms] [k] arch_jump_entry_size
0.00% 0.00% bench [kernel.kallsyms] [k] insn_decode
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_displacement
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_modrm
0.00% 0.00% bench [kernel.kallsyms] [k] __mem_cgroup_uncharge_folios
0.00% 0.00% bench [kernel.kallsyms] [k] tlb_finish_mmu
0.00% 0.00% bench [kernel.kallsyms] [k] __tlb_batch_free_encoded_pages
0.00% 0.00% bench [kernel.kallsyms] [k] free_pages_and_swap_cache
0.00% 0.00% bench [kernel.kallsyms] [k] folios_put_refs
0.00% 0.00% bench [kernel.kallsyms] [k] strcmp
0.00% 0.00% bench [kernel.kallsyms] [k] folio_remove_rmap_ptes
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.00% 0.00% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.00% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.00% 0.00% bench libc.so.6 [.] __strcmp_avx2
0.00% 0.00% bench [kernel.kallsyms] [k] watchdog_timer_fn
0.00% 0.00% bench [kernel.kallsyms] [k] __fdget
0.00% 0.00% bench [kernel.kallsyms] [k] native_read_msr_safe
0.00% 0.00% bench [kernel.kallsyms] [k] notify_hwp_interrupt
0.00% 0.00% bench [kernel.kallsyms] [k] update_rt_rq_load_avg
0.00% 0.00% bench [kernel.kallsyms] [k] run_rebalance_domains
0.00% 0.00% bench [kernel.kallsyms] [k] update_blocked_averages
0.00% 0.00% bench [kernel.kallsyms] [k] tmigr_requires_handle_remote
0.00% 0.00% bench [kernel.kallsyms] [k] therm_throt_process
0.00% 0.00% bench [kernel.kallsyms] [k] clockevents_program_event
0.00% 0.00% bench [kernel.kallsyms] [k] tick_program_event
0.00% 0.00% bench [kernel.kallsyms] [k] __memcg_slab_free_hook
0.00% 0.00% bench [kernel.kallsyms] [k] dput
0.00% 0.00% bench [kernel.kallsyms] [k] __dentry_kill
0.00% 0.00% bench [kernel.kallsyms] [k] dentry_free
0.00% 0.00% bench [kernel.kallsyms] [k] kmem_cache_free
0.00% 0.00% bench [kernel.kallsyms] [k] task_tick_fair
0.00% 0.00% bench [x86_pkg_temp_thermal] [k] pkg_thermal_notify
0.00% 0.00% bench [kernel.kallsyms] [k] __run_timers
0.00% 0.00% bench [kernel.kallsyms] [k] shift_arg_pages
0.00% 0.00% bench libc.so.6 [.] _IO_file_overflow@@GLIBC_2.2.5
0.00% 0.00% bench [unknown] [.] 0x000056411ba752a0
0.00% 0.00% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
0.00% 0.00% bench libc.so.6 [.] clone3
0.00% 0.00% bench [kernel.kallsyms] [k] ret_from_fork_asm
0.00% 0.00% bench [kernel.kallsyms] [k] ret_from_fork
0.00% 0.00% bench [kernel.kallsyms] [k] schedule_tail
0.00% 0.00% perf-ex [unknown] [k] 0x00007fb2140eb08b
0.00% 0.00% perf-ex [kernel.kallsyms] [k] entry_SYSCALL_64
0.00% 0.00% perf-ex [kernel.kallsyms] [k] do_syscall_64
0.00% 0.00% perf-ex [kernel.kallsyms] [k] perf_ctx_enable
0.00% 0.00% perf-ex [kernel.kallsyms] [k] x64_sys_call
0.00% 0.00% perf-ex [kernel.kallsyms] [k] __x64_sys_execve
0.00% 0.00% perf-ex [kernel.kallsyms] [k] do_execveat_common.isra.0
0.00% 0.00% perf-ex [kernel.kallsyms] [k] bprm_execve
0.00% 0.00% perf-ex [kernel.kallsyms] [k] load_elf_binary
0.00% 0.00% perf-ex [kernel.kallsyms] [k] begin_new_exec
0.00% 0.00% perf-ex [kernel.kallsyms] [k] perf_event_exec
0.00% 0.00% perf-ex [kernel.kallsyms] [k] ctx_resched
0.00% 0.00% bench [kernel.kallsyms] [k] __perf_event_task_sched_in
0.00% 0.00% bench [kernel.kallsyms] [k] perf_ctx_enable
0.00% 0.00% bench libc.so.6 [.] pthread_setaffinity_np@@GLIBC_2.34
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] __sched_setaffinity
0.00% 0.00% bench [kernel.kallsyms] [k] __set_cpus_allowed_ptr
0.00% 0.00% bench [kernel.kallsyms] [k] __set_cpus_allowed_ptr_locked
0.00% 0.00% bench [kernel.kallsyms] [k] affine_move_task
0.00% 0.00% bench [kernel.kallsyms] [k] wait_for_completion
0.00% 0.00% bench [kernel.kallsyms] [k] __wait_for_common
0.00% 0.00% bench [kernel.kallsyms] [k] schedule_timeout
0.00% 0.00% bench [kernel.kallsyms] [k] schedule
0.00% 0.00% bench [kernel.kallsyms] [k] __schedule
0.00% 0.00% bench [kernel.kallsyms] [k] nmi_restore
0.00% 0.00% bench [kernel.kallsyms] [k] intel_pmu_handle_irq
0.00% 0.00% perf-ex [kernel.kallsyms] [k] native_write_msr
0.00% 0.00% perf-ex [kernel.kallsyms] [k] x86_pmu_enable
0.00% 0.00% perf-ex [kernel.kallsyms] [k] intel_pmu_enable_all
#
# (Tip: Print event counts in CSV format with: perf stat -x,)
#
On Wed, May 1, 2024 at 7:06 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Tue, 30 Apr 2024 09:29:40 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > On Tue, Apr 30, 2024 at 6:32 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > >
> > > On Mon, 29 Apr 2024 13:25:04 -0700
> > > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> > >
> > > > On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > > >
> > > > > Hi Andrii,
> > > > >
> > > > > On Thu, 25 Apr 2024 13:31:53 -0700
> > > > > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> > > > >
> > > > > > Hey Masami,
> > > > > >
> > > > > > I can't really review most of that code as I'm completely unfamiliar
> > > > > > with all those inner workings of fprobe/ftrace/function_graph. I left
> > > > > > a few comments where there were somewhat more obvious BPF-related
> > > > > > pieces.
> > > > > >
> > > > > > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > > > > > and then with your series applied on top. Just to see if there are any
> > > > > > regressions. I think it will be a useful data point for you.
> > > > >
> > > > > Thanks for testing!
> > > > >
> > > > > >
> > > > > > You should be already familiar with the bench tool we have in BPF
> > > > > > selftests (I used it on some other patches for your tree).
> > > > >
> > > > > What patches we need?
> > > > >
> > > >
> > > > You mean for this `bench` tool? They are part of BPF selftests (under
> > > > tools/testing/selftests/bpf), you can build them by running:
> > > >
> > > > $ make RELEASE=1 -j$(nproc) bench
> > > >
> > > > After that you'll get a self-container `bench` binary, which has all
> > > > the self-contained benchmarks.
> > > >
> > > > You might also find a small script (benchs/run_bench_trigger.sh inside
> > > > BPF selftests directory) helpful, it collects final summary of the
> > > > benchmark run and optionally accepts a specific set of benchmarks. So
> > > > you can use it like this:
> > > >
> > > > $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> > > > kprobe : 18.731 ± 0.639M/s
> > > > kprobe-multi : 23.938 ± 0.612M/s
> > > >
> > > > By default it will run a wider set of benchmarks (no uprobes, but a
> > > > bunch of extra fentry/fexit tests and stuff like this).
> > > >
> > > origin:
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.329 ± 0.007M/s
> > > kretprobe-multi: 1.341 ± 0.004M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.288 ± 0.014M/s
> > > kretprobe-multi: 1.365 ± 0.002M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.329 ± 0.002M/s
> > > kretprobe-multi: 1.331 ± 0.011M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.311 ± 0.003M/s
> > > kretprobe-multi: 1.318 ± 0.002M/s
> > >
> > > patched:
> > >
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.274 ± 0.003M/s
> > > kretprobe-multi: 1.397 ± 0.002M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.307 ± 0.002M/s
> > > kretprobe-multi: 1.406 ± 0.004M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.279 ± 0.004M/s
> > > kretprobe-multi: 1.330 ± 0.014M/s
> > > # benchs/run_bench_trigger.sh
> > > kretprobe : 1.256 ± 0.010M/s
> > > kretprobe-multi: 1.412 ± 0.003M/s
> > >
> > > Hmm, in my case, it seems smaller differences (~3%?).
> > > I attached perf report results for those, but I don't see large difference.
> >
> > I ran my benchmarks on bare metal machine (and quite powerful at that,
> > you can see my numbers are almost 10x of yours), with mitigations
> > disabled, no retpolines, etc. If you have any of those mitigations it
> > might result in smaller differences, probably. If you are running
> > inside QEMU/VM, the results might differ significantly as well.
>
> I ran it on my bare metal machines again, but could not find any difference
> between them. But I think I enabled intel mitigations on, so it might make
> a difference from your result.
>
> Can you run the benchmark with perf record? If there is such differences,
> there should be recorded.

I can, yes, will try to do this week, I'm just trying to keep up with
the rest of the stuff on my plate and haven't found yet time to do
this. I'll get back to you (and I'll use the latest version of your
patch set, of course).

> e.g.
>
> # perf record -g -o perf.data-kretprobe-nopatch-raw-bpf -- bench -w2 -d5 -a trig-kretprobe
> # perf report -G -i perf.data-kretprobe-nopatch-raw-bpf -k $VMLINUX --stdio > perf-out-kretprobe-nopatch-raw-bpf
>
> I attached the results in my side.
> The interesting point is, the functions int the result are not touched by
> this series. Thus there may be another reason if you see the kretprobe
> regression.
>
> Thank you,
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
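Pieced together, the measurement recipe exchanged above amounts to the
following sketch. It assumes a kernel source tree with the BPF selftests,
root privileges, and a VMLINUX variable pointing at the matching vmlinux
with debug info; the perf.data file names follow Masami's example, and
passing explicit benchmark names to run_bench_trigger.sh is the optional
usage Andrii describes.

  #!/bin/sh
  # Build the self-contained `bench` binary from the BPF selftests.
  cd tools/testing/selftests/bpf
  make RELEASE=1 -j"$(nproc)" bench

  # Summarize throughput for the probes of interest (names are optional;
  # with no arguments a wider default set is run).
  benchs/run_bench_trigger.sh kretprobe kretprobe-multi

  # Record a call-graph profile of one benchmark run (2s warmup, 5s
  # duration) and generate a report against the matching vmlinux.
  perf record -g -o perf.data-kretprobe-nopatch-raw-bpf -- \
          bench -w2 -d5 -a trig-kretprobe
  perf report -G -i perf.data-kretprobe-nopatch-raw-bpf -k "$VMLINUX" \
          --stdio > perf-out-kretprobe-nopatch-raw-bpf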
On Thu, 25 Apr 2024 13:31:53 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

I'm just coming back from Japan (work and then a vacation), and
catching up on my email during the 6 hour layover in Detroit.

> Hey Masami,
>
> I can't really review most of that code as I'm completely unfamiliar
> with all those inner workings of fprobe/ftrace/function_graph. I left
> a few comments where there were somewhat more obvious BPF-related
> pieces.
>
> But I also did run our BPF benchmarks on probes/for-next as a baseline
> and then with your series applied on top. Just to see if there are any
> regressions. I think it will be a useful data point for you.
>
> You should be already familiar with the bench tool we have in BPF
> selftests (I used it on some other patches for your tree).

I should get familiar with your tools too.

>
> BASELINE
> ========
> kprobe : 24.634 ± 0.205M/s
> kprobe-multi : 28.898 ± 0.531M/s
> kretprobe : 10.478 ± 0.015M/s
> kretprobe-multi: 11.012 ± 0.063M/s
>
> THIS PATCH SET ON TOP
> =====================
> kprobe : 25.144 ± 0.027M/s (+2%)
> kprobe-multi : 28.909 ± 0.074M/s
> kretprobe : 9.482 ± 0.008M/s (-9.5%)
> kretprobe-multi: 13.688 ± 0.027M/s (+24%)
>
> These numbers are pretty stable and look to be more or less representative.

Thanks for running this.

>
> As you can see, kprobes got a bit faster, kprobe-multi seems to be
> about the same, though.
>
> Then (I suppose they are "legacy") kretprobes got quite noticeably
> slower, almost by 10%. Not sure why, but looks real after re-running
> benchmarks a bunch of times and getting stable results.
>
> On the other hand, multi-kretprobes got significantly faster (+24%!).
> Again, I don't know if it is expected or not, but it's a nice
> improvement.
>
> If you have any idea why kretprobes would get so much slower, it would
> be nice to look into that and see if you can mitigate the regression
> somehow. Thanks!

My guess is that this patch set helps generic use cases for tracing the
return of functions, but will likely add more overhead for single use
cases. That is, kretprobe is made to be specific for a single function,
but kretprobe-multi is more generic. Hence the generic version will
improve at the sacrifice of the specific function. I did expect as much.

That said, I think there's probably a lot of low hanging fruit that can
be done to this series to help improve the kretprobe performance. I'm
not sure we can get back to the baseline, but I'm hoping we can at
least make it much better than that 10% slowdown.

I'll be reviewing this patch set this week as I recover from jetlag.

-- Steve
On Sun, Apr 28, 2024 at 4:25 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Thu, 25 Apr 2024 13:31:53 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> I'm just coming back from Japan (work and then a vacation), and
> catching up on my email during the 6 hour layover in Detroit.
>
> > Hey Masami,
> >
> > I can't really review most of that code as I'm completely unfamiliar
> > with all those inner workings of fprobe/ftrace/function_graph. I left
> > a few comments where there were somewhat more obvious BPF-related
> > pieces.
> >
> > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > and then with your series applied on top. Just to see if there are any
> > regressions. I think it will be a useful data point for you.
> >
> > You should be already familiar with the bench tool we have in BPF
> > selftests (I used it on some other patches for your tree).
>
> I should get familiar with your tools too.
>

It's a nifty and self-contained tool to do some micro-benchmarking, I
replied to Masami with a few details on how to build and use it.

> >
> > BASELINE
> > ========
> > kprobe : 24.634 ± 0.205M/s
> > kprobe-multi : 28.898 ± 0.531M/s
> > kretprobe : 10.478 ± 0.015M/s
> > kretprobe-multi: 11.012 ± 0.063M/s
> >
> > THIS PATCH SET ON TOP
> > =====================
> > kprobe : 25.144 ± 0.027M/s (+2%)
> > kprobe-multi : 28.909 ± 0.074M/s
> > kretprobe : 9.482 ± 0.008M/s (-9.5%)
> > kretprobe-multi: 13.688 ± 0.027M/s (+24%)
> >
> > These numbers are pretty stable and look to be more or less representative.
>
> Thanks for running this.
>
> > As you can see, kprobes got a bit faster, kprobe-multi seems to be
> > about the same, though.
> >
> > Then (I suppose they are "legacy") kretprobes got quite noticeably
> > slower, almost by 10%. Not sure why, but looks real after re-running
> > benchmarks a bunch of times and getting stable results.
> >
> > On the other hand, multi-kretprobes got significantly faster (+24%!).
> > Again, I don't know if it is expected or not, but it's a nice
> > improvement.
> >
> > If you have any idea why kretprobes would get so much slower, it would
> > be nice to look into that and see if you can mitigate the regression
> > somehow. Thanks!
>
> My guess is that this patch set helps generic use cases for tracing the
> return of functions, but will likely add more overhead for single use
> cases. That is, kretprobe is made to be specific for a single function,
> but kretprobe-multi is more generic. Hence the generic version will
> improve at the sacrifice of the specific function. I did expect as much.
>
> That said, I think there's probably a lot of low hanging fruit that can
> be done to this series to help improve the kretprobe performance. I'm
> not sure we can get back to the baseline, but I'm hoping we can at
> least make it much better than that 10% slowdown.

That would certainly be appreciated, thanks! But I'm also considering
trying to switch to multi-kprobe/kretprobe automatically on libbpf
side, whenever possible, so that users can get the best performance.
There might still be situations where this can't be done, so singular
kprobe/kretprobe can't be completely deprecated, but multi variants
seems to be universally faster, so I'm going to make them a default (I
need to handle some backwards compat aspect, but that's libbpf-specific
stuff you shouldn't be concerned with).

>
> I'll be reviewing this patch set this week as I recover from jetlag.
>
> -- Steve