[PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph

Masami Hiramatsu (Google) posted 20 patches 2 months, 2 weeks ago
Hi,

Here is the 22nd version of the series to re-implement fprobe on top of
the function-graph tracer. The previous version is:

https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/

This version is rebased on v6.13-rc4, with fixes in [3/20] for x86-32
and in [5/20] for a build error.

Overview
--------
This series rewrites fprobe to run on top of the function-graph tracer.
The purposes of this change are:

 1) Remove fprobe's dependency on rethook so that we can reduce the
    duplicated return-hook code and shadow-stack usage.

 2) Make 'ftrace_regs' the common tracing interface at function
    boundaries.

1) Currently we have two (or three) different function-return hook
 implementations: the function-graph tracer and rethook (and the legacy
 kretprobe). Since this is redundant and doubles the maintenance cost,
 I would like to unify them. From the user's viewpoint, the function-
 graph tracer is very useful for grasping the execution path. For this
 purpose, it is hard to implement the function-graph tracer on top of
 rethook, but the opposite is possible. (Strictly speaking, kretprobe
 cannot use it because kretprobe requires 'pt_regs' for historical
 reasons.)

2) Currently fprobe provides 'pt_regs' to its handlers, but that is
 wrong for function entry and exit. Moreover, depending on the
 architecture, there is no way to accurately reproduce 'pt_regs'
 outside of interrupt or exception handlers. This means fprobe should
 not use 'pt_regs', because it does not rely on such exceptions.
 (Conversely, kprobe should use 'pt_regs', because it is an abstract
  interface over the software-breakpoint exception.)

This series changes fprobe to use the function-graph tracer for tracing
function entry and exit, instead of a mixture of ftrace and rethook.
Unlike rethook, which keeps a per-task list of system-wide allocated
nodes, the function graph's ret_stack is a per-task shadow stack.
Thus there is no need to set 'nr_maxactive' (the number of
pre-allocated nodes).
Also, the handlers now get 'ftrace_regs' instead of 'pt_regs'.
Since eBPF multi_kprobe/multi_kretprobe events still use 'pt_regs' as
their register interface, this series converts 'ftrace_regs' to
'pt_regs' for them. Of course this conversion yields an incomplete
'pt_regs', so users must access only the registers used for function
parameters or the return value.

Design
------
Instead of using ftrace's function entry hook directly, the new fprobe
is built on top of the function-graph's entry and return callbacks
with 'ftrace_regs'.

Since fprobe requires access to 'ftrace_regs', the architecture must
support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enable calling the function-graph
entry callback with 'ftrace_regs', and also
CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes 'ftrace_regs' to
return_to_handler.
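In Kconfig terms, an architecture opting in would select the symbols
above; this fragment is only a sketch of the shape (the exact per-arch
select lists are in the individual patches):

```
config X86
	select HAVE_DYNAMIC_FTRACE_WITH_ARGS
	select HAVE_FTRACE_GRAPH_FUNC
	select HAVE_FUNCTION_GRAPH_FREGS
```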

All fprobes share a single function-graph ops (meaning they share a
common ftrace filter), similar to kprobe-on-ftrace. This needs another
layer to find the corresponding fprobe in the common function-graph
callbacks, but it scales much better, since the number of registerable
function-graph ops is limited.

In the entry callback, fprobe runs its entry_handler and saves the
address of the 'fprobe' on the function-graph's shadow stack as data.
The return callback decodes that data to recover the 'fprobe' address
and runs the exit_handler.

fprobe introduces two hash tables. One is for the entry callback: it
looks up the fprobes associated with the function address passed to the
callback. The other is for the return callback: it checks whether a
given 'fprobe' pointer is still valid, since an fprobe can be
unregistered before its return callback runs. Thus the pointer must be
validated before it is used in the return callback.

Download
--------
This series can be applied against the v6.13-rc2 kernel.

This series can also be found in the branch below.

https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph

Thank you,

---

Masami Hiramatsu (Google) (19):
      fgraph: Get ftrace recursion lock in function_graph_enter
      fgraph: Pass ftrace_regs to entryfunc
      fgraph: Replace fgraph_ret_regs with ftrace_regs
      fgraph: Pass ftrace_regs to retfunc
      fprobe: Use ftrace_regs in fprobe entry handler
      fprobe: Use ftrace_regs in fprobe exit handler
      tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
      tracing: Add ftrace_fill_perf_regs() for perf event
      tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
      bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
      ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
      fprobe: Rewrite fprobe on function-graph tracer
      fprobe: Add fprobe_header encoding feature
      tracing/fprobe: Remove nr_maxactive from fprobe
      selftests: ftrace: Remove obsolate maxactive syntax check
      selftests/ftrace: Add a test case for repeating register/unregister fprobe
      Documentation: probes: Update fprobe on function-graph tracer
      ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr
      bpf: Use ftrace_get_symaddr() for kprobe_multi probes

Sven Schnelle (1):
      s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC


 Documentation/trace/fprobe.rst                     |   42 +
 arch/arm64/Kconfig                                 |    2 
 arch/arm64/include/asm/Kbuild                      |    1 
 arch/arm64/include/asm/ftrace.h                    |   51 +-
 arch/arm64/kernel/asm-offsets.c                    |   12 
 arch/arm64/kernel/entry-ftrace.S                   |   32 +
 arch/arm64/kernel/ftrace.c                         |   78 ++
 arch/loongarch/Kconfig                             |    4 
 arch/loongarch/include/asm/fprobe.h                |   12 
 arch/loongarch/include/asm/ftrace.h                |   32 -
 arch/loongarch/kernel/asm-offsets.c                |   12 
 arch/loongarch/kernel/ftrace_dyn.c                 |   10 
 arch/loongarch/kernel/mcount.S                     |   17 -
 arch/loongarch/kernel/mcount_dyn.S                 |   14 
 arch/powerpc/Kconfig                               |    1 
 arch/powerpc/include/asm/ftrace.h                  |   13 
 arch/powerpc/kernel/trace/ftrace.c                 |    8 
 arch/powerpc/kernel/trace/ftrace_64_pg.c           |   16 
 arch/riscv/Kconfig                                 |    3 
 arch/riscv/include/asm/Kbuild                      |    1 
 arch/riscv/include/asm/ftrace.h                    |   45 +
 arch/riscv/kernel/ftrace.c                         |   17 -
 arch/riscv/kernel/mcount.S                         |   24 -
 arch/s390/Kconfig                                  |    4 
 arch/s390/include/asm/fprobe.h                     |   10 
 arch/s390/include/asm/ftrace.h                     |   37 +
 arch/s390/kernel/asm-offsets.c                     |    6 
 arch/s390/kernel/entry.h                           |    1 
 arch/s390/kernel/ftrace.c                          |   48 -
 arch/s390/kernel/mcount.S                          |   23 -
 arch/x86/Kconfig                                   |    4 
 arch/x86/include/asm/Kbuild                        |    1 
 arch/x86/include/asm/ftrace.h                      |   54 +-
 arch/x86/kernel/ftrace.c                           |   47 +
 arch/x86/kernel/ftrace_32.S                        |   13 
 arch/x86/kernel/ftrace_64.S                        |   17 -
 include/asm-generic/fprobe.h                       |   46 +
 include/linux/fprobe.h                             |   62 +-
 include/linux/ftrace.h                             |  116 +++
 include/linux/ftrace_regs.h                        |    2 
 kernel/trace/Kconfig                               |   22 -
 kernel/trace/bpf_trace.c                           |   28 +
 kernel/trace/fgraph.c                              |   65 +-
 kernel/trace/fprobe.c                              |  664 +++++++++++++++-----
 kernel/trace/ftrace.c                              |    6 
 kernel/trace/trace.h                               |    6 
 kernel/trace/trace_fprobe.c                        |  146 ++--
 kernel/trace/trace_functions_graph.c               |   10 
 kernel/trace/trace_irqsoff.c                       |    6 
 kernel/trace/trace_probe_tmpl.h                    |    2 
 kernel/trace/trace_sched_wakeup.c                  |    6 
 kernel/trace/trace_selftest.c                      |   11 
 lib/test_fprobe.c                                  |   51 --
 samples/fprobe/fprobe_example.c                    |    4 
 .../test.d/dynevent/add_remove_fprobe_repeat.tc    |   19 +
 .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |    4 
 56 files changed, 1318 insertions(+), 670 deletions(-)
 create mode 100644 arch/loongarch/include/asm/fprobe.h
 create mode 100644 arch/s390/include/asm/fprobe.h
 create mode 100644 include/asm-generic/fprobe.h
 create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
Re: [PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
Posted by Jiri Olsa 2 months, 1 week ago
On Thu, Dec 26, 2024 at 02:11:16PM +0900, Masami Hiramatsu (Google) wrote:
> Hi,
> 
> Here is the 22nd version of the series to re-implement the fprobe on
> function-graph tracer. The previous version is;
> 
> https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/
> 
> This version is rebased on v6.13-rc4 with fixes on [3/20] for x86-32 and
> [5/20] for build error.


hi,
I ran the bench and I'm seeing native_sched_clock being used
again in the kretprobe-multi bench:

     5.85%  bench            [kernel.kallsyms]                                        [k] native_sched_clock
            |
            ---native_sched_clock
               sched_clock
               |
                --5.83%--trace_clock_local
                          ftrace_return_to_handler
                          return_to_handler
                          syscall
                          bpf_prog_test_run_opts
                          trigger_producer_batch
                          start_thread
                          __GI___clone3

I recall we tried to fix that before with the change in [1], but that
was later replaced with the changes in [2].

When I remove the trace_clock_local call in __ftrace_return_to_handler,
kretprobe-multi gets much faster (see the last block below), so it
seems worth making it optional.

there's some decrease in the kprobe-multi benchmark compared to the
base numbers, which I'm not yet sure about, but other than that it
seems ok

base:
        kprobe         :   12.873 ± 0.011M/s
        kprobe-multi   :   13.088 ± 0.052M/s
        kretprobe      :    6.339 ± 0.003M/s
        kretprobe-multi:    7.240 ± 0.002M/s

fprobe_on_fgraph:
        kprobe         :   12.816 ± 0.002M/s
        kprobe-multi   :   12.126 ± 0.004M/s
        kretprobe      :    6.305 ± 0.018M/s
        kretprobe-multi:    7.740 ± 0.003M/s

removed native_sched_clock call:
        kprobe         :   12.850 ± 0.006M/s
        kprobe-multi   :   12.115 ± 0.006M/s
        kretprobe      :    6.270 ± 0.017M/s
        kretprobe-multi:    9.190 ± 0.005M/s


happy new year ;-) thanks,

jirka


[1] https://lore.kernel.org/bpf/172615389864.133222.14452329708227900626.stgit@devnote2/
[2] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/

Re: [PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
Posted by Masami Hiramatsu (Google) 1 month, 4 weeks ago
On Thu, 2 Jan 2025 14:20:58 +0100
Jiri Olsa <olsajiri@gmail.com> wrote:

> On Thu, Dec 26, 2024 at 02:11:16PM +0900, Masami Hiramatsu (Google) wrote:
> > Hi,
> > 
> > Here is the 22nd version of the series to re-implement the fprobe on
> > function-graph tracer. The previous version is;
> > 
> > https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/
> > 
> > This version is rebased on v6.13-rc4 with fixes on [3/20] for x86-32 and
> > [5/20] for build error.
> 
> 
> hi,
> I ran the bench and I'm seeing native_sched_clock being used
> again kretprobe_multi bench:
> 
>      5.85%  bench            [kernel.kallsyms]                                        [k] native_sched_clock
>             |
>             ---native_sched_clock
>                sched_clock
>                |
>                 --5.83%--trace_clock_local
>                           ftrace_return_to_handler
>                           return_to_handler
>                           syscall
>                           bpf_prog_test_run_opts
>                           trigger_producer_batch
>                           start_thread
>                           __GI___clone3
> 
> I recall we tried to fix that before with [1] change, but that replaced
> later with [2] changes
> 
> When I remove the trace_clock_local call in __ftrace_return_to_handler
> than the kretprobe-multi gets much faster (see last block below), so it
> seems worth to make it optional
> 
> there's some decrease in kprobe_multi benchmark compared to base numbers,
> which I'm not sure yet why, but other than that it seems ok
> 
> base:
>         kprobe         :   12.873 ± 0.011M/s
>         kprobe-multi   :   13.088 ± 0.052M/s
>         kretprobe      :    6.339 ± 0.003M/s
>         kretprobe-multi:    7.240 ± 0.002M/s
> 
> fprobe_on_fgraph:
>         kprobe         :   12.816 ± 0.002M/s
>         kprobe-multi   :   12.126 ± 0.004M/s
>         kretprobe      :    6.305 ± 0.018M/s
>         kretprobe-multi:    7.740 ± 0.003M/s
> 
> removed native_sched_clock call:
>         kprobe         :   12.850 ± 0.006M/s
>         kprobe-multi   :   12.115 ± 0.006M/s
>         kretprobe      :    6.270 ± 0.017M/s
>         kretprobe-multi:    9.190 ± 0.005M/s

Hmm, this still looks good. OK, let me move the trace_clock_local()
call into the function-graph tracer side.

Thanks!

> 
> 
> happy new year ;-) thanks,
> 
> jirka
> 
> 
> [1] https://lore.kernel.org/bpf/172615389864.133222.14452329708227900626.stgit@devnote2/
> [2] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
> 
> > 
> > Overview
> > --------
> > This series rewrites the fprobe on this function-graph.
> > The purposes of this change are;
> > 
> >  1) Remove dependency of the rethook from fprobe so that we can reduce
> >    the return hook code and shadow stack.
> > 
> >  2) Make 'ftrace_regs' the common trace interface for the function
> >    boundary.
> > 
> > 1) Currently we have 2(or 3) different function return hook codes,
> >  the function-graph tracer and rethook (and legacy kretprobe).
> >  But since this  is redundant and needs double maintenance cost,
> >  I would like to unify those. From the user's viewpoint, function-
> >  graph tracer is very useful to grasp the execution path. For this
> >  purpose, it is hard to use the rethook in the function-graph
> >  tracer, but the opposite is possible. (Strictly speaking, kretprobe
> >  can not use it because it requires 'pt_regs' for historical reasons.)
> > 
> > 2) Now the fprobe provides the 'pt_regs' for its handler, but that is
> >  wrong for the function entry and exit. Moreover, depending on the
> >  architecture, there is no way to accurately reproduce 'pt_regs'
> >  outside of interrupt or exception handlers. This means fprobe should
> >  not use 'pt_regs' because it does not use such exceptions.
> >  (Conversely, kprobe should use 'pt_regs' because it is an abstract
> >   interface of the software breakpoint exception.)
> > 
> > This series changes fprobe to use function-graph tracer for tracing
> > function entry and exit, instead of mixture of ftrace and rethook.
> > Unlike the rethook which is a per-task list of system-wide allocated
> > nodes, the function graph's ret_stack is a per-task shadow stack.
> > Thus it does not need to set 'nr_maxactive' (which is the number of
> > pre-allocated nodes).
> > Also the handlers will get the 'ftrace_regs' instead of 'pt_regs'.
> > Since eBPF mulit_kprobe/multi_kretprobe events still use 'pt_regs' as
> > their register interface, this changes it to convert 'ftrace_regs' to
> > 'pt_regs'. Of course this conversion makes an incomplete 'pt_regs',
> > so users must access only registers for function parameters or
> > return value. 
> > 
> > Design
> > ------
> > Instead of using ftrace's function entry hook directly, the new fprobe
> > is built on top of the function-graph's entry and return callbacks
> > with 'ftrace_regs'.
> > 
> > Since the fprobe requires access to 'ftrace_regs', the architecture
> > must support CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS and
> > CONFIG_HAVE_FTRACE_GRAPH_FUNC, which enables to call function-graph
> > entry callback with 'ftrace_regs', and also
> > CONFIG_HAVE_FUNCTION_GRAPH_FREGS, which passes the ftrace_regs to
> > return_to_handler.
> > 
> > All fprobes share a single function-graph ops (means shares a common
> > ftrace filter) similar to the kprobe-on-ftrace. This needs another
> > layer to find corresponding fprobe in the common function-graph
> > callbacks, but has much better scalability, since the number of
> > registered function-graph ops is limited.
> > 
> > In the entry callback, the fprobe runs its entry_handler and saves the
> > address of 'fprobe' on the function-graph's shadow stack as data. The
> > return callback decodes the data to get the 'fprobe' address, and runs
> > the exit_handler.
> > 
> > The fprobe introduces two hash-tables, one is for entry callback which
> > searches fprobes related to the given function address passed by entry
> > callback. The other is for a return callback which checks if the given
> > 'fprobe' data structure pointer is still valid. Note that it is
> > possible to unregister fprobe before the return callback runs. Thus
> > the address validation must be done before using it in the return
> > callback.
> > 
> > Download
> > --------
> > This series can be applied against the v6.13-rc2 kernel.
> > 
> > This series can also be found below branch.
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git/log/?h=topic/fprobe-on-fgraph
> > 
> > Thank you,
> > 
> > ---
> > 
> > Masami Hiramatsu (Google) (19):
> >       fgraph: Get ftrace recursion lock in function_graph_enter
> >       fgraph: Pass ftrace_regs to entryfunc
> >       fgraph: Replace fgraph_ret_regs with ftrace_regs
> >       fgraph: Pass ftrace_regs to retfunc
> >       fprobe: Use ftrace_regs in fprobe entry handler
> >       fprobe: Use ftrace_regs in fprobe exit handler
> >       tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
> >       tracing: Add ftrace_fill_perf_regs() for perf event
> >       tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
> >       bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
> >       ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
> >       fprobe: Rewrite fprobe on function-graph tracer
> >       fprobe: Add fprobe_header encoding feature
> >       tracing/fprobe: Remove nr_maxactive from fprobe
> >       selftests: ftrace: Remove obsolete maxactive syntax check
> >       selftests/ftrace: Add a test case for repeating register/unregister fprobe
> >       Documentation: probes: Update fprobe on function-graph tracer
> >       ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr
> >       bpf: Use ftrace_get_symaddr() for kprobe_multi probes
> > 
> > Sven Schnelle (1):
> >       s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC
> > 
> > 
> >  Documentation/trace/fprobe.rst                     |   42 +
> >  arch/arm64/Kconfig                                 |    2 
> >  arch/arm64/include/asm/Kbuild                      |    1 
> >  arch/arm64/include/asm/ftrace.h                    |   51 +-
> >  arch/arm64/kernel/asm-offsets.c                    |   12 
> >  arch/arm64/kernel/entry-ftrace.S                   |   32 +
> >  arch/arm64/kernel/ftrace.c                         |   78 ++
> >  arch/loongarch/Kconfig                             |    4 
> >  arch/loongarch/include/asm/fprobe.h                |   12 
> >  arch/loongarch/include/asm/ftrace.h                |   32 -
> >  arch/loongarch/kernel/asm-offsets.c                |   12 
> >  arch/loongarch/kernel/ftrace_dyn.c                 |   10 
> >  arch/loongarch/kernel/mcount.S                     |   17 -
> >  arch/loongarch/kernel/mcount_dyn.S                 |   14 
> >  arch/powerpc/Kconfig                               |    1 
> >  arch/powerpc/include/asm/ftrace.h                  |   13 
> >  arch/powerpc/kernel/trace/ftrace.c                 |    8 
> >  arch/powerpc/kernel/trace/ftrace_64_pg.c           |   16 
> >  arch/riscv/Kconfig                                 |    3 
> >  arch/riscv/include/asm/Kbuild                      |    1 
> >  arch/riscv/include/asm/ftrace.h                    |   45 +
> >  arch/riscv/kernel/ftrace.c                         |   17 -
> >  arch/riscv/kernel/mcount.S                         |   24 -
> >  arch/s390/Kconfig                                  |    4 
> >  arch/s390/include/asm/fprobe.h                     |   10 
> >  arch/s390/include/asm/ftrace.h                     |   37 +
> >  arch/s390/kernel/asm-offsets.c                     |    6 
> >  arch/s390/kernel/entry.h                           |    1 
> >  arch/s390/kernel/ftrace.c                          |   48 -
> >  arch/s390/kernel/mcount.S                          |   23 -
> >  arch/x86/Kconfig                                   |    4 
> >  arch/x86/include/asm/Kbuild                        |    1 
> >  arch/x86/include/asm/ftrace.h                      |   54 +-
> >  arch/x86/kernel/ftrace.c                           |   47 +
> >  arch/x86/kernel/ftrace_32.S                        |   13 
> >  arch/x86/kernel/ftrace_64.S                        |   17 -
> >  include/asm-generic/fprobe.h                       |   46 +
> >  include/linux/fprobe.h                             |   62 +-
> >  include/linux/ftrace.h                             |  116 +++
> >  include/linux/ftrace_regs.h                        |    2 
> >  kernel/trace/Kconfig                               |   22 -
> >  kernel/trace/bpf_trace.c                           |   28 +
> >  kernel/trace/fgraph.c                              |   65 +-
> >  kernel/trace/fprobe.c                              |  664 +++++++++++++++-----
> >  kernel/trace/ftrace.c                              |    6 
> >  kernel/trace/trace.h                               |    6 
> >  kernel/trace/trace_fprobe.c                        |  146 ++--
> >  kernel/trace/trace_functions_graph.c               |   10 
> >  kernel/trace/trace_irqsoff.c                       |    6 
> >  kernel/trace/trace_probe_tmpl.h                    |    2 
> >  kernel/trace/trace_sched_wakeup.c                  |    6 
> >  kernel/trace/trace_selftest.c                      |   11 
> >  lib/test_fprobe.c                                  |   51 --
> >  samples/fprobe/fprobe_example.c                    |    4 
> >  .../test.d/dynevent/add_remove_fprobe_repeat.tc    |   19 +
> >  .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |    4 
> >  56 files changed, 1318 insertions(+), 670 deletions(-)
> >  create mode 100644 arch/loongarch/include/asm/fprobe.h
> >  create mode 100644 arch/s390/include/asm/fprobe.h
> >  create mode 100644 include/asm-generic/fprobe.h
> >  create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
> > 
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
Re: [PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
Posted by Andrii Nakryiko 1 month, 4 weeks ago
On Thu, Jan 2, 2025 at 5:21 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Thu, Dec 26, 2024 at 02:11:16PM +0900, Masami Hiramatsu (Google) wrote:
> > Hi,
> >
> > Here is the 22nd version of the series to re-implement fprobe on the
> > function-graph tracer. The previous version is:
> >
> > https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/
> >
> > This version is rebased on v6.13-rc4, with fixes in [3/20] for x86-32 and
> > in [5/20] for a build error.
>
>
> hi,
> I ran the bench and I'm seeing native_sched_clock being used
> again in the kretprobe-multi bench:
>
>      5.85%  bench            [kernel.kallsyms]                                        [k] native_sched_clock
>             |
>             ---native_sched_clock
>                sched_clock
>                |
>                 --5.83%--trace_clock_local
>                           ftrace_return_to_handler
>                           return_to_handler
>                           syscall
>                           bpf_prog_test_run_opts

completely unrelated, Jiri, but we should stop using
bpf_prog_test_run_opts() for benchmarking. It goes through FD
refcounting, which is a tiny unnecessary overhead, but more importantly
it causes cache-line bouncing between multiple CPUs (when doing
multi-threaded benchmarks), which skews and limits results.

>                           trigger_producer_batch
>                           start_thread
>                           __GI___clone3
>
> I recall we tried to fix that before with the [1] change, but that was
> replaced later by the [2] changes.
>
> When I remove the trace_clock_local call in __ftrace_return_to_handler,
> the kretprobe-multi gets much faster (see the last block below), so it
> seems worth making it optional.
>
> there's some decrease in the kprobe_multi benchmark compared to the base
> numbers, which I'm not sure about yet, but other than that it seems ok.
>
> base:
>         kprobe         :   12.873 ± 0.011M/s
>         kprobe-multi   :   13.088 ± 0.052M/s
>         kretprobe      :    6.339 ± 0.003M/s
>         kretprobe-multi:    7.240 ± 0.002M/s
>
> fprobe_on_fgraph:
>         kprobe         :   12.816 ± 0.002M/s
>         kprobe-multi   :   12.126 ± 0.004M/s
>         kretprobe      :    6.305 ± 0.018M/s
>         kretprobe-multi:    7.740 ± 0.003M/s
>
> removed native_sched_clock call:
>         kprobe         :   12.850 ± 0.006M/s
>         kprobe-multi   :   12.115 ± 0.006M/s
>         kretprobe      :    6.270 ± 0.017M/s
>         kretprobe-multi:    9.190 ± 0.005M/s
>
>
> happy new year ;-) thanks,
>
> jirka
>
>
> [1] https://lore.kernel.org/bpf/172615389864.133222.14452329708227900626.stgit@devnote2/
> [2] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
>

[...]
Re: [PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
Posted by Jiri Olsa 1 month, 3 weeks ago
On Fri, Jan 10, 2025 at 04:04:37PM -0800, Andrii Nakryiko wrote:
> On Thu, Jan 2, 2025 at 5:21 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Thu, Dec 26, 2024 at 02:11:16PM +0900, Masami Hiramatsu (Google) wrote:
> > > Hi,
> > >
> > > Here is the 22nd version of the series to re-implement fprobe on the
> > > function-graph tracer. The previous version is:
> > >
> > > https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/
> > >
> > > This version is rebased on v6.13-rc4, with fixes in [3/20] for x86-32 and
> > > in [5/20] for a build error.
> >
> >
> > hi,
> > I ran the bench and I'm seeing native_sched_clock being used
> > again in the kretprobe-multi bench:
> >
> >      5.85%  bench            [kernel.kallsyms]                                        [k] native_sched_clock
> >             |
> >             ---native_sched_clock
> >                sched_clock
> >                |
> >                 --5.83%--trace_clock_local
> >                           ftrace_return_to_handler
> >                           return_to_handler
> >                           syscall
> >                           bpf_prog_test_run_opts
> 
> completely unrelated, Jiri, but we should stop using
> bpf_prog_test_run_opts() for benchmarking. It goes through FD
> refcounting, which is a tiny unnecessary overhead, but more importantly
> it causes cache-line bouncing between multiple CPUs (when doing
> multi-threaded benchmarks), which skews and limits results.

so you mean switching to directly attaching to/hitting kernel functions,
or, perhaps better, having a kernel module for that?

jirka

> 
> >                           trigger_producer_batch
> >                           start_thread
> >                           __GI___clone3
> >
> > I recall we tried to fix that before with the [1] change, but that was
> > replaced later by the [2] changes.
> >
> > When I remove the trace_clock_local call in __ftrace_return_to_handler,
> > the kretprobe-multi gets much faster (see the last block below), so it
> > seems worth making it optional.
> >
> > there's some decrease in the kprobe_multi benchmark compared to the base
> > numbers, which I'm not sure about yet, but other than that it seems ok.
> >
> > base:
> >         kprobe         :   12.873 ± 0.011M/s
> >         kprobe-multi   :   13.088 ± 0.052M/s
> >         kretprobe      :    6.339 ± 0.003M/s
> >         kretprobe-multi:    7.240 ± 0.002M/s
> >
> > fprobe_on_fgraph:
> >         kprobe         :   12.816 ± 0.002M/s
> >         kprobe-multi   :   12.126 ± 0.004M/s
> >         kretprobe      :    6.305 ± 0.018M/s
> >         kretprobe-multi:    7.740 ± 0.003M/s
> >
> > removed native_sched_clock call:
> >         kprobe         :   12.850 ± 0.006M/s
> >         kprobe-multi   :   12.115 ± 0.006M/s
> >         kretprobe      :    6.270 ± 0.017M/s
> >         kretprobe-multi:    9.190 ± 0.005M/s
> >
> >
> > happy new year ;-) thanks,
> >
> > jirka
> >
> >
> > [1] https://lore.kernel.org/bpf/172615389864.133222.14452329708227900626.stgit@devnote2/
> > [2] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
> >
> 
> [...]
Re: [PATCH v22 00/20] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
Posted by Andrii Nakryiko 1 month, 3 weeks ago
On Tue, Jan 14, 2025 at 7:12 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Fri, Jan 10, 2025 at 04:04:37PM -0800, Andrii Nakryiko wrote:
> > On Thu, Jan 2, 2025 at 5:21 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Thu, Dec 26, 2024 at 02:11:16PM +0900, Masami Hiramatsu (Google) wrote:
> > > > Hi,
> > > >
> > > > Here is the 22nd version of the series to re-implement fprobe on the
> > > > function-graph tracer. The previous version is:
> > > >
> > > > https://lore.kernel.org/all/173379652547.973433.2311391879173461183.stgit@devnote2/
> > > >
> > > > This version is rebased on v6.13-rc4, with fixes in [3/20] for x86-32 and
> > > > in [5/20] for a build error.
> > >
> > >
> > > hi,
> > > I ran the bench and I'm seeing native_sched_clock being used
> > > again in the kretprobe-multi bench:
> > >
> > >      5.85%  bench            [kernel.kallsyms]                                        [k] native_sched_clock
> > >             |
> > >             ---native_sched_clock
> > >                sched_clock
> > >                |
> > >                 --5.83%--trace_clock_local
> > >                           ftrace_return_to_handler
> > >                           return_to_handler
> > >                           syscall
> > >                           bpf_prog_test_run_opts
> >
> > completely unrelated, Jiri, but we should stop using
> > bpf_prog_test_run_opts() for benchmarking. It goes through FD
> > refcounting, which is a tiny unnecessary overhead, but more importantly
> > it causes cache-line bouncing between multiple CPUs (when doing
> > multi-threaded benchmarks), which skews and limits results.
>
> so you mean switching to directly attaching to/hitting kernel functions,
> or, perhaps better, having a kernel module for that?
>

yes, a cheap syscall (getpgid or something). Not a kernel module, that's
a logistical hassle.

> jirka
>
> >
> > >                           trigger_producer_batch
> > >                           start_thread
> > >                           __GI___clone3
> > >
> > > I recall we tried to fix that before with the [1] change, but that was
> > > replaced later by the [2] changes.
> > >
> > > When I remove the trace_clock_local call in __ftrace_return_to_handler,
> > > the kretprobe-multi gets much faster (see the last block below), so it
> > > seems worth making it optional.
> > >
> > > there's some decrease in the kprobe_multi benchmark compared to the base
> > > numbers, which I'm not sure about yet, but other than that it seems ok.
> > >
> > > base:
> > >         kprobe         :   12.873 ± 0.011M/s
> > >         kprobe-multi   :   13.088 ± 0.052M/s
> > >         kretprobe      :    6.339 ± 0.003M/s
> > >         kretprobe-multi:    7.240 ± 0.002M/s
> > >
> > > fprobe_on_fgraph:
> > >         kprobe         :   12.816 ± 0.002M/s
> > >         kprobe-multi   :   12.126 ± 0.004M/s
> > >         kretprobe      :    6.305 ± 0.018M/s
> > >         kretprobe-multi:    7.740 ± 0.003M/s
> > >
> > > removed native_sched_clock call:
> > >         kprobe         :   12.850 ± 0.006M/s
> > >         kprobe-multi   :   12.115 ± 0.006M/s
> > >         kretprobe      :    6.270 ± 0.017M/s
> > >         kretprobe-multi:    9.190 ± 0.005M/s
> > >
> > >
> > > happy new year ;-) thanks,
> > >
> > > jirka
> > >
> > >
> > > [1] https://lore.kernel.org/bpf/172615389864.133222.14452329708227900626.stgit@devnote2/
> > > [2] https://lore.kernel.org/all/20240914214805.779822616@goodmis.org/
> > >
> >
> > [...]
[PATCH] fgraph: Move trace_clock_local() for return time to function_graph tracer
Posted by Masami Hiramatsu (Google) 1 month, 4 weeks ago
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Since ftrace_graph_ret::rettime is only referred to in the function_graph
tracer, the trace_clock_local() call in fgraph is just overhead for
other fgraph users.

Move the trace_clock_local() call for recording the return time into the
function_graph tracer, and just zero the rettime field on the fgraph side.
The rettime field is then updated by one function_graph tracer instance
and cached for the other function_graph tracer instances.

According to Jiri's report[1], removing this call improves fprobe
performance by ~27%.

[1] https://lore.kernel.org/all/Z3aSuql3fnXMVMoM@krava/

Reported-by: Jiri Olsa <olsajiri@gmail.com>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 include/linux/ftrace.h               |   11 +++++++++++
 kernel/trace/fgraph.c                |    2 +-
 kernel/trace/trace_functions_graph.c |    2 ++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 07092dfb21a4..a2fb7360d6e2 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1155,6 +1155,17 @@ struct ftrace_graph_ret {
 	unsigned long long rettime;
 } __packed;
 
+static inline void ftrace_graph_ret_record_time(struct ftrace_graph_ret *trace)
+{
+	if (!trace->rettime)
+		trace->rettime = trace_clock_local();
+}
+
+static inline void ftrace_graph_ret_init_time(struct ftrace_graph_ret *trace)
+{
+	trace->rettime = 0;
+}
+
 struct fgraph_ops;
 
 /* Type of the callback handlers for tracing function graph*/
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index c928527251e3..bc1e5f0493b6 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -826,7 +826,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
 		return (unsigned long)panic;
 	}
 
-	trace.rettime = trace_clock_local();
+	ftrace_graph_ret_init_time(&trace);
 	if (fregs)
 		ftrace_regs_set_instruction_pointer(fregs, ret);
 
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index d0e4f412c298..7e4d91d42d8e 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -327,6 +327,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	int size;
 	int cpu;
 
+	ftrace_graph_ret_record_time(trace);
 	ftrace_graph_addr_finish(gops, trace);
 
 	if (*task_var & TRACE_GRAPH_NOTRACE) {
@@ -361,6 +362,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
 	struct fgraph_times *ftimes;
 	int size;
 
+	ftrace_graph_ret_record_time(trace);
 	ftrace_graph_addr_finish(gops, trace);
 
 	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
Re: [PATCH] fgraph: Move trace_clock_local() for return time to function_graph tracer
Posted by Steven Rostedt 1 month, 3 weeks ago
On Sun, 12 Jan 2025 14:26:35 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:

> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> Since ftrace_graph_ret::rettime is only referred to in the function_graph
> tracer, the trace_clock_local() call in fgraph is just overhead for
> other fgraph users.
> 
> Move the trace_clock_local() call for recording the return time into the
> function_graph tracer, and just zero the rettime field on the fgraph side.
> The rettime field is then updated by one function_graph tracer instance
> and cached for the other function_graph tracer instances.
> 
> According to Jiri's report[1], removing this call improves fprobe
> performance by ~27%.
> 
> [1] https://lore.kernel.org/all/Z3aSuql3fnXMVMoM@krava/
> 
> Reported-by: Jiri Olsa <olsajiri@gmail.com>
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

I'd rather just nuke the calltime and rettime from struct ftrace_graph_ret.

How about something like this instead?

[ based on: https://lore.kernel.org/linux-trace-kernel/20250113183124.61767419@gandalf.local.home/ ]

I can start testing it (it compiles and boots).

 include/linux/ftrace.h               |  2 --
 kernel/trace/fgraph.c                |  1 -
 kernel/trace/trace.h                 |  4 +++-
 kernel/trace/trace_entries.h         |  8 +++----
 kernel/trace/trace_functions_graph.c | 33 ++++++++++++++++------------
 kernel/trace/trace_irqsoff.c         |  5 +++--
 kernel/trace/trace_sched_wakeup.c    |  6 +++--
 7 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index aa9ddd1e4bb6..848ddf0d8f89 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1061,8 +1061,6 @@ struct ftrace_graph_ret {
 	int depth;
 	/* Number of functions that overran the depth limit for current task */
 	unsigned int overrun;
-	unsigned long long calltime;
-	unsigned long long rettime;
 } __packed;
 
 struct fgraph_ops;
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 30e3ddc8a8a8..a7e6c8488b7f 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -818,7 +818,6 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
 		return (unsigned long)panic;
 	}
 
-	trace.rettime = trace_clock_local();
 #ifdef CONFIG_FUNCTION_GRAPH_RETVAL
 	trace.retval = fgraph_ret_regs_return_value(ret_regs);
 #endif
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9691b47b5f3d..467a8143fbb9 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -911,7 +911,9 @@ extern int __trace_graph_retaddr_entry(struct trace_array *tr,
 				unsigned long retaddr);
 extern void __trace_graph_return(struct trace_array *tr,
 				 struct ftrace_graph_ret *trace,
-				 unsigned int trace_ctx);
+				 unsigned int trace_ctx,
+				 u64 calltime, u64 rettime);
+
 extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
 extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
 extern void free_fgraph_ops(struct trace_array *tr);
diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
index 82fd174ebbe0..fbfb396905a6 100644
--- a/kernel/trace/trace_entries.h
+++ b/kernel/trace/trace_entries.h
@@ -124,8 +124,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
 		__field_packed(	unsigned long,	ret,		retval	)
 		__field_packed(	int,		ret,		depth	)
 		__field_packed(	unsigned int,	ret,		overrun	)
-		__field_packed(	unsigned long long, ret,	calltime)
-		__field_packed(	unsigned long long, ret,	rettime	)
+		__field(unsigned long long,	calltime		)
+		__field(unsigned long long,	rettime			)
 	),
 
 	F_printk("<-- %ps (%d) (start: %llx  end: %llx) over: %d retval: %lx",
@@ -146,8 +146,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
 		__field_packed(	unsigned long,	ret,		func	)
 		__field_packed(	int,		ret,		depth	)
 		__field_packed(	unsigned int,	ret,		overrun	)
-		__field_packed(	unsigned long long, ret,	calltime)
-		__field_packed(	unsigned long long, ret,	rettime	)
+		__field(unsigned long long,	calltime		)
+		__field(unsigned long long,	rettime			)
 	),
 
 	F_printk("<-- %ps (%d) (start: %llx  end: %llx) over: %d",
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 5504b5e4e7b4..7bff133c1543 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -270,12 +270,10 @@ __trace_graph_function(struct trace_array *tr,
 	struct ftrace_graph_ret ret = {
 		.func     = ip,
 		.depth    = 0,
-		.calltime = time,
-		.rettime  = time,
 	};
 
 	__trace_graph_entry(tr, &ent, trace_ctx);
-	__trace_graph_return(tr, &ret, trace_ctx);
+	__trace_graph_return(tr, &ret, trace_ctx, time, time);
 }
 
 void
@@ -287,8 +285,9 @@ trace_graph_function(struct trace_array *tr,
 }
 
 void __trace_graph_return(struct trace_array *tr,
-				struct ftrace_graph_ret *trace,
-				unsigned int trace_ctx)
+			  struct ftrace_graph_ret *trace,
+			  unsigned int trace_ctx,
+			  u64 calltime, u64 rettime)
 {
 	struct ring_buffer_event *event;
 	struct trace_buffer *buffer = tr->array_buffer.buffer;
@@ -300,6 +299,8 @@ void __trace_graph_return(struct trace_array *tr,
 		return;
 	entry	= ring_buffer_event_data(event);
 	entry->ret				= *trace;
+	entry->calltime				= calltime;
+	entry->rettime				= rettime;
 	trace_buffer_unlock_commit_nostack(buffer, event);
 }
 
@@ -322,10 +323,13 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	struct fgraph_times *ftimes;
 	unsigned long flags;
 	unsigned int trace_ctx;
+	u64 calltime, rettime;
 	long disabled;
 	int size;
 	int cpu;
 
+	rettime = trace_clock_local();
+
 	ftrace_graph_addr_finish(gops, trace);
 
 	if (*task_var & TRACE_GRAPH_NOTRACE) {
@@ -339,7 +343,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 
 	handle_nosleeptime(trace, ftimes, size);
 
-	trace->calltime = ftimes->calltime;
+	calltime = ftimes->calltime;
 
 	local_irq_save(flags);
 	cpu = raw_smp_processor_id();
@@ -347,7 +351,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	disabled = atomic_inc_return(&data->disabled);
 	if (likely(disabled == 1)) {
 		trace_ctx = tracing_gen_ctx_flags(flags);
-		__trace_graph_return(tr, trace, trace_ctx);
+		__trace_graph_return(tr, trace, trace_ctx, calltime, rettime);
 	}
 	atomic_dec(&data->disabled);
 	local_irq_restore(flags);
@@ -372,10 +376,8 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
 
 	handle_nosleeptime(trace, ftimes, size);
 
-	trace->calltime = ftimes->calltime;
-
 	if (tracing_thresh &&
-	    (trace->rettime - ftimes->calltime < tracing_thresh))
+	    (trace_clock_local() - ftimes->calltime < tracing_thresh))
 		return;
 	else
 		trace_graph_return(trace, gops);
@@ -861,7 +863,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 
 	graph_ret = &ret_entry->ret;
 	call = &entry->graph_ent;
-	duration = graph_ret->rettime - graph_ret->calltime;
+	duration = ret_entry->rettime - ret_entry->calltime;
 
 	func = call->func + iter->tr->text_delta;
 
@@ -1142,11 +1144,14 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s,
 }
 
 static enum print_line_t
-print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
+print_graph_return(struct ftrace_graph_ret_entry *retentry, struct trace_seq *s,
 		   struct trace_entry *ent, struct trace_iterator *iter,
 		   u32 flags)
 {
-	unsigned long long duration = trace->rettime - trace->calltime;
+	struct ftrace_graph_ret *trace = &retentry->ret;
+	u64 calltime = retentry->calltime;
+	u64 rettime = retentry->rettime;
+	unsigned long long duration = rettime - calltime;
 	struct fgraph_data *data = iter->private;
 	struct trace_array *tr = iter->tr;
 	unsigned long func;
@@ -1347,7 +1352,7 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
 	case TRACE_GRAPH_RET: {
 		struct ftrace_graph_ret_entry *field;
 		trace_assign_type(field, entry);
-		return print_graph_return(&field->ret, s, entry, iter, flags);
+		return print_graph_return(field, s, entry, iter, flags);
 	}
 	case TRACE_STACK:
 	case TRACE_FN:
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index a4e799c1e767..1d3646ce9c34 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -221,6 +221,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
 	unsigned long flags;
 	unsigned int trace_ctx;
 	u64 *calltime;
+	u64 rettime;
 	int size;
 
 	ftrace_graph_addr_finish(gops, trace);
@@ -228,13 +229,13 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
 	if (!func_prolog_dec(tr, &data, &flags))
 		return;
 
+	rettime = trace_clock_local();
 	calltime = fgraph_retrieve_data(gops->idx, &size);
 	if (!calltime)
 		return;
-	trace->calltime = *calltime;
 
 	trace_ctx = tracing_gen_ctx_flags(flags);
-	__trace_graph_return(tr, trace, trace_ctx);
+	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
 	atomic_dec(&data->disabled);
 }
 
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index c58292e424d5..c4bbaae2a2a3 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -156,6 +156,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
 	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 	u64 *calltime;
+	u64 rettime;
 	int size;
 
 	ftrace_graph_addr_finish(gops, trace);
@@ -163,12 +164,13 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
 	if (!func_prolog_preempt_disable(tr, &data, &trace_ctx))
 		return;
 
+	rettime = trace_clock_local();
+
 	calltime = fgraph_retrieve_data(gops->idx, &size);
 	if (!calltime)
 		return;
-	trace->calltime = *calltime;
 
-	__trace_graph_return(tr, trace, trace_ctx);
+	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
 	atomic_dec(&data->disabled);
 
 	preempt_enable_notrace();
-- 
2.45.2

-- Steve
Re: [PATCH] fgraph: Move trace_clock_local() for return time to function_graph tracer
Posted by Masami Hiramatsu (Google) 1 month, 3 weeks ago
On Mon, 13 Jan 2025 19:54:49 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Sun, 12 Jan 2025 14:26:35 +0900
> "Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:
> 
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > 
> > Since ftrace_graph_ret::rettime is only referred to in the function_graph
> > tracer, the trace_clock_local() call in fgraph is just overhead for
> > other fgraph users.
> > 
> > Move the trace_clock_local() call for recording the return time into the
> > function_graph tracer, and just zero the rettime field on the fgraph side.
> > The rettime field is then updated by one function_graph tracer instance
> > and cached for the other function_graph tracer instances.
> > 
> > According to Jiri's report[1], removing this call improves fprobe
> > performance by ~27%.
> > 
> > [1] https://lore.kernel.org/all/Z3aSuql3fnXMVMoM@krava/
> > 
> > Reported-by: Jiri Olsa <olsajiri@gmail.com>
> > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> I'd rather just nuke the calltime and rettime from struct ftrace_graph_ret.

Yeah, this one is good to me. But this will produce slightly different times
for the same function when we enable multiple function_graph tracers on
different instances (mine records the time once and reuses it, but this
records it once per instance). We may need to note that in the
documentation. :)

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you,

> 
> How about something like this instead?
> 
> [ based on: https://lore.kernel.org/linux-trace-kernel/20250113183124.61767419@gandalf.local.home/ ]
> 
> I can start testing it (it compiles and boots).
> 
>  include/linux/ftrace.h               |  2 --
>  kernel/trace/fgraph.c                |  1 -
>  kernel/trace/trace.h                 |  4 +++-
>  kernel/trace/trace_entries.h         |  8 +++----
>  kernel/trace/trace_functions_graph.c | 33 ++++++++++++++++------------
>  kernel/trace/trace_irqsoff.c         |  5 +++--
>  kernel/trace/trace_sched_wakeup.c    |  6 +++--
>  7 files changed, 33 insertions(+), 26 deletions(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index aa9ddd1e4bb6..848ddf0d8f89 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -1061,8 +1061,6 @@ struct ftrace_graph_ret {
>  	int depth;
>  	/* Number of functions that overran the depth limit for current task */
>  	unsigned int overrun;
> -	unsigned long long calltime;
> -	unsigned long long rettime;
>  } __packed;
>  
>  struct fgraph_ops;
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index 30e3ddc8a8a8..a7e6c8488b7f 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -818,7 +818,6 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
>  		return (unsigned long)panic;
>  	}
>  
> -	trace.rettime = trace_clock_local();
>  #ifdef CONFIG_FUNCTION_GRAPH_RETVAL
>  	trace.retval = fgraph_ret_regs_return_value(ret_regs);
>  #endif
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 9691b47b5f3d..467a8143fbb9 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -911,7 +911,9 @@ extern int __trace_graph_retaddr_entry(struct trace_array *tr,
>  				unsigned long retaddr);
>  extern void __trace_graph_return(struct trace_array *tr,
>  				 struct ftrace_graph_ret *trace,
> -				 unsigned int trace_ctx);
> +				 unsigned int trace_ctx,
> +				 u64 calltime, u64 rettime);
> +
>  extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
>  extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
>  extern void free_fgraph_ops(struct trace_array *tr);
> diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
> index 82fd174ebbe0..fbfb396905a6 100644
> --- a/kernel/trace/trace_entries.h
> +++ b/kernel/trace/trace_entries.h
> @@ -124,8 +124,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
>  		__field_packed(	unsigned long,	ret,		retval	)
>  		__field_packed(	int,		ret,		depth	)
>  		__field_packed(	unsigned int,	ret,		overrun	)
> -		__field_packed(	unsigned long long, ret,	calltime)
> -		__field_packed(	unsigned long long, ret,	rettime	)
> +		__field(unsigned long long,	calltime		)
> +		__field(unsigned long long,	rettime			)
>  	),
>  
>  	F_printk("<-- %ps (%d) (start: %llx  end: %llx) over: %d retval: %lx",
> @@ -146,8 +146,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
>  		__field_packed(	unsigned long,	ret,		func	)
>  		__field_packed(	int,		ret,		depth	)
>  		__field_packed(	unsigned int,	ret,		overrun	)
> -		__field_packed(	unsigned long long, ret,	calltime)
> -		__field_packed(	unsigned long long, ret,	rettime	)
> +		__field(unsigned long long,	calltime		)
> +		__field(unsigned long long,	rettime			)
>  	),
>  
>  	F_printk("<-- %ps (%d) (start: %llx  end: %llx) over: %d",
> diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
> index 5504b5e4e7b4..7bff133c1543 100644
> --- a/kernel/trace/trace_functions_graph.c
> +++ b/kernel/trace/trace_functions_graph.c
> @@ -270,12 +270,10 @@ __trace_graph_function(struct trace_array *tr,
>  	struct ftrace_graph_ret ret = {
>  		.func     = ip,
>  		.depth    = 0,
> -		.calltime = time,
> -		.rettime  = time,
>  	};
>  
>  	__trace_graph_entry(tr, &ent, trace_ctx);
> -	__trace_graph_return(tr, &ret, trace_ctx);
> +	__trace_graph_return(tr, &ret, trace_ctx, time, time);
>  }
>  
>  void
> @@ -287,8 +285,9 @@ trace_graph_function(struct trace_array *tr,
>  }
>  
>  void __trace_graph_return(struct trace_array *tr,
> -				struct ftrace_graph_ret *trace,
> -				unsigned int trace_ctx)
> +			  struct ftrace_graph_ret *trace,
> +			  unsigned int trace_ctx,
> +			  u64 calltime, u64 rettime)
>  {
>  	struct ring_buffer_event *event;
>  	struct trace_buffer *buffer = tr->array_buffer.buffer;
> @@ -300,6 +299,8 @@ void __trace_graph_return(struct trace_array *tr,
>  		return;
>  	entry	= ring_buffer_event_data(event);
>  	entry->ret				= *trace;
> +	entry->calltime				= calltime;
> +	entry->rettime				= rettime;
>  	trace_buffer_unlock_commit_nostack(buffer, event);
>  }
>  
> @@ -322,10 +323,13 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
>  	struct fgraph_times *ftimes;
>  	unsigned long flags;
>  	unsigned int trace_ctx;
> +	u64 calltime, rettime;
>  	long disabled;
>  	int size;
>  	int cpu;
>  
> +	rettime = trace_clock_local();
> +
>  	ftrace_graph_addr_finish(gops, trace);
>  
>  	if (*task_var & TRACE_GRAPH_NOTRACE) {
> @@ -339,7 +343,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
>  
>  	handle_nosleeptime(trace, ftimes, size);
>  
> -	trace->calltime = ftimes->calltime;
> +	calltime = ftimes->calltime;
>  
>  	local_irq_save(flags);
>  	cpu = raw_smp_processor_id();
> @@ -347,7 +351,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
>  	disabled = atomic_inc_return(&data->disabled);
>  	if (likely(disabled == 1)) {
>  		trace_ctx = tracing_gen_ctx_flags(flags);
> -		__trace_graph_return(tr, trace, trace_ctx);
> +		__trace_graph_return(tr, trace, trace_ctx, calltime, rettime);
>  	}
>  	atomic_dec(&data->disabled);
>  	local_irq_restore(flags);
> @@ -372,10 +376,8 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
>  
>  	handle_nosleeptime(trace, ftimes, size);
>  
> -	trace->calltime = ftimes->calltime;
> -
>  	if (tracing_thresh &&
> -	    (trace->rettime - ftimes->calltime < tracing_thresh))
> +	    (trace_clock_local() - ftimes->calltime < tracing_thresh))
>  		return;
>  	else
>  		trace_graph_return(trace, gops);
> @@ -861,7 +863,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
>  
>  	graph_ret = &ret_entry->ret;
>  	call = &entry->graph_ent;
> -	duration = graph_ret->rettime - graph_ret->calltime;
> +	duration = ret_entry->rettime - ret_entry->calltime;
>  
>  	func = call->func + iter->tr->text_delta;
>  
> @@ -1142,11 +1144,14 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s,
>  }
>  
>  static enum print_line_t
> -print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
> +print_graph_return(struct ftrace_graph_ret_entry *retentry, struct trace_seq *s,
>  		   struct trace_entry *ent, struct trace_iterator *iter,
>  		   u32 flags)
>  {
> -	unsigned long long duration = trace->rettime - trace->calltime;
> +	struct ftrace_graph_ret *trace = &retentry->ret;
> +	u64 calltime = retentry->calltime;
> +	u64 rettime = retentry->rettime;
> +	unsigned long long duration = rettime - calltime;
>  	struct fgraph_data *data = iter->private;
>  	struct trace_array *tr = iter->tr;
>  	unsigned long func;
> @@ -1347,7 +1352,7 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
>  	case TRACE_GRAPH_RET: {
>  		struct ftrace_graph_ret_entry *field;
>  		trace_assign_type(field, entry);
> -		return print_graph_return(&field->ret, s, entry, iter, flags);
> +		return print_graph_return(field, s, entry, iter, flags);
>  	}
>  	case TRACE_STACK:
>  	case TRACE_FN:
> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index a4e799c1e767..1d3646ce9c34 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -221,6 +221,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
>  	unsigned long flags;
>  	unsigned int trace_ctx;
>  	u64 *calltime;
> +	u64 rettime;
>  	int size;
>  
>  	ftrace_graph_addr_finish(gops, trace);
> @@ -228,13 +229,13 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
>  	if (!func_prolog_dec(tr, &data, &flags))
>  		return;
>  
> +	rettime = trace_clock_local();
>  	calltime = fgraph_retrieve_data(gops->idx, &size);
>  	if (!calltime)
>  		return;
> -	trace->calltime = *calltime;
>  
>  	trace_ctx = tracing_gen_ctx_flags(flags);
> -	__trace_graph_return(tr, trace, trace_ctx);
> +	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
>  	atomic_dec(&data->disabled);
>  }
>  
> diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
> index c58292e424d5..c4bbaae2a2a3 100644
> --- a/kernel/trace/trace_sched_wakeup.c
> +++ b/kernel/trace/trace_sched_wakeup.c
> @@ -156,6 +156,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
>  	struct trace_array_cpu *data;
>  	unsigned int trace_ctx;
>  	u64 *calltime;
> +	u64 rettime;
>  	int size;
>  
>  	ftrace_graph_addr_finish(gops, trace);
> @@ -163,12 +164,13 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
>  	if (!func_prolog_preempt_disable(tr, &data, &trace_ctx))
>  		return;
>  
> +	rettime = trace_clock_local();
> +
>  	calltime = fgraph_retrieve_data(gops->idx, &size);
>  	if (!calltime)
>  		return;
> -	trace->calltime = *calltime;
>  
> -	__trace_graph_return(tr, trace, trace_ctx);
> +	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
>  	atomic_dec(&data->disabled);
>  
>  	preempt_enable_notrace();
> -- 
> 2.45.2
> 
> -- Steve


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
Re: [PATCH] fgraph: Move trace_clock_local() for return time to function_graph tracer
Posted by Steven Rostedt 1 month, 3 weeks ago
On Tue, 14 Jan 2025 10:18:06 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> > I'd rather just nuke the calltime and rettime from struct ftrace_graph_ret.  
> 
> Yeah, this one is fine by me. But it will produce slightly different times
> for the same function when multiple function graph tracers are enabled on
> different instances (my version records the timestamp once and reuses it,
> but this records it separately for each instance). We may need to note
> that in the documentation. :)

And I believe it's separate for each instance now anyway, so I was going to
send this to Linus as well:

diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 272464bb7c60..2b74f96d09d5 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -810,6 +810,12 @@ Here is the list of current tracers that may be configured.
 	to draw a graph of function calls similar to C code
 	source.
 
> +	Note that the function graph tracer calculates the start and
> +	return timestamps internally, separately for each instance. If
> +	two instances run the function graph tracer and trace the same
> +	functions, the recorded timings may differ slightly, as each
> +	instance reads the timestamp separately rather than at the same time.
+
   "blk"
 
 	The block tracer. The tracer used by the blktrace user


> 
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks,

-- Steve