From: Steven Rostedt <rostedt@goodmis.org>
The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.
The kdb_ftdump() function iterates over all the current tracing CPUs and
increments the "disabled" counter before doing the dump, and decrements it
afterward.
As the disabled flag can be ignored, doing this today is not reliable.
Instead, simply call tracer_tracing_off() and then tracer_tracing_on() to
disable and then enable the entire ring buffer in one go!
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Daniel Thompson <danielt@kernel.org>
Cc: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_kdb.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
index 1e72d20b3c2f..b5cf3fdde8cb 100644
--- a/kernel/trace/trace_kdb.c
+++ b/kernel/trace/trace_kdb.c
@@ -120,9 +120,7 @@ static int kdb_ftdump(int argc, const char **argv)
trace_init_global_iter(&iter);
iter.buffer_iter = buffer_iter;
- for_each_tracing_cpu(cpu) {
- atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
- }
+ tracer_tracing_off(iter.tr);
/* A negative skip_entries means skip all but the last entries */
if (skip_entries < 0) {
@@ -135,9 +133,7 @@ static int kdb_ftdump(int argc, const char **argv)
ftrace_dump_buf(skip_entries, cpu_file);
- for_each_tracing_cpu(cpu) {
- atomic_dec(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
- }
+ tracer_tracing_on(iter.tr);
kdb_trap_printk--;
--
2.47.2
Hi,
On Fri, May 2, 2025 at 1:53 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> From: Steven Rostedt <rostedt@goodmis.org>
>
> The per CPU "disabled" value was the original way to disable tracing when
> the tracing subsystem was first created. Today, the ring buffer
> infrastructure has its own way to disable tracing. In fact, things have
> changed so much since 2008 that many things ignore the disable flag.
>
> The kdb_ftdump() function iterates over all the current tracing CPUs and
> increments the "disabled" counter before doing the dump, and decrements it
> afterward.
>
> As the disabled flag can be ignored, doing this today is not reliable.
> Instead, simply call tracer_tracing_off() and then tracer_tracing_on() to
> disable and then enable the entire ring buffer in one go!
>
> Cc: Jason Wessel <jason.wessel@windriver.com>
> Cc: Daniel Thompson <danielt@kernel.org>
> Cc: Douglas Anderson <dianders@chromium.org>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
> kernel/trace/trace_kdb.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
> index 1e72d20b3c2f..b5cf3fdde8cb 100644
> --- a/kernel/trace/trace_kdb.c
> +++ b/kernel/trace/trace_kdb.c
> @@ -120,9 +120,7 @@ static int kdb_ftdump(int argc, const char **argv)
> trace_init_global_iter(&iter);
> iter.buffer_iter = buffer_iter;
>
> - for_each_tracing_cpu(cpu) {
> - atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
> - }
> + tracer_tracing_off(iter.tr);
>
> /* A negative skip_entries means skip all but the last entries */
> if (skip_entries < 0) {
> @@ -135,9 +133,7 @@ static int kdb_ftdump(int argc, const char **argv)
>
> ftrace_dump_buf(skip_entries, cpu_file);
>
> - for_each_tracing_cpu(cpu) {
> - atomic_dec(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
> - }
> + tracer_tracing_on(iter.tr);
This new change seems less safe than the old one. Previously you'd
always increment by one at the start of the function and decrement by
one at the end. Now at the start of the function you'll set
"buffer_disabled" to 1 and at the end you'll set it to 0. If
"buffer_disabled" was already 1 at the start of the function your new
sequence will end up having the side effect of changing it to 0.
-Doug
On Mon, 5 May 2025 08:42:52 -0700
Doug Anderson <dianders@chromium.org> wrote:

> This new change seems less safe than the old one. Previously you'd

Well, it matters what your definition of "safe" is ;-)

The new change prevents the ring buffer from having anything written to
it, whereas the old change didn't disable everything.

> always increment by one at the start of the function and decrement by
> one at the end. Now at the start of the function you'll set
> "buffer_disabled" to 1 and at the end you'll set it to 0. If
> "buffer_disabled" was already 1 at the start of the function your new
> sequence will end up having the side effect of changing it to 0.

Good point. How about I add a tracer_tracing_disable() and
tracer_tracing_enable() that is not an on/off switch and uses
ring_buffer_record_disable/enable(), which increment/decrement the
disabling of the ring buffer? That way it keeps the same semantics.

-- Steve
Hi Steven,
kernel test robot noticed the following build warnings:
[auto build test WARNING on trace/for-next]
[also build test WARNING on linus/master v6.15-rc4 next-20250502]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Steven-Rostedt/tracing-mmiotrace-Remove-reference-to-unused-per-CPU-data-pointer/20250503-050317
base: https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace for-next
patch link: https://lore.kernel.org/r/20250502205348.643055437%40goodmis.org
patch subject: [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
config: arc-randconfig-002-20250503 (https://download.01.org/0day-ci/archive/20250505/202505051213.exaXF8qp-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 11.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250505/202505051213.exaXF8qp-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505051213.exaXF8qp-lkp@intel.com/
All warnings (new ones prefixed by >>):
kernel/trace/trace_kdb.c: In function 'kdb_ftdump':
>> kernel/trace/trace_kdb.c:101:13: warning: unused variable 'cpu' [-Wunused-variable]
101 | int cpu;
| ^~~
vim +/cpu +101 kernel/trace/trace_kdb.c
955b61e5979847 Jason Wessel 2010-08-05 91
955b61e5979847 Jason Wessel 2010-08-05 92 /*
955b61e5979847 Jason Wessel 2010-08-05 93 * kdb_ftdump - Dump the ftrace log buffer
955b61e5979847 Jason Wessel 2010-08-05 94 */
955b61e5979847 Jason Wessel 2010-08-05 95 static int kdb_ftdump(int argc, const char **argv)
955b61e5979847 Jason Wessel 2010-08-05 96 {
dbfe67334a1767 Douglas Anderson 2019-03-19 97 int skip_entries = 0;
19063c776fe745 Jason Wessel 2010-08-05 98 long cpu_file;
0c10cc2435115c Yuran Pereira 2024-10-28 99 int err;
03197fc02b3566 Douglas Anderson 2019-03-19 100 int cnt;
03197fc02b3566 Douglas Anderson 2019-03-19 @101 int cpu;
955b61e5979847 Jason Wessel 2010-08-05 102
19063c776fe745 Jason Wessel 2010-08-05 103 if (argc > 2)
955b61e5979847 Jason Wessel 2010-08-05 104 return KDB_ARGCOUNT;
955b61e5979847 Jason Wessel 2010-08-05 105
0c10cc2435115c Yuran Pereira 2024-10-28 106 if (argc && kstrtoint(argv[1], 0, &skip_entries))
0c10cc2435115c Yuran Pereira 2024-10-28 107 return KDB_BADINT;
955b61e5979847 Jason Wessel 2010-08-05 108
19063c776fe745 Jason Wessel 2010-08-05 109 if (argc == 2) {
0c10cc2435115c Yuran Pereira 2024-10-28 110 err = kstrtol(argv[2], 0, &cpu_file);
0c10cc2435115c Yuran Pereira 2024-10-28 111 if (err || cpu_file >= NR_CPUS || cpu_file < 0 ||
19063c776fe745 Jason Wessel 2010-08-05 112 !cpu_online(cpu_file))
19063c776fe745 Jason Wessel 2010-08-05 113 return KDB_BADINT;
19063c776fe745 Jason Wessel 2010-08-05 114 } else {
ae3b5093ad6004 Steven Rostedt 2013-01-23 115 cpu_file = RING_BUFFER_ALL_CPUS;
19063c776fe745 Jason Wessel 2010-08-05 116 }
19063c776fe745 Jason Wessel 2010-08-05 117
955b61e5979847 Jason Wessel 2010-08-05 118 kdb_trap_printk++;
03197fc02b3566 Douglas Anderson 2019-03-19 119
03197fc02b3566 Douglas Anderson 2019-03-19 120 trace_init_global_iter(&iter);
03197fc02b3566 Douglas Anderson 2019-03-19 121 iter.buffer_iter = buffer_iter;
03197fc02b3566 Douglas Anderson 2019-03-19 122
6a4611f6bb763d Steven Rostedt 2025-05-02 123 tracer_tracing_off(iter.tr);
03197fc02b3566 Douglas Anderson 2019-03-19 124
03197fc02b3566 Douglas Anderson 2019-03-19 125 /* A negative skip_entries means skip all but the last entries */
03197fc02b3566 Douglas Anderson 2019-03-19 126 if (skip_entries < 0) {
03197fc02b3566 Douglas Anderson 2019-03-19 127 if (cpu_file == RING_BUFFER_ALL_CPUS)
03197fc02b3566 Douglas Anderson 2019-03-19 128 cnt = trace_total_entries(NULL);
03197fc02b3566 Douglas Anderson 2019-03-19 129 else
03197fc02b3566 Douglas Anderson 2019-03-19 130 cnt = trace_total_entries_cpu(NULL, cpu_file);
03197fc02b3566 Douglas Anderson 2019-03-19 131 skip_entries = max(cnt + skip_entries, 0);
03197fc02b3566 Douglas Anderson 2019-03-19 132 }
03197fc02b3566 Douglas Anderson 2019-03-19 133
dbfe67334a1767 Douglas Anderson 2019-03-19 134 ftrace_dump_buf(skip_entries, cpu_file);
03197fc02b3566 Douglas Anderson 2019-03-19 135
6a4611f6bb763d Steven Rostedt 2025-05-02 136 tracer_tracing_on(iter.tr);
03197fc02b3566 Douglas Anderson 2019-03-19 137
955b61e5979847 Jason Wessel 2010-08-05 138 kdb_trap_printk--;
955b61e5979847 Jason Wessel 2010-08-05 139
955b61e5979847 Jason Wessel 2010-08-05 140 return 0;
955b61e5979847 Jason Wessel 2010-08-05 141 }
955b61e5979847 Jason Wessel 2010-08-05 142
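The warning follows directly from the patch: with both for_each_tracing_cpu() loops removed, nothing in kdb_ftdump() uses `cpu` any longer. A likely follow-up (hypothetical hunk, not taken from this thread) would simply drop the declaration:

```diff
--- a/kernel/trace/trace_kdb.c
+++ b/kernel/trace/trace_kdb.c
@@ -98,7 +98,6 @@ static int kdb_ftdump(int argc, const char **argv)
 	long cpu_file;
 	int err;
 	int cnt;
-	int cpu;
 
 	if (argc > 2)
 		return KDB_ARGCOUNT;
```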
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki