Date: Thu, 8 Jan 2026 13:33:29 -0500
From: Steven Rostedt
To: Linus Torvalds
Cc: LKML, Masami Hiramatsu, Mathieu Desnoyers, Ben Dooks, Julia Lawall,
 Wupeng Ma
Subject: [GIT PULL] tracing: Fixes for v6.19
Message-ID: <20260108133329.78a73fed@gandalf.local.home>

Linus,

tracing fixes for v6.19:

- Remove useless assignment of the soft_mode variable

  The function __ftrace_event_enable_disable() sets "soft_mode" in one of
  its branch paths but never uses it after that. Remove the setting of
  that variable.

- Add a cond_resched() in ring_buffer_resize()

  The resize function that allocates all the pages for the ring buffer
  was causing a soft lockup on PREEMPT_NONE configs when allocating large
  buffers on machines with many CPUs. Hopefully this is the last
  cond_resched() that needs to be added, as PREEMPT_LAZY becomes the norm
  in the future.

- Make the ftrace_graph_ent depth field signed

  The "depth" field of struct ftrace_graph_ent was converted from "int"
  to "unsigned long" for alignment reasons, to work with being embedded
  in other structures. The conversion from signed to unsigned caused
  integrity checks to always pass, as they compared "depth" to less than
  zero. Make the field a signed long.

- Add recursion protection to stack trace events

  An infinite recursion was triggered by a stack trace event calling into
  RCU, which internally called rcu_read_unlock_special(). That triggered
  an event that was also doing stack traces, which hit the same RCU path
  and called rcu_read_unlock_special() again.
  Update trace_test_and_set_recursion() to add a set of context checks
  for events to use, and have the stack trace event use that for
  recursion protection.

- Make the variable ftrace_dump_on_oops static

  The sysctl cleanup that moved updates into the files that use them
  moved the only reference of ftrace_dump_on_oops into trace.c. It is no
  longer used outside of that file, so make it static.

Please pull the latest trace-v6.19-rc4 tree, which can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace-v6.19-rc4

Tag SHA1: 95c6df5f47e592a81fe0d78a16141b66158ca058
Head SHA1: 1e2ed4bfd50ace3c4272cfab7e9aa90956fb7ae0


Ben Dooks (1):
      trace: ftrace_dump_on_oops[] is not exported, make it static

Julia Lawall (1):
      tracing: Drop unneeded assignment to soft_mode

Steven Rostedt (2):
      ftrace: Make ftrace_graph_ent depth field signed
      tracing: Add recursion protection in kernel stack trace recording

Wupeng Ma (1):
      ring-buffer: Avoid softlockup in ring_buffer_resize() during memory free

----
 include/linux/ftrace.h          | 2 +-
 include/linux/trace_recursion.h | 9 +++++++++
 kernel/trace/ring_buffer.c      | 2 ++
 kernel/trace/trace.c            | 8 +++++++-
 kernel/trace/trace_events.c     | 7 +++----
 5 files changed, 22 insertions(+), 6 deletions(-)
---------------------------
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 770f0dc993cc..a3a8989e3268 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1167,7 +1167,7 @@ static inline void ftrace_init(void) { }
  */
 struct ftrace_graph_ent {
 	unsigned long func; /* Current function */
-	unsigned long depth;
+	long depth; /* signed to check for less than zero */
 } __packed;
 
 /*
diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index ae04054a1be3..e6ca052b2a85 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -34,6 +34,13 @@ enum {
 	TRACE_INTERNAL_SIRQ_BIT,
 	TRACE_INTERNAL_TRANSITION_BIT,
 
+	/* Internal event use recursion bits */
+	TRACE_INTERNAL_EVENT_BIT,
+	TRACE_INTERNAL_EVENT_NMI_BIT,
+	TRACE_INTERNAL_EVENT_IRQ_BIT,
+	TRACE_INTERNAL_EVENT_SIRQ_BIT,
+	TRACE_INTERNAL_EVENT_TRANSITION_BIT,
+
 	TRACE_BRANCH_BIT,
 	/*
 	 * Abuse of the trace_recursion.
@@ -58,6 +65,8 @@ enum {
 
 #define TRACE_LIST_START	TRACE_INTERNAL_BIT
 
+#define TRACE_EVENT_START	TRACE_INTERNAL_EVENT_BIT
+
 #define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
 
 /*
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 41c9f5d079be..630221b00838 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3137,6 +3137,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 				 list) {
 			list_del_init(&bpage->list);
 			free_buffer_page(bpage);
+
+			cond_resched();
 		}
 	}
  out_err_unlock:
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6f2148df14d9..baec63134ab6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -138,7 +138,7 @@ cpumask_var_t __read_mostly tracing_buffer_mask;
  * by commas.
  */
 /* Set to string format zero to disable by default */
-char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
+static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
 
 /* When set, tracing will stop when a WARN*() is hit */
 static int __disable_trace_on_warning;
@@ -3012,6 +3012,11 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	int bit;
+
+	bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);
+	if (bit < 0)
+		return;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3080,6 +3085,7 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	/* Again, don't let gcc optimize things here */
 	barrier();
 	__this_cpu_dec(ftrace_stack_reserve);
+	trace_clear_recursion(bit);
 }
 
 static inline void ftrace_trace_stack(struct trace_array *tr,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 76067529db61..137b4d9bb116 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -826,16 +826,15 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
 		 * When soft_disable is set and enable is set, we want to
 		 * register the tracepoint for the event, but leave the event
 		 * as is. That means, if the event was already enabled, we do
-		 * nothing (but set soft_mode). If the event is disabled, we
-		 * set SOFT_DISABLED before enabling the event tracepoint, so
-		 * it still seems to be disabled.
+		 * nothing. If the event is disabled, we set SOFT_DISABLED
+		 * before enabling the event tracepoint, so it still seems
+		 * to be disabled.
 		 */
 		if (!soft_disable)
 			clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);
 		else {
 			if (atomic_inc_return(&file->sm_ref) > 1)
 				break;
-			soft_mode = true;
 			/* Enable use of trace_buffered_event */
 			trace_buffered_event_enable();
 		}