From: Vladimir Riabchun <ferr.lambarginio@gmail.com>
[ Upstream commit 4099b98203d6b33d990586542fa5beee408032a3 ]
A soft lockup was observed when loading the amdgpu module.
If a module has a lot of traceable functions, multiple calls
to kallsyms_lookup can spend too much time in an RCU critical
section with preemption disabled, causing a kernel panic.
This is the same issue that was fixed in
commit d0b24b4e91fc ("ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY
kernels") and commit 42ea22e754ba ("ftrace: Add cond_resched() to
ftrace_graph_set_hash()").
Fix it the same way by adding cond_resched() in ftrace_module_enable.
Link: https://lore.kernel.org/aMQD9_lxYmphT-up@vova-pc
Signed-off-by: Vladimir Riabchun <ferr.lambarginio@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM-generated explanations, may be completely bogus:
YES
- The change inserts `cond_resched()` inside the inner iteration over
every ftrace record (`kernel/trace/ftrace.c:7538`). That loop holds
the ftrace mutex and, for each record, invokes heavy helpers like
`test_for_valid_rec()` which in turn calls `kallsyms_lookup()`
(`kernel/trace/ftrace.c:4289`). On huge modules (e.g. amdgpu) this can
run for tens of milliseconds with preemption disabled, triggering the
documented soft lockup/panic during module load.
- `ftrace_module_enable()` runs only in process context via
`prepare_coming_module()` (`kernel/module/main.c:3279`), so adding a
voluntary reschedule point is safe; the same pattern already exists in
other long-running ftrace loops (see commits d0b24b4e91fc and
42ea22e754ba), so this brings consistency without changing control
flow or semantics.
- No data structures or interfaces change, and the code still executes
under the same locking (`ftrace_lock`, `text_mutex` when the arch
overrides `ftrace_arch_code_modify_prepare()`), so the risk of
regression is minimal: the new call simply yields CPU if needed while
keeping the locks held, preventing watchdog-induced crashes but
otherwise behaving identically.
Given it fixes a real, user-visible soft lockup with a contained and
well-understood tweak, this is an excellent candidate for stable
backporting.
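
For reviewers who want the surrounding context without opening the tree,
here is an abridged sketch of the loop in ftrace_module_enable(). Only the
lines shown in the diff below are verbatim; the locking, early exits and
flag updates are approximated from memory and elided where uncertain:

void ftrace_module_enable(struct module *mod)
{
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;

	mutex_lock(&ftrace_lock);

	/* ... early exits (ftrace_disabled, no callsites) elided ... */

	do_for_each_ftrace_rec(pg, rec) {
		/* Records outside this module end the walk of this page */
		if (!within_module(rec->ip, mod))
			break;

		cond_resched();		/* the added reschedule point */

		/* Weak functions should still be ignored */
		if (!test_for_valid_rec(rec)) {
			/* test_for_valid_rec() resolves the symbol via kallsyms */
			rec->flags = FTRACE_FL_DISABLED;
			continue;
		}

		/* ... refcount and FTRACE_FL_* updates for live tracers elided ... */
	} while_for_each_ftrace_rec();

	mutex_unlock(&ftrace_lock);

	/* ... process_cached_mods() etc. elided ... */
}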
kernel/trace/ftrace.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index a69067367c296..42bd2ba68a821 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7535,6 +7535,8 @@ void ftrace_module_enable(struct module *mod)
 		if (!within_module(rec->ip, mod))
 			break;
 
+		cond_resched();
+
 		/* Weak functions should still be ignored */
 		if (!test_for_valid_rec(rec)) {
 			/* Clear all other flags. Should not be enabled anyway */
--
2.51.0
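
[Editor's note: on the process-context point made in the explanation above,
ftrace_module_enable() is reached from the module loader's own task, roughly
as in the abridged sketch below; the exact body of prepare_coming_module()
varies between kernel versions and the notifier handling is elided.]

/* kernel/module/main.c -- abridged sketch, details vary by kernel version */
static int prepare_coming_module(struct module *mod)
{
	int err;

	/* Runs in the loading task's process context, so it may sleep */
	ftrace_module_enable(mod);

	err = klp_module_coming(mod);
	if (err)
		return err;

	/* ... MODULE_STATE_COMING notifier chain elided ... */
	return 0;
}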
On Sat, 25 Oct 2025 12:00:16 -0400
Sasha Levin <sashal@kernel.org> wrote:

> - The change inserts `cond_resched()` inside the inner iteration over
> every ftrace record (`kernel/trace/ftrace.c:7538`). That loop holds
> the ftrace mutex and, for each record, invokes heavy helpers like
> `test_for_valid_rec()` which in turn calls `kallsyms_lookup()`
> (`kernel/trace/ftrace.c:4289`). On huge modules (e.g. amdgpu) this can
> run for tens of milliseconds with preemption disabled, triggering the

It got the "preemption disabled" wrong. Well maybe when running
PREEMPT_NONE it is, but the description doesn't imply that.

-- Steve

> documented soft lockup/panic during module load.
> - `ftrace_module_enable()` runs only in process context via
> `prepare_coming_module()` (`kernel/module/main.c:3279`), so adding a
> voluntary reschedule point is safe; the same pattern already exists in
> other long-running ftrace loops (see commits d0b24b4e91fc and
> 42ea22e754ba), so this brings consistency without changing control
> flow or semantics.
> - No data structures or interfaces change, and the code still executes
> under the same locking (`ftrace_lock`, `text_mutex` when the arch
> overrides `ftrace_arch_code_modify_prepare()`), so the risk of
> regression is minimal: the new call simply yields CPU if needed while
> keeping the locks held, preventing watchdog-induced crashes but
> otherwise behaving identically.
On Sat, Oct 25, 2025 at 03:25:45PM -0400, Steven Rostedt wrote:
>On Sat, 25 Oct 2025 12:00:16 -0400
>Sasha Levin <sashal@kernel.org> wrote:
>
>> - The change inserts `cond_resched()` inside the inner iteration over
>> every ftrace record (`kernel/trace/ftrace.c:7538`). That loop holds
>> the ftrace mutex and, for each record, invokes heavy helpers like
>> `test_for_valid_rec()` which in turn calls `kallsyms_lookup()`
>> (`kernel/trace/ftrace.c:4289`). On huge modules (e.g. amdgpu) this can
>> run for tens of milliseconds with preemption disabled, triggering the
>
>It got the "preemption disabled" wrong. Well maybe when running
>PREEMPT_NONE it is, but the description doesn't imply that.

Thanks for the review!

I've been trying a new LLM for part of this series, and it seems to
underperform the one I was previously using.

-- 
Thanks,
Sasha