From: "Vineeth Pillai (Google)" <vineeth@bitbyteword.org>
Replace trace_damos_stat_after_apply_interval() with
trace_call__damos_stat_after_apply_interval() at a site already guarded
by an early return when !trace_damos_stat_after_apply_interval_enabled(),
avoiding a redundant static_branch_unlikely() re-evaluation inside the
tracepoint.
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://patch.msgid.link/20260323160052.17528-19-vineeth@bitbyteword.org
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
Reviewed-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 mm/damon/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/damon/core.c b/mm/damon/core.c
index c1d1091d307e..6ed6ad240ed9 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2347,7 +2347,7 @@ static void damos_trace_stat(struct damon_ctx *c, struct damos *s)
 			break;
 		sidx++;
 	}
-	trace_damos_stat_after_apply_interval(cidx, sidx, &s->stat);
+	trace_call__damos_stat_after_apply_interval(cidx, sidx, &s->stat);
 }
 
 static void kdamond_apply_schemes(struct damon_ctx *c)
--
2.51.0