syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.
After the recent refactor that introduced stack_map_calculate_max_depth(),
__bpf_get_stack() stopped clamping the number of trace entries (`trace_nr`)
to the number of elements that fit into the stack map value (`num_elem`)
before copying them out via memcpy() or stack_map_get_build_id_offset().
As a result, if the captured stack contains more frames than the map value
can hold, the memcpy() writes past the end of the buffer, triggering a
KASAN report like:
BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
Write of size N at addr ... by task syz-executor...
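For context, the pre-fix copy path in __bpf_get_stack() looks roughly like
the sketch below. This is a simplified illustration, not the exact upstream
code; error handling and the build_id branch are omitted:

	trace_nr = trace->nr - skip;		/* frames left after skipping */
	/* no clamp here: trace_nr may exceed what the map value holds */
	copy_len = trace_nr * elem_size;
	ips = trace->ip + skip;
	memcpy(buf, ips, copy_len);		/* OOB write when copy_len > size */
	if (size > copy_len)
		memset(buf + copy_len, 0, size - copy_len);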
Restore the missing clamp by limiting `trace_nr` to the number of entries
that fit into the map value before computing the copy length. This mirrors
the pre-refactor logic and ensures we never copy more bytes than the
destination buffer can hold.
No functional change intended beyond reintroducing the missing bounds check.
Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@listout.xyz>
---
Changes in v2:
- Use max_depth instead of the num_elem logic; this mirrors what we
  already do in __bpf_get_stackid()
Changes in v1:
- RFC patch that restored the clamp on the number of trace entries by
  setting trace_nr to the smaller of trace_nr and num_elem.
Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
---
kernel/bpf/stackmap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..f9081de43689 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
}
trace_nr = trace->nr - skip;
+ trace_nr = min_t(u32, trace_nr, max_depth - skip);
copy_len = trace_nr * elem_size;
ips = trace->ip + skip;
--
2.51.2
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c8..f9081de43 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
> }
>
> trace_nr = trace->nr - skip;
> + trace_nr = min_t(u32, trace_nr, max_depth - skip);
> copy_len = trace_nr * elem_size;
>
> ips = trace->ip + skip;
Can max_depth - skip underflow when max_depth < skip?
The stack_map_calculate_max_depth() function can return a value smaller
than skip whenever sysctl_perf_event_max_stack is below the requested skip
count:
max_depth = size / elem_size;
max_depth += skip;
if (max_depth > curr_sysctl_max_stack)
return curr_sysctl_max_stack;
If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.
Then max_depth - skip = 10 - 20 wraps around to 4294967286 (u32 underflow),
so min_t() does not limit trace_nr at all. This means the original OOB
write is still possible whenever skip > max_depth.
With the default sysctl_perf_event_max_stack = 127 and skip values up to
255, this scenario is reachable even without an administrator changing any
sysctl.
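The wraparound is easy to confirm in isolation. A minimal standalone
userspace sketch (not kernel code) using the values from the example above:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t max_depth = 10, skip = 20, trace_nr = 1000;
		uint32_t limit = max_depth - skip;	/* wraps to 4294967286 */
		uint32_t clamped = trace_nr < limit ? trace_nr : limit;

		/* prints limit=4294967286 clamped=1000: the clamp has no effect */
		printf("limit=%u clamped=%u\n", (unsigned)limit, (unsigned)clamped);
		return 0;
	}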
The pre-refactor code used:
num_elem = size / elem_size;
trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
Perhaps the fix should directly use num_elem instead of max_depth - skip:
u32 num_elem = size / elem_size;
trace_nr = min_t(u32, trace_nr, num_elem);
Or check for underflow:
if (max_depth > skip)
trace_nr = min_t(u32, trace_nr, max_depth - skip);
else
trace_nr = 0;
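For illustration only, the guarded variant might read as follows in context,
assuming the surrounding __bpf_get_stack() locals from the hunk above; this
is a sketch, not a tested patch:

	trace_nr = trace->nr - skip;
	/* Clamp to what fits in the map value; guard against max_depth < skip,
	 * which would wrap the u32 subtraction and defeat the clamp.
	 */
	if (max_depth > skip)
		trace_nr = min_t(u32, trace_nr, max_depth - skip);
	else
		trace_nr = 0;
	copy_len = trace_nr * elem_size;
	ips = trace->ip + skip;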
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19251115736