syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.
After the recent refactor that introduced stack_map_calculate_max_depth(),
the code in stack_map_get_build_id_offset() (and related helpers) stopped
clamping the number of trace entries (`trace_nr`) to the number of elements
that fit into the stack map value (`num_elem`).
As a result, if the captured stack contained more frames than the map value
can hold, the subsequent memcpy() would write past the end of the buffer,
triggering a KASAN report like:
BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
Write of size N at addr ... by task syz-executor...
Restore the missing clamp by limiting `trace_nr` to `num_elem` before
computing the copy length. This mirrors the pre-refactor logic and ensures
we never copy more bytes than the destination buffer can hold.
No functional change intended beyond reintroducing the missing bound check.
Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@listout.xyz>
---
Changes in v3:
- Revert to the num_elem-based logic for setting trace_nr. This was
suggested by the bpf-ci bot, which pointed out the chance of an underflow
when max_depth < skip.
Quoting the bot's reply:
The stack_map_calculate_max_depth() function can return a value less than
skip when sysctl_perf_event_max_stack is lowered below the skip value:
	max_depth = size / elem_size;
	max_depth += skip;
	if (max_depth > curr_sysctl_max_stack)
		return curr_sysctl_max_stack;
If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.
Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
causing min_t() to not limit trace_nr at all. This means the original OOB
write is not fixed in cases where skip > max_depth.
With the default sysctl_perf_event_max_stack = 127 and skip up to 255, this
scenario is reachable even without admin changing sysctls.
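As a quick standalone illustration of that wraparound (a userspace sketch
with hypothetical values, not kernel code):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t max_depth = 10;   /* e.g. sysctl_perf_event_max_stack = 10 */
	uint32_t skip      = 20;   /* skip field taken from flags */
	uint32_t trace_nr  = 30;   /* frames actually captured */

	/* u32 subtraction wraps instead of going negative ... */
	uint32_t limit = max_depth - skip;             /* 4294967286 */
	/* ... so a min() against it no longer limits anything. */
	uint32_t clamped = trace_nr < limit ? trace_nr : limit;

	printf("limit=%u clamped=%u\n", (unsigned)limit, (unsigned)clamped);
	/* prints: limit=4294967286 clamped=30 */
	return 0;
}
```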
Changes in v2:
- Use max_depth instead of the num_elem logic; this is similar to what
we already use in __bpf_get_stackid().
Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@listout.xyz/
Changes in v1:
- RFC patch that restores the clamp by setting trace_nr to the smaller
of trace_nr and num_elem.
Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
---
kernel/bpf/stackmap.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..cef79d9517ab 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
struct perf_callchain_entry *trace_in,
void *buf, u32 size, u64 flags, bool may_fault)
{
- u32 trace_nr, copy_len, elem_size, max_depth;
+ u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
bool user_build_id = flags & BPF_F_USER_BUILD_ID;
bool crosstask = task && task != current;
u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
@@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
}
trace_nr = trace->nr - skip;
+ num_elem = size / elem_size;
+ trace_nr = min_t(u32, trace_nr, num_elem);
copy_len = trace_nr * elem_size;
ips = trace->ip + skip;
--
2.51.2
On Tue, 11 Nov 2025 13:42:54 +0530
Brahmajit Das <listout@listout.xyz> wrote:
> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
>
> ...snip...
>
> @@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
> }
>
> trace_nr = trace->nr - skip;
> + num_elem = size / elem_size;
> + trace_nr = min_t(u32, trace_nr, num_elem);
Please can we have no unnecessary min_t().
You wouldn't write:
x = (u32)a < (u32)b ? (u32)a : (u32)b;
David
> copy_len = trace_nr * elem_size;
>
> ips = trace->ip + skip;
On 12.11.2025 13:35, David Laight wrote:
> On Tue, 11 Nov 2025 13:42:54 +0530
> Brahmajit Das <listout@listout.xyz> wrote:
> ...snip...
>
> Please can we have no unnecessary min_t().
> You wouldn't write:
> x = (u32)a < (u32)b ? (u32)a : (u32)b;
>
> David
>
> > copy_len = trace_nr * elem_size;
> >
> > ips = trace->ip + skip;
>
Hi David,

Sorry, I didn't quite get that. Would you prefer something like:
trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
The pre-refactor code.

--
Regards,
listout
On 12/11/2025 14:47, Brahmajit Das wrote:
> On 12.11.2025 13:35, David Laight wrote:
>> On Tue, 11 Nov 2025 13:42:54 +0530
>> Brahmajit Das <listout@listout.xyz> wrote:
>>
> ...snip...
>> Please can we have no unnecessary min_t().
>> You wouldn't write:
>> x = (u32)a < (u32)b ? (u32)a : (u32)b;
>>
>> David
>>
>>> copy_len = trace_nr * elem_size;
>>>
>>> ips = trace->ip + skip;
> Hi David,
>
> Sorry, I didn't quite get that. Would you prefer something like:
> trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;

min_t() is min() with casting, which is unnecessary in this case, as
trace_nr and num_elem are already u32.

> The pre-refactor code.
>
On Wed, 12 Nov 2025 16:11:41 +0000
"Lecomte, Arnaud" <contact@arnaud-lcm.com> wrote:
> On 12/11/2025 14:47, Brahmajit Das wrote:
> > On 12.11.2025 13:35, David Laight wrote:
> >> On Tue, 11 Nov 2025 13:42:54 +0530
> >> Brahmajit Das <listout@listout.xyz> wrote:
> >>
> > ...snip...
> >> Please can we have no unnecessary min_t().
> >> You wouldn't write:
> >> x = (u32)a < (u32)b ? (u32)a : (u32)b;
> >>
> >> David
> >>
> >>> copy_len = trace_nr * elem_size;
> >>>
> >>> ips = trace->ip + skip;
> > Hi David,
> >
> > Sorry, I didn't quite get that. Would you prefer something like:
> > trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
>
> min_t() is min() with casting, which is unnecessary in this case, as
> trace_nr and num_elem are already u32.

Correct

David

> > The pre-refactor code.
> >
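To make the point concrete, the same clamp without the casts would look
roughly like this (a sketch only, since both operands are already u32):

```c
	num_elem = size / elem_size;
	/* plain min() from include/linux/minmax.h; min_t(u32, ...) would
	 * only add redundant casts here since both sides are u32 */
	trace_nr = min(trace_nr, num_elem);
	copy_len = trace_nr * elem_size;
```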
I am not sure this is the right solution and I am scared that by
forcing this clamping, we are hiding something else.
If we have a look at the code below:

```
	if (trace_in) {
		trace = trace_in;
		trace->nr = min_t(u32, trace->nr, max_depth);
	} else if (kernel && task) {
		trace = get_callchain_entry_for_task(task, max_depth);
	} else {
		trace = get_perf_callchain(regs, kernel, user, max_depth,
					   crosstask, false, 0);
	}
```

trace should be (if I remember correctly) clamped there. If not,
it might hide something else. I would like to have a look at the return
for each if case through gdb.
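A minimal sketch of that kind of clamp, purely for illustration (this is not
the applied fix, and placing it right after the if/else chain above is an
assumption):

```c
	/* Illustration only: bound trace->nr for every branch, not just the
	 * trace_in case, so that trace_nr = trace->nr - skip later cannot
	 * exceed the number of elements the destination buffer holds.
	 */
	if (trace && trace->nr > max_depth)
		trace->nr = max_depth;
```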
On 11/11/2025 08:12, Brahmajit Das wrote:
> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
>
> ...snip...
Thanks,
Arnaud
On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> I am not sure this is the right solution and I am scared that by
> forcing this clamping, we are hiding something else.
> If we have a look at the code below:
> ```
> 	if (trace_in) {
> 		trace = trace_in;
> 		trace->nr = min_t(u32, trace->nr, max_depth);
> 	} else if (kernel && task) {
> 		trace = get_callchain_entry_for_task(task, max_depth);
> 	} else {
> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
> 					   crosstask, false, 0);
> 	}
> ```
> trace should be (if I remember correctly) clamped there. If not, it
> might hide something else. I would like to have a look at the return for
> each if case through gdb.
Hi Arnaud,
So I've been debugging this; the reproducer always takes the else branch,
so trace holds whatever get_perf_callchain() returns in this situation.
I mostly found it to be a value around 4.
In some cases the value would jump to something like 27 or 44, just after
the code block
if (unlikely(!trace) || trace->nr < skip) {
if (may_fault)
rcu_read_unlock();
goto err_fault;
}
So I'm assuming there's some race condition that might be going on
somewhere.
I'm still debugging, but I'm open to ideas and definitely I could be
wrong here, please feel free to correct/point out.
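For reference, the kind of throwaway instrumentation that can surface those
values looks roughly like this (hypothetical sketch, not part of any posted
patch):

```c
	/* Hypothetical debug aid, placed after the !trace / trace->nr < skip
	 * check in __bpf_get_stack(); not part of the proposed fix. */
	pr_info("__bpf_get_stack: trace->nr=%llu skip=%u max_depth=%u size=%u\n",
		(unsigned long long)trace->nr, skip, max_depth, size);
```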
--
Regards,
listout
On 13/11/2025 12:49, Brahmajit Das wrote:
> On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
>> I am not sure this is the right solution and I am scared that by
>> forcing this clamping, we are hiding something else.
>> If we have a look at the code below:
>> ```
>> 	if (trace_in) {
>> 		trace = trace_in;
>> 		trace->nr = min_t(u32, trace->nr, max_depth);
>> 	} else if (kernel && task) {
>> 		trace = get_callchain_entry_for_task(task, max_depth);
>> 	} else {
>> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
>> 					   crosstask, false, 0);
>> 	}
>> ```
>> trace should be (if I remember correctly) clamped there. If not, it
>> might hide something else. I would like to have a look at the return for
>> each if case through gdb.
> Hi Arnaud,
> So I've been debugging this; the reproducer always takes the else branch,
> so trace holds whatever get_perf_callchain() returns in this situation.
>
> I mostly found it to be a value around 4.
>
> In some cases the value would jump to something like 27 or 44, just after
> the code block
>
> if (unlikely(!trace) || trace->nr < skip) {
> if (may_fault)
> rcu_read_unlock();
> goto err_fault;
> }
>
> So I'm assuming there's some race condition that might be going on
> somewhere.
Which value? trace->nr?
> I'm still debugging, but I'm open to ideas and definitely I could be
> wrong here, please feel free to correct/point out.
I should be able to have a look tomorrow evening as I am currently a bit
overloaded
with my work.
Thanks,
Arnaud
On 13.11.2025 13:26, Lecomte, Arnaud wrote:
>
> On 13/11/2025 12:49, Brahmajit Das wrote:
> > On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> > > I am not sure this is the right solution and I am scared that by
> > > forcing this clamping, we are hiding something else.
> > > If we have a look at the code below:
...snip...
> > > might hide something else. I would like to have a look at the return for
> > > each if case through gdb.
> > Hi Arnaud,
> > So I've been debugging this; the reproducer always takes the else branch,
> > so trace holds whatever get_perf_callchain() returns in this situation.
> >
> > I mostly found it to be a value around 4.
> >
> > In some cases the value would jump to something like 27 or 44, just after
> > the code block
> >
> > if (unlikely(!trace) || trace->nr < skip) {
> > if (may_fault)
> > rcu_read_unlock();
> > goto err_fault;
> > }
> >
> > So I'm assuming there's some race condition that might be going on
> > somewhere.
> Which value ? trace->nr ?
Yep, trace->nr
> > I'm still debugging, but I'm open to ideas and definitely I could be
> > wrong here, please feel free to correct/point out.
>
> I should be able to have a look tomorrow evening as I am currently a bit
> overloaded
> with my work.
Awesome, thank you. I'll try to dig around a bit more meanwhile.
--
Regards,
listout
On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> I am not sure this is the right solution and I am scared that by
> forcing this clamping, we are hiding something else.
> If we have a look at the code below:
> ```
> 	if (trace_in) {
> 		trace = trace_in;
> 		trace->nr = min_t(u32, trace->nr, max_depth);
> 	} else if (kernel && task) {
> 		trace = get_callchain_entry_for_task(task, max_depth);
> 	} else {
> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
> 					   crosstask, false, 0);
> 	}
> ```
> trace should be (if I remember correctly) clamped there. If not, it
> might hide something else. I would like to have a look at the return for
> each if case through gdb.
Sure, I can do that.
>
> Thanks,
> Arnaud
--
Regards,
listout
On 11/11/25 12:12 AM, Brahmajit Das wrote:
> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
>
> ...snip...
>
Acked-by: Yonghong Song <yonghong.song@linux.dev>