From: Arnaud Lecomte <contact@arnaud-lcm.com>
To: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Cc: andrii@kernel.org, ast@kernel.org, bpf@vger.kernel.org,
	contact@arnaud-lcm.com, daniel@iogearbox.net, eddyz87@gmail.com,
	haoluo@google.com, john.fastabend@gmail.com, jolsa@kernel.org,
	kpsingh@kernel.org, linux-kernel@vger.kernel.org,
	martin.lau@linux.dev, netdev@vger.kernel.org, sdf@fomichev.me,
	song@kernel.org, syzkaller-bugs@googlegroups.com,
	yonghong.song@linux.dev, Brahmajit Das
Subject: [PATCH v2] bpf-next: Prevent out-of-bounds buffer write in __bpf_get_stack
Date: Wed, 7 Jan 2026 18:12:37 +0000
Message-ID: <20260107181237.1075490-1-contact@arnaud-lcm.com>
X-Mailer: git-send-email 2.43.0

Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stack()
during stack trace copying. The callchain entry is stored in a per-CPU
variable and can grow between collection and buffer copy, so the number
of collected frames may come to exceed the buffer size that was
calculated up front from max_depth. The callchain collection
intentionally avoids locking for performance reasons, which leaves a
window in which concurrent modifications can land during the copy
operation. Prevent the overflow by clamping the trace length to
max_depth, the bound originally derived from the buffer size and the
size of a trace entry.
Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/691231dc.a70a0220.22f260.0101.GAE@google.com/T/
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Tested-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Cc: Brahmajit Das
Signed-off-by: Arnaud Lecomte <contact@arnaud-lcm.com>
---
Changes in v2:
 - Moved the clamping of trace_nr to max_depth above the trace->nr skip
   check.
Link to v1: https://lore.kernel.org/all/20260104205220.980752-1-contact@arnaud-lcm.com/

Thanks to Brahmajit Das for the initial fix he proposed, which I tweaked
with what is, in my opinion, the correct justification and a better
implementation.
---
 kernel/bpf/stackmap.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index da3d328f5c15..c0a430f9eafb 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -465,7 +465,6 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 
 	if (trace_in) {
 		trace = trace_in;
-		trace->nr = min_t(u32, trace->nr, max_depth);
 	} else if (kernel && task) {
 		trace = get_callchain_entry_for_task(task, max_depth);
 	} else {
@@ -473,13 +472,15 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 				   crosstask, false, 0);
 	}
 
-	if (unlikely(!trace) || trace->nr < skip) {
+	trace_nr = trace ? min(trace->nr, max_depth) : 0;
+
+	if (unlikely(!trace) || trace_nr < skip) {
 		if (may_fault)
 			rcu_read_unlock();
 		goto err_fault;
 	}
 
-	trace_nr = trace->nr - skip;
+	trace_nr = trace_nr - skip;
 	copy_len = trace_nr * elem_size;
 
 	ips = trace->ip + skip;
-- 
2.43.0
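
For readers outside the stackmap code, below is a minimal userspace
sketch of the pattern the fix enforces; the names (struct callchain,
snapshot_and_copy, the array size, and so on) are illustrative only,
not kernel identifiers. A lock-free producer may grow entry->nr at any
time, so the consumer must clamp its local view of the length to the
depth the destination buffer was sized for before that length feeds
any copy arithmetic.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative stand-in for the per-CPU callchain entry. */
    struct callchain {
    	uint64_t nr;        /* may grow concurrently, without locking */
    	uint64_t ip[127];   /* collected instruction pointers */
    };

    /* buf was sized for at most max_depth entries; returns bytes copied. */
    static size_t snapshot_and_copy(uint64_t *buf, uint32_t max_depth,
    				uint32_t skip, const struct callchain *entry)
    {
    	uint32_t nr;

    	if (!entry)
    		return 0;

    	/*
    	 * Clamp before any size arithmetic: using entry->nr directly to
    	 * size the copy is exactly the out-of-bounds write being fixed.
    	 */
    	nr = entry->nr < max_depth ? (uint32_t)entry->nr : max_depth;
    	if (nr < skip)
    		return 0;

    	nr -= skip;
    	memcpy(buf, entry->ip + skip, (size_t)nr * sizeof(buf[0]));
    	return (size_t)nr * sizeof(buf[0]);
    }

The key property is that max_depth, not the live entry->nr, bounds
every byte written to buf, so a concurrent grow of the callchain entry
between collection and copy can no longer push the memcpy() past the
end of the destination buffer.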