From: Tao Chen
To: song@kernel.org, jolsa@kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com, yonghong.song@linux.dev, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, Tao Chen
Subject: [PATCH bpf-next v2] bpf: Add preempt_disable to protect get_perf_callchain
Date: Fri, 26 Sep 2025 23:39:52 +0800
Message-ID: <20250926153952.1661146-1-chen.dylane@linux.dev>

As Alexei noted, the return value of get_perf_callchain() may be reused
if a task is preempted after the BPF program enters the migrate-disabled
section. We therefore use per-CPU bpf_perf_callchain_entries, similar to
bpf_try_get_buffers(), to preserve the current task's callchain and
prevent it from being overwritten by preempting tasks. We also add
preempt_disable() to protect get_perf_callchain().
Reported-by: Alexei Starovoitov
Closes: https://lore.kernel.org/bpf/CAADnVQ+s8B7-fvR1TNO-bniSyKv57cH_ihRszmZV7pQDyV=VDQ@mail.gmail.com
Signed-off-by: Tao Chen
---
 kernel/bpf/stackmap.c | 76 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 61 insertions(+), 15 deletions(-)

Change list:
v1 -> v2: From Alexei
 - create percpu entries to preserve the current task's callchain,
   similarly to bpf_try_get_buffers.
v1: https://lore.kernel.org/bpf/20250922075333.1452803-1-chen.dylane@linux.dev

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2e182a3ac4c..8788c219926 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -31,6 +31,55 @@ struct bpf_stack_map {
 	struct stack_map_bucket *buckets[] __counted_by(n_buckets);
 };
 
+struct bpf_perf_callchain_entry {
+	u64 nr;
+	u64 ip[PERF_MAX_STACK_DEPTH];
+};
+
+#define MAX_PERF_CALLCHAIN_PREEMPT 3
+static DEFINE_PER_CPU(struct bpf_perf_callchain_entry[MAX_PERF_CALLCHAIN_PREEMPT],
+		      bpf_perf_callchain_entries);
+static DEFINE_PER_CPU(int, bpf_perf_callchain_preempt_cnt);
+
+static int bpf_get_perf_callchain(struct bpf_perf_callchain_entry **entry,
+				  struct pt_regs *regs, u32 init_nr, bool kernel,
+				  bool user, u32 max_stack, bool crosstack,
+				  bool add_mark)
+{
+	struct bpf_perf_callchain_entry *bpf_entry;
+	struct perf_callchain_entry *perf_entry;
+	int preempt_cnt;
+
+	preempt_cnt = this_cpu_inc_return(bpf_perf_callchain_preempt_cnt);
+	if (WARN_ON_ONCE(preempt_cnt > MAX_PERF_CALLCHAIN_PREEMPT)) {
+		this_cpu_dec(bpf_perf_callchain_preempt_cnt);
+		return -EBUSY;
+	}
+
+	bpf_entry = this_cpu_ptr(&bpf_perf_callchain_entries[preempt_cnt - 1]);
+
+	preempt_disable();
+	perf_entry = get_perf_callchain(regs, init_nr, kernel, user, max_stack,
+					crosstack, add_mark);
+	if (unlikely(!perf_entry)) {
+		preempt_enable();
+		this_cpu_dec(bpf_perf_callchain_preempt_cnt);
+		return -EFAULT;
+	}
+	memcpy(bpf_entry, perf_entry, sizeof(u64) * (perf_entry->nr + 1));
+	*entry = bpf_entry;
+	preempt_enable();
+
+	return 0;
+}
+
+static void bpf_put_perf_callchain(void)
+{
+	if (WARN_ON_ONCE(this_cpu_read(bpf_perf_callchain_preempt_cnt) == 0))
+		return;
+	this_cpu_dec(bpf_perf_callchain_preempt_cnt);
+}
+
 static inline bool stack_map_use_build_id(struct bpf_map *map)
 {
 	return (map->map_flags & BPF_F_STACK_BUILD_ID);
@@ -303,8 +352,9 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	u32 max_depth = map->value_size / stack_map_data_size(map);
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
 	bool user = flags & BPF_F_USER_STACK;
-	struct perf_callchain_entry *trace;
+	struct bpf_perf_callchain_entry *trace;
 	bool kernel = !user;
+	int err;
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
@@ -314,14 +364,15 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	if (max_depth > sysctl_perf_event_max_stack)
 		max_depth = sysctl_perf_event_max_stack;
 
-	trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
-				   false, false);
+	err = bpf_get_perf_callchain(&trace, regs, 0, kernel, user, max_depth,
+				     false, false);
+	if (err)
+		return err;
 
-	if (unlikely(!trace))
-		/* couldn't fetch the stack trace */
-		return -EFAULT;
+	err = __bpf_get_stackid(map, (struct perf_callchain_entry *)trace, flags);
+	bpf_put_perf_callchain();
 
-	return __bpf_get_stackid(map, trace, flags);
+	return err;
 }
 
 const struct bpf_func_proto bpf_get_stackid_proto = {
@@ -443,8 +494,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	if (sysctl_perf_event_max_stack < max_depth)
 		max_depth = sysctl_perf_event_max_stack;
 
-	if (may_fault)
-		rcu_read_lock(); /* need RCU for perf's callchain below */
+	preempt_disable();
 
 	if (trace_in)
 		trace = trace_in;
@@ -455,8 +505,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 					   crosstask, false);
 
 	if (unlikely(!trace) || trace->nr < skip) {
-		if (may_fault)
-			rcu_read_unlock();
+		preempt_enable();
 		goto err_fault;
 	}
 
@@ -474,10 +523,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	} else {
 		memcpy(buf, ips, copy_len);
 	}
-
-	/* trace/ips should not be dereferenced after this point */
-	if (may_fault)
-		rcu_read_unlock();
+	preempt_enable();
 
 	if (user_build_id)
 		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);
-- 
2.48.1