From: Chengkaitao
To: martin.lau@linux.dev, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org, shuah@kernel.org, chengkaitao@kylinos.cn, linux-kselftest@vger.kernel.org
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v9 1/9] bpf: refactor kfunc checks using table-driven approach in verifier
Date: Sun, 29 Mar 2026 22:04:58 +0800
Message-ID: <20260329140506.9595-2-pilgrimtao@gmail.com>
In-Reply-To: <20260329140506.9595-1-pilgrimtao@gmail.com>
References: <20260329140506.9595-1-pilgrimtao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Kaitao Cheng

Replace the per-kfunc chains of btf_id comparisons with btf_id_in_kfunc_table() and static kfunc tables, which are easier to maintain. This also prepares for future extensions to the bpf_list API family.
Signed-off-by: Kaitao Cheng
---
 kernel/bpf/verifier.c | 261 +++++++++++++++++++++++-------------------
 1 file changed, 144 insertions(+), 117 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4fbacd2149cd..f2d9863bb290 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -544,9 +544,6 @@ static bool is_async_callback_calling_kfunc(u32 btf_id);
 static bool is_callback_calling_kfunc(u32 btf_id);
 static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
 
-static bool is_bpf_wq_set_callback_kfunc(u32 btf_id);
-static bool is_task_work_add_kfunc(u32 func_id);
-
 static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
 {
 	return func_id == BPF_FUNC_for_each_map_elem ||
@@ -586,7 +583,7 @@ static bool is_async_cb_sleepable(struct bpf_verifier_env *env, struct bpf_insn
 
 	/* bpf_wq and bpf_task_work callbacks are always sleepable. */
 	if (bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
-	    (is_bpf_wq_set_callback_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
+	    is_async_callback_calling_kfunc(insn->imm))
 		return true;
 
 	verifier_bug(env, "unhandled async callback in is_async_cb_sleepable");
@@ -11203,31 +11200,6 @@ static int set_task_work_schedule_callback_state(struct bpf_verifier_env *env,
 	return 0;
 }
 
-static bool is_rbtree_lock_required_kfunc(u32 btf_id);
-
-/* Are we currently verifying the callback for a rbtree helper that must
- * be called with lock held? If so, no need to complain about unreleased
- * lock
- */
-static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
-{
-	struct bpf_verifier_state *state = env->cur_state;
-	struct bpf_insn *insn = env->prog->insnsi;
-	struct bpf_func_state *callee;
-	int kfunc_btf_id;
-
-	if (!state->curframe)
-		return false;
-
-	callee = state->frame[state->curframe];
-
-	if (!callee->in_callback_fn)
-		return false;
-
-	kfunc_btf_id = insn[callee->callsite].imm;
-	return is_rbtree_lock_required_kfunc(kfunc_btf_id);
-}
-
 static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg)
 {
 	if (range.return_32bit)
@@ -12639,11 +12611,103 @@ BTF_ID(func, bpf_session_is_return)
 BTF_ID(func, bpf_stream_vprintk)
 BTF_ID(func, bpf_stream_print_stack)
 
-static bool is_task_work_add_kfunc(u32 func_id)
-{
-	return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
-	       func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume];
-}
+/* Kfunc family related to list. */
+static const enum special_kfunc_type bpf_list_api_kfuncs[] = {
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
+	KF_bpf_list_pop_front,
+	KF_bpf_list_pop_back,
+	KF_bpf_list_front,
+	KF_bpf_list_back,
+};
+
+/* Kfuncs that take a list node argument (bpf_list_node *). */
+static const enum special_kfunc_type bpf_list_node_api_kfuncs[] = {
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
+};
+
+/* Kfuncs that take an rbtree node argument (bpf_rb_node *). */
+static const enum special_kfunc_type bpf_rbtree_node_api_kfuncs[] = {
+	KF_bpf_rbtree_remove,
+	KF_bpf_rbtree_add_impl,
+	KF_bpf_rbtree_left,
+	KF_bpf_rbtree_right,
+};
+
+/* Kfunc family related to rbtree. */
+static const enum special_kfunc_type bpf_rbtree_api_kfuncs[] = {
+	KF_bpf_rbtree_add_impl,
+	KF_bpf_rbtree_remove,
+	KF_bpf_rbtree_first,
+	KF_bpf_rbtree_root,
+	KF_bpf_rbtree_left,
+	KF_bpf_rbtree_right,
+};
+
+/* Kfunc family related to spin_lock. */
+static const enum special_kfunc_type bpf_res_spin_lock_api_kfuncs[] = {
+	KF_bpf_res_spin_lock,
+	KF_bpf_res_spin_unlock,
+	KF_bpf_res_spin_lock_irqsave,
+	KF_bpf_res_spin_unlock_irqrestore,
+};
+
+/* Kfunc family related to iter_num. */
+static const enum special_kfunc_type bpf_iter_num_api_kfuncs[] = {
+	KF_bpf_iter_num_new,
+	KF_bpf_iter_num_next,
+	KF_bpf_iter_num_destroy,
+};
+
+/* Kfunc family related to arena. */
+static const enum special_kfunc_type bpf_arena_api_kfuncs[] = {
+	KF_bpf_arena_alloc_pages,
+	KF_bpf_arena_free_pages,
+	KF_bpf_arena_reserve_pages,
+};
+
+/* Kfunc family related to stream. */
+static const enum special_kfunc_type bpf_stream_api_kfuncs[] = {
+	KF_bpf_stream_vprintk,
+	KF_bpf_stream_print_stack,
+};
+
+/* Kfuncs that must be called when inserting a node in list/rbtree. */
+static const enum special_kfunc_type bpf_collection_insert_kfuncs[] = {
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
+	KF_bpf_rbtree_add_impl,
+};
+
+/* KF_ACQUIRE kfuncs whose vmlinux BTF return type is void* */
+static const enum special_kfunc_type bpf_obj_acquire_ptr_kfuncs[] = {
+	KF_bpf_obj_new_impl,
+	KF_bpf_percpu_obj_new_impl,
+	KF_bpf_refcount_acquire_impl,
+};
+
+/* Kfunc family related to task_work. */
+static const enum special_kfunc_type bpf_task_work_api_kfuncs[] = {
+	KF_bpf_task_work_schedule_signal,
+	KF_bpf_task_work_schedule_resume,
+};
+
+/* __kfuncs must be an array identifier (not a pointer), for ARRAY_SIZE. */
+#define btf_id_in_kfunc_table(__btf_id, __kfuncs)			\
+	({								\
+		u32 ___id = (__btf_id);					\
+		unsigned int ___i;					\
+		bool ___found = false;					\
+									\
+		for (___i = 0; ___i < ARRAY_SIZE(__kfuncs); ___i++) {	\
+			if (___id == special_kfunc_list[(__kfuncs)[___i]]) { \
+				___found = true;			\
+				break;					\
+			}						\
+		}							\
+		___found;						\
+	})
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -12680,6 +12744,29 @@ static bool is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta)
 	return meta->func_id == special_kfunc_list[KF_bpf_xdp_pull_data];
 }
 
+/* Are we currently verifying the callback for a rbtree helper that must
+ * be called with lock held? If so, no need to complain about unreleased
+ * lock
+ */
+static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
+{
+	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_insn *insn = env->prog->insnsi;
+	struct bpf_func_state *callee;
+	int kfunc_btf_id;
+
+	if (!state->curframe)
+		return false;
+
+	callee = state->frame[state->curframe];
+
+	if (!callee->in_callback_fn)
+		return false;
+
+	kfunc_btf_id = insn[callee->callsite].imm;
+	return btf_id_in_kfunc_table(kfunc_btf_id, bpf_rbtree_api_kfuncs);
+}
+
 static enum kfunc_ptr_arg_type
 get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 		       struct bpf_kfunc_call_arg_meta *meta,
@@ -13036,65 +13123,20 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 	return 0;
 }
 
-static bool is_bpf_list_api_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_back];
-}
-
-static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_first] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_root] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_right];
-}
-
-static bool is_bpf_iter_num_api_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_iter_num_new] ||
-	       btf_id == special_kfunc_list[KF_bpf_iter_num_next] ||
-	       btf_id == special_kfunc_list[KF_bpf_iter_num_destroy];
-}
-
 static bool is_bpf_graph_api_kfunc(u32 btf_id)
 {
-	return is_bpf_list_api_kfunc(btf_id) || is_bpf_rbtree_api_kfunc(btf_id) ||
+	return btf_id_in_kfunc_table(btf_id, bpf_list_api_kfuncs) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_rbtree_api_kfuncs) ||
 	       btf_id == special_kfunc_list[KF_bpf_refcount_acquire_impl];
 }
 
-static bool is_bpf_res_spin_lock_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_res_spin_lock] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore];
-}
-
-static bool is_bpf_arena_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_arena_alloc_pages] ||
-	       btf_id == special_kfunc_list[KF_bpf_arena_free_pages] ||
-	       btf_id == special_kfunc_list[KF_bpf_arena_reserve_pages];
-}
-
-static bool is_bpf_stream_kfunc(u32 btf_id)
-{
-	return btf_id == special_kfunc_list[KF_bpf_stream_vprintk] ||
-	       btf_id == special_kfunc_list[KF_bpf_stream_print_stack];
-}
-
 static bool kfunc_spin_allowed(u32 btf_id)
 {
-	return is_bpf_graph_api_kfunc(btf_id) || is_bpf_iter_num_api_kfunc(btf_id) ||
-	       is_bpf_res_spin_lock_kfunc(btf_id) || is_bpf_arena_kfunc(btf_id) ||
-	       is_bpf_stream_kfunc(btf_id);
+	return is_bpf_graph_api_kfunc(btf_id) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_iter_num_api_kfuncs) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_res_spin_lock_api_kfuncs) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_arena_api_kfuncs) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_stream_api_kfuncs);
 }
 
 static bool is_sync_callback_calling_kfunc(u32 btf_id)
@@ -13102,12 +13144,6 @@ static bool is_sync_callback_calling_kfunc(u32 btf_id)
 	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl];
 }
 
-static bool is_async_callback_calling_kfunc(u32 btf_id)
-{
-	return is_bpf_wq_set_callback_kfunc(btf_id) ||
-	       is_task_work_add_kfunc(btf_id);
-}
-
 static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
 {
 	return bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
@@ -13119,15 +13155,16 @@ static bool is_bpf_wq_set_callback_kfunc(u32 btf_id)
 	return btf_id == special_kfunc_list[KF_bpf_wq_set_callback];
 }
 
-static bool is_callback_calling_kfunc(u32 btf_id)
+static bool is_async_callback_calling_kfunc(u32 btf_id)
 {
-	return is_sync_callback_calling_kfunc(btf_id) ||
-	       is_async_callback_calling_kfunc(btf_id);
+	return is_bpf_wq_set_callback_kfunc(btf_id) ||
+	       btf_id_in_kfunc_table(btf_id, bpf_task_work_api_kfuncs);
 }
 
-static bool is_rbtree_lock_required_kfunc(u32 btf_id)
+static bool is_callback_calling_kfunc(u32 btf_id)
 {
-	return is_bpf_rbtree_api_kfunc(btf_id);
+	return is_sync_callback_calling_kfunc(btf_id) ||
+	       is_async_callback_calling_kfunc(btf_id);
 }
 
 static bool check_kfunc_is_graph_root_api(struct bpf_verifier_env *env,
@@ -13138,10 +13175,10 @@ static bool check_kfunc_is_graph_root_api(struct bpf_verifier_env *env,
 
 	switch (head_field_type) {
 	case BPF_LIST_HEAD:
-		ret = is_bpf_list_api_kfunc(kfunc_btf_id);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_list_api_kfuncs);
 		break;
 	case BPF_RB_ROOT:
-		ret = is_bpf_rbtree_api_kfunc(kfunc_btf_id);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_rbtree_api_kfuncs);
 		break;
 	default:
 		verbose(env, "verifier internal error: unexpected graph root argument type %s\n",
@@ -13163,14 +13200,10 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 
 	switch (node_field_type) {
 	case BPF_LIST_NODE:
-		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_list_node_api_kfuncs);
 		break;
 	case BPF_RB_NODE:
-		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_right]);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_rbtree_node_api_kfuncs);
 		break;
 	default:
 		verbose(env, "verifier internal error: unexpected graph node argument type %s\n",
@@ -13878,7 +13911,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			return -EINVAL;
 		}
 
-		if (!is_bpf_res_spin_lock_kfunc(meta->func_id))
+		if (!btf_id_in_kfunc_table(meta->func_id, bpf_res_spin_lock_api_kfuncs))
 			return -EFAULT;
 		if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock] ||
 		    meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave])
@@ -14215,7 +14248,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
-	if (is_task_work_add_kfunc(meta.func_id)) {
+	if (btf_id_in_kfunc_table(meta.func_id, bpf_task_work_api_kfuncs)) {
 		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
 					 set_task_work_schedule_callback_state);
 		if (err) {
@@ -14304,9 +14337,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			return err;
 	}
 
-	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+	if (btf_id_in_kfunc_table(meta.func_id, bpf_collection_insert_kfuncs)) {
 		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
 		insn_aux->insert_off = regs[BPF_REG_2].var_off.value;
 		insn_aux->kptr_struct_meta = btf_find_struct_meta(meta.arg_btf, meta.arg_btf_id);
@@ -14354,11 +14385,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	t = btf_type_skip_modifiers(desc_btf, meta.func_proto->type, NULL);
 
 	if (is_kfunc_acquire(&meta) && !btf_type_is_struct_ptr(meta.btf, t)) {
-		/* Only exception is bpf_obj_new_impl */
+		/* Only exception is bpf_obj_acquire_ptr_kfuncs */
 		if (meta.btf != btf_vmlinux ||
-		    (meta.func_id != special_kfunc_list[KF_bpf_obj_new_impl] &&
-		     meta.func_id != special_kfunc_list[KF_bpf_percpu_obj_new_impl] &&
-		     meta.func_id != special_kfunc_list[KF_bpf_refcount_acquire_impl])) {
+		    !btf_id_in_kfunc_table(meta.func_id, bpf_obj_acquire_ptr_kfuncs)) {
			verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n");
 			return -EINVAL;
 		}
@@ -23316,9 +23345,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		insn_buf[1] = addr[1];
 		insn_buf[2] = *insn;
 		*cnt = 3;
-	} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-		   desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-		   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+	} else if (btf_id_in_kfunc_table(desc->func_id, bpf_collection_insert_kfuncs)) {
 		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
 		int struct_meta_reg = BPF_REG_3;
 		int node_offset_reg = BPF_REG_4;
-- 
2.50.1 (Apple Git-155)