From nobody Thu Apr 9 15:03:54 2026
From: Xu Kuohai
To:
bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Yonghong Song, Puranjay Mohan, Anton Protopopov
Subject: [PATCH bpf-next v5 1/5] bpf: Move JIT for single-subprog programs to verifier
Date: Mon, 2 Mar 2026 18:27:22 +0800
Message-ID: <20260302102726.1126019-2-xukuohai@huaweicloud.com>
In-Reply-To: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
References: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Xu Kuohai

JIT for single-subprog programs is done after the verification stage. This
prevents the JIT stage from accessing the verifier's internal data, such as
env->insn_aux_data. So move it into the verifier.

After the move, all BPF programs loaded with bpf_prog_load() are JITed in
the verifier. The JIT in bpf_prog_select_runtime() is preserved for
bpf_migrate_filter() and test cases.

Signed-off-by: Xu Kuohai
---
 include/linux/filter.h |  2 ++
 kernel/bpf/core.c      | 51 +++++++++++++++++++++++++++---------------
 kernel/bpf/syscall.c   |  2 +-
 kernel/bpf/verifier.c  |  7 +++++-
 4 files changed, 42 insertions(+), 20 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 44d7ae95ddbc..632c03e126d9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1108,6 +1108,8 @@ static inline int sk_filter_reason(struct sock *sk, struct sk_buff *skb,
 	return sk_filter_trim_cap(sk, skb, 1, reason);
 }
 
+struct bpf_prog *bpf_prog_select_jit(struct bpf_prog *fp, int *err);
+struct bpf_prog *__bpf_prog_select_runtime(struct bpf_prog *fp, bool jit_attempted, int *err);
 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err);
 void bpf_prog_free(struct bpf_prog *fp);
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 229c74f3d6ae..00be578a438d 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2505,18 +2505,18 @@ static bool bpf_prog_select_interpreter(struct bpf_prog *fp)
 	return select_interpreter;
 }
 
-/**
- * bpf_prog_select_runtime - select exec runtime for BPF program
- * @fp: bpf_prog populated with BPF program
- * @err: pointer to error variable
- *
- * Try to JIT eBPF program, if JIT is not available, use interpreter.
- * The BPF program will be executed via bpf_prog_run() function.
- *
- * Return: the &fp argument along with &err set to 0 for success or
- * a negative errno code on failure
- */
-struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+struct bpf_prog *bpf_prog_select_jit(struct bpf_prog *fp, int *err)
+{
+	*err = bpf_prog_alloc_jited_linfo(fp);
+	if (*err)
+		return fp;
+
+	fp = bpf_int_jit_compile(fp);
+	bpf_prog_jit_attempt_done(fp);
+	return fp;
+}
+
+struct bpf_prog *__bpf_prog_select_runtime(struct bpf_prog *fp, bool jit_attempted, int *err)
 {
 	/* In case of BPF to BPF calls, verifier did all the prep
 	 * work with regards to JITing, etc.
@@ -2540,12 +2540,11 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 	 * be JITed, but falls back to the interpreter.
 	 */
 	if (!bpf_prog_is_offloaded(fp->aux)) {
-		*err = bpf_prog_alloc_jited_linfo(fp);
-		if (*err)
-			return fp;
-
-		fp = bpf_int_jit_compile(fp);
-		bpf_prog_jit_attempt_done(fp);
+		if (!jit_attempted) {
+			fp = bpf_prog_select_jit(fp, err);
+			if (*err)
+				return fp;
+		}
 		if (!fp->jited && jit_needed) {
 			*err = -ENOTSUPP;
 			return fp;
@@ -2570,6 +2569,22 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 
 	return fp;
 }
+
+/**
+ * bpf_prog_select_runtime - select exec runtime for BPF program
+ * @fp: bpf_prog populated with BPF program
+ * @err: pointer to error variable
+ *
+ * Try to JIT eBPF program, if JIT is not available, use interpreter.
+ * The BPF program will be executed via bpf_prog_run() function.
+ *
+ * Return: the &fp argument along with &err set to 0 for success or
+ * a negative errno code on failure
+ */
+struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+{
+	return __bpf_prog_select_runtime(fp, false, err);
+}
 EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
 
 static unsigned int __bpf_prog_ret1(const void *ctx,
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 274039e36465..d6982107ba80 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3090,7 +3090,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	if (err < 0)
 		goto free_used_maps;
 
-	prog = bpf_prog_select_runtime(prog, &err);
+	prog = __bpf_prog_select_runtime(prog, true, &err);
 	if (err < 0)
 		goto free_used_maps;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fc4ccd1de569..ab2bc0850770 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -26086,6 +26086,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 		convert_pseudo_ld_imm64(env);
 	}
 
+	/* constants blinding in the JIT may increase prog->len */
+	len = env->prog->len;
+	if (env->subprog_cnt == 1)
+		env->prog = bpf_prog_select_jit(env->prog, &ret);
+
 	adjust_btf_func(env);
 
 err_release_maps:
@@ -26111,7 +26116,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 err_unlock:
 	if (!is_priv)
 		mutex_unlock(&bpf_verifier_lock);
-	clear_insn_aux_data(env, 0, env->prog->len);
+	clear_insn_aux_data(env, 0, len);
 	vfree(env->insn_aux_data);
 err_free_env:
 	bpf_stack_liveness_free(env);
-- 
2.47.3
From: Xu Kuohai
To: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Yonghong Song, Puranjay Mohan, Anton Protopopov
Subject: [PATCH bpf-next v5 2/5] bpf: Pass bpf_verifier_env to jit
Date: Mon, 2 Mar 2026 18:27:23 +0800
Message-ID:
<20260302102726.1126019-3-xukuohai@huaweicloud.com>
In-Reply-To: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
References: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Xu Kuohai

Pass bpf_verifier_env to bpf_int_jit_compile() and bpf_jit_blind_constants().
The follow-up patch will use env->insn_aux_data in the JIT stage to detect
indirect jump targets.
Signed-off-by: Xu Kuohai
---
 arch/arc/net/bpf_jit_core.c      | 19 ++++++++++---------
 arch/arm/net/bpf_jit_32.c        |  4 ++--
 arch/arm64/net/bpf_jit_comp.c    |  4 ++--
 arch/loongarch/net/bpf_jit.c     |  4 ++--
 arch/mips/net/bpf_jit_comp.c     |  4 ++--
 arch/parisc/net/bpf_jit_core.c   |  4 ++--
 arch/powerpc/net/bpf_jit_comp.c  |  4 ++--
 arch/riscv/net/bpf_jit_core.c    |  4 ++--
 arch/s390/net/bpf_jit_comp.c     |  4 ++--
 arch/sparc/net/bpf_jit_comp_64.c |  4 ++--
 arch/x86/net/bpf_jit_comp.c      |  4 ++--
 arch/x86/net/bpf_jit_comp32.c    |  4 ++--
 include/linux/filter.h           |  6 +++---
 kernel/bpf/core.c                | 10 +++++-----
 kernel/bpf/verifier.c            |  6 +++---
 15 files changed, 43 insertions(+), 42 deletions(-)

diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c
index 1421eeced0f5..076aaf52cb80 100644
--- a/arch/arc/net/bpf_jit_core.c
+++ b/arch/arc/net/bpf_jit_core.c
@@ -157,14 +157,15 @@ static void jit_dump(const struct jit_context *ctx)
 }
 
 /* Initialise the context so there's no garbage. */
-static int jit_ctx_init(struct jit_context *ctx, struct bpf_prog *prog)
+static int jit_ctx_init(struct jit_context *ctx, struct bpf_verifier_env *env,
+			struct bpf_prog *prog)
 {
 	memset(ctx, 0, sizeof(*ctx));
 
 	ctx->orig_prog = prog;
 
 	/* If constant blinding was requested but failed, scram. */
-	ctx->prog = bpf_jit_blind_constants(prog);
+	ctx->prog = bpf_jit_blind_constants(env, prog);
 	if (IS_ERR(ctx->prog))
 		return PTR_ERR(ctx->prog);
 	ctx->blinded = (ctx->prog != ctx->orig_prog);
@@ -1335,7 +1336,7 @@ static int jit_patch_relocations(struct jit_context *ctx)
 * to get the necessary data for the real compilation phase,
 * jit_compile().
 */
-static struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
+static struct bpf_prog *do_normal_pass(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct jit_context ctx;
 
@@ -1343,7 +1344,7 @@ static struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return prog;
 
-	if (jit_ctx_init(&ctx, prog)) {
+	if (jit_ctx_init(&ctx, env, prog)) {
 		jit_ctx_cleanup(&ctx);
 		return prog;
 	}
@@ -1374,7 +1375,7 @@ static struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
 * again to get the newly translated addresses in order to resolve
 * the "call"s.
 */
-static struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
+static struct bpf_prog *do_extra_pass(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct jit_context ctx;
 
@@ -1382,7 +1383,7 @@ static struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
 	if (check_jit_context(prog))
 		return prog;
 
-	if (jit_ctx_init(&ctx, prog)) {
+	if (jit_ctx_init(&ctx, env, prog)) {
 		jit_ctx_cleanup(&ctx);
 		return prog;
 	}
@@ -1411,15 +1412,15 @@ static struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
 * (re)locations involved that their addresses are not known
 * during the first run.
 */
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	vm_dump(prog);
 
 	/* Was this program already translated?
 	 */
 	if (!prog->jited)
-		return do_normal_pass(prog);
+		return do_normal_pass(env, prog);
 	else
-		return do_extra_pass(prog);
+		return do_extra_pass(env, prog);
 
 	return prog;
 }
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index deeb8f292454..9c07cbf1dbfc 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -2142,7 +2142,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;
 	struct bpf_binary_header *header;
@@ -2162,7 +2162,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * then we must fall back to the interpreter. Otherwise, we save
 	 * the new JITed code.
 	 */
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 
 	if (IS_ERR(tmp))
 		return orig_prog;
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index adf84962d579..823246c7ff5d 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2006,7 +2006,7 @@ struct arm64_jit_data {
 	struct jit_ctx ctx;
 };
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	int image_size, prog_size, extable_size, extable_align, extable_offset;
 	struct bpf_prog *tmp, *orig_prog = prog;
@@ -2027,7 +2027,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	/* If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter.
 	 */
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 3bd89f55960d..b578b176ef01 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1909,7 +1909,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
 	return ret < 0 ? ret : ret * LOONGARCH_INSN_SIZE;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	bool tmp_blinded = false, extra_pass = false;
 	u8 *image_ptr, *ro_image_ptr;
@@ -1927,7 +1927,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	/*
 	 * If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter. Otherwise, we save
diff --git a/arch/mips/net/bpf_jit_comp.c b/arch/mips/net/bpf_jit_comp.c
index e355dfca4400..faf0ba098a86 100644
--- a/arch/mips/net/bpf_jit_comp.c
+++ b/arch/mips/net/bpf_jit_comp.c
@@ -909,7 +909,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;
 	struct bpf_binary_header *header = NULL;
@@ -931,7 +931,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * then we must fall back to the interpreter. Otherwise, we save
 	 * the new JITed code.
 	 */
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	if (IS_ERR(tmp))
 		return orig_prog;
 	if (tmp != prog) {
diff --git a/arch/parisc/net/bpf_jit_core.c b/arch/parisc/net/bpf_jit_core.c
index a5eb6b51e27a..e85b6e336b19 100644
--- a/arch/parisc/net/bpf_jit_core.c
+++ b/arch/parisc/net/bpf_jit_core.c
@@ -41,7 +41,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	unsigned int prog_size = 0, extable_size = 0;
 	bool tmp_blinded = false, extra_pass = false;
@@ -53,7 +53,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	if (IS_ERR(tmp))
 		return orig_prog;
 	if (tmp != prog) {
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 52162e4a7f84..fb77e8beb161 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -129,7 +129,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *fp)
 {
 	u32 proglen;
 	u32 alloclen;
@@ -154,7 +154,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	if (!fp->jit_requested)
 		return org_fp;
 
-	tmp_fp = bpf_jit_blind_constants(org_fp);
+	tmp_fp = bpf_jit_blind_constants(env, org_fp);
 	if (IS_ERR(tmp_fp))
 		return org_fp;
 
diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
index b3581e926436..ce157319459f 100644
--- a/arch/riscv/net/bpf_jit_core.c
+++ b/arch/riscv/net/bpf_jit_core.c
@@ -41,7 +41,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	unsigned int prog_size = 0, extable_size = 0;
 	bool tmp_blinded = false, extra_pass = false;
@@ -53,7 +53,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	if (IS_ERR(tmp))
 		return orig_prog;
 	if (tmp != prog) {
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 211226748662..84aabfc8a9d6 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2303,7 +2303,7 @@ static struct bpf_binary_header *bpf_jit_alloc(struct bpf_jit *jit,
 /*
 * Compile eBPF program "fp"
 */
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *fp)
 {
 	struct bpf_prog *tmp, *orig_fp = fp;
 	struct bpf_binary_header *header;
@@ -2316,7 +2316,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	if (!fp->jit_requested)
 		return orig_fp;
 
-	tmp = bpf_jit_blind_constants(fp);
+	tmp = bpf_jit_blind_constants(env, fp);
 	/*
 	 * If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter.
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index b23d1c645ae5..55da61ca2967 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1477,7 +1477,7 @@ struct sparc64_jit_data {
 	struct jit_ctx ctx;
 };
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;
 	struct sparc64_jit_data *jit_data;
@@ -1492,7 +1492,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	/* If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter.
 	 */
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8f10080e6fe3..43beacaed56d 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3722,7 +3722,7 @@ struct x64_jit_data {
 #define MAX_PASSES 20
 #define PADDING_PASSES (MAX_PASSES - 5)
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_binary_header *rw_header = NULL;
 	struct bpf_binary_header *header = NULL;
@@ -3744,7 +3744,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	/*
 	 * If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter.
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index dda423025c3d..957f7aa951ba 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2518,7 +2518,7 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_binary_header *header = NULL;
 	struct bpf_prog *tmp, *orig_prog = prog;
@@ -2533,7 +2533,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!prog->jit_requested)
 		return orig_prog;
 
-	tmp = bpf_jit_blind_constants(prog);
+	tmp = bpf_jit_blind_constants(env, prog);
 	/*
 	 * If blinding was requested and we failed during blinding,
 	 * we must fall back to the interpreter.
 	 */
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 632c03e126d9..8b5e9ac9eee4 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1108,7 +1108,7 @@ static inline int sk_filter_reason(struct sock *sk, struct sk_buff *skb,
 	return sk_filter_trim_cap(sk, skb, 1, reason);
 }
 
-struct bpf_prog *bpf_prog_select_jit(struct bpf_prog *fp, int *err);
+struct bpf_prog *bpf_prog_select_jit(struct bpf_verifier_env *env, struct bpf_prog *fp, int *err);
 struct bpf_prog *__bpf_prog_select_runtime(struct bpf_prog *fp, bool jit_attempted, int *err);
 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err);
 void bpf_prog_free(struct bpf_prog *fp);
@@ -1155,7 +1155,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 	((u64 (*)(u64, u64, u64, u64, u64, const struct bpf_insn *)) \
 	 (void *)__bpf_call_base)
 
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog);
 void bpf_jit_compile(struct bpf_prog *prog);
 bool bpf_jit_needs_zext(void);
 bool bpf_jit_inlines_helper_call(s32 imm);
@@ -1312,7 +1312,7 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 
 const char *bpf_jit_get_prog_name(struct bpf_prog *prog);
 
-struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *fp);
+struct bpf_prog *bpf_jit_blind_constants(struct bpf_verifier_env *env, struct bpf_prog *prog);
 void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other);
 
 static inline void bpf_jit_dump(unsigned int flen, unsigned int proglen,
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 00be578a438d..7702c232c62e 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1486,7 +1486,7 @@ static void adjust_insn_arrays(struct bpf_prog *prog, u32 off, u32 len)
 #endif
 }
 
-struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *prog)
+struct bpf_prog *bpf_jit_blind_constants(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_insn insn_buff[16], aux[2];
 	struct bpf_prog *clone, *tmp;
@@ -2505,13 +2505,13 @@ static bool bpf_prog_select_interpreter(struct bpf_prog *fp)
 	return select_interpreter;
 }
 
-struct bpf_prog *bpf_prog_select_jit(struct bpf_prog *fp, int *err)
+struct bpf_prog *bpf_prog_select_jit(struct bpf_verifier_env *env, struct bpf_prog *fp, int *err)
 {
 	*err = bpf_prog_alloc_jited_linfo(fp);
 	if (*err)
 		return fp;
 
-	fp = bpf_int_jit_compile(fp);
+	fp = bpf_int_jit_compile(env, fp);
 	bpf_prog_jit_attempt_done(fp);
 	return fp;
 }
@@ -2541,7 +2541,7 @@ struct bpf_prog *__bpf_prog_select_runtime(struct bpf_prog *fp, bool jit_attempt
 	 */
 	if (!bpf_prog_is_offloaded(fp->aux)) {
 		if (!jit_attempted) {
-			fp = bpf_prog_select_jit(fp, err);
+			fp = bpf_prog_select_jit(NULL, fp, err);
 			if (*err)
 				return fp;
 		}
@@ -3072,7 +3072,7 @@ const struct bpf_func_proto bpf_tail_call_proto = {
 * It is encouraged to implement bpf_int_jit_compile() instead, so that
 * eBPF and implicitly also cBPF can get JITed!
 */
-struct bpf_prog * __weak bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog * __weak bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	return prog;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ab2bc0850770..1d2d42078ddf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -22844,7 +22844,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		 * all instruction adjustments should be accumulated
 		 */
 		old_len = func[i]->len;
-		func[i] = bpf_int_jit_compile(func[i]);
+		func[i] = bpf_int_jit_compile(env, func[i]);
 		subprog_start_adjustment += func[i]->len - old_len;
 
 		if (!func[i]->jited) {
@@ -22890,7 +22890,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	}
 	for (i = 0; i < env->subprog_cnt; i++) {
 		old_bpf_func = func[i]->bpf_func;
-		tmp = bpf_int_jit_compile(func[i]);
+		tmp = bpf_int_jit_compile(env, func[i]);
 		if (tmp != func[i] || func[i]->bpf_func != old_bpf_func) {
 			verbose(env, "JIT doesn't support bpf-to-bpf calls\n");
 			err = -ENOTSUPP;
@@ -26089,7 +26089,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	/* constants blinding in the JIT may increase prog->len */
 	len = env->prog->len;
 	if (env->subprog_cnt == 1)
-		env->prog = bpf_prog_select_jit(env->prog, &ret);
+		env->prog = bpf_prog_select_jit(env, env->prog, &ret);
 
 	adjust_btf_func(env);
 
-- 
2.47.3
From: Xu Kuohai
To: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Yonghong Song, Puranjay Mohan, Anton Protopopov
Subject: [PATCH bpf-next v5 3/5] bpf: Add helper to detect indirect jump targets
Date: Mon, 2 Mar 2026 18:27:24 +0800
Message-ID: <20260302102726.1126019-4-xukuohai@huaweicloud.com>
In-Reply-To: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
References:
<20260302102726.1126019-1-xukuohai@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-CM-TRANSID: gCh0CgCXQvMKYKVpfu0vJQ--.22492S5 X-Coremail-Antispam: 1UD129KBjvJXoWxKw1Uuw4rKw13KF1xAr1UWrg_yoWxCw4kpF 4DX3s3Ar48JanrWrnrAF48Aryaqa1rW39rGay7W348A3yjgrn5WF4Fgr4FvF98trW0kF1x ZF4j9r45Wry7ZFJanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUmab4IE77IF4wAFF20E14v26rWj6s0DM7CY07I20VC2zVCF04k2 6cxKx2IYs7xG6rWj6s0DM7CIcVAFz4kK6r1j6r18M28IrcIa0xkI8VA2jI8067AKxVWUWw A2048vs2IY020Ec7CjxVAFwI0_Xr0E3s1l8cAvFVAK0II2c7xJM28CjxkF64kEwVA0rcxS w2x7M28EF7xvwVC0I7IYx2IY67AKxVWDJVCq3wA2z4x0Y4vE2Ix0cI8IcVCY1x0267AKxV W8Jr0_Cr1UM28EF7xvwVC2z280aVAFwI0_GcCE3s1l84ACjcxK6I8E87Iv6xkF7I0E14v2 6rxl6s0DM2AIxVAIcxkEcVAq07x20xvEncxIr21l5I8CrVACY4xI64kE6c02F40Ex7xfMc Ij6xIIjxv20xvE14v26r1j6r18McIj6I8E87Iv67AKxVWUJVW8JwAm72CE4IkC6x0Yz7v_ Jr0_Gr1lF7xvr2IYc2Ij64vIr41lFIxGxcIEc7CjxVA2Y2ka0xkIwI1lc7CjxVAaw2AFwI 0_Jw0_GFylc7CjxVAKzI0EY4vE52x082I5MxAIw28IcxkI7VAKI48JMxC20s026xCaFVCj c4AY6r1j6r4UMI8I3I0E5I8CrVAFwI0_Jr0_Jr4lx2IqxVCjr7xvwVAFwI0_JrI_JrWlx4 CE17CEb7AF67AKxVWUtVW8ZwCIc40Y0x0EwIxGrwCI42IY6xIIjxv20xvE14v26r1j6r1x MIIF0xvE2Ix0cI8IcVCY1x0267AKxVW8JVWxJwCI42IY6xAIw20EY4v20xvaj40_Jr0_JF 4lIxAIcVC2z280aVAFwI0_Jr0_Gr1lIxAIcVC2z280aVCY1x0267AKxVW8JVW8JrUvcSsG vfC2KfnxnUUI43ZEXa7IU8D5r7UUUUU== X-CM-SenderInfo: 50xn30hkdlqx5xdzvxpfor3voofrz/ Content-Type: text/plain; charset="utf-8" From: Xu Kuohai Introduce helper bpf_insn_is_indirect_target to determine whether a BPF instruction is an indirect jump target. This helper will be used by follow-up patches to decide where to emit indirect landing pad instructions. Add a new flag to struct bpf_insn_aux_data to mark instructions that are indirect jump targets. The BPF verifier sets this flag, and the helper checks it to determine whether an instruction is an indirect jump target. 
Also add a new field to struct bpf_insn_aux_data to track the
instruction's final index in the BPF prog, as instructions may be
rewritten by constant blinding in the JIT stage. This field is used as
a binary search key to find the corresponding insn_aux_data for a
given instruction.

Signed-off-by: Xu Kuohai
---
 include/linux/bpf.h          |  2 ++
 include/linux/bpf_verifier.h | 10 ++++++----
 kernel/bpf/core.c            | 38 +++++++++++++++++++++++++++++++++---
 kernel/bpf/verifier.c        | 13 +++++++++++-
 4 files changed, 55 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 05b34a6355b0..90760e250865 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1541,6 +1541,8 @@ bool bpf_has_frame_pointer(unsigned long ip);
 int bpf_jit_charge_modmem(u32 size);
 void bpf_jit_uncharge_modmem(u32 size);
 bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
+bool bpf_insn_is_indirect_target(const struct bpf_verifier_env *env, const struct bpf_prog *prog,
+				 int insn_idx);
 #else
 static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 					   struct bpf_trampoline *tr,
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index c1e30096ea7b..f8f70e5414f0 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -577,16 +577,18 @@ struct bpf_insn_aux_data {

 	/* below fields are initialized once */
 	unsigned int orig_idx; /* original instruction index */
-	bool jmp_point;
-	bool prune_point;
+	unsigned int final_idx; /* final instruction index */
+	u32 jmp_point:1;
+	u32 prune_point:1;
 	/* ensure we check state equivalence and save state checkpoint and
 	 * this instruction, regardless of any heuristics
 	 */
-	bool force_checkpoint;
+	u32 force_checkpoint:1;
 	/* true if instruction is a call to a helper function that
 	 * accepts callback function as a parameter.
 	 */
-	bool calls_callback;
+	u32 calls_callback:1;
+	u32 indirect_target:1; /* if it is an indirect jump target */
 	/*
 	 * CFG strongly connected component this instruction belongs to,
 	 * zero if it is a singleton SCC.
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 7702c232c62e..9a760cf43d68 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1486,13 +1486,41 @@ static void adjust_insn_arrays(struct bpf_prog *prog, u32 off, u32 len)
 #endif
 }

+static int bpf_insn_aux_cmp_by_insn_idx(const void *a, const void *b)
+{
+	int insn_idx = *(int *)a;
+	int final_idx = ((const struct bpf_insn_aux_data *)b)->final_idx;
+
+	return insn_idx - final_idx;
+}
+
+bool bpf_insn_is_indirect_target(const struct bpf_verifier_env *env, const struct bpf_prog *prog,
+				 int insn_idx)
+{
+	struct bpf_insn_aux_data *insn_aux;
+	int func_idx, subprog_start, subprog_end;
+
+	if (!env)
+		return false;
+
+	func_idx = prog->aux->func_idx;
+	subprog_start = env->subprog_info[func_idx].start;
+	subprog_end = env->subprog_info[func_idx + 1].start;
+
+	insn_aux = bsearch(&insn_idx, &env->insn_aux_data[subprog_start],
+			   subprog_end - subprog_start,
+			   sizeof(struct bpf_insn_aux_data), bpf_insn_aux_cmp_by_insn_idx);
+
+	return insn_aux && insn_aux->indirect_target;
+}
+
 struct bpf_prog *bpf_jit_blind_constants(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	struct bpf_insn insn_buff[16], aux[2];
 	struct bpf_prog *clone, *tmp;
-	int insn_delta, insn_cnt;
+	int insn_delta, insn_cnt, subprog_start;
 	struct bpf_insn *insn;
-	int i, rewritten;
+	int i, j, rewritten;

 	if (!prog->blinding_requested || prog->blinded)
 		return prog;
@@ -1503,8 +1531,10 @@ struct bpf_prog *bpf_jit_blind_constants(struct bpf_verifier_env *env, struct bp

 	insn_cnt = clone->len;
 	insn = clone->insnsi;
+	subprog_start = env->subprog_info[prog->aux->func_idx].start;

-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = 0, j = 0; i < insn_cnt; i++, j++, insn++) {
+		env->insn_aux_data[subprog_start + j].final_idx = i;
 		if (bpf_pseudo_func(insn)) {
 			/* ld_imm64 with an address of bpf subprog is not
 			 * a user controlled constant. Don't randomize it,
@@ -1512,6 +1542,8 @@ struct bpf_prog *bpf_jit_blind_constants(struct bpf_verifier_env *env, struct bp
 			 */
 			insn++;
 			i++;
+			j++;
+			env->insn_aux_data[subprog_start + j].final_idx = i;
 			continue;
 		}

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1d2d42078ddf..5f08d521e58a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3971,6 +3971,11 @@ static bool is_jmp_point(struct bpf_verifier_env *env, int insn_idx)
 	return env->insn_aux_data[insn_idx].jmp_point;
 }

+static void mark_indirect_target(struct bpf_verifier_env *env, int idx)
+{
+	env->insn_aux_data[idx].indirect_target = true;
+}
+
 #define LR_FRAMENO_BITS	3
 #define LR_SPI_BITS	6
 #define LR_ENTRY_BITS	(LR_SPI_BITS + LR_FRAMENO_BITS + 1)
@@ -20943,12 +20948,14 @@ static int check_indirect_jump(struct bpf_verifier_env *env, struct bpf_insn *in
 	}

 	for (i = 0; i < n - 1; i++) {
+		mark_indirect_target(env, env->gotox_tmp_buf->items[i]);
 		other_branch = push_stack(env, env->gotox_tmp_buf->items[i],
 					  env->insn_idx, env->cur_state->speculative);
 		if (IS_ERR(other_branch))
 			return PTR_ERR(other_branch);
 	}
 	env->insn_idx = env->gotox_tmp_buf->items[n-1];
+	mark_indirect_target(env, env->insn_idx);
 	return 0;
 }

@@ -22817,6 +22824,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		num_exentries = 0;
 		insn = func[i]->insnsi;
 		for (j = 0; j < func[i]->len; j++, insn++) {
+			env->insn_aux_data[subprog_start + j].final_idx = j;
 			if (BPF_CLASS(insn->code) == BPF_LDX &&
 			    (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
 			     BPF_MODE(insn->code) == BPF_PROBE_MEM32 ||
@@ -26088,8 +26096,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3

 	/* constants blinding in the JIT may increase prog->len */
 	len = env->prog->len;
-	if (env->subprog_cnt == 1)
+	if (env->subprog_cnt == 1) {
+		for (i = 0; i < len; i++)
+			env->insn_aux_data[i].final_idx = i;
 		env->prog = bpf_prog_select_jit(env, env->prog, &ret);
+	}

 	adjust_btf_func(env);

-- 
2.47.3
From nobody Thu Apr 9 15:03:54 2026

From: Xu Kuohai
To: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Yonghong Song, Puranjay Mohan, Anton Protopopov
Subject: [PATCH bpf-next v5 4/5] bpf, x86: Emit ENDBR for indirect jump targets
Date: Mon, 2 Mar 2026 18:27:25 +0800
Message-ID: <20260302102726.1126019-5-xukuohai@huaweicloud.com>
In-Reply-To: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
References: <20260302102726.1126019-1-xukuohai@huaweicloud.com>
Content-Type: text/plain; charset="utf-8"

From: Xu Kuohai

On CPUs that support CET/IBT, the indirect jump selftest triggers a
kernel panic because the indirect jump targets lack ENDBR instructions.
To fix it, emit an ENDBR instruction at each indirect jump target.

Since the ENDBR instruction shifts the positions of the original JITed
instructions, fix the instruction address calculation wherever the
addresses are used.

For reference, below is a sample panic log:

  Missing ENDBR: bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x97/0xe1
  ------------[ cut here ]------------
  kernel BUG at arch/x86/kernel/cet.c:133!
  Oops: invalid opcode: 0000 [#1] SMP NOPTI
  ...
  ? 0xffffffffc00fb258
  ? bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x97/0xe1
  bpf_prog_test_run_syscall+0x110/0x2f0
  ? fdget+0xba/0xe0
  __sys_bpf+0xe4b/0x2590
  ? __kmalloc_node_track_caller_noprof+0x1c7/0x680
  ? bpf_prog_test_run_syscall+0x215/0x2f0
  __x64_sys_bpf+0x21/0x30
  do_syscall_64+0x85/0x620
  ? bpf_prog_test_run_syscall+0x1e2/0x2f0

Fixes: 493d9e0d6083 ("bpf, x86: add support for indirect jumps")
Signed-off-by: Xu Kuohai
---
 arch/x86/net/bpf_jit_comp.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 43beacaed56d..7a2fa828558a 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1658,8 +1658,8 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 	return 0;
 }

-static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
-		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
+static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+		  u8 *rw_image, int oldproglen, struct jit_context *ctx, bool jmp_padding)
 {
 	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
@@ -1743,6 +1743,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			dst_reg = X86_REG_R9;
 		}

+		if (bpf_insn_is_indirect_target(env, bpf_prog, i - 1))
+			EMIT_ENDBR();
+
 		switch (insn->code) {
 			/* ALU */
 		case BPF_ALU | BPF_ADD | BPF_X:
@@ -2449,7 +2452,7 @@ st:			if (is_imm8(insn->off))

 			/* call */
 		case BPF_JMP | BPF_CALL: {
-			u8 *ip = image + addrs[i - 1];
+			u8 *ip = image + addrs[i - 1] + (prog - temp);

 			func = (u8 *) __bpf_call_base + imm32;
 			if (src_reg == BPF_PSEUDO_CALL && tail_call_reachable) {
@@ -2474,7 +2477,8 @@ st:			if (is_imm8(insn->off))
 			if (imm32)
 				emit_bpf_tail_call_direct(bpf_prog,
 							  &bpf_prog->aux->poke_tab[imm32 - 1],
-							  &prog, image + addrs[i - 1],
+							  &prog,
+							  image + addrs[i - 1] + (prog - temp),
 							  callee_regs_used,
 							  stack_depth,
 							  ctx);
@@ -2483,7 +2487,7 @@ st:			if (is_imm8(insn->off))
 							    &prog,
 							    callee_regs_used,
 							    stack_depth,
-							    image + addrs[i - 1],
+							    image + addrs[i - 1] + (prog - temp),
 							    ctx);
 			break;

@@ -2648,7 +2652,8 @@ st:			if (is_imm8(insn->off))
 			break;

 		case BPF_JMP | BPF_JA | BPF_X:
-			emit_indirect_jump(&prog, insn->dst_reg, image + addrs[i - 1]);
+			emit_indirect_jump(&prog, insn->dst_reg,
+					   image + addrs[i - 1] + (prog - temp));
 			break;
 		case BPF_JMP | BPF_JA:
 		case BPF_JMP32 | BPF_JA:
@@ -2738,7 +2743,7 @@ st:			if (is_imm8(insn->off))
 			ctx->cleanup_addr = proglen;
 			if (bpf_prog_was_classic(bpf_prog) &&
 			    !ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN)) {
-				u8 *ip = image + addrs[i - 1];
+				u8 *ip = image + addrs[i - 1] + (prog - temp);

 				if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
 					return -EINVAL;
@@ -3820,7 +3825,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_pr
 	for (pass = 0; pass < MAX_PASSES || image; pass++) {
 		if (!padding && pass >= PADDING_PASSES)
 			padding = true;
-		proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx, padding);
+		proglen = do_jit(env, prog, addrs, image, rw_image, oldproglen, &ctx, padding);
 		if (proglen <= 0) {
out_image:
 			image = NULL;
-- 
2.47.3
b=Z10eZbcananmU4JpR4zz8jyuG3dYY9A67RTXUMPd2EV9a2EHbGme/KvO6SwTlV8w47j52BcxdxT6t8XDTJ6PD1AzjPQvVFRPoeCNP+Q4sIjCT6qcKv1XcsHujSQE4hezwUM6uIDzdf3n0470bPEzIQ6TplnjjEY4X++KiHA3wss= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com; spf=none smtp.mailfrom=huaweicloud.com; arc=none smtp.client-ip=45.249.212.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=huaweicloud.com Received: from mail.maildlp.com (unknown [172.19.163.198]) by dggsgout12.his.huawei.com (SkyGuard) with ESMTPS id 4fPZGM1J25zKHMXT; Mon, 2 Mar 2026 18:01:47 +0800 (CST) Received: from mail02.huawei.com (unknown [10.116.40.128]) by mail.maildlp.com (Postfix) with ESMTP id CA94F40579; Mon, 2 Mar 2026 18:01:49 +0800 (CST) Received: from k01.k01 (unknown [10.67.174.197]) by APP4 (Coremail) with SMTP id gCh0CgCXQvMKYKVpfu0vJQ--.22492S7; Mon, 02 Mar 2026 18:01:49 +0800 (CST) From: Xu Kuohai To: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Eduard Zingerman , Yonghong Song , Puranjay Mohan , Anton Protopopov Subject: [PATCH bpf-next v5 5/5] bpf, arm64: Emit BTI for indirect jump target Date: Mon, 2 Mar 2026 18:27:26 +0800 Message-ID: <20260302102726.1126019-6-xukuohai@huaweicloud.com> X-Mailer: git-send-email 2.47.3 In-Reply-To: <20260302102726.1126019-1-xukuohai@huaweicloud.com> References: <20260302102726.1126019-1-xukuohai@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-CM-TRANSID: gCh0CgCXQvMKYKVpfu0vJQ--.22492S7 X-Coremail-Antispam: 1UD129KBjvJXoWxXrW8trW5XF45Cw1rWw4Uurg_yoW5Zw1fpF 4DC3s0krW8Gr4jg3WDXayDAFyakF4kGFW3GFyFk3ySkrZ0qF98WF1UKF12kF93A3yrur1f 
Za90kr1UW34xJrDanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUmmb4IE77IF4wAFF20E14v26rWj6s0DM7CY07I20VC2zVCF04k2 6cxKx2IYs7xG6rWj6s0DM7CIcVAFz4kK6r1j6r18M28IrcIa0xkI8VA2jI8067AKxVWUAV Cq3wA2048vs2IY020Ec7CjxVAFwI0_Xr0E3s1l8cAvFVAK0II2c7xJM28CjxkF64kEwVA0 rcxSw2x7M28EF7xvwVC0I7IYx2IY67AKxVWDJVCq3wA2z4x0Y4vE2Ix0cI8IcVCY1x0267 AKxVW8Jr0_Cr1UM28EF7xvwVC2z280aVAFwI0_GcCE3s1l84ACjcxK6I8E87Iv6xkF7I0E 14v26rxl6s0DM2AIxVAIcxkEcVAq07x20xvEncxIr21l5I8CrVACY4xI64kE6c02F40Ex7 xfMcIj6xIIjxv20xvE14v26r1j6r18McIj6I8E87Iv67AKxVWUJVW8JwAm72CE4IkC6x0Y z7v_Jr0_Gr1lF7xvr2IYc2Ij64vIr41lFIxGxcIEc7CjxVA2Y2ka0xkIwI1lc7CjxVAaw2 AFwI0_Jw0_GFylc7CjxVAKzI0EY4vE52x082I5MxAIw28IcxkI7VAKI48JMxC20s026xCa FVCjc4AY6r1j6r4UMI8I3I0E5I8CrVAFwI0_Jr0_Jr4lx2IqxVCjr7xvwVAFwI0_JrI_Jr Wlx4CE17CEb7AF67AKxVWUtVW8ZwCIc40Y0x0EwIxGrwCI42IY6xIIjxv20xvE14v26r1I 6r4UMIIF0xvE2Ix0cI8IcVCY1x0267AKxVWxJVW8Jr1lIxAIcVCF04k26cxKx2IYs7xG6r 1j6r1xMIIF0xvEx4A2jsIE14v26r1j6r4UMIIF0xvEx4A2jsIEc7CjxVAFwI0_Gr0_Gr1U YxBIdaVFxhVjvjDU0xZFpf9x07j5CzZUUUUU= X-CM-SenderInfo: 50xn30hkdlqx5xdzvxpfor3voofrz/ Content-Type: text/plain; charset="utf-8" From: Xu Kuohai On CPUs that support BTI, the indirect jump selftest triggers a kernel panic because there is no BTI instructions at the indirect jump targets. Fix it by emitting a BTI instruction for each indirect jump target. For reference, below is a sample panic log. Internal error: Oops - BTI: 0000000036000003 [#1] SMP ... 
Call trace: bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x54/0xf8 (P) bpf_prog_run_pin_on_cpu+0x140/0x468 bpf_prog_test_run_syscall+0x280/0x3b8 bpf_prog_test_run+0x22c/0x2c0 Fixes: f4a66cf1cb14 ("bpf: arm64: Add support for indirect jumps") Signed-off-by: Xu Kuohai --- arch/arm64/net/bpf_jit_comp.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 823246c7ff5d..127e099d3d3a 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -1198,8 +1198,8 @@ static int add_exception_handler(const struct bpf_ins= n *insn, * >0 - successfully JITed a 16-byte eBPF instruction. * <0 - failed to JIT. */ -static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, - bool extra_pass) +static int build_insn(const struct bpf_verifier_env *env, const struct bpf= _insn *insn, + struct jit_ctx *ctx, bool extra_pass) { const u8 code =3D insn->code; u8 dst =3D bpf2a64[insn->dst_reg]; @@ -1224,6 +1224,9 @@ static int build_insn(const struct bpf_insn *insn, st= ruct jit_ctx *ctx, int ret; bool sign_extend; =20 + if (bpf_insn_is_indirect_target(env, ctx->prog, i)) + emit_bti(A64_BTI_J, ctx); + switch (code) { /* dst =3D src */ case BPF_ALU | BPF_MOV | BPF_X: @@ -1899,7 +1902,7 @@ static int build_insn(const struct bpf_insn *insn, st= ruct jit_ctx *ctx, return 0; } =20 -static int build_body(struct jit_ctx *ctx, bool extra_pass) +static int build_body(struct bpf_verifier_env *env, struct jit_ctx *ctx, b= ool extra_pass) { const struct bpf_prog *prog =3D ctx->prog; int i; @@ -1918,7 +1921,7 @@ static int build_body(struct jit_ctx *ctx, bool extra= _pass) int ret; =20 ctx->offset[i] =3D ctx->idx; - ret =3D build_insn(insn, ctx, extra_pass); + ret =3D build_insn(env, insn, ctx, extra_pass); if (ret > 0) { i++; ctx->offset[i] =3D ctx->idx; @@ -2100,7 +2103,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_verif= ier_env *env, struct bpf_pr goto out_off; } =20 
- if (build_body(&ctx, extra_pass)) { + if (build_body(env, &ctx, extra_pass)) { prog =3D orig_prog; goto out_off; } @@ -2152,7 +2155,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_verif= ier_env *env, struct bpf_pr /* Dont write body instructions to memory for now */ ctx.write =3D false; =20 - if (build_body(&ctx, extra_pass)) { + if (build_body(env, &ctx, extra_pass)) { prog =3D orig_prog; goto out_free_hdr; } @@ -2163,7 +2166,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_verif= ier_env *env, struct bpf_pr ctx.write =3D true; =20 /* Pass 3: Adjust jump offset and write final image */ - if (build_body(&ctx, extra_pass) || + if (build_body(env, &ctx, extra_pass) || WARN_ON_ONCE(ctx.idx !=3D ctx.epilogue_offset)) { prog =3D orig_prog; goto out_free_hdr; --=20 2.47.3