From nobody Tue Dec 16 16:35:15 2025
From: Peter Zijlstra
To: peterz@infradead.org
Cc: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, davem@davemloft.net, dsahern@kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, Arnd Bergmann, samitolvanen@google.com, keescook@chromium.org, nathan@kernel.org, ndesaulniers@google.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, jpoimboe@kernel.org, joao@overdrivepizza.com, mark.rutland@arm.com
Subject: [PATCH v2 1/2] cfi: Flip headers
Date: Thu, 30 Nov 2023 14:36:31 +0100
Message-Id: <20231130134204.026354676@infradead.org>
References: <20231130133630.192490507@infradead.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Normal include order is that linux/foo.h should include asm/foo.h; CFI has
it the wrong way around.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Sami Tolvanen
---
 arch/riscv/include/asm/cfi.h |    3 ++-
 arch/riscv/kernel/cfi.c      |    2 +-
 arch/x86/include/asm/cfi.h   |    3 ++-
 arch/x86/kernel/cfi.c        |    4 ++--
 include/asm-generic/Kbuild   |    1 +
 include/asm-generic/cfi.h    |    5 +++++
 include/linux/cfi.h          |    1 +
 7 files changed, 14 insertions(+), 5 deletions(-)

--- a/arch/riscv/include/asm/cfi.h
+++ b/arch/riscv/include/asm/cfi.h
@@ -7,8 +7,9 @@
  *
  * Copyright (C) 2023 Google LLC
  */
+#include <linux/bug.h>

-#include <linux/cfi.h>
+struct pt_regs;

 #ifdef CONFIG_CFI_CLANG
 enum bug_trap_type handle_cfi_failure(struct pt_regs *regs);
--- a/arch/riscv/kernel/cfi.c
+++ b/arch/riscv/kernel/cfi.c
@@ -4,7 +4,7 @@
  *
  * Copyright (C) 2023 Google LLC
  */
-#include <asm/cfi.h>
+#include <linux/cfi.h>
 #include <asm/insn.h>

 /*
--- a/arch/x86/include/asm/cfi.h
+++ b/arch/x86/include/asm/cfi.h
@@ -7,8 +7,9 @@
  *
  * Copyright (C) 2022 Google LLC
  */
+#include <linux/bug.h>

-#include <linux/cfi.h>
+struct pt_regs;

 #ifdef CONFIG_CFI_CLANG
 enum bug_trap_type handle_cfi_failure(struct pt_regs *regs);
--- a/arch/x86/kernel/cfi.c
+++ b/arch/x86/kernel/cfi.c
@@ -4,10 +4,10 @@
  *
  * Copyright (C) 2022 Google LLC
  */
-#include <asm/cfi.h>
+#include <linux/string.h>
+#include <linux/cfi.h>
 #include <asm/insn.h>
 #include <asm/insn-eval.h>
-#include <linux/string.h>

 /*
  * Returns the target address and the expected type when regs->ip points
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -11,6 +11,7 @@ mandatory-y += bitops.h
 mandatory-y += bug.h
 mandatory-y += bugs.h
 mandatory-y += cacheflush.h
+mandatory-y += cfi.h
 mandatory-y += checksum.h
 mandatory-y += compat.h
 mandatory-y += current.h
--- /dev/null
+++ b/include/asm-generic/cfi.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_GENERIC_CFI_H
+#define __ASM_GENERIC_CFI_H
+
+#endif /* __ASM_GENERIC_CFI_H */
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -9,6 +9,7 @@

 #include <linux/bug.h>
 #include <linux/module.h>
+#include <asm/cfi.h>

 #ifdef CONFIG_CFI_CLANG
 enum bug_trap_type report_cfi_failure(struct pt_regs *regs, unsigned long addr,
From nobody Tue Dec 16 16:35:15 2025
From: Peter Zijlstra
To: peterz@infradead.org
Cc: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, davem@davemloft.net, dsahern@kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, Arnd Bergmann, samitolvanen@google.com, keescook@chromium.org, nathan@kernel.org, ndesaulniers@google.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, jpoimboe@kernel.org, joao@overdrivepizza.com, mark.rutland@arm.com
Subject: [PATCH v2 2/2] x86/cfi,bpf: Fix BPF JIT call
Date: Thu, 30 Nov 2023 14:36:32 +0100
Message-Id: <20231130134204.136058029@infradead.org>
References: <20231130133630.192490507@infradead.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The current BPF call convention is __nocfi, except when it calls !JIT things,
then it calls regular C functions.

It so happens that with FineIBT the __nocfi and C calling conventions are
incompatible. Specifically __nocfi will call at func+0, while FineIBT will
have endbr-poison there, which is not a valid indirect target. Causing #CP.

Notably this only triggers on IBT enabled hardware, which is probably why
this hasn't been reported (also, most people will have JIT on anyway).

Implement proper CFI prologues for the BPF JIT codegen and drop __nocfi for
x86.
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/include/asm/cfi.h    |   94 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/alternative.c |   47 +++++++++++++++---
 arch/x86/net/bpf_jit_comp.c   |  108 +++++++++++++++++++++++++++++++++++++-----
 include/linux/bpf.h           |   12 +++-
 kernel/bpf/core.c             |   20 +++++++
 5 files changed, 260 insertions(+), 21 deletions(-)

--- a/arch/x86/include/asm/cfi.h
+++ b/arch/x86/include/asm/cfi.h
@@ -9,15 +9,109 @@
  */
 #include <linux/bug.h>

+/*
+ * An overview of the various calling conventions...
+ *
+ * Traditional:
+ *
+ * foo:
+ *   ... code here ...
+ *   ret
+ *
+ * direct caller:
+ *   call foo
+ *
+ * indirect caller:
+ *   lea foo(%rip), %r11
+ *   ...
+ *   call *%r11
+ *
+ *
+ * IBT:
+ *
+ * foo:
+ *   endbr64
+ *   ... code here ...
+ *   ret
+ *
+ * direct caller:
+ *   call foo / call foo+4
+ *
+ * indirect caller:
+ *   lea foo(%rip), %r11
+ *   ...
+ *   call *%r11
+ *
+ *
+ * kCFI:
+ *
+ * __cfi_foo:
+ *   movl $0x12345678, %eax
+ *   # 11 nops when CONFIG_CALL_PADDING
+ * foo:
+ *   endbr64			# when IBT
+ *   ... code here ...
+ *   ret
+ *
+ * direct call:
+ *   call foo			# / call foo+4 when IBT
+ *
+ * indirect call:
+ *   lea foo(%rip), %r11
+ *   ...
+ *   movl $(-0x12345678), %r10d
+ *   addl -4(%r11), %r10d	# -15 when CONFIG_CALL_PADDING
+ *   jz 1f
+ *   ud2
+ * 1:call *%r11
+ *
+ *
+ * FineIBT (builds as kCFI + CALL_PADDING + IBT + RETPOLINE and runtime patches into):
+ *
+ * __cfi_foo:
+ *   endbr64
+ *   subl 0x12345678, %r10d
+ *   jz foo
+ *   ud2
+ *   nop
+ * foo:
+ *   osp nop3			# was endbr64
+ *   ... code here ...
+ *   ret
+ *
+ * direct caller:
+ *   call foo / call foo+4
+ *
+ * indirect caller:
+ *   lea foo(%rip), %r11
+ *   ...
+ *   movl $0x12345678, %r10d
+ *   subl $16, %r11
+ *   nop4
+ *   call *%r11
+ *
+ */
+enum cfi_mode {
+	CFI_DEFAULT,	/* FineIBT if hardware has IBT, otherwise kCFI */
+	CFI_OFF,	/* Traditional / IBT depending on .config */
+	CFI_KCFI,	/* Optionally CALL_PADDING, IBT, RETPOLINE */
+	CFI_FINEIBT,	/* see arch/x86/kernel/alternative.c */
+};
+
+extern enum cfi_mode cfi_mode;
+
 struct pt_regs;

 #ifdef CONFIG_CFI_CLANG
 enum bug_trap_type handle_cfi_failure(struct pt_regs *regs);
+#define __bpfcall
+extern u32 cfi_bpf_hash;
 #else
 static inline enum bug_trap_type handle_cfi_failure(struct pt_regs *regs)
 {
 	return BUG_TRAP_TYPE_NONE;
 }
+#define cfi_bpf_hash 0U
 #endif /* CONFIG_CFI_CLANG */

 #endif /* _ASM_X86_CFI_H */
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -30,6 +30,7 @@
 #include <asm/fixmap.h>
 #include <asm/paravirt.h>
 #include <asm/asm-prototypes.h>
+#include <asm/cfi.h>

 int __read_mostly alternatives_patched;

@@ -832,15 +833,43 @@ void __init_or_module apply_seal_endbr(s
 #endif /* CONFIG_X86_KERNEL_IBT */

 #ifdef CONFIG_FINEIBT
+#define __CFI_DEFAULT	CFI_DEFAULT
+#elif defined(CONFIG_CFI_CLANG)
+#define __CFI_DEFAULT	CFI_KCFI
+#else
+#define __CFI_DEFAULT	CFI_OFF
+#endif

-enum cfi_mode {
-	CFI_DEFAULT,
-	CFI_OFF,
-	CFI_KCFI,
-	CFI_FINEIBT,
-};
+enum cfi_mode cfi_mode __ro_after_init = __CFI_DEFAULT;
+
+#ifdef CONFIG_CFI_CLANG
+struct bpf_insn;
+
+/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
+extern unsigned int __bpf_prog_runX(const void *ctx,
+				    const struct bpf_insn *insn);
+
+/*
+ * Force a reference to the external symbol so the compiler generates
+ * __kcfi_typeid.
+ */
+__ADDRESSABLE(__bpf_prog_runX);
+
+/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
+asm (
+"	.pushsection	.data..ro_after_init,\"aw\",@progbits	\n"
+"	.type	cfi_bpf_hash,@object				\n"
+"	.globl	cfi_bpf_hash					\n"
+"	.p2align	2, 0x0					\n"
+"cfi_bpf_hash:							\n"
+"	.long	__kcfi_typeid___bpf_prog_runX			\n"
+"	.size	cfi_bpf_hash, 4					\n"
+"	.popsection						\n"
+);
+#endif
+
+#ifdef CONFIG_FINEIBT

-static enum cfi_mode cfi_mode __ro_after_init = CFI_DEFAULT;
 static bool cfi_rand __ro_after_init = true;
 static u32  cfi_seed __ro_after_init;

@@ -1149,8 +1178,10 @@ static void __apply_fineibt(s32 *start_r
 		goto err;

 	if (cfi_rand) {
-		if (builtin)
+		if (builtin) {
 			cfi_seed = get_random_u32();
+			cfi_bpf_hash = cfi_rehash(cfi_bpf_hash);
+		}

 		ret = cfi_rand_preamble(start_cfi, end_cfi);
 		if (ret)
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -17,6 +17,7 @@
 #include <asm/set_memory.h>
 #include <asm/nospec-branch.h>
 #include <asm/text-patching.h>
+#include <asm/cfi.h>

 static bool all_callee_regs_used[4] = {true, true, true, true};

@@ -51,9 +52,11 @@ static u8 *emit_code(u8 *ptr, u32 bytes,
 	do { EMIT4(b1, b2, b3, b4); EMIT(off, 4); } while (0)

 #ifdef CONFIG_X86_KERNEL_IBT
-#define EMIT_ENDBR()	EMIT(gen_endbr(), 4)
+#define EMIT_ENDBR()		EMIT(gen_endbr(), 4)
+#define EMIT_ENDBR_POISON()	EMIT(gen_endbr_poison(), 4)
 #else
 #define EMIT_ENDBR()
+#define EMIT_ENDBR_POISON()
 #endif

 static bool is_imm8(int value)
@@ -247,6 +250,7 @@ struct jit_context {
 	 */
 	int tail_call_direct_label;
 	int tail_call_indirect_label;
+	int prog_offset;
 };

 /* Maximum number of bytes emitted while JITing one eBPF insn */
@@ -305,20 +309,90 @@ static void pop_callee_regs(u8 **pprog,
 }

 /*
+ * Emit the various CFI preambles, see asm/cfi.h and the comments about FineIBT
+ * in arch/x86/kernel/alternative.c
+ */
+
+static int emit_fineibt(u8 **pprog)
+{
+	u8 *prog = *pprog;
+
+	EMIT_ENDBR();
+	EMIT3_off32(0x41, 0x81, 0xea, cfi_bpf_hash);	/* subl $hash, %r10d	*/
+	EMIT2(0x74, 0x07);				/* jz.d8 +7		*/
+	EMIT2(0x0f, 0x0b);				/* ud2			*/
+	EMIT1(0x90);					/* nop			*/
+	EMIT_ENDBR_POISON();
+
+	*pprog = prog;
+	return 16;
+}
+
+static int emit_kcfi(u8 **pprog)
+{
+	u8 *prog = *pprog;
+	int offset = 5;
+
+	EMIT1_off32(0xb8, cfi_bpf_hash);		/* movl $hash, %eax	*/
+#ifdef CONFIG_CALL_PADDING
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	EMIT1(0x90);
+	offset += 11;
+#endif
+	EMIT_ENDBR();
+
+	*pprog = prog;
+	return offset;
+}
+
+static int emit_cfi(u8 **pprog)
+{
+	u8 *prog = *pprog;
+	int offset = 0;
+
+	switch (cfi_mode) {
+	case CFI_FINEIBT:
+		offset = emit_fineibt(&prog);
+		break;
+
+	case CFI_KCFI:
+		offset = emit_kcfi(&prog);
+		break;
+
+	default:
+		EMIT_ENDBR();
+		break;
+	}
+
+	*pprog = prog;
+	return offset;
+}
+
+/*
  * Emit x86-64 prologue code for BPF program.
  * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
  * while jumping to another program
  */
-static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
-			  bool tail_call_reachable, bool is_subprog,
-			  bool is_exception_cb)
+static int emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+			 bool tail_call_reachable, bool is_subprog,
+			 bool is_exception_cb)
 {
 	u8 *prog = *pprog;
+	int offset;

+	offset = emit_cfi(&prog);
 	/* BPF trampoline can be made to work without these nops,
 	 * but let's waste 5 bytes for now and optimize later
 	 */
-	EMIT_ENDBR();
 	memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
 	prog += X86_PATCH_SIZE;
 	if (!ebpf_from_cbpf) {
@@ -357,6 +431,8 @@ static void emit_prologue(u8 **pprog, u3
 	if (tail_call_reachable)
 		EMIT1(0x50);         /* push rax */
 	*pprog = prog;
+
+	return offset;
 }

 static int emit_patch(u8 **pprog, void *func, void *ip, u8 opcode)
@@ -1083,8 +1159,8 @@ static int do_jit(struct bpf_prog *bpf_p
 	bool tail_call_seen = false;
 	bool seen_exit = false;
 	u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
-	int i, excnt = 0;
 	int ilen, proglen = 0;
+	int i, excnt = 0;
 	u8 *prog = temp;
 	int err;

@@ -1094,9 +1170,12 @@ static int do_jit(struct bpf_prog *bpf_p
 	/* tail call's presence in current prog implies it is reachable */
 	tail_call_reachable |= tail_call_seen;

-	emit_prologue(&prog, bpf_prog->aux->stack_depth,
-		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
-		      bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
+	ctx->prog_offset = emit_prologue(&prog, bpf_prog->aux->stack_depth,
+					 bpf_prog_was_classic(bpf_prog),
+					 tail_call_reachable,
+					 bpf_is_subprog(bpf_prog),
+					 bpf_prog->aux->exception_cb);
+
 	/* Exception callback will clobber callee regs for its own use, and
 	 * restore the original callee regs from main prog's stack frame.
 	 */
@@ -2935,9 +3014,16 @@ struct bpf_prog *bpf_int_jit_compile(str
 			jit_data->header = header;
 			jit_data->rw_header = rw_header;
 		}
-		prog->bpf_func = (void *)image;
+		/*
+		 * ctx.prog_offset is used when CFI preambles put code *before*
+		 * the function. See emit_cfi(). For FineIBT specifically this code
+		 * can also be executed and bpf_prog_kallsyms_add() will
+		 * generate an additional symbol to cover this, hence also
+		 * decrement proglen.
+		 */
+		prog->bpf_func = (void *)image + ctx.prog_offset;
 		prog->jited = 1;
-		prog->jited_len = proglen;
+		prog->jited_len = proglen - ctx.prog_offset;
 	} else {
 		prog = orig_prog;
 	}
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -29,6 +29,7 @@
 #include <linux/rcupdate_trace.h>
 #include <linux/static_call.h>
 #include <linux/memcontrol.h>
+#include <linux/cfi.h>

 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -1188,7 +1189,11 @@ struct bpf_dispatcher {
 #endif
 };

-static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
+#ifndef __bpfcall
+#define __bpfcall	__nocfi
+#endif
+
+static __always_inline __bpfcall unsigned int bpf_dispatcher_nop_func(
 	const void *ctx,
 	const struct bpf_insn *insnsi,
 	bpf_func_t bpf_func)
@@ -1278,7 +1283,7 @@ int arch_prepare_bpf_dispatcher(void *im

 #define DEFINE_BPF_DISPATCHER(name)					\
 	__BPF_DISPATCHER_SC(name);					\
-	noinline __nocfi unsigned int bpf_dispatcher_##name##_func(	\
+	noinline __bpfcall unsigned int bpf_dispatcher_##name##_func(	\
 		const void *ctx,					\
 		const struct bpf_insn *insnsi,				\
 		bpf_func_t bpf_func)					\
@@ -1426,6 +1431,9 @@ struct bpf_prog_aux {
 	struct bpf_kfunc_desc_tab *kfunc_tab;
 	struct bpf_kfunc_btf_tab *kfunc_btf_tab;
 	u32 size_poke_tab;
+#ifdef CONFIG_FINEIBT
+	struct bpf_ksym ksym_prefix;
+#endif
 	struct bpf_ksym ksym;
 	const struct bpf_prog_ops *ops;
 	struct bpf_map **used_maps;
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -683,6 +683,23 @@ void bpf_prog_kallsyms_add(struct bpf_pr
 	fp->aux->ksym.prog = true;

 	bpf_ksym_add(&fp->aux->ksym);
+
+#ifdef CONFIG_FINEIBT
+	/*
+	 * When FineIBT, code in the __cfi_foo() symbols can get executed
+	 * and hence unwinder needs help.
+	 */
+	if (cfi_mode != CFI_FINEIBT)
+		return;
+
+	snprintf(fp->aux->ksym_prefix.name, KSYM_NAME_LEN,
+		 "__cfi_%s", fp->aux->ksym.name);
+
+	fp->aux->ksym_prefix.start = (unsigned long) fp->bpf_func - 16;
+	fp->aux->ksym_prefix.end   = (unsigned long) fp->bpf_func;
+
+	bpf_ksym_add(&fp->aux->ksym_prefix);
+#endif
 }

 void bpf_prog_kallsyms_del(struct bpf_prog *fp)
@@ -691,6 +708,9 @@ void bpf_prog_kallsyms_del(struct bpf_pr
 		return;

 	bpf_ksym_del(&fp->aux->ksym);
+#ifdef CONFIG_FINEIBT
+	bpf_ksym_del(&fp->aux->ksym_prefix);
+#endif
 }

 static struct bpf_ksym *bpf_ksym_find(unsigned long addr)