[PATCH bpf-next v2 4/6] bpf: Disallow !call_get_func_ip progs tail-calling call_get_func_ip progs

Posted by Leon Hwang 1 month, 1 week ago
Trampoline-based tracing programs that call bpf_get_func_ip() rely on
the func IP that the trampoline stores on the stack. Mixing
!call_get_func_ip progs with call_get_func_ip progs via tail calls
breaks this assumption: the trampoline of a !call_get_func_ip prog
never stores the func IP, so a tail-called call_get_func_ip prog reads
a stack slot that was never populated.

To address this, reject the combination of !call_get_func_ip progs with
call_get_func_ip progs in bpf_map_owner_matches(), which prevents the
tail callee from getting a bogus func IP.

Also reject call_get_func_ip mismatches at load time (FOR_INIT);
without that check, the FOR_UPDATE restriction can be bypassed as
follows.

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("?fentry")
int BPF_PROG(prog_a)
{
	bpf_printk("FUNC IP: 0x%llx\n", bpf_get_func_ip(ctx));
	bpf_tail_call_static(ctx, &jmp_table, 0);
	return 0;
}

SEC("?fentry")
int BPF_PROG(prog_b)
{
	bpf_tail_call_static(ctx, &jmp_table, 0);
	return 0;
}

The jmp_table is shared between prog_a and prog_b.

* Load prog_a first.
  At this point, owner->call_get_func_ip=true.
* Load prog_b next.
  At this point, prog_b passes the compatibility check, since
  call_get_func_ip is not compared at load time.
* Add prog_a to jmp_table.
* Attach prog_b to a kernel function.

When the kernel function runs, prog_a will get a bogus func IP because
no func IP is prepared on the trampoline stack.

Fixes: 1e37392cccde ("bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 include/linux/bpf.h | 1 +
 kernel/bpf/core.c   | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index dbafed52b2ba..fb978650b169 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -289,6 +289,7 @@ struct bpf_map_owner {
 	u32 xdp_has_frags:1;
 	u32 sleepable:1;
 	u32 kprobe_write_ctx:1;
+	u32 call_get_func_ip:1;
 	u64 storage_cookie[MAX_BPF_CGROUP_STORAGE_TYPE];
 	const struct btf_type *attach_func_proto;
 	enum bpf_attach_type expected_attach_type;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 121a697d4da5..1b88878fe5c5 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2391,6 +2391,7 @@ static void bpf_map_owner_init(struct bpf_map_owner *owner, const struct bpf_pro
 	owner->xdp_has_frags = aux->xdp_has_frags;
 	owner->sleepable = fp->sleepable;
 	owner->kprobe_write_ctx = aux->kprobe_write_ctx;
+	owner->call_get_func_ip = fp->call_get_func_ip;
 	owner->expected_attach_type = fp->expected_attach_type;
 	owner->attach_func_proto = aux->attach_func_proto;
 	for_each_cgroup_storage_type(i)
@@ -2422,11 +2423,17 @@ static bool bpf_map_owner_matches(const struct bpf_map *map, const struct bpf_pr
 	case BPF_MAP_OWNER_MATCH_FOR_INIT:
 		if (owner->kprobe_write_ctx != aux->kprobe_write_ctx)
 			return false;
+		if (owner->call_get_func_ip != fp->call_get_func_ip)
+			return false;
 		break;
 
 	case BPF_MAP_OWNER_MATCH_FOR_UPDATE:
 		if (!owner->kprobe_write_ctx && aux->kprobe_write_ctx)
 			return false;
+		if (bpf_prog_has_trampoline(fp)) {
+			if (!owner->call_get_func_ip && fp->call_get_func_ip)
+				return false;
+		}
 		break;
 	}
 
-- 
2.52.0
Re: [PATCH bpf-next v2 4/6] bpf: Disallow !call_get_func_ip progs tail-calling call_get_func_ip progs
Posted by bot+bpf-ci@kernel.org 1 month, 1 week ago
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c

[ ... ]

> @@ -2422,11 +2423,17 @@ static bool bpf_map_owner_matches(const struct bpf_map *map, const struct bpf_prog *fp,
>  	case BPF_MAP_OWNER_MATCH_FOR_INIT:
>  		if (owner->kprobe_write_ctx != aux->kprobe_write_ctx)
>  			return false;
> +		if (owner->call_get_func_ip != fp->call_get_func_ip)
> +			return false;
>  		break;
>
>  	case BPF_MAP_OWNER_MATCH_FOR_UPDATE:
>  		if (!owner->kprobe_write_ctx && aux->kprobe_write_ctx)
>  			return false;
> +		if (bpf_prog_has_trampoline(fp)) {
> +			if (!owner->call_get_func_ip && fp->call_get_func_ip)
> +				return false;
> +		}
>  		break;

The FOR_UPDATE path correctly guards the call_get_func_ip check with
bpf_prog_has_trampoline(fp), since only trampoline-based programs
rely on the func IP stored on the trampoline stack. Kprobe programs
get func_ip from kprobe_running()->addr via bpf_get_func_ip_kprobe(),
which is independent of the trampoline.

Should the FOR_INIT path have the same bpf_prog_has_trampoline()
guard? Without it, two kprobe programs that share a prog_array will
be rejected if one calls bpf_get_func_ip() and the other does not,
even though mixing is safe for kprobes.

bpf_check_tail_call() calls this with BPF_MAP_OWNER_MATCH_FOR_INIT:

    kprobe prog_a (call_get_func_ip=1) loads, references jmp_table
      -> owner->call_get_func_ip = 1
    kprobe prog_b (call_get_func_ip=0) loads, references same jmp_table
      -> owner->call_get_func_ip != fp->call_get_func_ip
      -> returns false, prog_b fails to load

Something like this would match the FOR_UPDATE scoping:

    case BPF_MAP_OWNER_MATCH_FOR_INIT:
            if (owner->kprobe_write_ctx != aux->kprobe_write_ctx)
                    return false;
            if (bpf_prog_has_trampoline(fp)) {
                    if (owner->call_get_func_ip != fp->call_get_func_ip)
                            return false;
            }
            break;


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22582690876

AI-authorship-score: medium
AI-authorship-explanation: The v1 selftests carried Assisted-by Codex tags, and the kernel code follows a mechanical copy-paste pattern across commits; the v2 selftests were rewritten but the overall series structure suggests AI-assisted development.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: The FOR_INIT strict equality check on call_get_func_ip is missing a bpf_prog_has_trampoline() guard, causing unnecessary rejection of kprobe programs that share a prog_array with different call_get_func_ip values.
Re: [PATCH bpf-next v2 4/6] bpf: Disallow !call_get_func_ip progs tail-calling call_get_func_ip progs
Posted by Leon Hwang 1 month, 1 week ago

On 2/3/26 23:53, bot+bpf-ci@kernel.org wrote:
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
> 
> [ ... ]
> 
>> @@ -2422,11 +2423,17 @@ static bool bpf_map_owner_matches(const struct bpf_map *map, const struct bpf_prog *fp,
>>  	case BPF_MAP_OWNER_MATCH_FOR_INIT:
>>  		if (owner->kprobe_write_ctx != aux->kprobe_write_ctx)
>>  			return false;
>> +		if (owner->call_get_func_ip != fp->call_get_func_ip)
>> +			return false;
>>  		break;
>>
>>  	case BPF_MAP_OWNER_MATCH_FOR_UPDATE:
>>  		if (!owner->kprobe_write_ctx && aux->kprobe_write_ctx)
>>  			return false;
>> +		if (bpf_prog_has_trampoline(fp)) {
>> +			if (!owner->call_get_func_ip && fp->call_get_func_ip)
>> +				return false;
>> +		}
>>  		break;
> 
> The FOR_UPDATE path correctly guards the call_get_func_ip check with
> bpf_prog_has_trampoline(fp), since only trampoline-based programs
> rely on the func IP stored on the trampoline stack. Kprobe programs
> get func_ip from kprobe_running()->addr via bpf_get_func_ip_kprobe(),
> which is independent of the trampoline.
> 
> Should the FOR_INIT path have the same bpf_prog_has_trampoline()
> guard? Without it, two kprobe programs that share a prog_array will
> be rejected if one calls bpf_get_func_ip() and the other does not,
> even though mixing is safe for kprobes.
> 
> bpf_check_tail_call() calls this with BPF_MAP_OWNER_MATCH_FOR_INIT:
> 
>     kprobe prog_a (call_get_func_ip=1) loads, references jmp_table
>       -> owner->call_get_func_ip = 1
>     kprobe prog_b (call_get_func_ip=0) loads, references same jmp_table
>       -> owner->call_get_func_ip != fp->call_get_func_ip
>       -> returns false, prog_b fails to load
> 
> Something like this would match the FOR_UPDATE scoping:
> 
>     case BPF_MAP_OWNER_MATCH_FOR_INIT:
>             if (owner->kprobe_write_ctx != aux->kprobe_write_ctx)
>                     return false;
>             if (bpf_prog_has_trampoline(fp)) {
>                     if (owner->call_get_func_ip != fp->call_get_func_ip)
>                             return false;
>             }
>             break;
> 

Correct.

The FOR_INIT path should have the same bpf_prog_has_trampoline() guard.

I'll do it in the next revision.

Thanks,
Leon
