[PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Leon Hwang 1 month, 1 week ago
This patch series optimizes BPF tail calls on x86_64 and arm64 by
eliminating runtime memory accesses for max_entries and 'prog->bpf_func'
when the prog array map is known at verification time.

Currently, every tail call requires:
  1. Loading max_entries from the prog array map
  2. Dereferencing 'prog->bpf_func' to get the target address
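
At the C level, the JITed sequence behaves roughly like this (a
simplified sketch of the emitted logic, not literal JIT output;
prologue_offset stands for the offset past the target's prologue):

  /* Sketch of the current runtime tail call path. */
  if (index >= array->map.max_entries)       /* memory load #1 */
          goto out;
  if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
          goto out;
  prog = array->ptrs[index];
  if (!prog)
          goto out;
  /* jump into the target, skipping its prologue */
  goto *(prog->bpf_func + prologue_offset);  /* memory load #2 */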

This series introduces a mechanism to precompute and cache the tail call
target addresses (bpf_func + prologue_offset) in the prog array itself:
  array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset

When a program is added to or removed from the prog array, the cached
target is atomically updated via xchg().
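
In arraymap.c terms, the update on add/remove could look roughly like
this (a sketch with an illustrative helper name; the
bpf_arch_tail_call_prologue_offset() hook is what patch 1 introduces):

  /* Keep the cached jump target in sync with array->ptrs[index].
   * Illustrative sketch only.
   */
  static void prog_array_update_target(struct bpf_array *array, u32 index,
                                       struct bpf_prog *prog)
  {
          u32 max_entries = array->map.max_entries;
          void *target = NULL;

          if (prog)
                  target = (u8 *)prog->bpf_func +
                           bpf_arch_tail_call_prologue_offset(prog);

          /* Atomic swap: concurrent tail calls see old or new target. */
          xchg(&array->ptrs[max_entries + index], target);
  }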

The verifier now encodes additional information in the tail call
instruction's imm field:
  - bits 0-7:   map index in used_maps[]
  - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
  - bits 16-31: poke table index + 1 for direct tail calls
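
For illustration, decoding that layout could be written as below (macro
names are made up for the sketch; only the bit layout comes from the
series):

  /* Illustrative decode of the tail call imm encoding above. */
  #define TC_IMM_MAP_IDX(imm)   ((imm) & 0xff)           /* used_maps[] slot */
  #define TC_IMM_DYNAMIC(imm)   (((imm) >> 8) & 0xff)    /* map ptr poisoned */
  #define TC_IMM_POKE_IDX(imm)  (((imm) >> 16) & 0xffff) /* 0 = none, else
                                                            poke index + 1 */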

For static tail calls (map known at verification time):
  - max_entries is embedded as an immediate in the comparison instruction
  - The cached target from array->ptrs[max_entries + index] is used
    directly, avoiding the 'prog->bpf_func' dereference

For dynamic tail calls (map pointer poisoned):
  - Fall back to runtime lookup of max_entries and prog->bpf_func
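
Putting the static case together, the fast path reduces to something
like this (again a sketch of the logic, with MAX_ENTRIES_IMM standing
for the embedded immediate, not literal assembly):

  /* Static fast path: max_entries is baked in as an immediate. */
  if (index >= MAX_ENTRIES_IMM)                   /* no memory load */
          goto out;
  if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
          goto out;
  target = array->ptrs[MAX_ENTRIES_IMM + index];  /* cached address */
  if (!target)
          goto out;
  goto *target;                       /* no prog->bpf_func dereference */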

This reduces cache misses and improves tail call performance for the
common case where the prog array is statically known.

Leon Hwang (4):
  bpf: tailcall: Introduce bpf_arch_tail_call_prologue_offset
  bpf, x64: tailcall: Eliminate max_entries and bpf_func access at
    runtime
  bpf, arm64: tailcall: Eliminate max_entries and bpf_func access at
    runtime
  bpf, lib/test_bpf: Fix broken tailcall tests

 arch/arm64/net/bpf_jit_comp.c | 71 +++++++++++++++++++++++++----------
 arch/x86/net/bpf_jit_comp.c   | 51 ++++++++++++++++++-------
 include/linux/bpf.h           |  1 +
 kernel/bpf/arraymap.c         | 27 ++++++++++++-
 kernel/bpf/verifier.c         | 30 ++++++++++++++-
 lib/test_bpf.c                | 39 ++++++++++++++++---
 6 files changed, 178 insertions(+), 41 deletions(-)

--
2.52.0
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Alexei Starovoitov 1 month, 1 week ago
On Fri, Jan 2, 2026 at 7:01 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> This patch series optimizes BPF tail calls on x86_64 and arm64 by
> eliminating runtime memory accesses for max_entries and 'prog->bpf_func'
> when the prog array map is known at verification time.
>
> Currently, every tail call requires:
>   1. Loading max_entries from the prog array map
>   2. Dereferencing 'prog->bpf_func' to get the target address
>
> This series introduces a mechanism to precompute and cache the tail call
> target addresses (bpf_func + prologue_offset) in the prog array itself:
>   array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset
>
> When a program is added to or removed from the prog array, the cached
> target is atomically updated via xchg().
>
> The verifier now encodes additional information in the tail call
> instruction's imm field:
>   - bits 0-7:   map index in used_maps[]
>   - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
>   - bits 16-31: poke table index + 1 for direct tail calls
>
> For static tail calls (map known at verification time):
>   - max_entries is embedded as an immediate in the comparison instruction
>   - The cached target from array->ptrs[max_entries + index] is used
>     directly, avoiding the 'prog->bpf_func' dereference
>
> For dynamic tail calls (map pointer poisoned):
>   - Fall back to runtime lookup of max_entries and prog->bpf_func
>
> This reduces cache misses and improves tail call performance for the
> common case where the prog array is statically known.

Sorry, I don't like this. tail_calls are complex enough and
I'd rather let them be as-is and deprecate their usage altogether
instead of trying to optimize them in certain conditions.
We have indirect jumps now. The next step is indirect calls.
When it lands there will be no need to use tail_calls.
Consider tail_calls to be legacy. No reason to improve them.

pw-bot: cr
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Jiri Olsa 3 weeks, 5 days ago
On Fri, Jan 02, 2026 at 04:10:01PM -0800, Alexei Starovoitov wrote:
> On Fri, Jan 2, 2026 at 7:01 AM Leon Hwang <leon.hwang@linux.dev> wrote:
> >
> > This patch series optimizes BPF tail calls on x86_64 and arm64 by
> > eliminating runtime memory accesses for max_entries and 'prog->bpf_func'
> > when the prog array map is known at verification time.
> >
> > Currently, every tail call requires:
> >   1. Loading max_entries from the prog array map
> >   2. Dereferencing 'prog->bpf_func' to get the target address
> >
> > This series introduces a mechanism to precompute and cache the tail call
> > target addresses (bpf_func + prologue_offset) in the prog array itself:
> >   array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset
> >
> > When a program is added to or removed from the prog array, the cached
> > target is atomically updated via xchg().
> >
> > The verifier now encodes additional information in the tail call
> > instruction's imm field:
> >   - bits 0-7:   map index in used_maps[]
> >   - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
> >   - bits 16-31: poke table index + 1 for direct tail calls
> >
> > For static tail calls (map known at verification time):
> >   - max_entries is embedded as an immediate in the comparison instruction
> >   - The cached target from array->ptrs[max_entries + index] is used
> >     directly, avoiding the 'prog->bpf_func' dereference
> >
> > For dynamic tail calls (map pointer poisoned):
> >   - Fall back to runtime lookup of max_entries and prog->bpf_func
> >
> > This reduces cache misses and improves tail call performance for the
> > common case where the prog array is statically known.
> 
> Sorry, I don't like this. tail_calls are complex enough and
> I'd rather let them be as-is and deprecate their usage altogether
> instead of trying to optimize them in certain conditions.
> We have indirect jumps now. The next step is indirect calls.
> When it lands there will be no need to use tail_calls.
> Consider tail_calls to be legacy. No reason to improve them.

hi,
I'd like to make tail calls available in sleepable programs. I still
need to check if there's a technical reason we don't have that, but seeing
this answer I wonder whether you'd be against that anyway?

fyi I briefly discussed that with Andrii, who indicated that it might not
be worth the effort at this stage.

thanks,
jirka
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Alexei Starovoitov 3 weeks, 5 days ago
On Wed, Jan 14, 2026 at 3:28 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Fri, Jan 02, 2026 at 04:10:01PM -0800, Alexei Starovoitov wrote:
> > On Fri, Jan 2, 2026 at 7:01 AM Leon Hwang <leon.hwang@linux.dev> wrote:
> > >
> > > This patch series optimizes BPF tail calls on x86_64 and arm64 by
> > > eliminating runtime memory accesses for max_entries and 'prog->bpf_func'
> > > when the prog array map is known at verification time.
> > >
> > > Currently, every tail call requires:
> > >   1. Loading max_entries from the prog array map
> > >   2. Dereferencing 'prog->bpf_func' to get the target address
> > >
> > > This series introduces a mechanism to precompute and cache the tail call
> > > target addresses (bpf_func + prologue_offset) in the prog array itself:
> > >   array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset
> > >
> > > When a program is added to or removed from the prog array, the cached
> > > target is atomically updated via xchg().
> > >
> > > The verifier now encodes additional information in the tail call
> > > instruction's imm field:
> > >   - bits 0-7:   map index in used_maps[]
> > >   - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
> > >   - bits 16-31: poke table index + 1 for direct tail calls
> > >
> > > For static tail calls (map known at verification time):
> > >   - max_entries is embedded as an immediate in the comparison instruction
> > >   - The cached target from array->ptrs[max_entries + index] is used
> > >     directly, avoiding the 'prog->bpf_func' dereference
> > >
> > > For dynamic tail calls (map pointer poisoned):
> > >   - Fall back to runtime lookup of max_entries and prog->bpf_func
> > >
> > > This reduces cache misses and improves tail call performance for the
> > > common case where the prog array is statically known.
> >
> > Sorry, I don't like this. tail_calls are complex enough and
> > I'd rather let them be as-is and deprecate their usage altogether
> > instead of trying to optimize them in certain conditions.
> > We have indirect jumps now. The next step is indirect calls.
> > When it lands there will be no need to use tail_calls.
> > Consider tail_calls to be legacy. No reason to improve them.
>
> hi,
> I'd like to make tail calls available in sleepable programs. I still
> need to check if there's a technical reason we don't have that, but seeing
> this answer I wonder whether you'd be against that anyway?

tail_calls are not allowed in sleepable progs?
I don't remember such a limitation.
What prevents it?
prog_type needs to match, so all sleepable progs should be fine.
The mix and match is problematic due to rcu vs srcu lifetimes.

> fyi I briefly discussed that with Andrii, who indicated that it might not
> be worth the effort at this stage.

depending on complexity of course.
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Jiri Olsa 3 weeks, 5 days ago
On Wed, Jan 14, 2026 at 08:04:38AM -0800, Alexei Starovoitov wrote:
> On Wed, Jan 14, 2026 at 3:28 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Fri, Jan 02, 2026 at 04:10:01PM -0800, Alexei Starovoitov wrote:
> > > On Fri, Jan 2, 2026 at 7:01 AM Leon Hwang <leon.hwang@linux.dev> wrote:
> > > >
> > > > This patch series optimizes BPF tail calls on x86_64 and arm64 by
> > > > eliminating runtime memory accesses for max_entries and 'prog->bpf_func'
> > > > when the prog array map is known at verification time.
> > > >
> > > > Currently, every tail call requires:
> > > >   1. Loading max_entries from the prog array map
> > > >   2. Dereferencing 'prog->bpf_func' to get the target address
> > > >
> > > > This series introduces a mechanism to precompute and cache the tail call
> > > > target addresses (bpf_func + prologue_offset) in the prog array itself:
> > > >   array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset
> > > >
> > > > When a program is added to or removed from the prog array, the cached
> > > > target is atomically updated via xchg().
> > > >
> > > > The verifier now encodes additional information in the tail call
> > > > instruction's imm field:
> > > >   - bits 0-7:   map index in used_maps[]
> > > >   - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
> > > >   - bits 16-31: poke table index + 1 for direct tail calls
> > > >
> > > > For static tail calls (map known at verification time):
> > > >   - max_entries is embedded as an immediate in the comparison instruction
> > > >   - The cached target from array->ptrs[max_entries + index] is used
> > > >     directly, avoiding the 'prog->bpf_func' dereference
> > > >
> > > > For dynamic tail calls (map pointer poisoned):
> > > >   - Fall back to runtime lookup of max_entries and prog->bpf_func
> > > >
> > > > This reduces cache misses and improves tail call performance for the
> > > > common case where the prog array is statically known.
> > >
> > > Sorry, I don't like this. tail_calls are complex enough and
> > > I'd rather let them be as-is and deprecate their usage altogether
> > > instead of trying to optimize them in certain conditions.
> > > We have indirect jumps now. The next step is indirect calls.
> > > When it lands there will be no need to use tail_calls.
> > > Consider tail_calls to be legacy. No reason to improve them.
> >
> > hi,
> > I'd like to make tail calls available in sleepable programs. I still
> > need to check if there's a technical reason we don't have that, but seeing
> > this answer I wonder whether you'd be against that anyway?
> 
> tail_calls are not allowed in sleepable progs?
> I don't remember such a limitation.
> What prevents it?
> prog_type needs to match, so all sleepable progs should be fine.

right, that's what we have: tail-called uprobe programs that we
need to make sleepable

> The mix and match is problematic due to rcu vs srcu lifetimes.
> 
> > fyi I briefly discussed that with Andrii, who indicated that it might not
> > be worth the effort at this stage.
> 
> depending on complexity of course.

for my tests I just had to allow BPF_MAP_TYPE_PROG_ARRAY map
for sleepable programs

jirka


---
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index faa1ecc1fe9d..1f6fc74c7ea1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20969,6 +20969,7 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 		case BPF_MAP_TYPE_STACK:
 		case BPF_MAP_TYPE_ARENA:
 		case BPF_MAP_TYPE_INSN_ARRAY:
+		case BPF_MAP_TYPE_PROG_ARRAY:
 			break;
 		default:
 			verbose(env,
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Alexei Starovoitov 3 weeks, 5 days ago
On Wed, Jan 14, 2026 at 1:00 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> >
> > > fyi I briefly discussed that with Andrii, who indicated that it might not
> > > be worth the effort at this stage.
> >
> > depending on complexity of course.
>
> for my tests I just had to allow BPF_MAP_TYPE_PROG_ARRAY map
> for sleepable programs
>
> jirka
>
>
> ---
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index faa1ecc1fe9d..1f6fc74c7ea1 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -20969,6 +20969,7 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
>                 case BPF_MAP_TYPE_STACK:
>                 case BPF_MAP_TYPE_ARENA:
>                 case BPF_MAP_TYPE_INSN_ARRAY:
> +               case BPF_MAP_TYPE_PROG_ARRAY:
>                         break;
>                 default:
>                         verbose(env,

Think it through, add selftests, ship it.
On the surface, the easy part is to make
__bpf_prog_map_compatible() reject the sleepable/non-sleepable combo.
Maybe there are other things.
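
A minimal sketch of that check (field names are illustrative; the real
owner bookkeeping lives in bpf_prog_map_compatible() in
kernel/bpf/core.c):

  /* Refuse to mix sleepable and non-sleepable programs in one
   * prog array. Sketch only.
   */
  if (map->owner.type && map->owner.sleepable != prog->sleepable)
          return false;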
Re: [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
Posted by Jiri Olsa 3 weeks, 4 days ago
On Wed, Jan 14, 2026 at 01:56:11PM -0800, Alexei Starovoitov wrote:
> On Wed, Jan 14, 2026 at 1:00 PM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > >
> > > > fyi I briefly discussed that with Andrii, who indicated that it might not
> > > > be worth the effort at this stage.
> > >
> > > depending on complexity of course.
> >
> > for my tests I just had to allow BPF_MAP_TYPE_PROG_ARRAY map
> > for sleepable programs
> >
> > jirka
> >
> >
> > ---
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index faa1ecc1fe9d..1f6fc74c7ea1 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -20969,6 +20969,7 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
> >                 case BPF_MAP_TYPE_STACK:
> >                 case BPF_MAP_TYPE_ARENA:
> >                 case BPF_MAP_TYPE_INSN_ARRAY:
> > +               case BPF_MAP_TYPE_PROG_ARRAY:
> >                         break;
> >                 default:
> >                         verbose(env,
> 
> Think it through, add selftests, ship it.
> On the surface the easy part is to make
> __bpf_prog_map_compatible() reject sleepable/non-sleepable combo.
> Maybe there are other things.

ok, thanks

jirka