Inline bpf_get_current_task() and bpf_get_current_task_btf() for x86_64
to obtain better performance.

Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
v5:
- don't support the !CONFIG_SMP case

v4:
- handle the !CONFIG_SMP case

v3:
- implement it in the verifier with BPF_MOV64_PERCPU_REG() instead of in
  x86_64 JIT.
---
kernel/bpf/verifier.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9de0ec0c3ed9..c4e2ffadfb1f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17739,6 +17739,10 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
switch (imm) {
#ifdef CONFIG_X86_64
case BPF_FUNC_get_smp_processor_id:
+#ifdef CONFIG_SMP
+ case BPF_FUNC_get_current_task_btf:
+ case BPF_FUNC_get_current_task:
+#endif
return env->prog->jit_requested && bpf_jit_supports_percpu_insn();
#endif
default:
@@ -23319,6 +23323,24 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn = new_prog->insnsi + i + delta;
goto next_insn;
}
+
+ /* Implement bpf_get_current_task() and bpf_get_current_task_btf() inline. */
+ if ((insn->imm == BPF_FUNC_get_current_task || insn->imm == BPF_FUNC_get_current_task_btf) &&
+ verifier_inlines_helper_call(env, insn->imm)) {
+ insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, (u32)(unsigned long)&current_task);
+ insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+ insn_buf[2] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
+ cnt = 3;
+
+ new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+ if (!new_prog)
+ return -ENOMEM;
+
+ delta += cnt - 1;
+ env->prog = prog = new_prog;
+ insn = new_prog->insnsi + i + delta;
+ goto next_insn;
+ }
#endif
/* Implement bpf_get_func_arg inline. */
if (prog_type == BPF_PROG_TYPE_TRACING &&
--
2.52.0
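
For readers skimming the diff, the three patched instructions boil down to a
per-CPU pointer load. The sketch below is illustrative only and assumes
x86-64's per-CPU current_task variable (the symbol whose address the first
instruction materializes); inlined_get_current is a hypothetical name used
for the example, not something the patch adds.

/*
 * Rough effect of the rewritten helper call on x86-64:
 *
 *   r0 = (u32)(unsigned long)&current_task;  // per-CPU offset of current_task
 *   r0 += this CPU's per-CPU base            // BPF_MOV64_PERCPU_REG(r0, r0)
 *   r0 = *(u64 *)(r0 + 0);                   // BPF_LDX_MEM: the task pointer
 *
 * which in kernel C is roughly equivalent to:
 */
static __always_inline struct task_struct *inlined_get_current(void)
{
	return this_cpu_read(current_task);
}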
On Mon, Jan 19, 2026 at 11:06 PM Menglong Dong <menglong8.dong@gmail.com> wrote:
>
> Inline bpf_get_current_task() and bpf_get_current_task_btf() for x86_64
> to obtain better performance.
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> v5:
> - don't support the !CONFIG_SMP case
>
> v4:
> - handle the !CONFIG_SMP case
>
> v3:
> - implement it in the verifier with BPF_MOV64_PERCPU_REG() instead of in
> x86_64 JIT.
> ---
> kernel/bpf/verifier.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9de0ec0c3ed9..c4e2ffadfb1f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -17739,6 +17739,10 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
> switch (imm) {
> #ifdef CONFIG_X86_64
> case BPF_FUNC_get_smp_processor_id:
> +#ifdef CONFIG_SMP
> + case BPF_FUNC_get_current_task_btf:
> + case BPF_FUNC_get_current_task:
> +#endif
Does this have to be x86-64 specific inlining? With verifier inlining
and per_cpu instruction support it should theoretically work across
all architectures that do support per-cpu instruction, no?

Eduard pointed out [0] to me for why we have that x86-64 specific
check. But looking at do_misc_fixups(), we have that early
bpf_jit_inlines_helper_call(insn->imm)) check, so if some JIT has more
performant inlining implementation, we will just do that.

So it seems like we can just drop all that x86-64 specific logic and
claim all three of these functions as inlinable, no?

And even more. We can drop rather confusing
verifier_inlines_helper_call() that duplicates the decision of which
helpers can be inlined or not, and have:

if (env->prog->jit_requested && bpf_jit_supports_percpu_insn()) {
	switch (insn->imm) {
	case BPF_FUNC_get_smp_processor_id:
		...
		break;
	case BPF_FUNC_get_current_task_btf:
	case BPF_FUNC_get_current_task:
		...
		break;
	default:
	}
}

And the decision about inlining will live in one place.

Or am I missing some complications?

And with all that, should we mark get_current_task and
get_current_task_btf as __bpf_fastcall?

[0] https://lore.kernel.org/all/20240722233844.1406874-4-eddyz87@gmail.com/
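
On the __bpf_fastcall point, the change would presumably be confined to the
helper protos. A hypothetical sketch, assuming the allow_fastcall flag that
bpf_get_smp_processor_id_proto already uses is the right mechanism for
helpers; only the new field is spelled out, the existing fields are elided:

/* Hypothetical sketch, not part of this patch: let the verifier drop the
 * spill/fill pairs the compiler emits around the call once it is inlined. */
const struct bpf_func_proto bpf_get_current_task_proto = {
	.func		= bpf_get_current_task,
	/* ... existing fields unchanged ... */
	.allow_fastcall	= true,
};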
> return env->prog->jit_requested && bpf_jit_supports_percpu_insn();
> #endif
> default:
> @@ -23319,6 +23323,24 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> insn = new_prog->insnsi + i + delta;
> goto next_insn;
> }
> +
> + /* Implement bpf_get_current_task() and bpf_get_current_task_btf() inline. */
> + if ((insn->imm == BPF_FUNC_get_current_task || insn->imm == BPF_FUNC_get_current_task_btf) &&
> + verifier_inlines_helper_call(env, insn->imm)) {
> + insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, (u32)(unsigned long)¤t_task);
> + insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> + insn_buf[2] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
> + cnt = 3;
> +
> + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> + if (!new_prog)
> + return -ENOMEM;
> +
> + delta += cnt - 1;
> + env->prog = prog = new_prog;
> + insn = new_prog->insnsi + i + delta;
> + goto next_insn;
> + }
> #endif
> /* Implement bpf_get_func_arg inline. */
> if (prog_type == BPF_PROG_TYPE_TRACING &&
> --
> 2.52.0
>
On Tue, Jan 20, 2026 at 5:24 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Jan 19, 2026 at 11:06 PM Menglong Dong <menglong8.dong@gmail.com> wrote:
> >
> > Inline bpf_get_current_task() and bpf_get_current_task_btf() for x86_64
> > to obtain better performance.
> >
> > Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> > Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> > v5:
> > - don't support the !CONFIG_SMP case
> >
> > v4:
> > - handle the !CONFIG_SMP case
> >
> > v3:
> > - implement it in the verifier with BPF_MOV64_PERCPU_REG() instead of in
> > x86_64 JIT.
> > ---
> > kernel/bpf/verifier.c | 22 ++++++++++++++++++++++
> > 1 file changed, 22 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 9de0ec0c3ed9..c4e2ffadfb1f 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -17739,6 +17739,10 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
> > switch (imm) {
> > #ifdef CONFIG_X86_64
> > case BPF_FUNC_get_smp_processor_id:
> > +#ifdef CONFIG_SMP
> > + case BPF_FUNC_get_current_task_btf:
> > + case BPF_FUNC_get_current_task:
> > +#endif
>
> Does this have to be x86-64 specific inlining? With verifier inlining
> and per_cpu instruction support it should theoretically work across
> all architectures that do support per-cpu instruction, no?
>
> Eduard pointed out [0] to me for why we have that x86-64 specific
> check. But looking at do_misc_fixups(), we have that early
> bpf_jit_inlines_helper_call(insn->imm)) check, so if some JIT has more
> performant inlining implementation, we will just do that.
>
> So it seems like we can just drop all that x86-64 specific logic and
> claim all three of these functions as inlinable, no?
>
> And even more. We can drop rather confusing
> verifier_inlines_helper_call() that duplicates the decision of which
> helpers can be inlined or not, and have:
>
> if (env->prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> 	switch (insn->imm) {
> 	case BPF_FUNC_get_smp_processor_id:
> 		...
> 		break;
> 	case BPF_FUNC_get_current_task_btf:
> 	case BPF_FUNC_get_current_task:
> 		...
> 		break;
> 	default:
> 	}
> }
>
> And the decision about inlining will live in one place.
>
> Or am I missing some complications?
I think it needs to be arch specific, since 'current' is arch
specific. x86 is different from arm64.
Though both JITs support percpu pseudo insn, it doesn't help
to make get_current inlining generic.
One has to analyze each arch individually.
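
For comparison, arm64 reads current out of the SP_EL0 system register rather
than a per-CPU variable, so there is no per-CPU slot a generic percpu-insn
based rewrite could load from. Roughly, paraphrasing
arch/arm64/include/asm/current.h:

/* Sketch of arm64's get_current(): the running task pointer lives in
 * SP_EL0, not in a per-CPU current_task variable as on x86-64. */
static __always_inline struct task_struct *get_current(void)
{
	unsigned long sp_el0;

	asm ("mrs %0, sp_el0" : "=r" (sp_el0));

	return (struct task_struct *)sp_el0;
}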