[PATCH] selftests/bpf: Make x86 preempt_count access compatible across v6.14+
Posted by Changwoo Min 1 week, 2 days ago
Recent x86 kernels (v6.15+) export __preempt_count as a ksym, while older
kernels expose the preemption counter via pcpu_hot.preempt_count. The
existing selftest helper unconditionally dereferenced __preempt_count,
which breaks BPF program loading on older kernels.

Make the x86 preemption count lookup version-agnostic by:
- Marking __preempt_count and pcpu_hot as weak ksyms.
- Introducing a BTF-described pcpu_hot___local layout with
  preserve_access_index.
- Selecting the appropriate access path at runtime using ksym availability
  and bpf_core_field_exists().

This allows a single BPF binary to run correctly on both v6.14-and-older
and v6.15-and-newer kernels without relying on compile-time version checks.

Fixes: 4b69e31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()")
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 tools/testing/selftests/bpf/bpf_experimental.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index a39576c8ba04..0194c0090e50 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -614,7 +614,13 @@ extern int bpf_cgroup_read_xattr(struct cgroup *cgroup, const char *name__str,
 
 extern bool CONFIG_PREEMPT_RT __kconfig __weak;
 #ifdef bpf_target_x86
-extern const int __preempt_count __ksym;
+extern const int __preempt_count __ksym __weak;
+
+struct pcpu_hot___local {
+	int preempt_count;
+} __attribute__((preserve_access_index));
+
+extern struct pcpu_hot___local pcpu_hot __ksym __weak;
 #endif
 
 struct task_struct___preempt_rt {
@@ -624,7 +630,13 @@ struct task_struct___preempt_rt {
 static inline int get_preempt_count(void)
 {
 #if defined(bpf_target_x86)
-	return *(int *) bpf_this_cpu_ptr(&__preempt_count);
+	/* v6.15 or later */
+	if (&__preempt_count)
+		return *(int *) bpf_this_cpu_ptr(&__preempt_count);
+	/* v6.14 or older */
+	if (bpf_core_field_exists(pcpu_hot.preempt_count))
+		return ((struct pcpu_hot___local *)
+			bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;
 #elif defined(bpf_target_arm64)
 	return bpf_get_current_task_btf()->thread_info.preempt.count;
 #endif
-- 
2.52.0
Re: [PATCH] selftests/bpf: Make x86 preempt_count access compatible across v6.14+
Posted by Alexei Starovoitov 1 week, 1 day ago
On Thu, Jan 29, 2026 at 5:54 AM Changwoo Min <changwoo@igalia.com> wrote:
>
> Recent x86 kernels (v6.15+) export __preempt_count as a ksym, while older
> kernels expose the preemption counter via pcpu_hot.preempt_count. The
> existing selftest helper unconditionally dereferenced __preempt_count,
> which breaks BPF program loading on older kernels.
>
> Make the x86 preemption count lookup version-agnostic by:
> - Marking __preempt_count and pcpu_hot as weak ksyms.
> - Introducing a BTF-described pcpu_hot___local layout with
>   preserve_access_index.
> - Selecting the appropriate access path at runtime using ksym availability
>   and bpf_core_field_exists().
>
> This allows a single BPF binary to run correctly on both v6.14-and-older
> and v6.15-and-newer kernels without relying on compile-time version checks.

See.. with the bpf approach instead of a kfunc, these new
helpers can work on old kernels without backporting kfuncs :)

> Fixes: 4b69e31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()")

fixes tag is not appropriate. It's not a bug fix.

> Signed-off-by: Changwoo Min <changwoo@igalia.com>
> ---
>  tools/testing/selftests/bpf/bpf_experimental.h | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> index a39576c8ba04..0194c0090e50 100644
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
> @@ -614,7 +614,13 @@ extern int bpf_cgroup_read_xattr(struct cgroup *cgroup, const char *name__str,
>
>  extern bool CONFIG_PREEMPT_RT __kconfig __weak;
>  #ifdef bpf_target_x86
> -extern const int __preempt_count __ksym;
> +extern const int __preempt_count __ksym __weak;
> +
> +struct pcpu_hot___local {
> +       int preempt_count;
> +} __attribute__((preserve_access_index));
> +
> +extern struct pcpu_hot___local pcpu_hot __ksym __weak;
>  #endif
>
>  struct task_struct___preempt_rt {
> @@ -624,7 +630,13 @@ struct task_struct___preempt_rt {
>  static inline int get_preempt_count(void)
>  {
>  #if defined(bpf_target_x86)
> -       return *(int *) bpf_this_cpu_ptr(&__preempt_count);
> +       /* v6.15 or later */
> +       if (&__preempt_count)
> +               return *(int *) bpf_this_cpu_ptr(&__preempt_count);

please use bpf_ksym_exists().
It helps to catch missing __weak. This patch adds it,
but let's demonstrate best coding practices.
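
E.g. something along these lines (untested; assuming bpf_ksym_exists()
from bpf_helpers.h takes the address of a weak data ksym the same way
it takes a weak func ksym, and still trips its static assert when
__weak is missing):

	if (bpf_ksym_exists(&__preempt_count))
		return *(int *) bpf_this_cpu_ptr(&__preempt_count);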

> +       /* v6.14 or older */
> +       if (bpf_core_field_exists(pcpu_hot.preempt_count))
> +               return ((struct pcpu_hot___local *)
> +                       bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;

iirc pcpu_hot approach was there for a short time.
Like 5.x kernel didn't have it. It was per-cpu var too.
Pls adjust the comment.
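
E.g. (untested, exact version range needs double checking):

	/*
	 * A few releases before v6.15 kept the preempt count in the
	 * per-cpu pcpu_hot struct; 5.x kernels had neither.
	 */
	if (bpf_core_field_exists(pcpu_hot.preempt_count))
		return ((struct pcpu_hot___local *)
			bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;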

pw-bot: cr
Re: [PATCH] selftests/bpf: Make x86 preempt_count access compatible across v6.14+
Posted by Changwoo Min 1 week, 1 day ago
Thank you, Alexei, for the quick review.

On 1/30/26 2:21 AM, Alexei Starovoitov wrote:
> On Thu, Jan 29, 2026 at 5:54 AM Changwoo Min <changwoo@igalia.com> wrote:
>>
>> Recent x86 kernels (v6.15+) export __preempt_count as a ksym, while older
>> kernels expose the preemption counter via pcpu_hot.preempt_count. The
>> existing selftest helper unconditionally dereferenced __preempt_count,
>> which breaks BPF program loading on older kernels.
>>
>> Make the x86 preemption count lookup version-agnostic by:
>> - Marking __preempt_count and pcpu_hot as weak ksyms.
>> - Introducing a BTF-described pcpu_hot___local layout with
>>    preserve_access_index.
>> - Selecting the appropriate access path at runtime using ksym availability
>>    and bpf_core_field_exists().
>>
>> This allows a single BPF binary to run correctly on both v6.14-and-older
>> and v6.15-and-newer kernels without relying on compile-time version checks.
> 
> See.. with the bpf approach instead of a kfunc, these new
> helpers can work on old kernels without backporting kfuncs :)

You are right. I love the flexibility of BPF in deployment! \o/

> 
>> Fixes: 4b69e31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()")
> 
> fixes tag is not appropriate. It's not a bug fix.

Sure, I will drop the Fixes tag.

> 
>> Signed-off-by: Changwoo Min <changwoo@igalia.com>
>> ---
>>   tools/testing/selftests/bpf/bpf_experimental.h | 16 ++++++++++++++--
>>   1 file changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
>> index a39576c8ba04..0194c0090e50 100644
>> --- a/tools/testing/selftests/bpf/bpf_experimental.h
>> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
>> @@ -614,7 +614,13 @@ extern int bpf_cgroup_read_xattr(struct cgroup *cgroup, const char *name__str,
>>
>>   extern bool CONFIG_PREEMPT_RT __kconfig __weak;
>>   #ifdef bpf_target_x86
>> -extern const int __preempt_count __ksym;
>> +extern const int __preempt_count __ksym __weak;
>> +
>> +struct pcpu_hot___local {
>> +       int preempt_count;
>> +} __attribute__((preserve_access_index));
>> +
>> +extern struct pcpu_hot___local pcpu_hot __ksym __weak;
>>   #endif
>>
>>   struct task_struct___preempt_rt {
>> @@ -624,7 +630,13 @@ struct task_struct___preempt_rt {
>>   static inline int get_preempt_count(void)
>>   {
>>   #if defined(bpf_target_x86)
>> -       return *(int *) bpf_this_cpu_ptr(&__preempt_count);
>> +       /* v6.15 or later */
>> +       if (&__preempt_count)
>> +               return *(int *) bpf_this_cpu_ptr(&__preempt_count);
> 
> please use bpf_ksym_exists().
> It helps to catch missing __weak. This patch adds it,
> but let's demonstrate best coding practices.

Sure, will change it as suggested.


> 
>> +       /* v6.14 or older */
>> +       if (bpf_core_field_exists(pcpu_hot.preempt_count))
>> +               return ((struct pcpu_hot___local *)
>> +                       bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;
> 
> iirc pcpu_hot approach was there for a short time.
> Like 5.x kernel didn't have it. It was per-cpu var too.
> Pls adjust the comment.

Sure, I found that pcpu_hot had been used only between 6.1 -- 6.14.

I will send out v2 shortly.
Regards,
Changwoo Min

> 
> pw-bot: cr
>