For now, migrate_enable and migrate_disable are global functions, which
makes them hotspots in some cases. Take BPF for example: the calls to
migrate_enable and migrate_disable in the BPF trampoline introduce
significant overhead. The following is the 'perf top' output of the
FENTRY benchmark (./tools/testing/selftests/bpf/bench trig-fentry):
  54.63%  bpf_prog_2dcccf652aac1793_bench_trigger_fentry  [k] bpf_prog_2dcccf652aac1793_bench_trigger_fentry
  10.43%  [kernel]                    [k] migrate_enable
  10.07%  bpf_trampoline_6442517037   [k] bpf_trampoline_6442517037
   8.06%  [kernel]                    [k] __bpf_prog_exit_recur
   4.11%  libc.so.6                   [.] syscall
   2.15%  [kernel]                    [k] entry_SYSCALL_64
   1.48%  [kernel]                    [k] memchr_inv
   1.32%  [kernel]                    [k] fput
   1.16%  [kernel]                    [k] _copy_to_user
   0.73%  [kernel]                    [k] bpf_prog_test_run_raw_tp
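
(For context, the overhead comes from the enter/exit helpers that the
trampoline wraps around every program invocation. The snippet below is
only a simplified sketch of that bracket with made-up names; the real
helpers are __bpf_prog_enter_recur()/__bpf_prog_exit_recur(), which
additionally handle recursion protection, stats and the run context.)

/* Rough shape of what the trampoline does around each program run. */
static __always_inline void sketch_prog_enter(void)
{
	rcu_read_lock();
	migrate_disable();	/* out-of-line call before this patch */
}

static __always_inline void sketch_prog_exit(void)
{
	migrate_enable();	/* likewise */
	rcu_read_unlock();
}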
So in this commit, we make migrate_enable/migrate_disable inline to obtain
better performance. The struct rq is defined internally in
kernel/sched/sched.h, and the field "nr_pinned" is accessed by
migrate_enable/migrate_disable, which makes it hard to inline them.
Alexei Starovoitov suggested generating the offset of "nr_pinned" in [1],
so that we can define migrate_enable/migrate_disable in
include/linux/sched.h and access "this_rq()->nr_pinned" as
"(void *)this_rq() + RQ_nr_pinned".
The offset of "nr_pinned" is generated into include/generated/rq-offsets.h
by kernel/sched/rq-offsets.c.
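
(The mechanism mirrors the existing asm-offsets generation: a small C
file is compiled to assembly, the DEFINE() macro from <linux/kbuild.h>
embeds "->NAME value" markers into it, and the filechk_offsets rule
turns those markers into #define lines, so the generated
include/generated/rq-offsets.h essentially contains a single
"#define RQ_nr_pinned <offset>". A standalone sketch of the trick, using
a made-up struct instead of the real struct rq:)

#include <stddef.h>

/* Stand-in layout; offsets shown assume a 64-bit build. */
struct fake_rq {
	unsigned long	lock;		/* offset 0  */
	unsigned int	nr_running;	/* offset 8  */
	unsigned int	nr_pinned;	/* offset 12 */
};

/* Same trick as DEFINE() in include/linux/kbuild.h. */
#define DEFINE(sym, val) \
	asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))

int main(void)
{
	DEFINE(RQ_nr_pinned, offsetof(struct fake_rq, nr_pinned));
	return 0;
}

(Compiling this with "gcc -S" leaves a "->RQ_nr_pinned $12 ..." marker in
the assembly on x86-64, which the filechk step rewrites into
"#define RQ_nr_pinned 12" in the generated header.)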
Generally speaking, we move the definitions of migrate_enable and
migrate_disable from kernel/sched/core.c to include/linux/sched.h. The
call to __set_cpus_allowed_ptr() is left in __migrate_enable(), which
stays in kernel/sched/core.c.
The "struct rq" is not available in include/linux/sched.h, so we can't
access "runqueues" with this_cpu_ptr(), as compilation would fail in
this_cpu_ptr() -> raw_cpu_ptr() -> __verify_pcpu_ptr():

    typeof((ptr) + 0)

So we introduce this_rq_raw() and access runqueues with
arch_raw_cpu_ptr() directly.
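
(To make the failure concrete, below is a minimal userspace reproduction;
"fake_runqueues" and VERIFY_PTR() are stand-ins for the forward-declared
runqueues and for __verify_pcpu_ptr(), not kernel code:)

#include <stddef.h>

struct rq;				/* incomplete type, as in sched.h */
extern struct rq fake_runqueues;

/* Condensed form of the check done by __verify_pcpu_ptr(). */
#define VERIFY_PTR(ptr) do {				\
	const void *__dummy = (typeof((ptr) + 0))NULL;	\
	(void)__dummy;					\
} while (0)

void fine(void)
{
	/* Casting through void * does no pointer arithmetic, which is in
	 * effect what the "this_rq_raw() + RQ_nr_pinned" access relies on. */
	void *rq = (void *)&fake_runqueues;
	(void)rq;
}

#ifdef SHOW_THE_ERROR
void broken(void)
{
	/* "(ptr) + 0" needs sizeof(struct rq), so the compiler rejects
	 * the arithmetic on the incomplete type. */
	VERIFY_PTR(&fake_runqueues);
}
#endif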
Before this patch, the performance of BPF FENTRY is:
fentry : 113.030 ± 0.149M/s
fentry : 112.501 ± 0.187M/s
fentry : 112.828 ± 0.267M/s
fentry : 115.287 ± 0.241M/s
After this patch, the performance of BPF FENTRY increases by roughly 30%:
fentry : 143.644 ± 0.670M/s
fentry : 149.764 ± 0.362M/s
fentry : 149.642 ± 0.156M/s
fentry : 145.263 ± 0.221M/s
Link: https://lore.kernel.org/bpf/CAADnVQ+5sEDKHdsJY5ZsfGDO_1SEhhQWHrt2SMBG5SYyQ+jt7w@mail.gmail.com/ [1]
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v2:
- use PERCPU_PTR() for this_rq_raw() if !CONFIG_SMP
---
 Kbuild                    | 13 ++++++-
 include/linux/preempt.h   |  3 --
 include/linux/sched.h     | 77 +++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c     |  3 +-
 kernel/sched/core.c       | 56 ++--------------------------
 kernel/sched/rq-offsets.c | 12 ++++++
 6 files changed, 106 insertions(+), 58 deletions(-)
create mode 100644 kernel/sched/rq-offsets.c
diff --git a/Kbuild b/Kbuild
index f327ca86990c..13324b4bbe23 100644
--- a/Kbuild
+++ b/Kbuild
@@ -34,13 +34,24 @@ arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file)
$(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
$(call filechk,offsets,__ASM_OFFSETS_H__)
+# Generate rq-offsets.h
+
+rq-offsets-file := include/generated/rq-offsets.h
+
+targets += kernel/sched/rq-offsets.s
+
+kernel/sched/rq-offsets.s: $(offsets-file)
+
+$(rq-offsets-file): kernel/sched/rq-offsets.s FORCE
+ $(call filechk,offsets,__RQ_OFFSETS_H__)
+
# Check for missing system calls
quiet_cmd_syscalls = CALL $<
cmd_syscalls = $(CONFIG_SHELL) $< $(CC) $(c_flags) $(missing_syscalls_flags)
PHONY += missing-syscalls
-missing-syscalls: scripts/checksyscalls.sh $(offsets-file)
+missing-syscalls: scripts/checksyscalls.sh $(rq-offsets-file)
$(call cmd,syscalls)
# Check the manual modification of atomic headers
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 1fad1c8a4c76..92237c319035 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -424,8 +424,6 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
* work-conserving schedulers.
*
*/
-extern void migrate_disable(void);
-extern void migrate_enable(void);
/**
* preempt_disable_nested - Disable preemption inside a normally preempt disabled section
@@ -471,7 +469,6 @@ static __always_inline void preempt_enable_nested(void)
DEFINE_LOCK_GUARD_0(preempt, preempt_disable(), preempt_enable())
DEFINE_LOCK_GUARD_0(preempt_notrace, preempt_disable_notrace(), preempt_enable_notrace())
-DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
#ifdef CONFIG_PREEMPT_DYNAMIC
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f8188b833350..b554a1e65e3e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -49,6 +49,9 @@
#include <linux/tracepoint-defs.h>
#include <linux/unwind_deferred_types.h>
#include <asm/kmap_size.h>
+#ifndef COMPILE_OFFSETS
+#include <generated/rq-offsets.h>
+#endif
/* task_struct member predeclarations (sorted alphabetically): */
struct audit_context;
@@ -2312,4 +2315,78 @@ static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
#define alloc_tag_restore(_tag, _old) do {} while (0)
#endif
+#ifndef COMPILE_OFFSETS
+
+extern void __migrate_enable(void);
+
+struct rq;
+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+
+#ifdef CONFIG_SMP
+#define this_rq_raw() arch_raw_cpu_ptr(&runqueues)
+#else
+#define this_rq_raw() PERCPU_PTR(&runqueues)
+#endif
+
+static inline void migrate_enable(void)
+{
+ struct task_struct *p = current;
+
+#ifdef CONFIG_DEBUG_PREEMPT
+ /*
+ * Check both overflow from migrate_disable() and superfluous
+ * migrate_enable().
+ */
+ if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
+ return;
+#endif
+
+ if (p->migration_disabled > 1) {
+ p->migration_disabled--;
+ return;
+ }
+
+ /*
+ * Ensure stop_task runs either before or after this, and that
+ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
+ */
+ guard(preempt)();
+ if (unlikely(p->cpus_ptr != &p->cpus_mask))
+ __migrate_enable();
+ /*
+ * Mustn't clear migration_disabled() until cpus_ptr points back at the
+ * regular cpus_mask, otherwise things that race (eg.
+ * select_fallback_rq) get confused.
+ */
+ barrier();
+ p->migration_disabled = 0;
+ (*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))--;
+}
+
+static inline void migrate_disable(void)
+{
+ struct task_struct *p = current;
+
+ if (p->migration_disabled) {
+#ifdef CONFIG_DEBUG_PREEMPT
+ /*
+ *Warn about overflow half-way through the range.
+ */
+ WARN_ON_ONCE((s16)p->migration_disabled < 0);
+#endif
+ p->migration_disabled++;
+ return;
+ }
+
+ guard(preempt)();
+ (*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))++;
+ p->migration_disabled = 1;
+}
+#else
+static inline void migrate_disable(void) { }
+static inline void migrate_enable(void) { }
+#endif
+
+DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
+
#endif
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c4f69a9e9af6..88bf2ef3e60c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -23855,8 +23855,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
BTF_SET_START(btf_id_deny)
BTF_ID_UNUSED
#ifdef CONFIG_SMP
-BTF_ID(func, migrate_disable)
-BTF_ID(func, migrate_enable)
+BTF_ID(func, __migrate_enable)
#endif
#if !defined CONFIG_PREEMPT_RCU && !defined CONFIG_TINY_RCU
BTF_ID(func, rcu_read_unlock_strict)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index be00629f0ba4..00383fed9f63 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -119,6 +119,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+EXPORT_SYMBOL_GPL(runqueues);
#ifdef CONFIG_SCHED_PROXY_EXEC
DEFINE_STATIC_KEY_TRUE(__sched_proxy_exec);
@@ -2381,28 +2382,7 @@ static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
__do_set_cpus_allowed(p, &ac);
}
-void migrate_disable(void)
-{
- struct task_struct *p = current;
-
- if (p->migration_disabled) {
-#ifdef CONFIG_DEBUG_PREEMPT
- /*
- *Warn about overflow half-way through the range.
- */
- WARN_ON_ONCE((s16)p->migration_disabled < 0);
-#endif
- p->migration_disabled++;
- return;
- }
-
- guard(preempt)();
- this_rq()->nr_pinned++;
- p->migration_disabled = 1;
-}
-EXPORT_SYMBOL_GPL(migrate_disable);
-
-void migrate_enable(void)
+void __migrate_enable(void)
{
struct task_struct *p = current;
struct affinity_context ac = {
@@ -2410,37 +2390,9 @@ void migrate_enable(void)
.flags = SCA_MIGRATE_ENABLE,
};
-#ifdef CONFIG_DEBUG_PREEMPT
- /*
- * Check both overflow from migrate_disable() and superfluous
- * migrate_enable().
- */
- if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
- return;
-#endif
-
- if (p->migration_disabled > 1) {
- p->migration_disabled--;
- return;
- }
-
- /*
- * Ensure stop_task runs either before or after this, and that
- * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
- */
- guard(preempt)();
- if (p->cpus_ptr != &p->cpus_mask)
- __set_cpus_allowed_ptr(p, &ac);
- /*
- * Mustn't clear migration_disabled() until cpus_ptr points back at the
- * regular cpus_mask, otherwise things that race (eg.
- * select_fallback_rq) get confused.
- */
- barrier();
- p->migration_disabled = 0;
- this_rq()->nr_pinned--;
+ __set_cpus_allowed_ptr(p, &ac);
}
-EXPORT_SYMBOL_GPL(migrate_enable);
+EXPORT_SYMBOL_GPL(__migrate_enable);
static inline bool rq_has_pinned_tasks(struct rq *rq)
{
diff --git a/kernel/sched/rq-offsets.c b/kernel/sched/rq-offsets.c
new file mode 100644
index 000000000000..a23747bbe25b
--- /dev/null
+++ b/kernel/sched/rq-offsets.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+#define COMPILE_OFFSETS
+#include <linux/kbuild.h>
+#include <linux/types.h>
+#include "sched.h"
+
+int main(void)
+{
+ DEFINE(RQ_nr_pinned, offsetof(struct rq, nr_pinned));
+
+ return 0;
+}
--
2.50.1
On Tue, Aug 19, 2025 at 09:58:31AM +0800, Menglong Dong wrote:

> The "struct rq" is not available in include/linux/sched.h, so we can't
> access the "runqueues" with this_cpu_ptr(), as the compilation will fail
> in this_cpu_ptr() -> raw_cpu_ptr() -> __verify_pcpu_ptr():
> typeof((ptr) + 0)
>
> So we introduce the this_rq_raw() and access the runqueues with
> arch_raw_cpu_ptr() directly.

^ That, wants to be a comment near here:

> @@ -2312,4 +2315,78 @@ static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
> #define alloc_tag_restore(_tag, _old) do {} while (0)
> #endif
>
> +#ifndef COMPILE_OFFSETS
> +
> +extern void __migrate_enable(void);
> +
> +struct rq;
> +DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> +
> +#ifdef CONFIG_SMP
> +#define this_rq_raw() arch_raw_cpu_ptr(&runqueues)
> +#else
> +#define this_rq_raw() PERCPU_PTR(&runqueues)
> +#endif

Because that arch_ thing really is weird.

> +	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))--;
> +	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))++;

And since you did a macro anyway, why not fold that magic in there,
instead of duplicating it?

#define __this_rq_raw() ((void *)arch_raw_cpu_ptr(&runqueues))
#define this_rq_pinned() (*(unsigned int *)(__this_rq_raw() + RQ_nr_pinned))

this_rq_pinned()--;
this_rq_pinned()++;

is nicer, no?
On Tue, Aug 19, 2025 at 8:40 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Aug 19, 2025 at 09:58:31AM +0800, Menglong Dong wrote:
>
> > The "struct rq" is not available in include/linux/sched.h, so we can't
> > access the "runqueues" with this_cpu_ptr(), as the compilation will fail
> > in this_cpu_ptr() -> raw_cpu_ptr() -> __verify_pcpu_ptr():
> > typeof((ptr) + 0)
> >
> > So we introduce the this_rq_raw() and access the runqueues with
> > arch_raw_cpu_ptr() directly.
>
> ^ That, wants to be a comment near here:
>
> > @@ -2312,4 +2315,78 @@ static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
> > #define alloc_tag_restore(_tag, _old) do {} while (0)
> > #endif
> >
> > +#ifndef COMPILE_OFFSETS
> > +
> > +extern void __migrate_enable(void);
> > +
> > +struct rq;
> > +DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> > +
> > +#ifdef CONFIG_SMP
> > +#define this_rq_raw() arch_raw_cpu_ptr(&runqueues)
> > +#else
> > +#define this_rq_raw() PERCPU_PTR(&runqueues)
> > +#endif
>
> Because that arch_ thing really is weird.

OK! I'll comment on this part.

>
> > +	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))--;
> > +	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))++;
>
> And since you did a macro anyway, why not fold that magic in there,
> instead of duplicating it?
>
> #define __this_rq_raw() ((void *)arch_raw_cpu_ptr(&runqueues))
> #define this_rq_pinned() (*(unsigned int *)(__this_rq_raw() + RQ_nr_pinned))
>
> this_rq_pinned()--;
> this_rq_pinned()++;
>
> is nicer, no?

Yeah, much better!
On Tue, Aug 19, 2025 at 09:58:31AM +0800, Menglong Dong wrote:

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index be00629f0ba4..00383fed9f63 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -119,6 +119,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
> EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
>
> DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> +EXPORT_SYMBOL_GPL(runqueues);

Oh no, absolutely not.

You never, ever, export a variable, and certainly not this one.

How about something like so?

I tried 'clever' things with export inline, but the compiler hates me,
so the below is the best I could make work.

---
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2315,6 +2315,7 @@ static __always_inline void alloc_tag_re
 #define alloc_tag_restore(_tag, _old) do {} while (0)
 #endif

+#ifndef MODULE
 #ifndef COMPILE_OFFSETS

 extern void __migrate_enable(void);
@@ -2328,7 +2329,7 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq
 #define this_rq_raw() PERCPU_PTR(&runqueues)
 #endif

-static inline void migrate_enable(void)
+static inline void _migrate_enable(void)
 {
 	struct task_struct *p = current;

@@ -2363,7 +2364,7 @@ static inline void migrate_enable(void)
 	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))--;
 }

-static inline void migrate_disable(void)
+static inline void _migrate_disable(void)
 {
 	struct task_struct *p = current;

@@ -2382,10 +2383,30 @@ static inline void migrate_disable(void)
 	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))++;
 	p->migration_disabled = 1;
 }
-#else
-static inline void migrate_disable(void) { }
-static inline void migrate_enable(void) { }
-#endif
+#else /* !COMPILE_OFFSETS */
+static inline void _migrate_disable(void) { }
+static inline void _migrate_enable(void) { }
+#endif /* !COMPILE_OFFSETS */
+
+#ifndef CREATE_MIGRATE_DISABLE
+static inline void migrate_disable(void)
+{
+	_migrate_disable();
+}
+
+static inline void migrate_enable(void)
+{
+	_migrate_enable();
+}
+#else /* CREATE_MIGRATE_DISABLE */
+extern void migrate_disable(void);
+extern void migrate_enable(void);
+#endif /* CREATE_MIGRATE_DISABLE */
+
+#else /* !MODULE */
+extern void migrate_disable(void);
+extern void migrate_enable(void);
+#endif /* !MODULE */

 DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7,6 +7,9 @@
  * Copyright (C) 1991-2002 Linus Torvalds
  * Copyright (C) 1998-2024 Ingo Molnar, Red Hat
  */
+#define CREATE_MIGRATE_DISABLE
+#include <linux/sched.h>
+
 #include <linux/highmem.h>
 #include <linux/hrtimer_api.h>
 #include <linux/ktime_api.h>
@@ -119,7 +122,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_updat
 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);

 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-EXPORT_SYMBOL_GPL(runqueues);

 #ifdef CONFIG_SCHED_PROXY_EXEC
 DEFINE_STATIC_KEY_TRUE(__sched_proxy_exec);
@@ -2382,6 +2384,11 @@ static void migrate_disable_switch(struc
 	__do_set_cpus_allowed(p, &ac);
 }

+void migrate_disable(void)
+{
+	_migrate_disable();
+}
+
 void __migrate_enable(void)
 {
 	struct task_struct *p = current;
@@ -2392,7 +2399,11 @@ void __migrate_enable(void)

 	__set_cpus_allowed_ptr(p, &ac);
 }
-EXPORT_SYMBOL_GPL(__migrate_enable);
+
+void migrate_enable(void)
+{
+	_migrate_enable();
+}

 static inline bool rq_has_pinned_tasks(struct rq *rq)
 {
On Tue, Aug 19, 2025 at 8:32 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Aug 19, 2025 at 09:58:31AM +0800, Menglong Dong wrote:
>
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index be00629f0ba4..00383fed9f63 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -119,6 +119,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
> > EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
> >
> > DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> > +EXPORT_SYMBOL_GPL(runqueues);
>
> Oh no, absolutely not.
>
> You never, ever, export a variable, and certainly not this one.
>
> How about something like so?
>
> I tried 'clever' things with export inline, but the compiler hates me,
> so the below is the best I could make work.

I see. You mean that we don't export the variable, and use the inlined
version in vmlinux, and use the external version in modules, which I
think is nice ;)
(I wasn't aware that we shouldn't export variables :/)

I'll try your advice. Thanks!
Menglong Dong

>
> ---
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2315,6 +2315,7 @@ static __always_inline void alloc_tag_re
> #define alloc_tag_restore(_tag, _old) do {} while (0)
> #endif
>
> +#ifndef MODULE
> #ifndef COMPILE_OFFSETS
>
> extern void __migrate_enable(void);
> @@ -2328,7 +2329,7 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq
> #define this_rq_raw() PERCPU_PTR(&runqueues)
> #endif
>
> -static inline void migrate_enable(void)
> +static inline void _migrate_enable(void)
> {
> 	struct task_struct *p = current;
>
> @@ -2363,7 +2364,7 @@ static inline void migrate_enable(void)
> 	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))--;
> }
>
> -static inline void migrate_disable(void)
> +static inline void _migrate_disable(void)
> {
> 	struct task_struct *p = current;
>
> @@ -2382,10 +2383,30 @@ static inline void migrate_disable(void)
> 	(*(unsigned int *)((void *)this_rq_raw() + RQ_nr_pinned))++;
> 	p->migration_disabled = 1;
> }
> -#else
> -static inline void migrate_disable(void) { }
> -static inline void migrate_enable(void) { }
> -#endif
> +#else /* !COMPILE_OFFSETS */
> +static inline void _migrate_disable(void) { }
> +static inline void _migrate_enable(void) { }
> +#endif /* !COMPILE_OFFSETS */
> +
> +#ifndef CREATE_MIGRATE_DISABLE
> +static inline void migrate_disable(void)
> +{
> +	_migrate_disable();
> +}
> +
> +static inline void migrate_enable(void)
> +{
> +	_migrate_enable();
> +}
> +#else /* CREATE_MIGRATE_DISABLE */
> +extern void migrate_disable(void);
> +extern void migrate_enable(void);
> +#endif /* CREATE_MIGRATE_DISABLE */
> +
> +#else /* !MODULE */
> +extern void migrate_disable(void);
> +extern void migrate_enable(void);
> +#endif /* !MODULE */
>
> DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7,6 +7,9 @@
>  * Copyright (C) 1991-2002 Linus Torvalds
>  * Copyright (C) 1998-2024 Ingo Molnar, Red Hat
>  */
> +#define CREATE_MIGRATE_DISABLE
> +#include <linux/sched.h>
> +
> #include <linux/highmem.h>
> #include <linux/hrtimer_api.h>
> #include <linux/ktime_api.h>
> @@ -119,7 +122,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_updat
> EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
>
> DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> -EXPORT_SYMBOL_GPL(runqueues);
>
> #ifdef CONFIG_SCHED_PROXY_EXEC
> DEFINE_STATIC_KEY_TRUE(__sched_proxy_exec);
> @@ -2382,6 +2384,11 @@ static void migrate_disable_switch(struc
> 	__do_set_cpus_allowed(p, &ac);
> }
>
> +void migrate_disable(void)
> +{
> +	_migrate_disable();
> +}
> +
> void __migrate_enable(void)
> {
> 	struct task_struct *p = current;
> @@ -2392,7 +2399,11 @@ void __migrate_enable(void)
>
> 	__set_cpus_allowed_ptr(p, &ac);
> }
> -EXPORT_SYMBOL_GPL(__migrate_enable);
> +
> +void migrate_enable(void)
> +{
> +	_migrate_enable();
> +}
>
> static inline bool rq_has_pinned_tasks(struct rq *rq)
> {
On Tue, 19 Aug 2025, Peter Zijlstra <peterz@infradead.org> wrote:
>> +EXPORT_SYMBOL_GPL(runqueues);
>
> Oh no, absolutely not.
>
> You never, ever, export a variable, and certainly not this one.

Tangential thought:

I think it would be possible to warn about non-function exports at build
time, and maybe plug it in W=1 builds.

BR,
Jani.

-- 
Jani Nikula, Intel
On Tue, Aug 19, 2025 at 03:49:54PM +0300, Jani Nikula wrote:
> On Tue, 19 Aug 2025, Peter Zijlstra <peterz@infradead.org> wrote:
> >> +EXPORT_SYMBOL_GPL(runqueues);
> >
> > Oh no, absolutely not.
> >
> > You never, ever, export a variable, and certainly not this one.
>
> Tangential thought:
>
> I think it would be possible to warn about non-function exports at build
> time, and maybe plug it in W=1 builds.
>

Too much noise, there's a metric ton of variables exported. Sometimes
it's unavoidable. I just try and avoid wherever possible.
On Tue, Aug 19, 2025 at 02:32:14PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 19, 2025 at 09:58:31AM +0800, Menglong Dong wrote:
>
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index be00629f0ba4..00383fed9f63 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -119,6 +119,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
> > EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
> >
> > DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
> > +EXPORT_SYMBOL_GPL(runqueues);
>
> Oh no, absolutely not.
>
> You never, ever, export a variable, and certainly not this one.
>
> How about something like so?
>
> I tried 'clever' things with export inline, but the compiler hates me,
> so the below is the best I could make work.

extern inline, that is, obviously...