The function's get_cpu()/put_cpu() calls are meaningless because the
actual CPU that executes the caller's function is not necessarily the
current one.
smp_call_function_single(), which is called by
smp_call_function_any(), does the right get/put protection.
Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
---
kernel/smp.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index 02f52291fae4..fa50ed459703 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -754,17 +754,13 @@ EXPORT_SYMBOL_GPL(smp_call_function_single_async);
int smp_call_function_any(const struct cpumask *mask,
smp_call_func_t func, void *info, int wait)
{
- unsigned int cpu;
- int ret;
+ unsigned int cpu = smp_processor_id();
/* Try for same CPU (cheapest) */
- cpu = get_cpu();
if (!cpumask_test_cpu(cpu, mask))
cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
- ret = smp_call_function_single(cpu, func, info, wait);
- put_cpu();
- return ret;
+ return smp_call_function_single(cpu, func, info, wait);
}
EXPORT_SYMBOL_GPL(smp_call_function_any);
--
2.43.0
On 2025-10-08 12:57, Yury Norov (NVIDIA) wrote:
> The function's get_cpu()/put_cpu() calls are meaningless because the
> actual CPU that executes the caller's function is not necessarily the
> current one.
>
> smp_call_function_single(), which is called by
> smp_call_function_any(), does the right get/put protection.
>
> Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
> ---
> kernel/smp.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 02f52291fae4..fa50ed459703 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -754,17 +754,13 @@ EXPORT_SYMBOL_GPL(smp_call_function_single_async);
> int smp_call_function_any(const struct cpumask *mask,
> smp_call_func_t func, void *info, int wait)
> {
> - unsigned int cpu;
> - int ret;
> + unsigned int cpu = smp_processor_id();
I wonder whether this passes any moderate testing with kernel debug
options enabled. I would at the very least expect a
raw_smp_processor_id() call here to avoid tripping debug warnings.
AFAIU smp_call_function_any() can be called from preemptible context,
right?
Thanks,
Mathieu
>
> /* Try for same CPU (cheapest) */
> - cpu = get_cpu();
> if (!cpumask_test_cpu(cpu, mask))
> cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
>
> - ret = smp_call_function_single(cpu, func, info, wait);
> - put_cpu();
> - return ret;
> + return smp_call_function_single(cpu, func, info, wait);
> }
> EXPORT_SYMBOL_GPL(smp_call_function_any);
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Wed, Oct 08, 2025 at 01:06:18PM -0400, Mathieu Desnoyers wrote:
> On 2025-10-08 12:57, Yury Norov (NVIDIA) wrote:
> > The function's get_cpu()/put_cpu() calls are meaningless because the
> > actual CPU that executes the caller's function is not necessarily the
> > current one.
> >
> > smp_call_function_single(), which is called by
> > smp_call_function_any(), does the right get/put protection.
> >
> > Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
> > ---
> > kernel/smp.c | 8 ++------
> > 1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/smp.c b/kernel/smp.c
> > index 02f52291fae4..fa50ed459703 100644
> > --- a/kernel/smp.c
> > +++ b/kernel/smp.c
> > @@ -754,17 +754,13 @@ EXPORT_SYMBOL_GPL(smp_call_function_single_async);
> > int smp_call_function_any(const struct cpumask *mask,
> > smp_call_func_t func, void *info, int wait)
> > {
> > - unsigned int cpu;
> > - int ret;
> > + unsigned int cpu = smp_processor_id();
>
> I wonder whether this passes any moderate testing with kernel debug
> options enabled. I would at the very least expect a
> raw_smp_processor_id() call here to avoid tripping debug warnings.
>
> AFAIU smp_call_function_any() can be called from preemptible context,
> right?
You're right, we need to retain the current CPU until the work is
scheduled. I need to test better. Sorry for the noise.