On 2026/2/5 17:42, Peter Zijlstra wrote:
> On Tue, Feb 03, 2026 at 07:23:53PM +0800, Chuyi Zhou wrote:
>> smp_call_function_single() now enables preemption before
>> csd_lock_wait() to reduce the critical section. To let callers of
>> smp_call_function_any() also benefit from this optimization, remove
>> get_cpu()/put_cpu() from smp_call_function_any().
>>
>> Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
>> ---
>> kernel/smp.c | 9 +++++++--
>> 1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/smp.c b/kernel/smp.c
>> index 0858553f3666..f572716c3c7d 100644
>> --- a/kernel/smp.c
>> +++ b/kernel/smp.c
>> @@ -772,13 +772,18 @@ int smp_call_function_any(const struct cpumask *mask,
>>  	unsigned int cpu;
>>  	int ret;
>>
>> +	/*
>> +	 * Prevent migration to another CPU after selecting the current CPU
>> +	 * as the target.
>> +	 */
>> +	guard(migrate)();
>> +
>>  	/* Try for same CPU (cheapest) */
>> -	cpu = get_cpu();
>> +	cpu = smp_processor_id();
>>  	if (!cpumask_test_cpu(cpu, mask))
>>  		cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
>>
>>  	ret = smp_call_function_single(cpu, func, info, wait);
>> -	put_cpu();
>>  	return ret;
>
> Urgh, that's horrible.
>
> Basically what you want is something like so:
>
> 	bool enable = true;
> 	unsigned int cpu;
> 	int ret;
>
> 	preempt_disable();
> 	cpu = smp_processor_id();
> 	if (!cpumask_test_cpu(cpu, mask)) {
> 		cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
> 		enable = false;
> 		preempt_enable();
> 	}
>
> 	ret = smp_call_function_single(cpu, func, info, wait);
> 	if (enable)
> 		preempt_enable();
> 	return ret;
OK, I will update it in the next version.