Replace the manual sequence of cpumask_next() and cpumask_first()
with a single call to cpumask_next_wrap() in get_next_cpu().
Signed-off-by: Fushuai Wang <wangfushuai@baidu.com>
---
kernel/bpf/bpf_lru_list.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
index 2d6e1c98d8ad..34881f4da8ae 100644
--- a/kernel/bpf/bpf_lru_list.c
+++ b/kernel/bpf/bpf_lru_list.c
@@ -21,10 +21,7 @@
static int get_next_cpu(int cpu)
{
- cpu = cpumask_next(cpu, cpu_possible_mask);
- if (cpu >= nr_cpu_ids)
- cpu = cpumask_first(cpu_possible_mask);
- return cpu;
+ return cpumask_next_wrap(cpu, cpu_possible_mask);
}
/* Local list helpers */
--
2.36.1
On 8/7/25 4:48 AM, Fushuai Wang wrote:
> Replace the manual sequence of cpumask_next() and cpumask_first()
> with a single call to cpumask_next_wrap() in get_next_cpu().
>
> Signed-off-by: Fushuai Wang <wangfushuai@baidu.com>
> ---
> kernel/bpf/bpf_lru_list.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
> index 2d6e1c98d8ad..34881f4da8ae 100644
> --- a/kernel/bpf/bpf_lru_list.c
> +++ b/kernel/bpf/bpf_lru_list.c
> @@ -21,10 +21,7 @@
>
> static int get_next_cpu(int cpu)
> {
> - cpu = cpumask_next(cpu, cpu_possible_mask);
> - if (cpu >= nr_cpu_ids)
> - cpu = cpumask_first(cpu_possible_mask);
> - return cpu;
> + return cpumask_next_wrap(cpu, cpu_possible_mask);
> }
Let's then get rid of the get_next_cpu() function since it's only used
once, and just use cpumask_next_wrap() at the call site?
[...]
raw_spin_unlock_irqrestore(&steal_loc_l->lock, flags);
steal = cpumask_next_wrap(steal, cpu_possible_mask);
} while (!node && steal != first_steal);
[...]
Btw, in $subj please target [PATCH bpf-next] given it's a cleanup,
not a fix.
Thanks,
Daniel
>> Replace the manual sequence of cpumask_next() and cpumask_first()
>> with a single call to cpumask_next_wrap() in get_next_cpu().
>>
>> Signed-off-by: Fushuai Wang <wangfushuai@baidu.com>
>> ---
>> kernel/bpf/bpf_lru_list.c | 5 +----
>> 1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
>> index 2d6e1c98d8ad..34881f4da8ae 100644
>> --- a/kernel/bpf/bpf_lru_list.c
>> +++ b/kernel/bpf/bpf_lru_list.c
>> @@ -21,10 +21,7 @@
>>
>> static int get_next_cpu(int cpu)
>> {
>> - cpu = cpumask_next(cpu, cpu_possible_mask);
>> - if (cpu >= nr_cpu_ids)
>> - cpu = cpumask_first(cpu_possible_mask);
>> - return cpu;
>> + return cpumask_next_wrap(cpu, cpu_possible_mask);
>> }
>
> Let's then get rid of the get_next_cpu() function since it's only used
> once, and just use cpumask_next_wrap() at the call site?
>
> [...]
> raw_spin_unlock_irqrestore(&steal_loc_l->lock, flags);
>
> steal = cpumask_next_wrap(steal, cpu_possible_mask);
> } while (!node && steal != first_steal);
> [...]
>
Thank you for your suggestion.
> Btw, in $subj please target [PATCH bpf-next] given it's a cleanup,
> not a fix.
I will send a v2 shortly.
Regards,
Wang.
On 8/6/25 7:48 PM, Fushuai Wang wrote:
> Replace the manual sequence of cpumask_next() and cpumask_first()
> with a single call to cpumask_next_wrap() in get_next_cpu().
>
> Signed-off-by: Fushuai Wang <wangfushuai@baidu.com>

Acked-by: Yonghong Song <yonghong.song@linux.dev>