In task_numa_find_cpu(), use for_each_cpu_and() to iterate only over the
destination node's CPUs that the task is allowed to run on, and drop the
open-coded cpumask_test_cpu() check from the loop body.
Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
Reviewed-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
---
Rebased and re-tested on top of master. Original patch:
https://lore.kernel.org/all/20250911203136.548844-1-yury.norov@gmail.com/
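
Not part of the patch, just an illustration of the pattern: for_each_cpu_and()
walks the intersection of two cpumasks in a single pass, so the caller no
longer needs a cpumask_test_cpu() filter inside the loop. A minimal sketch,
using a hypothetical helper walk_allowed_cpus_on_node() that does not exist
in the kernel:

	#include <linux/cpumask.h>
	#include <linux/printk.h>
	#include <linux/sched.h>
	#include <linux/topology.h>

	/*
	 * Hypothetical helper, for illustration only: log every CPU on
	 * @nid that @p is allowed to run on.
	 */
	static void walk_allowed_cpus_on_node(struct task_struct *p, int nid)
	{
		int cpu;

		/* Visits only CPUs present in both masks. */
		for_each_cpu_and(cpu, cpumask_of_node(nid), p->cpus_ptr)
			pr_info("candidate cpu %d\n", cpu);
	}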
kernel/sched/fair.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da46c3164537..3ead55e4b8a5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2458,11 +2458,8 @@ static void task_numa_find_cpu(struct task_numa_env *env,
maymove = !load_too_imbalanced(src_load, dst_load, env);
}
- for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
- /* Skip this CPU if the source task cannot migrate */
- if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
- continue;
-
+	/* Skip CPUs the source task cannot migrate to */
+ for_each_cpu_and(cpu, cpumask_of_node(env->dst_nid), env->p->cpus_ptr) {
env->dst_cpu = cpu;
if (task_numa_compare(env, taskimp, groupimp, maymove))
break;
--
2.43.0