[PATCH v2] sched_ext: Introduce LLC awareness to the default idle selection policy

Posted by Andrea Righi 1 month ago
Rely on the scheduler topology information to implement basic LLC
awareness in the sched_ext built-in idle selection policy.

This allows schedulers using the built-in policy to make more informed
decisions when selecting an idle CPU in systems with multiple LLCs, such
as NUMA systems or chiplet-based architectures, and it helps keep tasks
within the same LLC domain, thereby improving cache locality.
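
For reference, a scheduler opts into this built-in policy simply by
forwarding its ops.select_cpu() callback to scx_bpf_select_cpu_dfl(),
roughly as the in-tree scx example schedulers do (illustrative sketch
only, the callback name is made up):

  s32 BPF_STRUCT_OPS(example_select_cpu, struct task_struct *p,
		     s32 prev_cpu, u64 wake_flags)
  {
	bool is_idle = false;
	s32 cpu;

	/* Let the (now LLC-aware) built-in policy pick an idle CPU */
	cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
	if (is_idle)
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

	return cpu;
  }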

Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/ext.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

ChangeLog v1 -> v2:
  - get rid of expensive cpumask_copy()
  - depend on CONFIG_SCHED_MC (there is no point enabling llc awareness
    if the kernel doesn't keep track of llc information)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index a13a6461a290..370493c4d109 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3119,9 +3119,56 @@ static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
 		goto retry;
 }
 
+#ifdef CONFIG_SCHED_MC
+/*
+ * Per-CPU cpumasks used by the built-in idle CPU selection policy to determine
+ * task's LLC domain.
+ */
+static DEFINE_PER_CPU(cpumask_var_t, __select_llc_mask);
+
+static void init_select_llc_mask(void)
+{
+	int i;
+
+	for_each_possible_cpu(i)
+		zalloc_cpumask_var_node(&per_cpu(__select_llc_mask, i),
+					GFP_KERNEL, cpu_to_node(i));
+}
+
+static struct cpumask *this_llc_mask(void)
+{
+	return this_cpu_cpumask_var_ptr(__select_llc_mask);
+}
+
+static inline const struct cpumask *llc_domain(s32 cpu)
+{
+	struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, cpu));
+
+	return sd ? sched_domain_span(sd) : NULL;
+}
+#else /* CONFIG_SCHED_MC */
+static inline void init_select_llc_mask(void) {}
+
+static inline struct cpumask *this_llc_mask(void)
+{
+	return NULL;
+}
+
+static inline const struct cpumask *llc_domain(s32 cpu)
+{
+	return NULL;
+}
+#endif /* CONFIG_SCHED_MC */
+
+/*
+ * Built-in cpu idle selection policy.
+ */
 static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 			      u64 wake_flags, bool *found)
 {
+	struct cpumask *llc_cpus = this_llc_mask();
+	const struct cpumask *llc_mask;
+	bool llc_empty;
 	s32 cpu;
 
 	*found = false;
@@ -3168,27 +3215,66 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 		}
 	}
 
+	/*
+	 * Determine the task's LLC domain.
+	 */
+	llc_mask = llc_domain(prev_cpu);
+	if (llc_cpus && llc_mask)
+		llc_empty = !cpumask_and(llc_cpus, llc_mask, p->cpus_ptr);
+	else
+		llc_empty = true;
+
 	/*
 	 * If CPU has SMT, any wholly idle CPU is likely a better pick than
 	 * partially idle @prev_cpu.
 	 */
 	if (sched_smt_active()) {
+		/*
+		 * Keep using @prev_cpu if it's part of a fully idle core.
+		 */
 		if (cpumask_test_cpu(prev_cpu, idle_masks.smt) &&
 		    test_and_clear_cpu_idle(prev_cpu)) {
 			cpu = prev_cpu;
 			goto cpu_found;
 		}
 
+		/*
+		 * Search for any fully idle core in the same LLC domain.
+		 */
+		if (!llc_empty) {
+			cpu = scx_pick_idle_cpu(llc_cpus, SCX_PICK_IDLE_CORE);
+			if (cpu >= 0)
+				goto cpu_found;
+		}
+
+		/*
+	 * Search for any fully idle core usable by the task.
+		 */
 		cpu = scx_pick_idle_cpu(p->cpus_ptr, SCX_PICK_IDLE_CORE);
 		if (cpu >= 0)
 			goto cpu_found;
 	}
 
+	/*
+	 * Use @prev_cpu if it's idle.
+	 */
 	if (test_and_clear_cpu_idle(prev_cpu)) {
 		cpu = prev_cpu;
 		goto cpu_found;
 	}
 
+	/*
+	 * Search for any idle CPU in the same LLC domain.
+	 */
+	if (!llc_empty) {
+		cpu = scx_pick_idle_cpu(llc_cpus, 0);
+		if (cpu >= 0)
+			goto cpu_found;
+	}
+
+	/*
+	 * Search for any idle CPU usable by the task.
+	 */
 	cpu = scx_pick_idle_cpu(p->cpus_ptr, 0);
 	if (cpu >= 0)
 		goto cpu_found;
@@ -7250,6 +7336,8 @@ static int __init scx_init(void)
 		return ret;
 	}
 
+	init_select_llc_mask();
+
 	return 0;
 }
 __initcall(scx_init);
-- 
2.47.0
Re: [PATCH v2] sched_ext: Introduce LLC awareness to the default idle selection policy
Posted by Andrea Righi 1 month ago
On Tue, Oct 22, 2024 at 12:14:22PM +0200, Andrea Righi wrote:
...
> +	/*
> +	 * Determine the task's LLC domain.
> +	 */
> +	llc_mask = llc_domain(prev_cpu);
> +	if (llc_cpus && llc_mask)
> +		llc_empty = !cpumask_and(llc_cpus, llc_mask, p->cpus_ptr);
> +	else
> +		llc_empty = true;

Thinking more about this, we can avoid re-generating the llc_cpus
cpumask when the task can run on all CPUs (likely the majority of the
cases) and it's probably more efficient to check for
cpumask_equal(p->cpus_ptr, cpu_possible_mask) and just use llc_mask in
this case.
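
Something along these lines (untested), turning llc_cpus into a const
pointer and replacing the llc_empty flag with a NULL check, so that the
two LLC lookups later simply become "if (llc_cpus)":

	const struct cpumask *llc_mask = llc_domain(prev_cpu);
	const struct cpumask *llc_cpus = NULL;

	if (llc_mask) {
		if (cpumask_equal(p->cpus_ptr, cpu_possible_mask)) {
			/* Task can run everywhere: use the LLC span as-is */
			llc_cpus = llc_mask;
		} else if (this_llc_mask() &&
			   cpumask_and(this_llc_mask(), llc_mask, p->cpus_ptr)) {
			llc_cpus = this_llc_mask();
		}
	}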

We could also optimize tasks that can only run on 1 CPU, but we never
call ops.select_cpu() for them, they're just skipped in
select_task_rq(), so I'm not sure if we should handle this special case
(maybe I can add a comment, to make it more clear).

-Andrea
Re: [PATCH v2] sched_ext: Introduce LLC awareness to the default idle selection policy
Posted by Tejun Heo 1 month ago
Hello,

On Tue, Oct 22, 2024 at 04:55:51PM +0200, Andrea Righi wrote:
...
> Thinking more about this, we can avoid re-generating the llc_cpus
> cpumask when the task can run on all CPUs (likely the majority of the
> cases) and it's probably more efficient to check for
> cpumask_equal(p->cpus_ptr, cpu_possible_mask) and just use llc_mask in
> this case.

At the simplest, we can just skip llc-aware idle picking if not all CPUs are
allowed. Also, it's probably cheaper to test p->nr_cpus_allowed than testing
cpus_ptr.

> We could also optimize tasks that can only run on 1 CPU, but we never
> call ops.select_cpu() for them, they're just skipped in
> select_task_rq(), so I'm not sure if we should handle this special case
> (maybe I can add a comment, to make it more clear).

Yeah, a comment can be helpful.

Thanks.

-- 
tejun
Re: [PATCH v2] sched_ext: Introduce LLC awareness to the default idle selection policy
Posted by Andrea Righi 1 month ago
On Tue, Oct 22, 2024 at 09:11:52AM -1000, Tejun Heo wrote:
> Hello,
> 
> On Tue, Oct 22, 2024 at 04:55:51PM +0200, Andrea Righi wrote:
> ...
> > Thinking more about this, we can avoid re-generating the llc_cpus
> > cpumask when the task can run on all CPUs (likely the majority of the
> > cases) and it's probably more efficient to check for
> > cpumask_equal(p->cpus_ptr, cpu_possible_mask) and just use llc_mask in
> > this case.
> 
> At the simplest, we can just skip llc-aware idle picking if not all CPUs are
> allowed. Also, it's probably cheaper to test p->nr_cpus_allowed than testing
> cpus_ptr.

That's probably the easiest and most efficient way: in the end, if
you're restricting the CPU affinity from user-space, you can just set
the LLC affinity as well. This way we can completely get rid of the
cpumask_and() and use sd->span directly, relying on p->nr_cpus_allowed
to detect when the task is allowed to run on all CPUs (and can receive
the LLC awareness optimization).
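
IOW, something like this (totally untested):

	const struct cpumask *llc_cpus = NULL;

	/*
	 * Enable the LLC-aware picking only if the task can run on all
	 * CPUs (nr_cpus_allowed is cheaper to test than cpus_ptr) and
	 * use the sd_llc span directly, no per-CPU cpumask needed.
	 */
	if (p->nr_cpus_allowed >= num_possible_cpus())
		llc_cpus = llc_domain(prev_cpu);

	...

	if (llc_cpus) {
		cpu = scx_pick_idle_cpu(llc_cpus, SCX_PICK_IDLE_CORE);
		if (cpu >= 0)
			goto cpu_found;
	}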

> 
> > We could also optimize tasks that can only run on 1 CPU, but we never
> > call ops.select_cpu() for them, they're just skipped in
> > select_task_rq(), so I'm not sure if we should handle this special case
> > (maybe I can add a comment, to make it more clear).
> 
> Yeah, a comment can be helpful.

Ok, will add a comment.

Thanks,
-Andrea