[PATCH v4] sched_ext: Introduce NUMA awareness to the default idle selection policy

Posted by Andrea Righi 3 weeks, 6 days ago
Similarly to commit dfa4ed29b18c ("sched_ext: Introduce LLC awareness to
the default idle selection policy"), extend the built-in idle CPU
selection policy to also prioritize CPUs within the same NUMA node.

With this change applied, the built-in CPU idle selection policy follows
this logic (a condensed sketch follows the list below):
 - always prioritize CPUs from fully idle SMT cores,
 - select the same CPU if possible,
 - select a CPU within the same LLC domain,
 - select a CPU within the same NUMA node.
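
A condensed sketch of the resulting selection order (WAKE_SYNC handling
and the fully-idle-core check on prev_cpu are omitted; the sketch reuses
scx_pick_idle_cpu() from this patch and assumes the existing
test_and_clear_cpu_idle() helper in ext.c):

static s32 idle_cpu_order_sketch(struct task_struct *p, s32 prev_cpu,
				 const struct cpumask *llc_cpus,
				 const struct cpumask *numa_cpus)
{
	s32 cpu;

	if (sched_smt_active()) {
		/* Fully idle cores first: same LLC, same node, anywhere. */
		if (llc_cpus &&
		    (cpu = scx_pick_idle_cpu(llc_cpus, SCX_PICK_IDLE_CORE)) >= 0)
			return cpu;
		if (numa_cpus &&
		    (cpu = scx_pick_idle_cpu(numa_cpus, SCX_PICK_IDLE_CORE)) >= 0)
			return cpu;
		if ((cpu = scx_pick_idle_cpu(p->cpus_ptr, SCX_PICK_IDLE_CORE)) >= 0)
			return cpu;
	}

	/* Then any idle CPU: prev_cpu, same LLC, same node, anywhere. */
	if (test_and_clear_cpu_idle(prev_cpu))
		return prev_cpu;
	if (llc_cpus && (cpu = scx_pick_idle_cpu(llc_cpus, 0)) >= 0)
		return cpu;
	if (numa_cpus && (cpu = scx_pick_idle_cpu(numa_cpus, 0)) >= 0)
		return cpu;
	return scx_pick_idle_cpu(p->cpus_ptr, 0);
}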

The NUMA and LLC awareness features are enabled only when the system
has, respectively, multiple NUMA nodes or multiple LLC domains.

In the future, we may want to improve the NUMA node selection to account
for the node distance from prev_cpu. Currently, the logic only tries to
keep tasks running on the same NUMA node; if all CPUs within that node
are busy, the fallback picks an idle CPU from any other node, without
considering node distance.

Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/ext.c | 138 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 121 insertions(+), 17 deletions(-)

ChangeLog v3 -> v4:
  - check only the first possible CPU to determine if there is a single
    LLC or NUMA domain that contains all CPUs
  - use static_branch_maybe() to check LLC and NUMA static keys
  - fix build with !CONFIG_SMP

ChangeLog v2 -> v3:
  - fix RCU locking
  - use highest_flag_domain() to determine the NUMA cpumasks
  - rely on num_possible_cpus() instead of nr_cpu_ids
  - refresh NUMA/LLC static_keys when an scx scheduler is loaded and on
    CPU hotplug events
  - rename static_keys to make it more clear that they are used only by
    the built-in select_cpu

ChangeLog v1 -> v2:
  - autodetect at boot whether NUMA and LLC capabilities should be used
    and use static_keys to control their activation
  - rely on cpumask_of_node/cpu_to_node() to determine the NUMA domain

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 6705c2e67c99..4d3170b264ae 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -870,6 +870,11 @@ static DEFINE_STATIC_KEY_FALSE(scx_ops_enq_exiting);
 static DEFINE_STATIC_KEY_FALSE(scx_ops_cpu_preempt);
 static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
 
+#ifdef CONFIG_SMP
+static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_llc);
+static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_numa);
+#endif
+
 static struct static_key_false scx_has_op[SCX_OPI_END] =
 	{ [0 ... SCX_OPI_END-1] = STATIC_KEY_FALSE_INIT };
 
@@ -3124,31 +3129,79 @@ static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
 		goto retry;
 }
 
-#ifdef CONFIG_SCHED_MC
 /*
- * Return the cpumask of CPUs usable by task @p in the same LLC domain of @cpu,
- * or NULL if the LLC domain cannot be determined.
+ * Initialize topology-aware scheduling.
+ *
+ * Detect if the system has multiple LLC or multiple NUMA domains and enable
+ * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle
+ * selection policy.
  */
-static const struct cpumask *llc_domain(const struct task_struct *p, s32 cpu)
+static void update_selcpu_topology(void)
 {
-	struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, cpu));
-	const struct cpumask *llc_cpus = sd ? sched_domain_span(sd) : NULL;
+	bool enable_llc = false, enable_numa = false;
+	struct sched_domain *sd;
+	const struct cpumask *cpus;
+	s32 cpu = cpumask_first(cpu_possible_mask);
 
 	/*
-	 * Return the LLC domain only if the task is allowed to run on all
-	 * CPUs.
+	 * We only need to check the NUMA node and LLC domain of the first
+	 * available CPU to determine if they cover all CPUs.
+	 *
+	 * If all CPUs belong to the same NUMA node or share the same LLC
+	 * domain, enabling NUMA or LLC optimizations is unnecessary.
+	 * Otherwise, these optimizations can be enabled.
 	 */
-	return p->nr_cpus_allowed == nr_cpu_ids ? llc_cpus : NULL;
-}
-#else /* CONFIG_SCHED_MC */
-static inline const struct cpumask *llc_domain(struct task_struct *p, s32 cpu)
-{
-	return NULL;
+	rcu_read_lock();
+	sd = rcu_dereference(per_cpu(sd_llc, cpu));
+	if (sd) {
+		cpus = sched_domain_span(sd);
+		if (cpumask_weight(cpus) < num_possible_cpus())
+			enable_llc = true;
+	}
+	sd = highest_flag_domain(cpu, SD_NUMA);
+	if (sd) {
+		cpus = sched_group_span(sd->groups);
+		if (cpumask_weight(cpus) < num_possible_cpus())
+			enable_numa = true;
+	}
+	rcu_read_unlock();
+
+	pr_debug("sched_ext: LLC idle selection %s\n",
+		 enable_llc ? "enabled" : "disabled");
+	pr_debug("sched_ext: NUMA idle selection %s\n",
+		 enable_numa ? "enabled" : "disabled");
+
+	if (enable_llc)
+		static_branch_enable_cpuslocked(&scx_selcpu_topo_llc);
+	else
+		static_branch_disable_cpuslocked(&scx_selcpu_topo_llc);
+	if (enable_numa)
+		static_branch_enable_cpuslocked(&scx_selcpu_topo_numa);
+	else
+		static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
 }
-#endif /* CONFIG_SCHED_MC */
 
 /*
- * Built-in cpu idle selection policy.
+ * Built-in CPU idle selection policy:
+ *
+ * 1. Prioritize full-idle cores:
+ *   - always prioritize CPUs from fully idle cores (both logical CPUs are
+ *     idle) to avoid interference caused by SMT.
+ *
+ * 2. Reuse the same CPU:
+ *   - prefer the last used CPU to take advantage of cached data (L1, L2) and
+ *     branch prediction optimizations.
+ *
+ * 3. Pick a CPU within the same LLC (Last-Level Cache):
+ *   - if the above conditions aren't met, pick a CPU that shares the same LLC
+ *     to maintain cache locality.
+ *
+ * 4. Pick a CPU within the same NUMA node, if enabled:
+ *   - choose a CPU from the same NUMA node to reduce memory access latency.
+ *
+ * Steps 3 and 4 are performed only if the system has, respectively, multiple
+ * LLC domains / multiple NUMA nodes (see scx_selcpu_topo_llc and
+ * scx_selcpu_topo_numa).
  *
  * NOTE: tasks that can only run on 1 CPU are excluded by this logic, because
  * we never call ops.select_cpu() for them, see select_task_rq().
@@ -3156,7 +3209,8 @@ static inline const struct cpumask *llc_domain(struct task_struct *p, s32 cpu)
 static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 			      u64 wake_flags, bool *found)
 {
-	const struct cpumask *llc_cpus = llc_domain(p, prev_cpu);
+	const struct cpumask *llc_cpus = NULL;
+	const struct cpumask *numa_cpus = NULL;
 	s32 cpu;
 
 	*found = false;
@@ -3166,6 +3220,30 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 		return prev_cpu;
 	}
 
+	/*
+	 * Determine the scheduling domain only if the task is allowed to run
+	 * on all CPUs.
+	 *
+	 * This is done primarily for efficiency, as it avoids the overhead of
+	 * updating a cpumask every time we need to select an idle CPU (which
+	 * can be costly in large SMP systems), but it also aligns logically:
+	 * if a task's scheduling domain is restricted by user-space (through
+	 * CPU affinity), the task will simply use the flat scheduling domain
+	 * defined by user-space.
+	 */
+	if (p->nr_cpus_allowed >= num_possible_cpus()) {
+		if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
+			numa_cpus = cpumask_of_node(cpu_to_node(prev_cpu));
+
+		if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
+			struct sched_domain *sd;
+
+			sd = rcu_dereference(per_cpu(sd_llc, prev_cpu));
+			if (sd)
+				llc_cpus = sched_domain_span(sd);
+		}
+	}
+
 	/*
 	 * If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
 	 */
@@ -3226,6 +3304,15 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 				goto cpu_found;
 		}
 
+		/*
+		 * Search for any fully idle core in the same NUMA node.
+		 */
+		if (numa_cpus) {
+			cpu = scx_pick_idle_cpu(numa_cpus, SCX_PICK_IDLE_CORE);
+			if (cpu >= 0)
+				goto cpu_found;
+		}
+
 		/*
 		 * Search for any full idle core usable by the task.
 		 */
@@ -3251,6 +3338,15 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
 			goto cpu_found;
 	}
 
+	/*
+	 * Search for any idle CPU in the same NUMA node.
+	 */
+	if (numa_cpus) {
+		cpu = scx_pick_idle_cpu(numa_cpus, 0);
+		if (cpu >= 0)
+			goto cpu_found;
+	}
+
 	/*
 	 * Search for any idle CPU usable by the task.
 	 */
@@ -3383,6 +3479,10 @@ static void handle_hotplug(struct rq *rq, bool online)
 
 	atomic_long_inc(&scx_hotplug_seq);
 
+	if ((SCX_HAS_OP(cpu_online) || SCX_HAS_OP(cpu_offline)) &&
+	    static_branch_likely(&scx_builtin_idle_enabled))
+		update_selcpu_topology();
+
 	if (online && SCX_HAS_OP(cpu_online))
 		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
 	else if (!online && SCX_HAS_OP(cpu_offline))
@@ -5202,6 +5302,10 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
 			static_branch_enable_cpuslocked(&scx_has_op[i]);
 
 	check_hotplug_seq(ops);
+#ifdef CONFIG_SMP
+	if (!ops->update_idle || (ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))
+		update_selcpu_topology();
+#endif
 	cpus_read_unlock();
 
 	ret = validate_ops(ops);
-- 
2.47.0
Re: [PATCH v4] sched_ext: Introduce NUMA awareness to the default idle selection policy
Posted by Tejun Heo 3 weeks, 6 days ago
Hello,

On Mon, Oct 28, 2024 at 12:33:38PM +0100, Andrea Righi wrote:
...
> +static void update_selcpu_topology(void)
>  {
> +	bool enable_llc = false, enable_numa = false;
> +	struct sched_domain *sd;
> +	const struct cpumask *cpus;
> +	s32 cpu = cpumask_first(cpu_possible_mask);

On x86, the first CPU is guaranteed to be online but there are archs that
allow the first CPU to go down in which case the topo information might not
be available. Use cpumask_first(cpu_online_mask) instead?

...
> @@ -3383,6 +3479,10 @@ static void handle_hotplug(struct rq *rq, bool online)
>  
>  	atomic_long_inc(&scx_hotplug_seq);
>  
> +	if ((SCX_HAS_OP(cpu_online) || SCX_HAS_OP(cpu_offline)) &&
> +	    static_branch_likely(&scx_builtin_idle_enabled))
> +		update_selcpu_topology();

Hmm... this feels a bit too complicated. Just gate it with scx_enabled()?

>  	if (online && SCX_HAS_OP(cpu_online))
>  		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
>  	else if (!online && SCX_HAS_OP(cpu_offline))
> @@ -5202,6 +5302,10 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
>  			static_branch_enable_cpuslocked(&scx_has_op[i]);
>  
>  	check_hotplug_seq(ops);
> +#ifdef CONFIG_SMP
> +	if (!ops->update_idle || (ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))
> +		update_selcpu_topology();
> +#endif

And always update here?

Thanks.

-- 
tejun
Re: [PATCH v4] sched_ext: Introduce NUMA awareness to the default idle selection policy
Posted by Andrea Righi 3 weeks, 6 days ago
On Mon, Oct 28, 2024 at 08:11:55AM -1000, Tejun Heo wrote:
> 
> Hello,
> 
> On Mon, Oct 28, 2024 at 12:33:38PM +0100, Andrea Righi wrote:
> ...
> > +static void update_selcpu_topology(void)
> >  {
> > +     bool enable_llc = false, enable_numa = false;
> > +     struct sched_domain *sd;
> > +     const struct cpumask *cpus;
> > +     s32 cpu = cpumask_first(cpu_possible_mask);
> 
> On x86, the first CPU is guaranteed to be online but there are archs that
> allow the first CPU to go down in which case the topo information might not
> be available. Use cpumask_first(cpu_online_mask) instead?

Ok, I agree, cpu_online_mask is probably more reliable.
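
Something along these lines for v5 (untested, assuming no other changes
to update_selcpu_topology()):

-	s32 cpu = cpumask_first(cpu_possible_mask);
+	s32 cpu = cpumask_first(cpu_online_mask);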

> 
> ...
> > @@ -3383,6 +3479,10 @@ static void handle_hotplug(struct rq *rq, bool online)
> >
> >       atomic_long_inc(&scx_hotplug_seq);
> >
> > +     if ((SCX_HAS_OP(cpu_online) || SCX_HAS_OP(cpu_offline)) &&
> > +         static_branch_likely(&scx_builtin_idle_enabled))
> > +             update_selcpu_topology();
> 
> Hmm... this feels a bit too complicated. Just gate it with scx_enabled()?

Ok, update_selcpu_topology() is not that expensive, so we can probably
just check scx_enabled() here.
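
Something like this (untested):

-	if ((SCX_HAS_OP(cpu_online) || SCX_HAS_OP(cpu_offline)) &&
-	    static_branch_likely(&scx_builtin_idle_enabled))
+	if (scx_enabled())
 		update_selcpu_topology();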

> 
> >       if (online && SCX_HAS_OP(cpu_online))
> >               SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
> >       else if (!online && SCX_HAS_OP(cpu_offline))
> > @@ -5202,6 +5302,10 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
> >                       static_branch_enable_cpuslocked(&scx_has_op[i]);
> >
> >       check_hotplug_seq(ops);
> > +#ifdef CONFIG_SMP
> > +     if (!ops->update_idle || (ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))
> > +             update_selcpu_topology();
> > +#endif
> 
> And always update here?

Ok.

Thanks,
-Andrea