[PATCH v2 sched_ext/for-6.13] sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
Posted by Andrea Righi 2 weeks, 2 days ago
When the LLC and NUMA domains fully overlap, enabling both optimizations
in the built-in idle CPU selection policy is redundant, as it leads to
searching for an idle CPU within the same domain twice.

Likewise, if all online CPUs are within a single LLC domain, LLC
optimization is unnecessary.

Therefore, detect overlapping domains and enable topology optimizations
only when necessary.

Moreover, rely on the online CPUs for this detection logic, instead of
using the possible CPUs.

Fixes: 860a45219bce ("sched_ext: Introduce NUMA awareness to the default idle selection policy")
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/ext.c | 92 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 85 insertions(+), 7 deletions(-)

ChangeLog v1 -> v2:
  - rely on the online CPUs, instead of the possible CPUs
  - handle asymmetric NUMA configurations (which may arise from CPU
    hotplugging or virtualization)
  - add more comments to clarify the possible LLC/NUMA scenarios

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index fc7f15eefe54..a51847f79d01 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3129,12 +3129,77 @@ static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
 		goto retry;
 }
 
+/*
+ * Return true if the LLC domains do not perfectly overlap with the NUMA
+ * domains, false otherwise.
+ */
+static bool llc_and_numa_mismatch(void)
+{
+	struct sched_domain *sd;
+	int cpu;
+
+	/*
+	 * We need to scan all online CPUs to verify whether their scheduling
+	 * domains overlap.
+	 *
+	 * While it is rare to encounter architectures with asymmetric NUMA
+	 * topologies, CPU hotplugging or virtualized environments can result
+	 * in asymmetric configurations.
+	 *
+	 * For example:
+	 *
+	 *  NUMA 0:
+	 *    - LLC 0: cpu0..cpu7
+	 *    - LLC 1: cpu8..cpu15 [offline]
+	 *
+	 *  NUMA 1:
+	 *    - LLC 0: cpu16..cpu23
+	 *    - LLC 1: cpu24..cpu31
+	 *
+	 * In this case, checking only the first online CPU (cpu0) would lead
+	 * us to incorrectly conclude that the LLC and NUMA domains fully
+	 * overlap, while NUMA 1 actually contains two distinct LLC
+	 * domains.
+	 */
+	for_each_online_cpu(cpu) {
+		sd = cpu_rq(cpu)->sd;
+
+		while (sd) {
+			bool is_llc = sd->flags & SD_SHARE_LLC;
+			bool is_numa = sd->flags & SD_NUMA;
+
+			if (is_llc != is_numa)
+				return true;
+
+			sd = sd->parent;
+		}
+	}
+
+	return false;
+}
+
 /*
  * Initialize topology-aware scheduling.
  *
  * Detect if the system has multiple LLC or multiple NUMA domains and enable
  * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle
  * selection policy.
+ *
+ * Assumption: under normal circumstances we can assume that each CPU belongs
+ * to a single NUMA domain and a single LLC domain.
+ *
+ * However, in complex or highly specialized systems (e.g., multi-socket,
+ * chiplet-based, or virtualized systems), the relationship between NUMA and
+ * LLC domains can become more intricate, though each CPU is still considered
+ * to belong to a single NUMA and LLC domain in the kernel's internal
+ * representation.
+ *
+ * Another assumption is that each LLC domain is always fully contained within
+ * a single NUMA domain. In reality, in chiplet-based or virtualized systems,
+ * LLC domains may logically span multiple NUMA nodes. However, the kernel’s
+ * internal topology representation does not account for this, so this logic
+ * also relies on each LLC domain being fully contained within a single
+ * NUMA domain.
  */
 static void update_selcpu_topology(void)
 {
@@ -3144,24 +3209,37 @@ static void update_selcpu_topology(void)
 	s32 cpu = cpumask_first(cpu_online_mask);
 
 	/*
-	 * We only need to check the NUMA node and LLC domain of the first
-	 * available CPU to determine if they cover all CPUs.
+	 * Enable LLC domain optimization only when there are multiple LLC
+	 * domains among the online CPUs. If all online CPUs are part of a
+	 * single LLC domain, the idle CPU selection logic can choose any
+	 * online CPU without bias.
 	 *
-	 * If all CPUs belong to the same NUMA node or share the same LLC
-	 * domain, enabling NUMA or LLC optimizations is unnecessary.
-	 * Otherwise, these optimizations can be enabled.
+	 * Note that it is sufficient to check the LLC domain of the first
+	 * online CPU to determine whether a single LLC domain includes all
+	 * the online CPUs.
 	 */
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_llc, cpu));
 	if (sd) {
 		cpus = sched_domain_span(sd);
-		if (cpumask_weight(cpus) < num_possible_cpus())
+		if (cpumask_weight(cpus) < num_online_cpus())
 			enable_llc = true;
 	}
+
+	/*
+	 * Enable NUMA optimization only when there are multiple NUMA domains
+	 * among the online CPUs and the NUMA domains don't perfectly overlap
+	 * with the LLC domains.
+	 *
+	 * If all CPUs belong to the same NUMA node and the same LLC domain,
+	 * enabling both NUMA and LLC optimizations is unnecessary, as checking
+	 * for an idle CPU in the same domain twice is redundant.
+	 */
 	sd = highest_flag_domain(cpu, SD_NUMA);
 	if (sd) {
 		cpus = sched_group_span(sd->groups);
-		if (cpumask_weight(cpus) < num_possible_cpus())
+		if ((cpumask_weight(cpus) < num_online_cpus()) &&
+		    llc_and_numa_mismatch())
 			enable_numa = true;
 	}
 	rcu_read_unlock();
-- 
2.47.0

Re: [PATCH v2 sched_ext/for-6.13] sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
Posted by Tejun Heo 2 weeks, 2 days ago
Hello, Andrea.

Some nits below:

On Thu, Nov 07, 2024 at 09:48:03AM +0100, Andrea Righi wrote:
> +static bool llc_and_numa_mismatch(void)
> +{
...
> +	for_each_online_cpu(cpu) {
> +		sd = cpu_rq(cpu)->sd;
> +
> +		while (sd) {

This can be for_each_domain(cpu, sd).
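
Something along these lines (untested), with the walk still done under an
RCU read-side critical section:

	for_each_online_cpu(cpu) {
		struct sched_domain *sd;

		for_each_domain(cpu, sd) {
			bool is_llc = sd->flags & SD_SHARE_LLC;
			bool is_numa = sd->flags & SD_NUMA;

			if (is_llc != is_numa)
				return true;
		}
	}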

> +			bool is_llc = sd->flags & SD_SHARE_LLC;
> +			bool is_numa = sd->flags & SD_NUMA;
> +
> +			if (is_llc != is_numa)
> +				return true;
> +
> +			sd = sd->parent;
> +		}
> +	}
> +
> +	return false;
> +}
> +
>  /*
>   * Initialize topology-aware scheduling.
>   *
>   * Detect if the system has multiple LLC or multiple NUMA domains and enable
>   * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle
>   * selection policy.
> + *
> + * Assumption: under normal circumstances we can assume that each CPU belongs
> + * to a single NUMA domain and a single LLC domain.
> + *
> + * However, in complex or highly specialized systems (e.g., multi-socket,
> + * chiplet-based, or virtualized systems), the relationship between NUMA and
> + * LLC domains can become more intricate, though each CPU is still considered
> + * to belong to a single NUMA and LLC domain in the kernel's internal
> + * representation.
> + *
> + * Another assumption is that each LLC domain is always fully contained within
> + * a single NUMA domain. In reality, in chiplet-based or virtualized systems,
> + * LLC domains may logically span multiple NUMA nodes. However, the kernel’s

Are there any actual systems that have a single LLC spanning multiple NUMA
nodes? I think it'd be sufficient to state that the kernel assumes that a
CPU belongs to a single LLC and a single LLC belongs to a single socket.
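
Something as terse as, e.g.:

	/*
	 * The kernel assumes that each CPU belongs to a single LLC domain
	 * and that each LLC domain is fully contained within a single
	 * socket.
	 */

should be enough.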

Otherwise, looks great to me.

Thanks.

-- 
tejun
Re: [PATCH v2 sched_ext/for-6.13] sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
Posted by Andrea Righi 2 weeks, 2 days ago
On Thu, Nov 07, 2024 at 09:04:56AM -1000, Tejun Heo wrote:
> Hello, Andrea.
> 
> Some nits below:
> 
> On Thu, Nov 07, 2024 at 09:48:03AM +0100, Andrea Righi wrote:
> > +static bool llc_and_numa_mismatch(void)
> > +{
> ...
> > +     for_each_online_cpu(cpu) {
> > +             sd = cpu_rq(cpu)->sd;
> > +
> > +             while (sd) {
> 
> This can be for_each_domain(cpu, sd).

Oh that's nicer, thanks!

> 
> > +                     bool is_llc = sd->flags & SD_SHARE_LLC;
> > +                     bool is_numa = sd->flags & SD_NUMA;
> > +
> > +                     if (is_llc != is_numa)
> > +                             return true;
> > +
> > +                     sd = sd->parent;
> > +             }
> > +     }
> > +
> > +     return false;
> > +}
> > +
> >  /*
> >   * Initialize topology-aware scheduling.
> >   *
> >   * Detect if the system has multiple LLC or multiple NUMA domains and enable
> >   * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle
> >   * selection policy.
> > + *
> > + * Assumption: under normal circumstances we can assume that each CPU belongs
> > + * to a single NUMA domain and a single LLC domain.
> > + *
> > + * However, in complex or highly specialized systems (e.g., multi-socket,
> > + * chiplet-based, or virtualized systems), the relationship between NUMA and
> > + * LLC domains can become more intricate, though each CPU is still considered
> > + * to belong to a single NUMA and LLC domain in the kernel's internal
> > + * representation.
> > + *
> > + * Another assumption is that each LLC domain is always fully contained within
> > + * a single NUMA domain. In reality, in chiplet-based or virtualized systems,
> > + * LLC domains may logically span multiple NUMA nodes. However, the kernel’s
> 
> Are there any actual systems that have a single LLC spanning multiple NUMA
> nodes? I think it'd be sufficient to state that the kernel assumes that a
> CPU belongs to a single LLC and a single LLC belongs to a single socket.

I've searched quite a bit, but haven't found any architecture that
explicitly shows an LLC shared across different NUMA nodes. While there
are technologies that enable L3 cache coherency / communication between
multiple CCDs (such as AMD's Infinity Fabric in EPYC processors or
Intel's UPI in some Xeon models), these are not technically LLCs
spanning multiple NUMA nodes.

So, I think it's fine to just state that the kernel is assuming the
hierarchy CPU -> single LLC -> single NUMA.

I'll apply these changes and send a v3, thanks!

-Andrea

> 
> Otherwise, looks great to me.
> 
> Thanks.
> 
> --
> tejun
Re: [PATCH v2 sched_ext/for-6.13] sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
Posted by Andrea Righi 2 weeks, 2 days ago
On Thu, Nov 07, 2024 at 09:14:07PM +0100, Andrea Righi wrote:
> On Thu, Nov 07, 2024 at 09:04:56AM -1000, Tejun Heo wrote:
> > Hello, Andrea.
> > 
> > Some nits below:
> > 
> > On Thu, Nov 07, 2024 at 09:48:03AM +0100, Andrea Righi wrote:
> > > +static bool llc_and_numa_mismatch(void)
> > > +{
> > ...
> > > +     for_each_online_cpu(cpu) {
> > > +             sd = cpu_rq(cpu)->sd;
> > > +
> > > +             while (sd) {
> > 
> > This can be for_each_domain(cpu, sd).
> 
> Oh that's nicer, thanks!
> 
> > 
> > > +                     bool is_llc = sd->flags & SD_SHARE_LLC;
> > > +                     bool is_numa = sd->flags & SD_NUMA;
> > > +
> > > +                     if (is_llc != is_numa)
> > > +                             return true;
> > > +
> > > +                     sd = sd->parent;
> > > +             }
> > > +     }
> > > +
> > > +     return false;
> > > +}

Actually the logic here is not correct at all: it also inspects the sd of
SMT CPUs, for example, so it can end up enabling the NUMA optimization
when it's not needed. I'll rethink this part.
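
One direction I'm considering (just a rough, untested sketch, not
necessarily what v3 will end up doing) is to drop the per-level flag
comparison and instead compare each online CPU's sd_llc span against its
NUMA node span, which naturally skips the SMT level:

	static bool llc_numa_mismatch(void)
	{
		int cpu;

		/* The caller must hold rcu_read_lock() for the sd_llc access */
		for_each_online_cpu(cpu) {
			const struct cpumask *node_cpus =
				cpumask_of_node(cpu_to_node(cpu));
			struct sched_domain *sd =
				rcu_dereference(per_cpu(sd_llc, cpu));

			/* No LLC domain at all: consider it a mismatch */
			if (!sd)
				return true;

			/*
			 * If the LLC span differs from the node span, the
			 * LLC and NUMA domains don't fully overlap.
			 */
			if (!cpumask_equal(sched_domain_span(sd), node_cpus))
				return true;
		}

		return false;
	}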

-Andrea