Optimize topology_span_sane() by removing duplicate comparisons.
The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 (per non-NUMA scheduling domain level).
Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
---
kernel/sched/topology.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 99ea5986038c..b6bcafc09969 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 static bool topology_span_sane(struct sched_domain_topology_level *tl,
 			      const struct cpumask *cpu_map, int cpu)
 {
-	int i;
+	int i = cpu + 1;
 
 	/* NUMA levels are allowed to overlap */
 	if (tl->flags & SDTL_OVERLAP)
@@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 	 * breaking the sched_group lists - i.e. a later get_group() pass
 	 * breaks the linking done for an earlier span.
 	 */
-	for_each_cpu(i, cpu_map) {
-		if (i == cpu)
-			continue;
+	for_each_cpu_from(i, cpu_map) {
 		/*
 		 * We should 'and' all those masks with 'cpu_map' to exactly
 		 * match the topology we're about to build, but that can only
--
2.44.0
On Tue, 9 Apr 2024 at 17:54, Kyle Meyer <kyle.meyer@hpe.com> wrote:
>
> Optimize topology_span_sane() by removing duplicate comparisons.
>
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level).
>
> Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
> Reviewed-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> kernel/sched/topology.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 99ea5986038c..b6bcafc09969 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
> static bool topology_span_sane(struct sched_domain_topology_level *tl,
> const struct cpumask *cpu_map, int cpu)
> {
> - int i;
> + int i = cpu + 1;
>
> /* NUMA levels are allowed to overlap */
> if (tl->flags & SDTL_OVERLAP)
> @@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
> * breaking the sched_group lists - i.e. a later get_group() pass
> * breaks the linking done for an earlier span.
> */
> - for_each_cpu(i, cpu_map) {
> - if (i == cpu)
> - continue;
> + for_each_cpu_from(i, cpu_map) {
> /*
> * We should 'and' all those masks with 'cpu_map' to exactly
> * match the topology we're about to build, but that can only
> --
> 2.44.0
>
On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> Optimize topology_span_sane() by removing duplicate comparisons.
>
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level).
...
> - for_each_cpu(i, cpu_map) {
> - if (i == cpu)
> - continue;
> + for_each_cpu_from(i, cpu_map) {
Hmm... I'm not familiar with the for_each_cpu*() macros, but from the above
it seems this does only a single comparison? Or, in other words, can i ever
repeat a value?
And what about the i < cpu cases?
--
With Best Regards,
Andy Shevchenko
On Tue, Apr 09, 2024 at 07:25:06PM +0300, Andy Shevchenko wrote:
> On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> > Optimize topology_span_sane() by removing duplicate comparisons.
> >
> > The total number of comparisons is reduced from N * (N - 1) to
> > N * (N - 1) / 2 (per non-NUMA scheduling domain level).
>
> ...
>
> > - for_each_cpu(i, cpu_map) {
> > - if (i == cpu)
> > - continue;
> > + for_each_cpu_from(i, cpu_map) {
>
> Hmm... I'm not familiar with the for_each_cpu*() macros, but from the above
> it seems this does only a single comparison? Or, in other words, can i ever
> repeat a value?
for_each_cpu() is a macro around for_each_set_bit() which iterates over each set
bit in a bitmap starting at zero.
for_each_cpu_from() is a macro around for_each_set_bit_from() which iterates
over each set bit in a bitmap starting at the specified bit.
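As a rough user-space analogue (the two macros below are simplified stand-ins
for the kernel's for_each_set_bit()/for_each_set_bit_from(); the 64-bit bitmap
and the fixed bound are illustrative only, not the kernel's implementation):

	#include <stdio.h>

	/* Simplified stand-ins for the kernel's bitmap iterators. */
	#define for_each_set_bit(b, map) \
		for ((b) = 0; (b) < 64; (b)++) \
			if (!(((map) >> (b)) & 1)) continue; else

	#define for_each_set_bit_from(b, map) \
		for (; (b) < 64; (b)++) \
			if (!(((map) >> (b)) & 1)) continue; else

	int main(void)
	{
		unsigned long long cpu_map = 0x2d;	/* bits 0, 2, 3, 5 set */
		int i;

		for_each_set_bit(i, cpu_map)		/* like for_each_cpu() */
			printf("from zero: %d\n", i);	/* prints 0 2 3 5 */

		i = 3;
		for_each_set_bit_from(i, cpu_map)	/* like for_each_cpu_from() */
			printf("from i=3:  %d\n", i);	/* prints 3 5 */

		return 0;
	}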
The above (topology_span_sane()) currently does a "single comparison" for each
CPU in cpu_map, but it's called for each CPU in cpu_map and for each scheduling
domain level (please see build_sched_domains() in kernel/sched/topology.c).
> And what about the i < cpu cases?
Those values have already been passed to topology_span_sane() as cpu in earlier
calls, so those pairs have already been checked. This patch uses
for_each_cpu_from(), starting at cpu + 1, to avoid repeating those comparisons.
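To make the counting concrete, here is a standalone sketch (a dense cpu_map of
N = 8 CPUs is assumed, and the two inner loops stand in for the old and new
iteration, not for the real cpumask code):

	#include <stdio.h>

	int main(void)
	{
		int n = 8, old = 0, new = 0, cpu, i;

		/* build_sched_domains() calls topology_span_sane() once per CPU. */
		for (cpu = 0; cpu < n; cpu++) {
			/* Old: for_each_cpu(), skipping only i == cpu. */
			for (i = 0; i < n; i++)
				if (i != cpu)
					old++;
			/* New: for_each_cpu_from() starting at cpu + 1. */
			for (i = cpu + 1; i < n; i++)
				new++;
		}

		/* Prints "old: 56, new: 28", i.e. N * (N - 1) vs. N * (N - 1) / 2. */
		printf("old: %d, new: %d\n", old, new);
		return 0;
	}

Every unordered pair {cpu, i} is still compared exactly once: each pair is
checked in the call where the smaller of the two CPUs is passed as cpu.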
Thanks,
Kyle Meyer
On Tue, Apr 09, 2024 at 02:29:09PM -0500, Kyle Meyer wrote:
> On Tue, Apr 09, 2024 at 07:25:06PM +0300, Andy Shevchenko wrote:
> > On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> > > Optimize topology_span_sane() by removing duplicate comparisons.
> > >
> > > The total number of comparisons is reduced from N * (N - 1) to
> > > N * (N - 1) / 2 (per non-NUMA scheduling domain level).
...
> > > - for_each_cpu(i, cpu_map) {
> > > - if (i == cpu)
> > > - continue;
> > > + for_each_cpu_from(i, cpu_map) {
> >
> > Hmm... I'm not familiar with the for_each_cpu*() macros, but from the above
> > it seems this does only a single comparison? Or, in other words, can i ever
> > repeat a value?
>
> for_each_cpu() is a macro around for_each_set_bit() which iterates over each set
> bit in a bitmap starting at zero.
>
> for_each_cpu_from() is a macro around for_each_set_bit_from() which iterates
> over each set bit in a bitmap starting at the specified bit.
>
> The above (topology_span_sane()) currently does a "single comparison" for each
> CPU in cpu_map, but it's called for each CPU in cpu_map and for each scheduling
> domain level (please see build_sched_domains() in kernel/sched/topology.c).
>
> > And what about the i < cpu cases?
>
> Those values have already been passed to topology_span_sane() as cpu in earlier
> calls, so those pairs have already been checked. This patch uses
> for_each_cpu_from(), starting at cpu + 1, to avoid repeating those comparisons.
So, it appears to me that the commit message has room to be improved / elaborated
based on what you explained to me above.
Thanks!
--
With Best Regards,
Andy Shevchenko