Change relax_domain_level checks so that it would be possible
to include or exclude all domains from newidle balancing.
This matches the behavior described in the documentation:
-1 no request. use system default or follow request of others.
0 no search.
1 search siblings (hyperthreads in a core).
"2" enables levels 0 and 1, level_max excludes the last (level_max)
level, and level_max+1 includes all levels.
Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")
Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/cgroup/cpuset.c | 2 +-
kernel/sched/topology.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4237c8748715..da24187c4e02 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
static int update_relax_domain_level(struct cpuset *cs, s64 val)
{
#ifdef CONFIG_SMP
- if (val < -1 || val >= sched_domain_level_max)
+ if (val < -1 || val > sched_domain_level_max + 1)
return -EINVAL;
#endif
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 63aecd2a7a9f..67a777b31743 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1475,7 +1475,7 @@ static void set_domain_attribute(struct sched_domain *sd,
} else
request = attr->relax_domain_level;
- if (sd->level > request) {
+ if (sd->level >= request) {
/* Turn off idle balance on this domain: */
sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
}
--
2.20.1
On 03/04/24 16:28, Vitalii Bursov wrote:
> Change relax_domain_level checks so that it would be possible
> to include or exclude all domains from newidle balancing.
>
> This matches the behavior described in the documentation:
> -1 no request. use system default or follow request of others.
> 0 no search.
> 1 search siblings (hyperthreads in a core).
>
> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> level, and level_max+1 includes all levels.
>
> Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")
Not that it matters too much, but wasn't the behaviour the same back then?
i.e.
if (request < sd->level)
sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
So if relax_domain_level=0 we wouldn't clear the flags on e.g. SMT
(level=0)
AFAICT the docs & the code have always been misaligned:
4d5f35533fb9 ("sched, cpuset: customize sched domains, docs") [2008]
1d3504fcf560 ("sched, cpuset: customize sched domains, core") [2008]
History nitpicking aside, I think this makes sense, but existing users are
going to get a surprise...
On Thu, 4 Apr 2024 at 16:14, Valentin Schneider <vschneid@redhat.com> wrote:
>
> On 03/04/24 16:28, Vitalii Bursov wrote:
> > Change relax_domain_level checks so that it would be possible
> > to include or exclude all domains from newidle balancing.
> >
> > This matches the behavior described in the documentation:
> > -1 no request. use system default or follow request of others.
> > 0 no search.
> > 1 search siblings (hyperthreads in a core).
> >
> > "2" enables levels 0 and 1, level_max excludes the last (level_max)
> > level, and level_max+1 includes all levels.
> >
> > Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")
>
> Not that it matters too much, but wasn't the behaviour the same back then?
> i.e.
>
> if (request < sd->level)
> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
>
> So if relax_domain_level=0 we wouldn't clear the flags on e.g. SMT
> (level=0)
Yes, I was too quick: this patch [2019] is fairly old but was the last
one to change the condition, so I assumed it was the culprit.
>
> AFAICT the docs & the code have always been misaligned:
>
> 4d5f35533fb9 ("sched, cpuset: customize sched domains, docs") [2008]
> 1d3504fcf560 ("sched, cpuset: customize sched domains, core") [2008]
>
> History nitpicking aside, I think this makes sense, but existing users are
> going to get a surprise...
>