When uclamp_max is being used, the util of the task could be higher than
the spare capacity of the CPU, but due to the uclamp_max value we force-fit
it there.
The way the condition for checking max_spare_cap in
find_energy_efficient_cpu() was constructed, it ignored any CPU whose
spare_cap is less than or _equal_ to max_spare_cap. Since we initialize
max_spare_cap to 0, this led to never setting max_spare_cap_cpu and hence
never performing compute_energy() for this cluster, missing an opportunity
for a more energy-efficient placement that honours the uclamp_max setting.
max_spare_cap = 0;
cpu_cap = capacity_of(cpu) - task_util(p); // 0 if task_util(p) is high
...
util_fits_cpu(...); // will return true if uclamp_max forces it to fit
...
// this logic will fail to update max_spare_cap_cpu if cpu_cap is 0
if (cpu_cap > max_spare_cap) {
max_spare_cap = cpu_cap;
max_spare_cap_cpu = cpu;
}
prev_spare_cap suffers from a similar problem.
Fix the logic by converting the variables into long and treating a value
of -1 as 'not populated' instead of 0, which is a viable and correct spare
capacity value. We need to be careful that a signed comparison is used
when comparing with cpu_cap in one of the conditions.
Fixes: 1d42509e475c ("sched/fair: Make EAS wakeup placement consider uclamp restrictions")
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---
kernel/sched/fair.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b7445cd5af9..5da6538ed220 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7707,11 +7707,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
for (; pd; pd = pd->next) {
unsigned long util_min = p_util_min, util_max = p_util_max;
unsigned long cpu_cap, cpu_thermal_cap, util;
- unsigned long cur_delta, max_spare_cap = 0;
+ long prev_spare_cap = -1, max_spare_cap = -1;
unsigned long rq_util_min, rq_util_max;
- unsigned long prev_spare_cap = 0;
+ unsigned long cur_delta, base_energy;
int max_spare_cap_cpu = -1;
- unsigned long base_energy;
int fits, max_fits = -1;
cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
@@ -7774,7 +7773,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
prev_spare_cap = cpu_cap;
prev_fits = fits;
} else if ((fits > max_fits) ||
- ((fits == max_fits) && (cpu_cap > max_spare_cap))) {
+ ((fits == max_fits) && ((long)cpu_cap > max_spare_cap))) {
/*
* Find the CPU with the maximum spare capacity
* among the remaining CPUs in the performance
@@ -7786,7 +7785,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
}
}
- if (max_spare_cap_cpu < 0 && prev_spare_cap == 0)
+ if (max_spare_cap_cpu < 0 && prev_spare_cap < 0)
continue;
eenv_pd_busy_time(&eenv, cpus, p);
@@ -7794,7 +7793,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
base_energy = compute_energy(&eenv, pd, cpus, p, -1);
/* Evaluate the energy impact of using prev_cpu. */
- if (prev_spare_cap > 0) {
+ if (prev_spare_cap > -1) {
prev_delta = compute_energy(&eenv, pd, cpus, p,
prev_cpu);
/* CPU utilization has changed */
--
2.34.1
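As a standalone illustration of the sentinel problem described in the commit
message above, here is a minimal userspace sketch (hypothetical values and
simplified types, not the kernel code) contrasting the old 0-as-unset scheme
with the new signed -1 sentinel, including the signed-cast caveat:

/*
 * Minimal userspace sketch of the sentinel problem, with hypothetical
 * values and simplified types; this is not the kernel code.
 */
#include <stdio.h>

int main(void)
{
	/* Pretend every candidate CPU has zero spare capacity left, yet
	 * uclamp_max says the task still "fits" on all of them. */
	unsigned long spare_cap_of[3] = { 0, 0, 0 };

	/* Old scheme: 0 doubles as both "unset" and a real spare capacity. */
	unsigned long max_spare_cap_old = 0;
	int max_spare_cap_cpu_old = -1;

	/* New scheme: -1 in a signed long means "not populated". */
	long max_spare_cap_new = -1;
	int max_spare_cap_cpu_new = -1;

	for (int cpu = 0; cpu < 3; cpu++) {
		unsigned long cpu_cap = spare_cap_of[cpu];

		if (cpu_cap > max_spare_cap_old) {	/* never true when cpu_cap is 0 */
			max_spare_cap_old = cpu_cap;
			max_spare_cap_cpu_old = cpu;
		}

		/*
		 * Cast to signed so that 0 compares greater than -1.
		 * Comparing the unsigned cpu_cap directly against a signed
		 * -1 would promote -1 to ULONG_MAX and never match.
		 */
		if ((long)cpu_cap > max_spare_cap_new) {
			max_spare_cap_new = cpu_cap;
			max_spare_cap_cpu_new = cpu;
		}
	}

	printf("old sentinel: max_spare_cap_cpu = %d (cluster skipped)\n",
	       max_spare_cap_cpu_old);
	printf("new sentinel: max_spare_cap_cpu = %d (CPU 0 considered)\n",
	       max_spare_cap_cpu_new);
	return 0;
}

Compiled and run, the old scheme leaves max_spare_cap_cpu at -1 so the whole
cluster gets skipped, while the new scheme picks CPU 0.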
On 22/08/2023 00:45, Qais Yousef wrote:
> When uclamp_max is being used, the util of the task could be higher than
> the spare capacity of the CPU, but due to the uclamp_max value we force-fit
> it there.
>
> The way the condition for checking max_spare_cap in
> find_energy_efficient_cpu() was constructed, it ignored any CPU whose
> spare_cap is less than or _equal_ to max_spare_cap. Since we initialize
> max_spare_cap to 0, this led to never setting max_spare_cap_cpu and hence
> never performing compute_energy() for this cluster, missing an opportunity
> for a more energy-efficient placement that honours the uclamp_max setting.
>
> max_spare_cap = 0;
> cpu_cap = capacity_of(cpu) - task_util(p); // 0 if task_util(p) is high
Nitpick:
s/task_util(p)/cpu_util(cpu, p, cpu, ...) which is
max(cpu_util + task_util, cpu_util_est + task_util_est)
>
> ...
>
> util_fits_cpu(...); // will return true if uclamp_max forces it to fit
>
> ...
>
> // this logic will fail to update max_spare_cap_cpu if cpu_cap is 0
> if (cpu_cap > max_spare_cap) {
> max_spare_cap = cpu_cap;
> max_spare_cap_cpu = cpu;
> }
>
> prev_spare_cap suffers from a similar problem.
>
> Fix the logic by converting the variables into long and treating a value
> of -1 as 'not populated' instead of 0, which is a viable and correct spare
> capacity value. We need to be careful that a signed comparison is used
> when comparing with cpu_cap in one of the conditions.
>
> Fixes: 1d42509e475c ("sched/fair: Make EAS wakeup placement consider uclamp restrictions")
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> ---
> kernel/sched/fair.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0b7445cd5af9..5da6538ed220 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7707,11 +7707,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> for (; pd; pd = pd->next) {
> unsigned long util_min = p_util_min, util_max = p_util_max;
> unsigned long cpu_cap, cpu_thermal_cap, util;
> - unsigned long cur_delta, max_spare_cap = 0;
> + long prev_spare_cap = -1, max_spare_cap = -1;
> unsigned long rq_util_min, rq_util_max;
> - unsigned long prev_spare_cap = 0;
> + unsigned long cur_delta, base_energy;
> int max_spare_cap_cpu = -1;
> - unsigned long base_energy;
> int fits, max_fits = -1;
>
> cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
> @@ -7774,7 +7773,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> prev_spare_cap = cpu_cap;
> prev_fits = fits;
> } else if ((fits > max_fits) ||
> - ((fits == max_fits) && (cpu_cap > max_spare_cap))) {
> + ((fits == max_fits) && ((long)cpu_cap > max_spare_cap))) {
> /*
> * Find the CPU with the maximum spare capacity
> * among the remaining CPUs in the performance
> @@ -7786,7 +7785,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> }
> }
>
> - if (max_spare_cap_cpu < 0 && prev_spare_cap == 0)
> + if (max_spare_cap_cpu < 0 && prev_spare_cap < 0)
> continue;
>
> eenv_pd_busy_time(&eenv, cpus, p);
> @@ -7794,7 +7793,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> base_energy = compute_energy(&eenv, pd, cpus, p, -1);
>
> /* Evaluate the energy impact of using prev_cpu. */
> - if (prev_spare_cap > 0) {
> + if (prev_spare_cap > -1) {
> prev_delta = compute_energy(&eenv, pd, cpus, p,
> prev_cpu);
> /* CPU utilization has changed */
We still need a solution to deal with situations in which `pd + task
contribution` > `pd_capacity`:
compute_energy()
if (dst_cpu >= 0)
busy_time = min(pd_capacity, pd_busy_time + task_busy_time);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pd + task contribution
busy_time is based on util (ENERGY_UTIL), not on the uclamp values
(FREQUENCY_UTIL) we try to fit into a PD (and finally onto a CPU).
With that as a reminder for us and the change in the cover letter:
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
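To make the busy_time concern above concrete, here is a rough, self-contained
sketch with invented numbers (the real eenv accounting in compute_energy() is
more involved): a task whose raw util far exceeds its uclamp_max is deemed to
fit the PD, yet its raw util still drives the busy time fed into the energy
estimate, saturating the PD.

/*
 * Invented numbers, purely illustrative of the min() clamp above;
 * this is not the real eenv/compute_energy() math.
 */
#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long cpu_cap      = 430;		/* little CPU capacity */
	unsigned long pd_capacity  = 2 * cpu_cap;	/* two little CPUs */
	unsigned long pd_busy_time = 500;		/* existing PD utilization */
	unsigned long task_busy    = 800;		/* raw (ENERGY_UTIL) util of the task */
	unsigned long task_clamped = 200;		/* its uclamp_max (FREQUENCY_UTIL) value */

	/* The fit decision is driven by the clamped value ... */
	printf("fits by uclamp_max? %s\n", task_clamped <= cpu_cap ? "yes" : "no");

	/* ... but the energy estimate still feeds in the raw util and
	 * saturates the PD: min(pd_capacity, pd_busy_time + task_busy). */
	unsigned long busy_time = min_ul(pd_capacity, pd_busy_time + task_busy);

	printf("busy_time = %lu, pd_capacity = %lu\n", busy_time, pd_capacity);
	return 0;
}

It reports that the task fits while busy_time gets clamped to pd_capacity,
i.e. exactly the `pd + task contribution` > `pd_capacity` case pointed out
above.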
On 08/23/23 12:30, Dietmar Eggemann wrote:
> On 22/08/2023 00:45, Qais Yousef wrote:
> > When uclamp_max is being used, the util of the task could be higher than
> > the spare capacity of the CPU, but due to the uclamp_max value we force-fit
> > it there.
> >
> > The way the condition for checking max_spare_cap in
> > find_energy_efficient_cpu() was constructed, it ignored any CPU whose
> > spare_cap is less than or _equal_ to max_spare_cap. Since we initialize
> > max_spare_cap to 0, this led to never setting max_spare_cap_cpu and hence
> > never performing compute_energy() for this cluster, missing an opportunity
> > for a more energy-efficient placement that honours the uclamp_max setting.
> >
> > max_spare_cap = 0;
> > cpu_cap = capacity_of(cpu) - task_util(p); // 0 if task_util(p) is high
>
> Nitpick:
>
> s/task_util(p)/cpu_util(cpu, p, cpu, ...) which is
>
> max(cpu_util + task_util, cpu_util_est + task_util_est)
>
> >
> > ...
> >
> > util_fits_cpu(...); // will return true if uclamp_max forces it to fit
> >
> > ...
> >
> > // this logic will fail to update max_spare_cap_cpu if cpu_cap is 0
> > if (cpu_cap > max_spare_cap) {
> > max_spare_cap = cpu_cap;
> > max_spare_cap_cpu = cpu;
> > }
> >
> > prev_spare_cap suffers from a similar problem.
> >
> > Fix the logic by converting the variables into long and treating a value
> > of -1 as 'not populated' instead of 0, which is a viable and correct spare
> > capacity value. We need to be careful that a signed comparison is used
> > when comparing with cpu_cap in one of the conditions.
> >
> > Fixes: 1d42509e475c ("sched/fair: Make EAS wakeup placement consider uclamp restrictions")
> > Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> > ---
> > kernel/sched/fair.c | 11 +++++------
> > 1 file changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0b7445cd5af9..5da6538ed220 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7707,11 +7707,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > for (; pd; pd = pd->next) {
> > unsigned long util_min = p_util_min, util_max = p_util_max;
> > unsigned long cpu_cap, cpu_thermal_cap, util;
> > - unsigned long cur_delta, max_spare_cap = 0;
> > + long prev_spare_cap = -1, max_spare_cap = -1;
> > unsigned long rq_util_min, rq_util_max;
> > - unsigned long prev_spare_cap = 0;
> > + unsigned long cur_delta, base_energy;
> > int max_spare_cap_cpu = -1;
> > - unsigned long base_energy;
> > int fits, max_fits = -1;
> >
> > cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
> > @@ -7774,7 +7773,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > prev_spare_cap = cpu_cap;
> > prev_fits = fits;
> > } else if ((fits > max_fits) ||
> > - ((fits == max_fits) && (cpu_cap > max_spare_cap))) {
> > + ((fits == max_fits) && ((long)cpu_cap > max_spare_cap))) {
> > /*
> > * Find the CPU with the maximum spare capacity
> > * among the remaining CPUs in the performance
> > @@ -7786,7 +7785,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > }
> > }
> >
> > - if (max_spare_cap_cpu < 0 && prev_spare_cap == 0)
> > + if (max_spare_cap_cpu < 0 && prev_spare_cap < 0)
> > continue;
> >
> > eenv_pd_busy_time(&eenv, cpus, p);
> > @@ -7794,7 +7793,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > base_energy = compute_energy(&eenv, pd, cpus, p, -1);
> >
> > /* Evaluate the energy impact of using prev_cpu. */
> > - if (prev_spare_cap > 0) {
> > + if (prev_spare_cap > -1) {
> > prev_delta = compute_energy(&eenv, pd, cpus, p,
> > prev_cpu);
> > /* CPU utilization has changed */
>
> We still need a solution to deal with situations in which `pd + task
> contribution` > `pd_capacity`:
>
> compute_energy()
>
> if (dst_cpu >= 0)
> busy_time = min(pd_capacity, pd_busy_time + task_busy_time);
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> pd + task contribution
>
> busy_time is based on util (ENERGY_UTIL), not on the uclamp values
> (FREQUENCY_UTIL) we try to fit into a PD (and finally onto a CPU).
>
> With that as a reminder for us and the change in the cover letter:
This is not being ignored, but I don't see it as an urgent problem either.
There are more pressing issues that make uclamp_max not effective in practice,
and this ain't a bottleneck yet. Actually it might be doing a good thing, as
there's a desire to keep those tasks out of the way on the smallest CPUs. But
we shall revisit this later for sure, don't worry :-) Ultimately we want the
EAS algorithm to be the judge of the best placement.
I hope to send patches to address load balancer and max aggregation issues in
the coming weeks.
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Thanks for the review!
I will wait for the maintainers to see whether they would like a v5 to
address the nitpicks, or whether it's actually good enough as is and
they're happy to pick it up. I think the commit messages explain the
problem clearly enough and don't warrant sending a new version. But happy
to do so if there's insistence :-)
Thanks!
--
Qais Yousef
* Qais Yousef <qyousef@layalina.io> wrote:

> > Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
>
> Thanks for the review!
>
> I will wait for the maintainers to see whether they would like a v5 to
> address the nitpicks, or whether it's actually good enough as is and
> they're happy to pick it up. I think the commit messages explain the
> problem clearly enough and don't warrant sending a new version. But happy
> to do so if there's insistence :-)

Yeah, please always do that: sensible review replies with actionable
feedback cause a semi-automatic "mark this thread as read, there will be a
next version" reflexive action from maintainers, especially if a series is
in its 4th iteration already...

Thanks,

	Ingo
On 09/14/23 08:39, Ingo Molnar wrote:
>
> * Qais Yousef <qyousef@layalina.io> wrote:
>
> > > Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> >
> > Thanks for the review!
> >
> > I will wait for the maintainers to see whether they would like a v5 to
> > address the nitpicks, or whether it's actually good enough as is and
> > they're happy to pick it up. I think the commit messages explain the
> > problem clearly enough and don't warrant sending a new version. But
> > happy to do so if there's insistence :-)
>
> Yeah, please always do that: sensible review replies with actionable
> feedback cause a semi-automatic "mark this thread as read, there will be a
> next version" reflexive action from maintainers, especially if a series is
> in its 4th iteration already...

Apologies. I did realize that and intended to send a new version last
weekend, but failed to get to it. I hope to be able to do so today or
tomorrow.

Thanks!

--
Qais Yousef