[PATCH 2/3] sched/fair: Fixes for capacity inversion detection

Posted by Qais Yousef 2 years, 9 months ago
Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). rcu_read_lock() is held while in
load_balance(); add an assert to ensure this is always the case.

Also skip capacity inversion detection for our own pd, which was an
error.

Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---
 kernel/sched/fair.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89dadaafc1ec..7c0dd57e562a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8856,16 +8856,22 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
 		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
 
 		rq->cpu_capacity_inverted = 0;
 
+		SCHED_WARN_ON(!rcu_read_lock_held());
+
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
-- 
2.25.1
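
For context, here is a worked example of the condition being detected
(numbers invented for illustration; simplified relative to the real check,
which also subtracts the other pd's own thermal pressure):

	/* Invented numbers, purely illustrative: a big CPU whose thermally
	 * pressured capacity falls below another perf domain's original
	 * capacity is "inverted" against that domain. */
	unsigned long big_cap_orig = 1024;	/* capacity_orig of a big CPU */
	unsigned long big_thermal  = 300;	/* thermal_load_avg() of its rq */
	unsigned long mid_cap_orig = 768;	/* capacity_orig of a medium pd */

	unsigned long inv_cap = big_cap_orig - big_thermal;	/* 724 */
	bool inverted = inv_cap < mid_cap_orig;			/* 724 < 768: true */
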
Re: [PATCH 2/3] sched/fair: Fixes for capacity inversion detection
Posted by Dietmar Eggemann 2 years, 9 months ago
On 27/11/2022 15:17, Qais Yousef wrote:
> Traversing the Perf Domains requires rcu_read_lock() to be held and is
> conditional on sched_energy_enabled(). rcu_read_lock() is held while in
> load_balance(); add an assert to ensure this is always the case.
> 
> Also skip capacity inversion detection for our own pd, which was an
> error.
> 
> Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
> Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> ---
>  kernel/sched/fair.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 89dadaafc1ec..7c0dd57e562a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8856,16 +8856,22 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
>  	 *   * Thermal pressure will impact all cpus in this perf domain
>  	 *     equally.
>  	 */
> -	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> +	if (sched_energy_enabled()) {
>  		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
>  		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
>  
>  		rq->cpu_capacity_inverted = 0;
>  
> +		SCHED_WARN_ON(!rcu_read_lock_held());

This will trigger in CPU hotplug via build_sched_domains() ->
update_group_capacity() -> update_cpu_capacity() on an EAS system.
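
To make the failure mode concrete, a sketch of the reported call chain
(comments only; not the actual hotplug code):

	/*
	 * CPU hotplug rebuilds the sched domains without entering an RCU
	 * read-side critical section, so on a lockdep kernel
	 * rcu_read_lock_held() returns false when the new assert is reached:
	 *
	 *   build_sched_domains()
	 *     update_group_capacity()
	 *       update_cpu_capacity()
	 *         SCHED_WARN_ON(!rcu_read_lock_held())   <-- fires here
	 */
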

> +
>  		for (; pd; pd = pd->next) {
>  			struct cpumask *pd_span = perf_domain_span(pd);
>  			unsigned long pd_cap_orig, pd_cap;
>  
> +			/* We can't be inverted against our own pd */
> +			if (cpumask_test_cpu(cpu_of(rq), pd_span))
> +				continue;
> +

This should fix the issue with the `cpu` function parameter in its own PD.

>  			cpu = cpumask_any(pd_span);
>  			pd_cap_orig = arch_scale_cpu_capacity(cpu);
>  

I still don't get the benefit of the CPU capacity inversion patches in
tip/sched/core which should be fixed by this patch:

aa69c36f31aa - sched/fair: Consider capacity inversion in util_fits_cpu()
44c7b80bffc3 - sched/fair: Detect capacity inversion

I have to ask again. Why should we use thermal_load_avg() instead of
arch_scale_thermal_pressure() for a CPUx in `CPU capacity inversion
state` (i.e. w/ higher `CPU capacity orig` but lower `CPU capacity` than
CPUy)?
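
For readers following the question: the two interfaces report different
views of the same signal (both are existing kernel APIs; the snippet below
is an illustrative sketch assuming the context of update_cpu_capacity()):

	/*
	 * arch_scale_thermal_pressure(cpu) - the instantaneous thermal
	 * pressure currently reported for @cpu.
	 * thermal_load_avg(rq)             - the PELT-smoothed average of
	 * that pressure over time for @rq.
	 */
	unsigned long inv_cap_avg  = capacity_orig - thermal_load_avg(rq);

	/* The alternative being asked about: use the instantaneous value. */
	unsigned long inv_cap_inst = capacity_orig -
				     arch_scale_thermal_pressure(cpu_of(rq));
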
Re: [PATCH 2/3] sched/fair: Fixes for capacity inversion detection
Posted by Qais Yousef 2 years, 9 months ago
On 12/01/22 23:39, Dietmar Eggemann wrote:
> On 27/11/2022 15:17, Qais Yousef wrote:
> > Traversing the Perf Domains requires rcu_read_lock() to be held and is
> > conditional on sched_energy_enabled(). rcu_read_lock() is held while in
> > load_balance(); add an assert to ensure this is always the case.
> > 
> > Also skip capacity inversion detection for our own pd, which was an
> > error.
> > 
> > Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
> > Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> > ---
> >  kernel/sched/fair.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 89dadaafc1ec..7c0dd57e562a 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8856,16 +8856,22 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
> >  	 *   * Thermal pressure will impact all cpus in this perf domain
> >  	 *     equally.
> >  	 */
> > -	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> > +	if (sched_energy_enabled()) {
> >  		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
> >  		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
> >  
> >  		rq->cpu_capacity_inverted = 0;
> >  
> > +		SCHED_WARN_ON(!rcu_read_lock_held());
> 
> This will trigger in CPU hotplug via build_sched_domains() ->
> update_group_capacity() -> update_cpu_capacity() on an EAS system.

Aargh

> 
> > +
> >  		for (; pd; pd = pd->next) {
> >  			struct cpumask *pd_span = perf_domain_span(pd);
> >  			unsigned long pd_cap_orig, pd_cap;
> >  
> > +			/* We can't be inverted against our own pd */
> > +			if (cpumask_test_cpu(cpu_of(rq), pd_span))
> > +				continue;
> > +
> 
> This should fix the issue with `cpu` function parameter in its own PD.

Thanks for confirming!

> 
> >  			cpu = cpumask_any(pd_span);
> >  			pd_cap_orig = arch_scale_cpu_capacity(cpu);
> >  
> 
> I still don't get the benefit of the CPU capacity inversion patches in
> tip/sched/core which should be fixed by this patch:
> 
> aa69c36f31aa - sched/fair: Consider capacity inversion in util_fits_cpu()
> 44c7b80bffc3 - sched/fair: Detect capacity inversion
> 
> I have to ask again. Why should we use thermal_load_avg() instead of

Is this directed to me?

> arch_scale_thermal_pressure() for a CPUx in `CPU capacity inversion
> state` (i.e. w/ higher `CPU capacity orig` but lower `CPU capacity` than
> CPUy)?

If yes, I did answer that here

	https://lore.kernel.org/lkml/20221120213013.t67xisvqxmftri52@airbuntu/


Thanks a lot Dietmar!

--
Qais Yousef
[PATCH v2] sched/fair: Fixes for capacity inversion detection
Posted by Qais Yousef 2 years, 9 months ago
Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). rcu_read_lock() is held while in
load_balance(); add an assert to ensure this is always the case.

Also skip capacity inversion detection for our own pd, which was an
error.

Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---

Changes in v2:

	* Make sure to hold rcu_read_lock() ourselves, as it's not held in
	  all paths (thanks Dietmar!)

 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6a2fc2ca5078..2b1442093bd6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8856,16 +8856,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();
 
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;
 
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
@@ -8890,6 +8897,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}
 
 	trace_sched_cpu_capacity_tp(rq);
-- 
2.25.1
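
Taking rcu_read_lock() inside the walker, as v2 does, is safe on every
path: RCU read-side critical sections nest, so it is harmless on callers
such as load_balance() that already hold it, and it supplies the missing
protection on the hotplug path. A minimal sketch of the pattern (function
name invented for illustration):

	static void walk_perf_domains(struct rq *rq)
	{
		struct perf_domain *pd;

		rcu_read_lock();	/* nests; cheap even if already held */
		for (pd = rcu_dereference(rq->rd->pd); pd; pd = pd->next) {
			/* inspect perf_domain_span(pd), capacities, ... */
		}
		rcu_read_unlock();
	}
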
Re: [PATCH v2] sched/fair: Fixes for capacity inversion detection
Posted by Qais Yousef 2 years, 9 months ago
On 12/08/22 14:54, Qais Yousef wrote:
> Traversing the Perf Domains requires rcu_read_lock() to be held and is
> conditional on sched_energy_enabled(). rcu_read_lock() is held while in
> load_balance(); add an assert to ensure this is always the case.

Err that should say instead

	Traversing the Perf Domains requires rcu_read_lock() to be held and is
	conditional on sched_energy_enabled(). Ensure the right protections are
	applied.

Peter, let me know if you want me to resend with that fixed, or feel free
to fix it up yourself when applying.


Thanks!

--
Qais Yousef

> 
> Also skip capacity inversion detection for our own pd, which was an
> error.
> 
> Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
> Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> ---
> 
> Changes in v2:
> 
> 	* Make sure to hold rcu_read_lock() ourselves, as it's not held in
> 	  all paths (thanks Dietmar!)