[Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active

Juergen Gross posted 1 patch 4 years, 1 month ago
Patches applied successfully
git fetch https://github.com/patchew-project/xen tags/patchew/20200303122750.26996-1-jgross@suse.com
There is a newer version of this series
[Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
Posted by Juergen Gross 4 years, 1 month ago
When onlining a cpu, cpupool_cpu_add() checks whether all siblings of
the new cpu are free in order to decide whether to add it to cpupool0.
In case the added cpu is not the last sibling to be onlined, this test
is wrong, as it only checks that all currently online siblings are
free. The test should additionally check that the number of siblings
has reached the scheduling granularity of cpupool0.
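
For example, with core scheduling active (granularity 2) and both
siblings of a core still offline, onlining the first sibling results
in a sibling mask containing only that single cpu. Such a mask is
trivially a subset of the free cpus, so the cpu would be assigned to
cpupool0 even though its sibling isn't available yet.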

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 9f70c7ec17..4a67df8584 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
     get_sched_res(cpu)->cpupool = NULL;
 
     cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
-    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
+    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
+         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
         ret = cpupool_assign_cpu_locked(cpupool0, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-- 
2.16.4
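
For illustration, here is a self-contained sketch (plain C, not Xen
code; unsigned bitmasks stand in for cpumask_t, and weight() /
assign_to_cpupool0() are made-up helpers) of the decision the patched
condition implements for a 2-thread core:

#include <stdbool.h>
#include <stdio.h>

static unsigned int weight(unsigned int mask)
{
    unsigned int n = 0;

    for ( ; mask; mask &= mask - 1 )
        n++;

    return n;
}

static bool assign_to_cpupool0(unsigned int siblings, unsigned int free_cpus,
                               unsigned int granularity)
{
    /* All siblings free AND the sibling count has reached the granularity. */
    return (siblings & ~free_cpus) == 0 && weight(siblings) >= granularity;
}

int main(void)
{
    /* cpu 2 is onlined first; sibling cpu 3 is still offline, so only
       cpu 2 shows up in the sibling mask and the free mask. */
    unsigned int siblings = 1u << 2, free_cpus = 1u << 2;

    /* Prints 0: assignment is deferred until the whole core is online. */
    printf("after onlining cpu 2: %d\n", assign_to_cpupool0(siblings, free_cpus, 2));

    /* Once cpu 3 is online and free as well, the core can be assigned. */
    siblings = free_cpus = (1u << 2) | (1u << 3);
    printf("after onlining cpu 3: %d\n", assign_to_cpupool0(siblings, free_cpus, 2));

    return 0;
}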


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
Posted by Jan Beulich 4 years, 1 month ago
On 03.03.2020 13:27, Juergen Gross wrote:
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>      get_sched_res(cpu)->cpupool = NULL;
>  
>      cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )

Why >= , not == ? And is the other part of the condition needed?
Isn't this rather a condition that could be ASSERT()ed, as CPUs
shouldn't move out of the "free" set before reaching the
granularity?
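
Roughly, i.e. something like this (just a sketch, not meant verbatim):

    cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
    ASSERT(cpumask_subset(cpus, &cpupool_free_cpus));
    if ( cpumask_weight(cpus) == cpupool_get_granularity(cpupool0) )
        ret = cpupool_assign_cpu_locked(cpupool0, cpu);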

Jan

>          ret = cpupool_assign_cpu_locked(cpupool0, cpu);
>  
>      rcu_read_unlock(&sched_res_rculock);
> 


Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
Posted by Jürgen Groß 4 years, 1 month ago
On 03.03.20 14:31, Jan Beulich wrote:
> On 03.03.2020 13:27, Juergen Gross wrote:
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>       get_sched_res(cpu)->cpupool = NULL;
>>   
>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
> 
> Why >= , not == ? And is the other part of the condition needed?

I can switch to ==.

> Isn't this rather a condition that could be ASSERT()ed, as CPUs
> shouldn't move out of the "free" set before reaching the
> granularity?

Probably, yes. I'll give it some testing and change it in the case
of (expected) success.


Juergen

Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
Posted by Jürgen Groß 4 years, 1 month ago
On 03.03.20 17:04, Jürgen Groß wrote:
> On 03.03.20 14:31, Jan Beulich wrote:
>> On 03.03.2020 13:27, Juergen Gross wrote:
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>>       get_sched_res(cpu)->cpupool = NULL;
>>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
>>
>> Why >= , not == ? And is the other part of the condition needed?
> 
> I can switch to ==.
> 
>> Isn't this rather a condition that could be ASSERT()ed, as CPUs
>> shouldn't move out of the "free" set before reaching the
>> granularity?
> 
> Probably, yes. I'll give it some testing and change it in the case
> of (expected) success.

Thinking more about it, I'm inclined to keep testing both conditions.
Once we support cpupools with different granularities, we'll need to
test that all cpus are free, in case the other sibling has already
been moved to a cpupool with gran=1.
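
For example (hypothetical cpu numbers): with cpus 2 and 3 being
siblings and cpu 3 already sitting in a pool with gran=1, onlining
cpu 2 yields a sibling mask whose weight matches cpupool0's
granularity, yet cpu 3 isn't free, so it is the subset test that
keeps cpu 2 out of cpupool0 in that case.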


Juergen

Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
Posted by Jan Beulich 4 years, 1 month ago
On 10.03.2020 09:16, Jürgen Groß wrote:
> On 03.03.20 17:04, Jürgen Groß wrote:
>> On 03.03.20 14:31, Jan Beulich wrote:
>>> On 03.03.2020 13:27, Juergen Gross wrote:
>>>> --- a/xen/common/sched/cpupool.c
>>>> +++ b/xen/common/sched/cpupool.c
>>>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>>>       get_sched_res(cpu)->cpupool = NULL;
>>>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>>>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>>>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>>>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
>>>
>>> Why >= , not == ? And is the other part of the condition needed?
>>
>> I can switch to ==.
>>
>>> Isn't this rather a condition that could be ASSERT()ed, as CPUs
>>> shouldn't move out of the "free" set before reaching the
>>> granularity?
>>
>> Probably, yes. I'll give it some testing and change it in the case
>> of (expected) success.
> 
> Thinking more about it, I'm inclined to keep testing both conditions.
> Once we support cpupools with different granularities, we'll need to
> test that all cpus are free, in case the other sibling has already
> been moved to a cpupool with gran=1.

Ah, yes, makes sense.

Jan
