[PATCH v4] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug

Vishal Chourasia posted 1 patch 1 year ago
There is a newer version of this series
Posted by Vishal Chourasia 1 year ago
CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally being unthrottled across all CPUs
in the system, allowing them to exceed their assigned quota limits.

Consider the following example:

Assign a 6.25% bandwidth limit to a cgroup on an 8-CPU system, and run a
workload of 8 threads for 20 seconds at 100% CPU utilization. The
expected (user+sys) time is 10 seconds.

$ cat /sys/fs/cgroup/test/cpu.max
50000 100000

$ ./ebizzy -t 8 -S 20        // non-hotplug case
real 20.00 s
user 10.81 s                 // intended behaviour
sys   0.00 s

$ ./ebizzy -t 8 -S 20        // hotplug case
real 20.00 s
user 14.43 s                 // Workload is able to run for 14 secs
sys   0.00 s                 // when it should have only run for 10 secs

During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain()
is called for every active CPU to update the root domain. That ends up
calling rq_offline_fair(), which unthrottles any throttled hierarchies.

Unthrottling should only occur for the CPU being hotplugged to allow its
throttled processes to become runnable and get migrated to other CPUs.

With this patch applied:
$ ./ebizzy -t 8 -S 20        // hotplug case
real 21.00 s
user 10.16 s                 // intended behaviour
sys   0.00 s

This also fixes another symptom: when a CPU goes offline while a cfs_rq
is not in a throttled state and still has plenty of runtime_remaining,
the runtime_remaining used to be reset to 1 here, causing the cfs_rq's
runtime to be depleted quickly afterwards.

Note: the hotplug operations (online, offline) were performed in a while(1) loop.

Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
Suggested-by: Zhang Qiao <zhangqiao22@huawei.com>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>

v3: https://lore.kernel.org/all/20241210102346.228663-2-vishalc@linux.ibm.com
v2: https://lore.kernel.org/all/20241207052730.1746380-2-vishalc@linux.ibm.com
v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com

---
 kernel/sched/fair.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa0238ee4857..72746e75700c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6679,6 +6679,10 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	lockdep_assert_rq_held(rq);
 
+	// Do not unthrottle for an active CPU
+	if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
+		return;
+
 	/*
 	 * The rq clock has already been updated in the
 	 * set_rq_offline(), so we should skip updating
@@ -6693,19 +6697,21 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 		if (!cfs_rq->runtime_enabled)
 			continue;
 
-		/*
-		 * clock_task is not advancing so we just need to make sure
-		 * there's some valid quota amount
-		 */
-		cfs_rq->runtime_remaining = 1;
 		/*
 		 * Offline rq is schedulable till CPU is completely disabled
 		 * in take_cpu_down(), so we prevent new cfs throttling here.
 		 */
 		cfs_rq->runtime_enabled = 0;
 
-		if (cfs_rq_throttled(cfs_rq))
-			unthrottle_cfs_rq(cfs_rq);
+		if (!cfs_rq_throttled(cfs_rq))
+			continue;
+
+		/*
+		 * clock_task is not advancing so we just need to make sure
+		 * there's some valid quota amount
+		 */
+		cfs_rq->runtime_remaining = 1;
+		unthrottle_cfs_rq(cfs_rq);
 	}
 	rcu_read_unlock();
 

base-commit: 231825b2e1ff6ba799c5eaf396d3ab2354e37c6b
-- 
2.47.0
Re: [PATCH v4] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
Posted by samir 1 year ago
On 2024-12-12 10:01, Vishal Chourasia wrote:
> [snip]

Hello,

I have verified this issue using the ebizzy workload and a Podman 
container. The tests confirm that the problem is resolved, and the 
provided fix is working as expected. Below are the results for 
reference, where the ebizzy workload was executed within the container 
with --cpu-quota=50000 allocated.
Additionally, I tested the patch under load conditions both with and 
without hot-plug operations. Observations are as follows:

Test Results

Without hot-plug operation:
	Command: ./ebizzy -t 64 -S 20
	Performance: 43,506 records/s
	Real: 20.00 s
	User: 10.46 s
	Sys:  0.00 s

With hot-plug operation:
	Command: ./ebizzy -t 64 -S 20
	Performance: 35,642 records/s
	Real: 20.00 s
	User: 10.45 s
	Sys:  0.01 s

Tested-by: Samir Mulani <samir@linux.ibm.com>

Thanks for the fix!
Re: [PATCH v4] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
Posted by Vincent Guittot 1 year ago
On Thu, 12 Dec 2024 at 05:32, Vishal Chourasia <vishalc@linux.ibm.com> wrote:
>
> [snip]

With the typo below fixed,
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

> [snip]
> +       // Do not unthrottle for an active CPU

typo: please use /* my comment */

> [snip]
[tip: sched/core] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
Posted by tip-bot2 for Vishal Chourasia 11 months, 3 weeks ago
The following commit has been merged into the sched/core branch of tip:

Commit-ID:     af98d8a36a963e758e84266d152b92c7b51d4ecb
Gitweb:        https://git.kernel.org/tip/af98d8a36a963e758e84266d152b92c7b51d4ecb
Author:        Vishal Chourasia <vishalc@linux.ibm.com>
AuthorDate:    Thu, 12 Dec 2024 10:01:03 +05:30
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 17 Dec 2024 17:47:22 +01:00

sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug

[Commit message identical to the patch posting above.]
Suggested-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Tested-by: Samir Mulani <samir@linux.ibm.com>
Link: https://lore.kernel.org/r/20241212043102.584863-2-vishalc@linux.ibm.com
---
 kernel/sched/fair.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

[Diff identical to the patch above.]