CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally being unthrottled across all CPUs
in the system, allowing them to exceed their assigned quota limits.
Consider the example below:
Assign a 6.25% bandwidth limit to a cgroup
on an 8-CPU system, where the workload runs 8 threads for 20 seconds at
100% CPU utilization; the expected (user+sys) time is 10 seconds.
$ cat /sys/fs/cgroup/test/cpu.max
50000 100000
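
For reference, spelling out the arithmetic behind the expected figure
(my working, derived from the settings above):

  quota/period = 50000/100000 = 0.5 of one CPU's runtime per period
  0.5 CPU * 20 s wall time    = 10 s of (user+sys) time
  0.5 CPU / 8 CPUs            = 6.25% of total system capacity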
$ ./ebizzy -t 8 -S 20 // non-hotplug case
real 20.00 s
user 10.81 s // intended behaviour
sys 0.00 s
$ ./ebizzy -t 8 -S 20 // hotplug case
real 20.00 s
user 14.43 s // Workload is able to run for 14 secs
sys 0.00 s // when it should have only run for 10 secs
During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
is called for every active CPU to update the root domain. That ends up
calling rq_offline_fair, which unthrottles any throttled hierarchies.
Unthrottling should only occur for the CPU being hotplugged to allow its
throttled processes to become runnable and get migrated to other CPUs.
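
For context, the path that reaches this code during a domain rebuild is
roughly the following (my reading of the call chain; the intermediate
rq_attach_root()/set_rq_offline() steps are from memory and worth
double-checking against the tree):

  partition_sched_domains()
    -> cpu_attach_domain()              /* called for every active CPU */
      -> rq_attach_root()
        -> set_rq_offline()             /* rq leaves its old root domain */
          -> rq_offline_fair()
            -> unthrottle_offline_cfs_rqs()  /* unthrottles this CPU's cfs_rqs */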
With this patch applied:
$ ./ebizzy -t 8 -S 20 // hotplug case
real 21.00 s
user 10.16 s // intended behaviour
sys 0.00 s
Note: the hotplug operation (online, offline) was performed in a while(1) loop.
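
The loop was of the following form; this is a reconstruction, and the CPU
number is illustrative (the exact CPU toggled in the test is not recorded
here):

  $ while :; do
        echo 0 > /sys/devices/system/cpu/cpu1/online
        echo 1 > /sys/devices/system/cpu/cpu1/online
    done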
Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fbdca89c677f..e28a8e056ebf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
list_for_each_entry_rcu(tg, &task_groups, list) {
struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
- if (!cfs_rq->runtime_enabled)
+ /* Only unthrottle the CPU being hotplugged */
+ if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
continue;
/*
--
2.47.0
Hi Vishal,
On 2024/12/7 13:27, Vishal Chourasia wrote:
> CPU controller limits are not properly enforced during CPU hotplug
> operations, particularly during CPU offline. When a CPU goes offline,
> throttled processes are unintentionally being unthrottled across all CPUs
> in the system, allowing them to exceed their assigned quota limits.
>
I encountered a similar issue where the cfs_rq is not in a throttled state and runtime_remaining
still has plenty left, but it gets reset to 1 here, causing the cfs_rq's runtime_remaining to be
depleted quickly, so the actual running time ends up smaller than the configured quota.
> Consider the example below:
>
> Assign a 6.25% bandwidth limit to a cgroup
> on an 8-CPU system, where the workload runs 8 threads for 20 seconds at
> 100% CPU utilization; the expected (user+sys) time is 10 seconds.
>
> $ cat /sys/fs/cgroup/test/cpu.max
> 50000 100000
>
> $ ./ebizzy -t 8 -S 20 // non-hotplug case
> real 20.00 s
> user 10.81 s // intended behaviour
> sys 0.00 s
>
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 20.00 s
> user 14.43 s // Workload is able to run for 14 secs
> sys 0.00 s // when it should have only run for 10 secs
>
> During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> is called for every active CPU to update the root domain. That ends up
> calling rq_offline_fair, which unthrottles any throttled hierarchies.
>
> Unthrottling should only occur for the CPU being hotplugged to allow its
> throttled processes to become runnable and get migrated to other CPUs.
>
> With this patch applied:
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 21.00 s
> user 10.16 s // intended behaviour
> sys 0.00 s
>
> Note: the hotplug operation (online, offline) was performed in a while(1) loop.
> Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
>
> v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
>
> ---
> kernel/sched/fair.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f..e28a8e056ebf 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> list_for_each_entry_rcu(tg, &task_groups, list) {
> struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>
> - if (!cfs_rq->runtime_enabled)
> + /* Only unthrottle the CPU being hotplugged */
> + if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> continue;
cpu_of(rq) is a fixed value, so the result of cpumask_test_cpu() is also fixed. We could
check it once before traversing the task_groups list, avoiding the unnecessary traversal, right?
Something like this:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2d16c8545c71..79e9e5323112 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6687,25 +6687,29 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
rq_clock_start_loop_update(rq);
rcu_read_lock();
- list_for_each_entry_rcu(tg, &task_groups, list) {
- struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+ if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
+ list_for_each_entry_rcu(tg, &task_groups, list) {
+ struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
- if (!cfs_rq->runtime_enabled)
- continue;
+ if (!cfs_rq->runtime_enabled)
+ continue;
- /*
- * clock_task is not advancing so we just need to make sure
- * there's some valid quota amount
- */
- cfs_rq->runtime_remaining = 1;
- /*
- * Offline rq is schedulable till CPU is completely disabled
- * in take_cpu_down(), so we prevent new cfs throttling here.
- */
- cfs_rq->runtime_enabled = 0;
+ /*
+ * Offline rq is schedulable till CPU is completely disabled
+ * in take_cpu_down(), so we prevent new cfs throttling here.
+ */
+ cfs_rq->runtime_enabled = 0;
- if (cfs_rq_throttled(cfs_rq))
+ if (!cfs_rq_throttled(cfs_rq))
+ continue;
+
+ /*
+ * clock_task is not advancing so we just need to make sure
+ * there's some valid quota amount
+ */
+ cfs_rq->runtime_remaining = 1;
unthrottle_cfs_rq(cfs_rq);
+ }
}
--
Zhang Qiao
>
> /*
On Tue, Dec 10, 2024 at 02:55:36PM +0800, Zhang Qiao wrote:
> Hi Vishal,
>
Thanks for looking into this!
>
>
> On 2024/12/7 13:27, Vishal Chourasia wrote:
> > CPU controller limits are not properly enforced during CPU hotplug
> > operations, particularly during CPU offline. When a CPU goes offline,
> > throttled processes are unintentionally being unthrottled across all CPUs
> > in the system, allowing them to exceed their assigned quota limits.
> >
>
> I encountered a similar issue where the cfs_rq is not in a throttled state and runtime_remaining
> still has plenty left, but it gets reset to 1 here, causing the cfs_rq's runtime_remaining to be
> depleted quickly, so the actual running time ends up smaller than the configured quota.
>
Correct.
> > Consider the example below:
> >
> > Assign a 6.25% bandwidth limit to a cgroup
> > on an 8-CPU system, where the workload runs 8 threads for 20 seconds at
> > 100% CPU utilization; the expected (user+sys) time is 10 seconds.
> >
> > $ cat /sys/fs/cgroup/test/cpu.max
> > 50000 100000
> >
> > $ ./ebizzy -t 8 -S 20 // non-hotplug case
> > real 20.00 s
> > user 10.81 s // intended behaviour
> > sys 0.00 s
> >
> > $ ./ebizzy -t 8 -S 20 // hotplug case
> > real 20.00 s
> > user 14.43 s // Workload is able to run for 14 secs
> > sys 0.00 s // when it should have only run for 10 secs
> >
> > During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> > is called for every active CPU to update the root domain. That ends up
> > calling rq_offline_fair, which unthrottles any throttled hierarchies.
> >
> > Unthrottling should only occur for the CPU being hotplugged to allow its
> > throttled processes to become runnable and get migrated to other CPUs.
> >
> > With this patch applied:
> > $ ./ebizzy -t 8 -S 20 // hotplug case
> > real 21.00 s
> > user 10.16 s // intended behaviour
> > sys 0.00 s
> >
> > Note: the hotplug operation (online, offline) was performed in a while(1) loop.
> > Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> > Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
> >
> > v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
> >
> > ---
> > kernel/sched/fair.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fbdca89c677f..e28a8e056ebf 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> > list_for_each_entry_rcu(tg, &task_groups, list) {
> > struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> >
> > - if (!cfs_rq->runtime_enabled)
> > + /* Only unthrottle the CPU being hotplugged */
> > + if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> > continue;
>
> cpu_of(rq) is a fixed value, so the result of cpumask_test_cpu() is also fixed. We could
> check it once before traversing the task_groups list, avoiding the unnecessary traversal, right?
Yes, I will send out another version. Thanks for pointing it out!
>
> Something like this:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2d16c8545c71..79e9e5323112 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6687,25 +6687,29 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> rq_clock_start_loop_update(rq);
>
> rcu_read_lock();
> - list_for_each_entry_rcu(tg, &task_groups, list) {
> - struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> + if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
> + list_for_each_entry_rcu(tg, &task_groups, list) {
> + struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>
> - if (!cfs_rq->runtime_enabled)
> - continue;
> + if (!cfs_rq->runtime_enabled)
> + continue;
>
> - /*
> - * clock_task is not advancing so we just need to make sure
> - * there's some valid quota amount
> - */
> - cfs_rq->runtime_remaining = 1;
> - /*
> - * Offline rq is schedulable till CPU is completely disabled
> - * in take_cpu_down(), so we prevent new cfs throttling here.
> - */
> - cfs_rq->runtime_enabled = 0;
> + /*
> + * Offline rq is schedulable till CPU is completely disabled
> + * in take_cpu_down(), so we prevent new cfs throttling here.
> + */
> + cfs_rq->runtime_enabled = 0;
>
> - if (cfs_rq_throttled(cfs_rq))
> + if (!cfs_rq_throttled(cfs_rq))
> + continue;
> +
> + /*
> + * clock_task is not advancing so we just need to make sure
> + * there's some valid quota amount
> + */
> + cfs_rq->runtime_remaining = 1;
> unthrottle_cfs_rq(cfs_rq);
> + }
> }
Only traverse the task_groups list for an inactive CPU, and if the cfs_rq
is throttled, set its runtime_remaining to 1 and unthrottle it.
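
For the archive, the full function after your restructuring would look
roughly like the below. This is a sketch assembled from your diff; the
prologue/epilogue outside the hunk (lockdep_assert_rq_held() and the
rcu_read_unlock()/rq_clock_stop_loop_update() tail) are taken from the
existing code as I remember it and are not re-verified here:

static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct task_group *tg;

	lockdep_assert_rq_held(rq);

	/*
	 * The rq clock has already been updated in set_rq_offline(),
	 * so skip updating it again in unthrottle_cfs_rq().
	 */
	rq_clock_start_loop_update(rq);

	rcu_read_lock();
	/*
	 * A domain rebuild also lands here for CPUs that stay active;
	 * only an actually offlined CPU needs its cfs_rqs unthrottled.
	 */
	if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
		list_for_each_entry_rcu(tg, &task_groups, list) {
			struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

			if (!cfs_rq->runtime_enabled)
				continue;

			/*
			 * Offline rq is schedulable till CPU is completely
			 * disabled in take_cpu_down(), so we prevent new cfs
			 * throttling here.
			 */
			cfs_rq->runtime_enabled = 0;

			if (!cfs_rq_throttled(cfs_rq))
				continue;

			/*
			 * clock_task is not advancing so we just need to
			 * make sure there's some valid quota amount
			 */
			cfs_rq->runtime_remaining = 1;
			unthrottle_cfs_rq(cfs_rq);
		}
	}
	rcu_read_unlock();

	rq_clock_stop_loop_update(rq);
}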
- vishalc
>
> --
> Zhang Qiao
> >
> > /*