Hello!

v2 of a patch series [3] that addresses two issues affecting DEADLINE
bandwidth accounting during non-destructive changes to root domains and
hotplug operations. The series is based on top of Waiman's
"cgroup/cpuset: Remove redundant rebuild_sched_domains_locked() calls"
series [1], which is now merged into cgroups/for-6.13 (commit
c4c9cebe2fb9). The discussion that eventually led to these two series
can be found at [2].

Waiman reported that v1 still failed to make his test_cpuset_prs.sh
happy, so I had to change both patches a little. It now seems to pass on
my runs.

Patch 01/02 deals with non-destructive root domain changes. With respect
to v1 we now always restore dl_server contributions, considering root
domain span and active cpus mask (otherwise accounting on the default
root domain would end up being incorrect).

Patch 02/02 deals with hotplug. With respect to v1 I added special
casing for total_bw = 0 (so no DEADLINE tasks to consider) and for a
root domain left with no cpus due to hotplug.

In all honesty, I still see intermittent issues that seem to be related
to the dance we do in sched_cpu_deactivate(), where we first turn
everything related to a cpu/rq off and revert that if
cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
seem to be orthogonal to the original discussion we started from, I
wanted to send this out as a hopefully meaningful update/improvement
since yesterday. Will continue looking into this.

Please go forth and test/review.

Series also available at

 git@github.com:jlelli/linux.git upstream/dl-server-apply

Best,
Juri

[1] https://lore.kernel.org/lkml/20241110025023.664487-1-longman@redhat.com/
[2] https://lore.kernel.org/lkml/20241029225116.3998487-1-joel@joelfernandes.org/
[3] v1 - https://lore.kernel.org/lkml/20241113125724.450249-1-juri.lelli@redhat.com/

Juri Lelli (2):
  sched/deadline: Restore dl_server bandwidth on non-destructive root
    domain changes
  sched/deadline: Correctly account for allocated bandwidth during
    hotplug

 kernel/sched/core.c     |  2 +-
 kernel/sched/deadline.c | 65 +++++++++++++++++++++++++++++++++--------
 kernel/sched/sched.h    |  2 +-
 kernel/sched/topology.c |  8 +++--
 4 files changed, 60 insertions(+), 17 deletions(-)

--
2.47.0
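For context, the bandwidth accounting both patches touch boils down to
an admission test: the DEADLINE utilization admitted on a root domain
has to fit in the capacity its active cpus provide. Below is a minimal
self-contained model of that test (a sketch only, mirroring the
fixed-point conventions of __dl_overflow() in kernel/sched/sched.h, not
the kernel code itself):

#include <stdbool.h>
#include <stdint.h>

#define BW_SHIFT		20	/* bandwidths are fractions << 20 */
#define SCHED_CAPACITY_SHIFT	10	/* capacities in units of 1024 */

struct dl_bw_model {
	int64_t  bw;		/* max bandwidth per capacity unit, or -1 */
	uint64_t total_bw;	/* sum of admitted DEADLINE bandwidth */
};

/* Would swapping old_bw for new_bw overflow a domain of capacity cap? */
static bool dl_overflow_model(struct dl_bw_model *dl_b, unsigned long cap,
			      uint64_t old_bw, uint64_t new_bw)
{
	/* bw == -1 means admission control is disabled */
	return dl_b->bw != -1 &&
	       ((uint64_t)dl_b->bw * cap >> SCHED_CAPACITY_SHIFT) <
	       dl_b->total_bw - old_bw + new_bw;
}

Both patches are about keeping total_bw and the capacity side of this
comparison consistent while root domains get rebuilt and cpus come and
go.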
Thanks Waiman and Phil for the super quick review/test of this v2!

On 14/11/24 14:28, Juri Lelli wrote:

...

> In all honesty, I still see intermittent issues that seem to be related
> to the dance we do in sched_cpu_deactivate(), where we first turn
> everything related to a cpu/rq off and revert that if
> cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
> seem to be orthogonal to the original discussion we started from, I
> wanted to send this out as a hopefully meaningful update/improvement
> since yesterday. Will continue looking into this.

About this that I mentioned, it looks like the below cures it (and
hopefully doesn't regress wrt the other 2 patches).

What does everybody think?

---
Subject: [PATCH] sched/deadline: Check bandwidth overflow earlier for hotplug

Currently we check for bandwidth overflow potentially due to hotplug
operations at the end of sched_cpu_deactivate(), after the cpu going
offline has already been removed from scheduling, active_mask, etc.
This can create issues for DEADLINE tasks, as there is a substantial
race window between the start of sched_cpu_deactivate() and the moment
we possibly decide to roll back the operation if dl_bw_deactivate()
returns failure in cpuset_cpu_inactive(). An example is a throttled
task that sees its replenishment timer fire while the cpu it was
previously running on is considered offline, but before
dl_bw_deactivate() has had a chance to say no and the roll-back has
happened.

Fix this by calling dl_bw_deactivate() first thing in
sched_cpu_deactivate() and by doing the required calculation in
dl_bw_deactivate() itself, treating the cpu passed as an argument as
already offline.

Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
 kernel/sched/core.c     |  9 +++++----
 kernel/sched/deadline.c | 12 ++++++++++--
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1049e784510..43dfb3968eb8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8057,10 +8057,6 @@ static void cpuset_cpu_active(void)
 static int cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		int ret = dl_bw_deactivate(cpu);
-
-		if (ret)
-			return ret;
 		cpuset_update_active_cpus();
 	} else {
 		num_cpus_frozen++;
@@ -8128,6 +8124,11 @@ int sched_cpu_deactivate(unsigned int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	int ret;
 
+	ret = dl_bw_deactivate(cpu);
+
+	if (ret)
+		return ret;
+
 	/*
 	 * Remove CPU from nohz.idle_cpus_mask to prevent participating in
 	 * load balancing when not active
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 267ea8bacaf6..6e988d4cd787 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		}
 		break;
 	case dl_bw_req_deactivate:
+		/*
+		 * cpu is not off yet, but we need to do the math by
+		 * considering it off already (i.e., what would happen if we
+		 * turn cpu off?).
+		 */
+		cap -= arch_scale_cpu_capacity(cpu);
+
 		/*
 		 * cpu is going offline and NORMAL tasks will be moved away
 		 * from it. We can thus discount dl_server bandwidth
@@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		if (dl_b->total_bw - fair_server_bw > 0) {
 			/*
 			 * Leaving at least one CPU for DEADLINE tasks seems a
-			 * wise thing to do.
+			 * wise thing to do. As said above, cpu is not offline
+			 * yet, so account for that.
 			 */
-			if (dl_bw_cpus(cpu))
+			if (dl_bw_cpus(cpu) - 1)
 				overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
 			else
 				overflow = 1;
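To make the new deactivate-time math concrete, here is a toy calculation
with made-up numbers (four equal cpus of capacity 1024, 95% maximum
DEADLINE bandwidth per capacity unit, 3.2 cpus' worth of bandwidth
admitted, of which a hypothetical 5% belongs to the outgoing cpu's
dl_server), in the same fixed-point convention sketched earlier:

#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT		20
#define SCHED_CAPACITY_SHIFT	10

int main(void)
{
	unsigned long cap = 4 * 1024;			/* 4 cpus online */
	int64_t max_bw = (int64_t)(0.95 * (1 << BW_SHIFT));
	uint64_t total_bw = (uint64_t)(3.2 * (1 << BW_SHIFT));
	/* share of the outgoing cpu's dl_server (made-up 5%) */
	uint64_t fair_server_bw = (uint64_t)(0.05 * (1 << BW_SHIFT));

	/* the patch discounts the cpu being deactivated up front */
	cap -= 1024;

	uint64_t room = (uint64_t)max_bw * cap >> SCHED_CAPACITY_SHIFT;
	uint64_t need = total_bw - fair_server_bw;

	/* 3 cpus * 95% = 2.85 cpus of room < 3.15 cpus needed -> refuse */
	printf("room=%llu need=%llu overflow=%d\n",
	       (unsigned long long)room, (unsigned long long)need,
	       room < need);
	return 0;
}

With these numbers dl_bw_deactivate() says no before any scheduler state
has been torn down, which is the whole point of moving the check to the
top of sched_cpu_deactivate().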
On Thu, Nov 14, 2024 at 04:14:00PM +0000 Juri Lelli wrote:
> Thanks Waiman and Phil for the super quick review/test of this v2!
>
> On 14/11/24 14:28, Juri Lelli wrote:
>
> ...
>
> > In all honesty, I still see intermittent issues that seem to be related
> > to the dance we do in sched_cpu_deactivate(), where we first turn
> > everything related to a cpu/rq off and revert that if
> > cpuset_cpu_inactive() reveals failing DEADLINE checks. But, since these
> > seem to be orthogonal to the original discussion we started from, I
> > wanted to send this out as a hopefully meaningful update/improvement
> > since yesterday. Will continue looking into this.
>
> About this that I mentioned, it looks like the below cures it (and
> hopefully doesn't regress wrt the other 2 patches).
>
> What does everybody think?
>

I think that makes sense. I also think it's better not to have that
deadline call buried in the cpuset code.

Reviewed-by: Phil Auld <pauld@redhat.com>

...
On 11/14/24 11:14 AM, Juri Lelli wrote:
> Thanks Waiman and Phil for the super quick review/test of this v2!
>
> On 14/11/24 14:28, Juri Lelli wrote:
>
> ...
>
> About this that I mentioned, it looks like the below cures it (and
> hopefully doesn't regress wrt the other 2 patches).
>
> What does everybody think?
>
> ---
> Subject: [PATCH] sched/deadline: Check bandwidth overflow earlier for hotplug
>
> ...

I have applied this new patch to my test system and there was no
regression in the test_cpuset_prs.sh test.

Tested-by: Waiman Long <longman@redhat.com>
Currently we check for bandwidth overflow potentially due to hotplug
operations at the end of sched_cpu_deactivate(), after the cpu going
offline has already been removed from scheduling, active_mask, etc.
This can create issues for DEADLINE tasks, as there is a substantial
race window between the start of sched_cpu_deactivate() and the moment
we possibly decide to roll back the operation if dl_bw_deactivate()
returns failure in cpuset_cpu_inactive(). An example is a throttled
task that sees its replenishment timer fire while the cpu it was
previously running on is considered offline, but before
dl_bw_deactivate() has had a chance to say no and the roll-back has
happened.
Fix this by calling dl_bw_deactivate() first thing in
sched_cpu_deactivate() and by doing the required calculation in
dl_bw_deactivate() itself, treating the cpu passed as an argument as
already offline.
By doing so we also simplify sched_cpu_deactivate(), as there is no
longer any need to roll back if we fail early.
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
Thanks Waiman and Phil for testing and reviewing the scratch version of
this change. I think the below might be better, as we end up with a
clean-up as well.
Please take another look when you/others have time.
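One subtlety worth spelling out: since the check now runs before the cpu
is removed from the active mask, dl_bw_cpus(cpu) still counts the
outgoing cpu, hence the '- 1' in the hunk below. A tiny toy illustration
of that gate (made-up values, not kernel code):

#include <stdio.h>

int main(void)
{
	/* toy domain: the cpu being deactivated is its last active one */
	int active_cpus = 1;		 /* what dl_bw_cpus() reports here */
	int remaining = active_cpus - 1; /* cpus left once it goes away */

	/*
	 * With DEADLINE bandwidth still allocated and no cpu remaining,
	 * deactivation must be refused outright (overflow = 1).
	 */
	printf("remaining=%d -> %s\n", remaining,
	       remaining ? "run __dl_overflow()" : "refuse deactivation");
	return 0;
}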
---
 kernel/sched/core.c     | 22 +++++++---------------
 kernel/sched/deadline.c | 12 ++++++++++--
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1049e784510..e2c6eacf793e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8054,19 +8054,14 @@ static void cpuset_cpu_active(void)
 	cpuset_update_active_cpus();
 }
 
-static int cpuset_cpu_inactive(unsigned int cpu)
+static void cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		int ret = dl_bw_deactivate(cpu);
-
-		if (ret)
-			return ret;
 		cpuset_update_active_cpus();
 	} else {
 		num_cpus_frozen++;
 		partition_sched_domains(1, NULL, NULL);
 	}
-	return 0;
 }
 
 static inline void sched_smt_present_inc(int cpu)
@@ -8128,6 +8123,11 @@ int sched_cpu_deactivate(unsigned int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	int ret;
 
+	ret = dl_bw_deactivate(cpu);
+
+	if (ret)
+		return ret;
+
 	/*
 	 * Remove CPU from nohz.idle_cpus_mask to prevent participating in
 	 * load balancing when not active
@@ -8173,15 +8173,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 		return 0;
 
 	sched_update_numa(cpu, false);
-	ret = cpuset_cpu_inactive(cpu);
-	if (ret) {
-		sched_smt_present_inc(cpu);
-		sched_set_rq_online(rq, cpu);
-		balance_push_set(cpu, false);
-		set_cpu_active(cpu, true);
-		sched_update_numa(cpu, true);
-		return ret;
-	}
+	cpuset_cpu_inactive(cpu);
 	sched_domains_numa_masks_clear(cpu);
 	return 0;
 }
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 267ea8bacaf6..6e988d4cd787 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3505,6 +3505,13 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		}
 		break;
 	case dl_bw_req_deactivate:
+		/*
+		 * cpu is not off yet, but we need to do the math by
+		 * considering it off already (i.e., what would happen if we
+		 * turn cpu off?).
+		 */
+		cap -= arch_scale_cpu_capacity(cpu);
+
 		/*
 		 * cpu is going offline and NORMAL tasks will be moved away
 		 * from it. We can thus discount dl_server bandwidth
@@ -3522,9 +3529,10 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
 		if (dl_b->total_bw - fair_server_bw > 0) {
 			/*
 			 * Leaving at least one CPU for DEADLINE tasks seems a
-			 * wise thing to do.
+			 * wise thing to do. As said above, cpu is not offline
+			 * yet, so account for that.
 			 */
-			if (dl_bw_cpus(cpu))
+			if (dl_bw_cpus(cpu) - 1)
 				overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
 			else
 				overflow = 1;
--
2.47.0
On 11/14/24 9:28 AM, Juri Lelli wrote:
> Hello!
>
> v2 of a patch series [3] that addresses two issues affecting DEADLINE
> bandwidth accounting during non-destructive changes to root domains and
> hotplug operations.
>
> ...
>
> Please go forth and test/review.

Thanks for this new patch series. I have confirmed that, with some minor
twisting of the cpuset code, all the test cases in the test_cpuset_prs.sh
script passed.

Tested-by: Waiman Long <longman@redhat.com>