[RFC PATCH v3 6/6] Propagate negative bias

Posted by Hongyan Xia 1 year, 9 months ago
Negative bias is interesting, because dequeuing such a task will
actually increase utilization.

Solve by applying PELT decay to negative biases as well. This in fact
can be implemented easily with some math tricks.
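
As a rough worked example (numbers chosen purely for illustration): an
always-running task capped at uclamp_max = 200 carries util_avg = 1024
and a negative bias of 824. Subtracting the bias once at dequeue leaves
200, and after p periods the remaining contribution is

	(1024 - 824) * y^p = 200 * y^p

which is exactly what separately tracking a decayed bias of 824 * y^p
would have produced.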

Signed-off-by: Hongyan Xia <hongyan.xia2@arm.com>
---
 kernel/sched/fair.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0177d7e8f364..7259a61e9ae5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4863,6 +4863,45 @@ static inline unsigned long task_util_est_uclamp(struct task_struct *p)
 {
 	return max(task_util_uclamp(p), _task_util_est_uclamp(p));
 }
+
+/*
+ * Negative biases are tricky. If we remove them right away then dequeuing a
+ * uclamp_max task has the interesting effect that dequeuing results in a higher
+ * rq utilization. Solve this by applying PELT decay to the bias itself.
+ *
+ * Keeping track of a PELT-decayed negative bias is extra overhead. However, we
+ * observe this interesting math property, where y is the decay factor and p is
+ * the number of periods elapsed:
+ *
+ *	util_new = util_old * y^p - neg_bias * y^p
+ *		 = (util_old - neg_bias) * y^p
+ *
+ * Therefore, we simply subtract the negative bias from util_avg the moment we
+ * dequeue, then the PELT signal itself is the total of util_avg and the decayed
+ * negative bias, and we no longer need to track the decayed bias separately.
+ */
+static void propagate_negative_bias(struct task_struct *p)
+{
+	if (task_util_bias(p) < 0 && !task_on_rq_migrating(p)) {
+		unsigned long neg_bias = -task_util_bias(p);
+		struct sched_entity *se = &p->se;
+		struct cfs_rq *cfs_rq;
+
+		p->se.avg.util_avg_bias = 0;
+
+		for_each_sched_entity(se) {
+			u32 divider, neg_sum;
+
+			cfs_rq = cfs_rq_of(se);
+			divider = get_pelt_divider(&cfs_rq->avg);
+			neg_sum = neg_bias * divider;
+			sub_positive(&se->avg.util_avg, neg_bias);
+			sub_positive(&se->avg.util_sum, neg_sum);
+			sub_positive(&cfs_rq->avg.util_avg, neg_bias);
+			sub_positive(&cfs_rq->avg.util_sum, neg_sum);
+		}
+	}
+}
 #else
 static inline long task_util_bias(struct task_struct *p)
 {
@@ -4883,6 +4922,10 @@ static inline unsigned long task_util_est_uclamp(struct task_struct *p)
 {
 	return task_util_est(p);
 }
+
+static void propagate_negative_bias(struct task_struct *p)
+{
+}
 #endif
 
 static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
@@ -6844,6 +6887,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	/* At this point se is NULL and we are at root level*/
 	sub_nr_running(rq, 1);
 	util_bias_dequeue(&rq->cfs.avg, p);
+	propagate_negative_bias(p);
 	/* XXX: We should skip the update above and only do it once here. */
 	cpufreq_update_util(rq, 0);
 
-- 
2.34.1
Re: [RFC PATCH v3 6/6] Propagate negative bias
Posted by Dietmar Eggemann 1 year, 8 months ago
On 07/05/2024 14:50, Hongyan Xia wrote:
> Negative bias is interesting, because dequeuing such a task will
> actually increase utilization.
> 
> Solve by applying PELT decay to negative biases as well. This in fact
> can be implemented easily with some math tricks.
> 
> Signed-off-by: Hongyan Xia <hongyan.xia2@arm.com>
> ---
>  kernel/sched/fair.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0177d7e8f364..7259a61e9ae5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4863,6 +4863,45 @@ static inline unsigned long task_util_est_uclamp(struct task_struct *p)
>  {
>  	return max(task_util_uclamp(p), _task_util_est_uclamp(p));
>  }
> +
> +/*
> + * Negative biases are tricky. If we remove them right away then dequeuing a
> + * uclamp_max task has the interesting effect that dequeuing results in a higher
> + * rq utilization. Solve this by applying PELT decay to the bias itself.
> + *
> + * Keeping track of a PELT-decayed negative bias is extra overhead. However, we
> + * observe this interesting math property, where y is the decay factor and p is
> + * the number of periods elapsed:
> + *
> + *	util_new = util_old * y^p - neg_bias * y^p
> + *		 = (util_old - neg_bias) * y^p
> + *
> + * Therefore, we simply subtract the negative bias from util_avg the moment we
> + * dequeue, then the PELT signal itself is the total of util_avg and the decayed
> + * negative bias, and we no longer need to track the decayed bias separately.
> + */
> +static void propagate_negative_bias(struct task_struct *p)
> +{
> +	if (task_util_bias(p) < 0 && !task_on_rq_migrating(p)) {
> +		unsigned long neg_bias = -task_util_bias(p);
> +		struct sched_entity *se = &p->se;
> +		struct cfs_rq *cfs_rq;
> +
> +		p->se.avg.util_avg_bias = 0;
> +
> +		for_each_sched_entity(se) {
> +			u32 divider, neg_sum;
> +
> +			cfs_rq = cfs_rq_of(se);
> +			divider = get_pelt_divider(&cfs_rq->avg);
> +			neg_sum = neg_bias * divider;
> +			sub_positive(&se->avg.util_avg, neg_bias);
> +			sub_positive(&se->avg.util_sum, neg_sum);
> +			sub_positive(&cfs_rq->avg.util_avg, neg_bias);
> +			sub_positive(&cfs_rq->avg.util_sum, neg_sum);
> +		}
> +	}

So you remove the 'task bias = clamp(util_avg, uclamp_min, uclamp_max) -
util_avg' from the se and cfs_rq util_avg in case it's negative, i.e.
if the task is capped hard.

Looks like this is the old issue that PELT has blocked contribution
whereas uclamp does not (runnable only).

What's the rationale behind this? Is it because the task didn't get the
runtime it needed so we can remove this (artificially accrued) util_avg?

Normally we wouldn't remove blocked util_avg but rather let it decay
periodically for cfs_rq's and at wakeup for tasks.

[...]
Re: [RFC PATCH v3 6/6] Propagate negative bias
Posted by Hongyan Xia 1 year, 8 months ago
On 26/05/2024 23:53, Dietmar Eggemann wrote:
> On 07/05/2024 14:50, Hongyan Xia wrote:
>> Negative bias is interesting, because dequeuing such a task will
>> actually increase utilization.
>>
>> Solve by applying PELT decay to negative biases as well. This in fact
>> can be implemented easily with some math tricks.
>>
>> Signed-off-by: Hongyan Xia <hongyan.xia2@arm.com>
>> ---
>>   kernel/sched/fair.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 0177d7e8f364..7259a61e9ae5 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4863,6 +4863,45 @@ static inline unsigned long task_util_est_uclamp(struct task_struct *p)
>>   {
>>   	return max(task_util_uclamp(p), _task_util_est_uclamp(p));
>>   }
>> +
>> +/*
>> + * Negative biases are tricky. If we remove them right away then dequeuing a
>> + * uclamp_max task has the interesting effect that dequeuing results in a higher
>> + * rq utilization. Solve this by applying PELT decay to the bias itself.
>> + *
>> + * Keeping track of a PELT-decayed negative bias is extra overhead. However, we
>> + * observe this interesting math property, where y is the decay factor and p is
>> + * the number of periods elapsed:
>> + *
>> + *	util_new = util_old * y^p - neg_bias * y^p
>> + *		 = (util_old - neg_bias) * y^p
>> + *
>> + * Therefore, we simply subtract the negative bias from util_avg the moment we
>> + * dequeue, then the PELT signal itself is the total of util_avg and the decayed
>> + * negative bias, and we no longer need to track the decayed bias separately.
>> + */
>> +static void propagate_negative_bias(struct task_struct *p)
>> +{
>> +	if (task_util_bias(p) < 0 && !task_on_rq_migrating(p)) {
>> +		unsigned long neg_bias = -task_util_bias(p);
>> +		struct sched_entity *se = &p->se;
>> +		struct cfs_rq *cfs_rq;
>> +
>> +		p->se.avg.util_avg_bias = 0;
>> +
>> +		for_each_sched_entity(se) {
>> +			u32 divider, neg_sum;
>> +
>> +			cfs_rq = cfs_rq_of(se);
>> +			divider = get_pelt_divider(&cfs_rq->avg);
>> +			neg_sum = neg_bias * divider;
>> +			sub_positive(&se->avg.util_avg, neg_bias);
>> +			sub_positive(&se->avg.util_sum, neg_sum);
>> +			sub_positive(&cfs_rq->avg.util_avg, neg_bias);
>> +			sub_positive(&cfs_rq->avg.util_sum, neg_sum);
>> +		}
>> +	}
> 
> So you remove the 'task bias = clamp(util_avg, uclamp_min, uclamp_max) -
> util_avg' from the se and cfs_rq util_avg' in case it's negative. I.e.
> if the task is capped hard.
> 
> Looks like this is the old issue that PELT has blocked contribution
> whereas uclamp does not (runnable only).
> 
> What's the rationale behind this? Is it because the task didn't get the
> runtime it needed so we can remove this (artificially accrued) util_avg?
> 
> Normally we wouldn't remove blocked util_avg and let it rather decay
> periodically for cfs_rq's and at wakeup for tasks.

Sorry I may not have understood what you asked.

PELT has a decaying effect whereas uclamp does not, so you get the
effect that dequeuing a task immediately removes the bias, but the
util_avg isn't immediately gone.

In the case of uclamp_max, consider an always-running task with a
uclamp_max of 200, which means a util_avg of 1024 and a util_avg_bias
of -824. The moment this task is dequeued, the rq uclamp utilization
will immediately jump from 200 to 1024 and then slowly decay from 1024
to 0. This patch mitigates that effect, so that it simply decays from
200 to 0 without spiking to 1024 first.
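
To make the numbers concrete, below is a minimal user-space sketch (an
illustration only, not kernel code; it assumes PELT can be approximated
as a per-period decay of y = 0.5^(1/32), i.e. a half-life of 32
periods) that prints the decayed rq-side contribution of this task
after dequeue, with and without the bias subtraction done by this
patch:

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* half-life of 32 periods */
	double util = 1024.0;			/* task util_avg at dequeue */
	double neg_bias = 824.0;		/* -util_avg_bias at dequeue */
	int p;

	for (p = 0; p <= 64; p += 16) {
		double decay = pow(y, p);
		/*
		 * Without the patch: the bias vanishes at dequeue, so the
		 * rq sees the full util_avg spike to 1024 before decaying.
		 */
		double unpatched = util * decay;
		/*
		 * With the patch: the bias is subtracted once at dequeue,
		 * then decays together with util_avg.
		 */
		double patched = (util - neg_bias) * decay;

		printf("p=%2d  unpatched=%6.1f  patched=%5.1f\n",
		       p, unpatched, patched);
	}

	return 0;
}

At p = 0 it prints 1024.0 for the unpatched case and 200.0 for the
patched one, matching the spike described above; both columns then
decay towards 0.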

Hopefully this answers your question.

> [...]