[PATCH v5 8/9] sched/fair: defer task sched_avg attach to enqueue_entity()

Chengming Zhou posted 9 patches 3 years, 7 months ago
Posted by Chengming Zhou 3 years, 7 months ago
In wake_up_new_task(), we use post_init_entity_util_avg() to
initialize util_avg/runnable_avg based on the CPU's util_avg at that
time, and then attach the task's sched_avg to the cfs_rq.

Since enqueue_entity() always attaches any unattached task entity,
we can defer this work to enqueue_entity().

post_init_entity_util_avg(p)
  attach_entity_cfs_rq()  --> (1)
activate_task(rq, p)
  enqueue_task() := enqueue_task_fair()
  enqueue_entity()
    update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
      if (!se->avg.last_update_time && (flags & DO_ATTACH))
        attach_entity_load_avg()  --> (2)

This patch defers the attach from (1) to (2).

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e0d34ecdabae..aacf38a72714 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -799,8 +799,6 @@ void init_entity_runnable_average(struct sched_entity *se)
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
 
-static void attach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * With new tasks being created, their initial util_avgs are extrapolated
  * based on the cfs_rq's current util_avg:
@@ -863,8 +861,6 @@ void post_init_entity_util_avg(struct task_struct *p)
 		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
 		return;
 	}
-
-	attach_entity_cfs_rq(se);
 }
 
 #else /* !CONFIG_SMP */
-- 
2.37.2
Re: [PATCH v5 8/9] sched/fair: defer task sched_avg attach to enqueue_entity()
Posted by Peter Zijlstra 3 years, 7 months ago
On Thu, Aug 18, 2022 at 11:43:42AM +0800, Chengming Zhou wrote:
> In wake_up_new_task(), we use post_init_entity_util_avg() to
> initialize util_avg/runnable_avg based on the CPU's util_avg at that
> time, and then attach the task's sched_avg to the cfs_rq.
> 
> Since enqueue_entity() always attaches any unattached task entity,
> we can defer this work to enqueue_entity().
> 
> post_init_entity_util_avg(p)
>   attach_entity_cfs_rq()  --> (1)
> activate_task(rq, p)
>   enqueue_task() := enqueue_task_fair()
>   enqueue_entity()
>     update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
>       if (!se->avg.last_update_time && (flags & DO_ATTACH))
>         attach_entity_load_avg()  --> (2)
> 
> This patch defers the attach from (1) to (2).
> 
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> ---
>  kernel/sched/fair.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e0d34ecdabae..aacf38a72714 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -799,8 +799,6 @@ void init_entity_runnable_average(struct sched_entity *se)
>  	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
>  }
>  
> -static void attach_entity_cfs_rq(struct sched_entity *se);
> -
>  /*
>   * With new tasks being created, their initial util_avgs are extrapolated
>   * based on the cfs_rq's current util_avg:
> @@ -863,8 +861,6 @@ void post_init_entity_util_avg(struct task_struct *p)
>  		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
>  		return;
>  	}
> -
> -	attach_entity_cfs_rq(se);
>  }

There are comments with update_cfs_rq_load_avg() and
remove_entity_load_avg() that seem to rely on
post_init_entity_util_avg() doing this attach.

If that is no longer true, at the very least those comments need to be
updated; but also, I don't immediately see why that's no longer the
case, so please explain.
Re: [PATCH v5 8/9] sched/fair: defer task sched_avg attach to enqueue_entity()
Posted by Chengming Zhou 3 years, 7 months ago
On 2022/8/18 18:39, Peter Zijlstra wrote:
> On Thu, Aug 18, 2022 at 11:43:42AM +0800, Chengming Zhou wrote:
>> In wake_up_new_task(), we use post_init_entity_util_avg() to
>> initialize util_avg/runnable_avg based on the CPU's util_avg at that
>> time, and then attach the task's sched_avg to the cfs_rq.
>>
>> Since enqueue_entity() always attaches any unattached task entity,
>> we can defer this work to enqueue_entity().
>>
>> post_init_entity_util_avg(p)
>>   attach_entity_cfs_rq()  --> (1)
>> activate_task(rq, p)
>>   enqueue_task() := enqueue_task_fair()
>>   enqueue_entity()
>>     update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
>>       if (!se->avg.last_update_time && (flags & DO_ATTACH))
>>         attach_entity_load_avg()  --> (2)
>>
>> This patch defers the attach from (1) to (2).
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>>  kernel/sched/fair.c | 4 ----
>>  1 file changed, 4 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index e0d34ecdabae..aacf38a72714 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -799,8 +799,6 @@ void init_entity_runnable_average(struct sched_entity *se)
>>  	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
>>  }
>>  
>> -static void attach_entity_cfs_rq(struct sched_entity *se);
>> -
>>  /*
>>   * With new tasks being created, their initial util_avgs are extrapolated
>>   * based on the cfs_rq's current util_avg:
>> @@ -863,8 +861,6 @@ void post_init_entity_util_avg(struct task_struct *p)
>>  		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
>>  		return;
>>  	}
>> -
>> -	attach_entity_cfs_rq(se);
>>  }
> 
> There are comments with update_cfs_rq_load_avg() and
> remove_entity_load_avg() that seem to rely on
> post_init_entity_util_avg() doing this attach.
> 
> If that is no longer true, at the very least those comments need to be
> updated; but also, I don't immediately see why that's no longer the
> case, so please explain.

The attach previously done in post_init_entity_util_avg() will now be
done in the enqueue_entity() ->
update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH) path.

So these comments will be updated in the next version.

Thanks!