While reading the sched_avg related code, I found that the comments in
enqueue/dequeue_entity() have not been kept in sync with the current code.

We no longer add/subtract an entity's runnable_avg to/from
cfs_rq->runnable_avg during enqueue/dequeue_entity(); that is done only
on attach/detach.

This patch updates the comments to reflect how the current code works.
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
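For readers skimming the thread, here is a minimal, self-contained C sketch of
the distinction the changelog draws. The structs and helpers below are
simplified illustrative stand-ins, not the kernel's real sched_entity/cfs_rq
definitions: runnable_avg moves with attach/detach, while enqueue/dequeue
touch only the queue's load weight.

/*
 * Toy model of the behaviour described above; the types and helpers
 * are simplified stand-ins, not the kernel's real definitions.
 */
#include <stdio.h>

struct sched_avg { long runnable_avg; };
struct sched_entity { struct sched_avg avg; long weight; };
struct cfs_rq { struct sched_avg avg; long load_weight; };

/* attach/detach are where runnable_avg is folded into the cfs_rq */
static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        cfs_rq->avg.runnable_avg += se->avg.runnable_avg;
}

static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        cfs_rq->avg.runnable_avg -= se->avg.runnable_avg;
}

/* enqueue/dequeue adjust only the weight, not runnable_avg */
static void enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        cfs_rq->load_weight += se->weight;
}

static void dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        cfs_rq->load_weight -= se->weight;
}

int main(void)
{
        struct cfs_rq rq = { { 0 }, 0 };
        struct sched_entity se = { { 512 }, 1024 };

        attach_entity_load_avg(&rq, &se);       /* runnable_avg: 0 -> 512 */
        enqueue_entity(&rq, &se);               /* load_weight: 0 -> 1024 */
        dequeue_entity(&rq, &se);               /* runnable_avg still 512 */
        printf("after dequeue: runnable_avg=%ld load_weight=%ld\n",
               rq.avg.runnable_avg, rq.load_weight);

        detach_entity_load_avg(&rq, &se);       /* runnable_avg: 512 -> 0 */
        printf("after detach:  runnable_avg=%ld load_weight=%ld\n",
               rq.avg.runnable_avg, rq.load_weight);
        return 0;
}

In the actual kernel these updates happen on the attach/detach paths (e.g.
task migration), not from enqueue_entity()/dequeue_entity() themselves, which
is exactly what the corrected comments say.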
kernel/sched/fair.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b3371fa40548..e0cd4052e32f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4348,7 +4348,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
/*
* When enqueuing a sched_entity, we must:
* - Update loads to have both entity and cfs_rq synced with now.
- * - Add its load to cfs_rq->runnable_avg
+ * - For group_entity, update its runnable_weight to reflect the new
+ * h_nr_running of its group cfs_rq.
* - For group_entity, update its weight to reflect the new share of
* its group cfs_rq
* - Add its new weight to cfs_rq->load.weight
@@ -4433,7 +4434,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
/*
* When dequeuing a sched_entity, we must:
* - Update loads to have both entity and cfs_rq synced with now.
- * - Subtract its load from the cfs_rq->runnable_avg.
+ * - For group_entity, update its runnable_weight to reflect the new
+ * h_nr_running of its group cfs_rq.
* - Subtract its previous weight from cfs_rq->load.weight.
* - For group entity, update its weight to reflect the new share
* of its group cfs_rq.
--
2.36.1
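On the runnable_weight wording added above: in kernels of this era, a group
entity's runnable_weight is refreshed from the h_nr_running of its group
cfs_rq (se_update_runnable() in kernel/sched/fair.c). The snippet below is a
simplified stand-alone model of that relationship; the struct layouts are
illustrative assumptions, not the real kernel definitions.

#include <stdio.h>

struct cfs_rq { unsigned int h_nr_running; };

struct sched_entity {
        struct cfs_rq *my_q;            /* group cfs_rq owned by this entity; NULL for tasks */
        unsigned long runnable_weight;
};

/* Loosely modeled on the kernel's se_update_runnable() */
static void se_update_runnable(struct sched_entity *se)
{
        if (se->my_q)                   /* group entity, not a task */
                se->runnable_weight = se->my_q->h_nr_running;
}

int main(void)
{
        struct cfs_rq group_rq = { .h_nr_running = 2 };
        struct sched_entity group_se = { .my_q = &group_rq, .runnable_weight = 2 };

        /* A task is enqueued somewhere below this group... */
        group_rq.h_nr_running++;
        /* ...so the group entity's runnable_weight is refreshed to match. */
        se_update_runnable(&group_se);
        printf("runnable_weight=%lu\n", group_se.runnable_weight);     /* prints 3 */
        return 0;
}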
On Wed, 1 Jun 2022 at 05:55, Chengming Zhou <zhouchengming@bytedance.com> wrote:
>
> While reading the sched_avg related code, I found that the comments in
> enqueue/dequeue_entity() have not been kept in sync with the current code.
>
> We no longer add/subtract an entity's runnable_avg to/from
> cfs_rq->runnable_avg during enqueue/dequeue_entity(); that is done only
> on attach/detach.
>
> This patch updates the comments to reflect how the current code works.
>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

> [...]
On 2022/6/3 20:08, Vincent Guittot wrote:
> On Wed, 1 Jun 2022 at 05:55, Chengming Zhou <zhouchengming@bytedance.com> wrote:
>>
>> While reading the sched_avg related code, I found that the comments in
>> enqueue/dequeue_entity() have not been kept in sync with the current code.
>>
>> We no longer add/subtract an entity's runnable_avg to/from
>> cfs_rq->runnable_avg during enqueue/dequeue_entity(); that is done only
>> on attach/detach.
>>
>> This patch updates the comments to reflect how the current code works.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>
> Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

Hello Peter, would you mind picking up this little patch too? Thanks.

>> [...]