From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
 peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
 vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 1/9] sched/fair: maintain task se depth in set_task_rq()
Date: Thu, 18 Aug 2022 20:47:57 +0800
Message-Id: <20220818124805.601-2-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
References: <20220818124805.601-1-zhouchengming@bytedance.com>

Previously we only maintained the task se depth in task_move_group_fair();
if a !fair task changed task group, its se depth was not updated. Commit
eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR")
fixed the problem by also updating the se depth in switched_to_fair().
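The depth rule this series centralizes can be illustrated with a minimal
userspace sketch (toy types only; `struct entity` and `set_depth` are
illustrative stand-ins, not the kernel's sched_entity):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a sched_entity: only the fields needed here. */
struct entity {
	struct entity *parent;	/* NULL for a root-level entity */
	int depth;		/* distance from the root cfs_rq */
};

/* The invariant the patch moves into set_task_rq():
 * depth is the parent's depth + 1, or 0 when there is no parent. */
static void set_depth(struct entity *se)
{
	se->depth = se->parent ? se->parent->depth + 1 : 0;
}
```

Maintaining this in set_task_rq() means the invariant is re-established on
every CPU or cgroup change, regardless of the task's scheduling class.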
Then commit daa59407b558 ("sched/fair: Unify switched_{from,to}_fair() and
task_move_group_fair()") unified these two functions and moved the se.depth
setting into attach_task_cfs_rq(), which in turn moved into
attach_entity_cfs_rq() with commit df217913e72e ("sched/fair: Factorize
attach/detach entity").

This patch moves the task se depth maintenance from attach_entity_cfs_rq()
to set_task_rq(), which is called whenever the CPU or cgroup changes, so
the depth is always correct. This is preparation for the next patch.

Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c  | 8 --------
 kernel/sched/sched.h | 1 +
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a71d6686149b..c5ee08b187ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11726,14 +11726,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	/*
-	 * Since the real-depth could have been changed (only FAIR
-	 * class maintain depth value), reset depth properly.
-	 */
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-#endif
-
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ddcfc7837595..628ffa974123 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1932,6 +1932,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
+	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
 peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
 vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 2/9] sched/fair: remove redundant cpu_cgrp_subsys->fork()
Date: Thu, 18 Aug 2022 20:47:58 +0800
Message-Id: <20220818124805.601-3-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
References: <20220818124805.601-1-zhouchengming@bytedance.com>

We use cpu_cgrp_subsys->fork() to set the task group for a new fair task
in cgroup_post_fork().
Since commit b1e8206582f9 ("sched: Fix yet more sched_fork() races"),
sched_cgroup_fork() already does set_task_rq() for the new fair task, so
cpu_cgrp_subsys->fork() can be removed.

  cgroup_can_fork()	--> pin parent's sched_task_group
  sched_cgroup_fork()
    __set_task_cpu()
      set_task_rq()
  cgroup_post_fork()
    ss->fork() := cpu_cgroup_fork()
      sched_change_group(..., TASK_SET_GROUP)
        task_set_group_fair()
          set_task_rq()	--> can be removed

After this change, task_change_group_fair() only needs to care about task
cgroup migration, making the code much simpler.

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
---
 kernel/sched/core.c  | 27 ++++-----------------------
 kernel/sched/fair.c  | 23 +----------------------
 kernel/sched/sched.h |  5 +----
 3 files changed, 6 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 863b5203e357..8e3f1c3f0b2c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -481,8 +481,7 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
  *   p->se.load, p->rt_priority,
  *   p->dl.dl_{runtime, deadline, period, flags, bw, density}
  *  - sched_setnuma():		p->numa_preferred_nid
- *  - sched_move_task()/
- *    cpu_cgroup_fork():	p->sched_task_group
+ *  - sched_move_task():	p->sched_task_group
  *  - uclamp_update_active()	p->uclamp*
  *
  * p->state <- TASK_*:
@@ -10166,7 +10165,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static void sched_change_group(struct task_struct *tsk, int type)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -10182,7 +10181,7 @@ static void sched_change_group(struct task_struct *tsk, int type)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
-		tsk->sched_class->task_change_group(tsk, type);
+		tsk->sched_class->task_change_group(tsk);
 	else
 #endif
 		set_task_rq(tsk, task_cpu(tsk));
@@ -10213,7 +10212,7 @@ void sched_move_task(struct task_struct *tsk)
 	if (running)
 		put_prev_task(rq, tsk);
 
-	sched_change_group(tsk, TASK_MOVE_GROUP);
+	sched_change_group(tsk);
 
 	if (queued)
 		enqueue_task(rq, tsk, queue_flags);
@@ -10291,23 +10290,6 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }
 
-/*
- * This is called before wake_up_new_task(), therefore we really only
- * have to set its group bits, all the other stuff does not apply.
- */
-static void cpu_cgroup_fork(struct task_struct *task)
-{
-	struct rq_flags rf;
-	struct rq *rq;
-
-	rq = task_rq_lock(task, &rf);
-
-	update_rq_clock(rq);
-	sched_change_group(task, TASK_SET_GROUP);
-
-	task_rq_unlock(rq, task, &rf);
-}
-
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
@@ -11173,7 +11155,6 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show = cpu_extra_stat_show,
-	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c5ee08b187ec..4b95599aa951 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11821,15 +11821,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static void task_set_group_fair(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	set_task_rq(p, task_cpu(p));
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-}
-
-static void task_move_group_fair(struct task_struct *p)
+static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
@@ -11841,19 +11833,6 @@ static void task_move_group_fair(struct task_struct *p)
 	attach_task_cfs_rq(p);
 }
 
-static void task_change_group_fair(struct task_struct *p, int type)
-{
-	switch (type) {
-	case TASK_SET_GROUP:
-		task_set_group_fair(p);
-		break;
-
-	case TASK_MOVE_GROUP:
-		task_move_group_fair(p);
-		break;
-	}
-}
-
 void free_fair_sched_group(struct task_group *tg)
 {
 	int i;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 628ffa974123..2db7b0494c19 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2195,11 +2195,8 @@ struct sched_class {
 
 	void (*update_curr)(struct rq *rq);
 
-#define TASK_SET_GROUP		0
-#define TASK_MOVE_GROUP		1
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_change_group)(struct task_struct *p, int type);
+	void (*task_change_group)(struct task_struct *p);
 #endif
 };
 
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
 peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
 vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 3/9] sched/fair: reset sched_avg last_update_time before
 set_task_rq()
Date: Thu, 18 Aug 2022 20:47:59 +0800
Message-Id: <20220818124805.601-4-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
References: <20220818124805.601-1-zhouchengming@bytedance.com>

set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
task's sched_avg on migration, which is not needed for an already detached
task. task_change_group_fair() detaches the task's sched_avg from the
previous cfs_rq first, so reset the sched_avg last_update_time before
set_task_rq() to avoid that.
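The ordering argument above can be modeled in a few lines of userspace C
(a toy sketch; `toy_se` and `toy_set_group` are made-up stand-ins, not
kernel APIs). As in PELT, last_update_time == 0 acts as a "detached"
sentinel that makes the group-change hook skip the blocked-load sync:

```c
#include <assert.h>

struct toy_se {
	unsigned long long last_update_time;	/* 0 == detached */
	int synced;				/* times blocked load was synced */
};

/* Stand-in for set_task_rq_fair(): synchronize blocked load only if
 * the entity still looks attached (last_update_time != 0). */
static void toy_set_group(struct toy_se *se)
{
	if (se->last_update_time)
		se->synced++;
}
```

With the patch's ordering, the sentinel is zeroed before the group change,
so the needless synchronization is skipped for the already detached task.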
Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4b95599aa951..5a704109472a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11824,12 +11824,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }
 
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
 peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
 vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 4/9] sched/fair: update comments in enqueue/dequeue_entity()
Date: Thu, 18 Aug 2022 20:48:00 +0800
Message-Id: <20220818124805.601-5-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
References: <20220818124805.601-1-zhouchengming@bytedance.com>

When reading the sched_avg related code, I found that the comments in
enqueue/dequeue_entity() have not kept up with the current code. We don't
add/subtract the entity's runnable_avg to/from cfs_rq->runnable_avg during
enqueue/dequeue_entity(); that is done only on attach/detach. This patch
updates the comments to reflect how the code works now.

Signed-off-by: Chengming Zhou
Acked-by: Vincent Guittot
---
 kernel/sched/fair.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5a704109472a..372e5f4a49a3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4598,7 +4598,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Add its load to cfs_rq->runnable_avg
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -4683,7 +4684,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Subtract its load from the cfs_rq->runnable_avg.
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
 peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
 vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 5/9] sched/fair: combine detach into dequeue when
 migrating task
Date: Thu, 18 Aug 2022 20:48:01 +0800
Message-Id: <20220818124805.601-6-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
References: <20220818124805.601-1-zhouchengming@bytedance.com>

When we are migrating a task off the CPU, we can combine the detach and
propagation into dequeue_entity(), saving the detach_entity_cfs_rq() call
in migrate_task_rq_fair().
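The mechanism used here is flag dispatch inside a single update routine.
A toy sketch of that pattern (the flag values mirror the patch's defines,
but `toy_update` and the enum are illustrative only, not kernel code):

```c
#include <assert.h>

/* Illustrative flag bits, one per optional side action
 * (cf. UPDATE_TG 0x1, DO_ATTACH 0x4, DO_DETACH 0x8 in the patch). */
#define TOY_UPDATE_TG	0x1
#define TOY_DO_ATTACH	0x4
#define TOY_DO_DETACH	0x8

enum toy_action { TOY_NONE, TOY_ATTACH, TOY_DETACH };

/* One update routine handles attach/detach as side actions, so the
 * enqueue/dequeue paths need no separate extra tree traversal. */
static enum toy_action toy_update(int flags)
{
	if (flags & TOY_DO_ATTACH)
		return TOY_ATTACH;
	else if (flags & TOY_DO_DETACH)
		return TOY_DETACH;
	return TOY_NONE;
}
```

Callers simply OR in the extra bit when the side action is needed, which
is exactly how dequeue_entity() requests the detach below.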
This optimization mirrors the DO_ATTACH combining done in enqueue_entity()
when migrating a task to the CPU: we no longer have to traverse the CFS
tree an extra time for detach_entity_cfs_rq() ->
propagate_entity_cfs_rq(), which is not called anymore with this change.

  detach_task()
    deactivate_task()
      dequeue_task_fair()
        for_each_sched_entity(se)
          dequeue_entity()
            update_load_avg()	/* (1) */
              detach_entity_load_avg()

    set_task_cpu()
      migrate_task_rq_fair()
        detach_entity_cfs_rq()	/* (2) */
          update_load_avg();
          detach_entity_load_avg();
          propagate_entity_cfs_rq();
            for_each_sched_entity()
              update_load_avg()

This patch saves the detach_entity_cfs_rq() call in (2) by doing the
detach_entity_load_avg() for a CPU-migrating task inside (1) (the task
being the first se in the loop).

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 372e5f4a49a3..1eb3fb3d95c3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4167,6 +4167,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 #define UPDATE_TG	0x1
 #define SKIP_AGE_LOAD	0x2
 #define DO_ATTACH	0x4
+#define DO_DETACH	0x8
 
 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -4196,6 +4197,13 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		attach_entity_load_avg(cfs_rq, se);
 		update_tg_load_avg(cfs_rq);
 
+	} else if (flags & DO_DETACH) {
+		/*
+		 * DO_DETACH means we're here from dequeue_entity()
+		 * and we are migrating task out of the CPU.
+		 */
+		detach_entity_load_avg(cfs_rq, se);
+		update_tg_load_avg(cfs_rq);
 	} else if (decayed) {
 		cfs_rq_util_change(cfs_rq, 0);
 
@@ -4456,6 +4464,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 #define UPDATE_TG	0x0
 #define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
+#define DO_DETACH	0x0
 
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
@@ -4676,6 +4685,11 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+	int action = UPDATE_TG;
+
+	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
+		action |= DO_DETACH;
+
 	/*
 	 * Update run-time statistics of the 'current'.
 	 */
@@ -4690,7 +4704,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
 	 */
-	update_load_avg(cfs_rq, se, UPDATE_TG);
+	update_load_avg(cfs_rq, se, action);
 	se_update_runnable(se);
 
 	update_stats_dequeue_fair(cfs_rq, se, flags);
@@ -7242,8 +7256,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	return new_cpu;
 }
 
-static void detach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * Called immediately before a task is migrated to a new CPU; task_cpu(p) and
  * cfs_rq_of(p) references at time of call are still valid and identify the
@@ -7265,15 +7277,7 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
 	}
 
-	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
-		/*
-		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
-		 * rq->lock and can modify state directly.
-		 */
-		lockdep_assert_rq_held(task_rq(p));
-		detach_entity_cfs_rq(se);
-
-	} else {
+	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);
 
 		/*
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
    peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
    vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 6/9] sched/fair: fix another detach on unattached task corner case
Date: Thu, 18 Aug 2022 20:48:02 +0800
Message-Id: <20220818124805.601-7-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>

commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new tasks")
fixed two load-tracking problems for new tasks, including the detach on
unattached new task problem.
There is still another detach on unattached task problem left, for a
task which has been woken up by try_to_wake_up() but is waiting to
actually be woken up by sched_ttwu_pending():

  try_to_wake_up(p)
    cpu = select_task_rq(p)
    if (task_cpu(p) != cpu)
      set_task_cpu(p, cpu)
        migrate_task_rq_fair()
          remove_entity_load_avg()       --> unattached
          se->avg.last_update_time = 0;
        __set_task_cpu()
    ttwu_queue(p, cpu)
      ttwu_queue_wakelist()
        __ttwu_queue_wakelist()

  task_change_group_fair()
    detach_task_cfs_rq()
      detach_entity_cfs_rq()
        detach_entity_load_avg()         --> detach on unattached task
    set_task_rq()
    attach_task_cfs_rq()
      attach_entity_cfs_rq()
        attach_entity_load_avg()

The reason for this problem is similar, so we should check in
detach_entity_cfs_rq() that se->avg.last_update_time != 0 before doing
detach_entity_load_avg().

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1eb3fb3d95c3..eba8a64f905a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11721,6 +11721,17 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);

+#ifdef CONFIG_SMP
+	/*
+	 * In case the task sched_avg hasn't been attached:
+	 * - A forked task which hasn't been woken up by wake_up_new_task().
+	 * - A task which has been woken up by try_to_wake_up() but is
+	 *   waiting for actually being woken up by sched_ttwu_pending().
+	 */
+	if (!se->avg.last_update_time)
+		return;
+#endif
+
 	/* Catch up with the cfs_rq and remove our load when we leave */
 	update_load_avg(cfs_rq, se, 0);
 	detach_entity_load_avg(cfs_rq, se);
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
    peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
    vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 7/9] sched/fair: allow changing cgroup of new forked task
Date: Thu, 18 Aug 2022 20:48:03 +0800
Message-Id: <20220818124805.601-8-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>

commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new tasks")
introduced a TASK_NEW state and an unnecessary limitation that
would fail when changing the cgroup of a new forked task. At that time
we couldn't handle task_change_group_fair() for a new forked fair task
which hadn't been woken up by wake_up_new_task() yet, since that would
cause a detach on an unattached task sched_avg.

This patch deletes this unnecessary limitation by checking, before doing
the detach or attach in task_change_group_fair(), whether the task is
still TASK_NEW. cpu_cgrp_subsys.can_attach() then has nothing left to do
for fair tasks, so it is only defined under CONFIG_RT_GROUP_SCHED.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/core.c | 25 +++++--------------------
 kernel/sched/fair.c |  7 +++++++
 2 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8e3f1c3f0b2c..14819bd66021 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10290,36 +10290,19 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }

+#ifdef CONFIG_RT_GROUP_SCHED
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
 	struct cgroup_subsys_state *css;
-	int ret = 0;

 	cgroup_taskset_for_each(task, css, tset) {
-#ifdef CONFIG_RT_GROUP_SCHED
 		if (!sched_rt_can_attach(css_tg(css), task))
 			return -EINVAL;
-#endif
-		/*
-		 * Serialize against wake_up_new_task() such that if it's
-		 * running, we're sure to observe its full state.
-		 */
-		raw_spin_lock_irq(&task->pi_lock);
-		/*
-		 * Avoid calling sched_move_task() before wake_up_new_task()
-		 * has happened. This would lead to problems with PELT, due to
-		 * move wanting to detach+attach while we're not attached yet.
-		 */
-		if (READ_ONCE(task->__state) == TASK_NEW)
-			ret = -EINVAL;
-		raw_spin_unlock_irq(&task->pi_lock);
-
-		if (ret)
-			break;
 	}
-	return ret;
+	return 0;
 }
+#endif

 static void cpu_cgroup_attach(struct cgroup_taskset *tset)
 {
@@ -11155,7 +11138,9 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show	= cpu_extra_stat_show,
+#ifdef CONFIG_RT_GROUP_SCHED
 	.can_attach	= cpu_cgroup_can_attach,
+#endif
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,
 	.dfl_cftypes	= cpu_files,

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eba8a64f905a..c319b0bd2bc1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11840,6 +11840,13 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_change_group_fair(struct task_struct *p)
 {
+	/*
+	 * We couldn't detach or attach a forked task which
+	 * hasn't been woken up by wake_up_new_task().
+	 */
+	if (READ_ONCE(p->__state) == TASK_NEW)
+		return;
+
 	detach_task_cfs_rq(p);

 #ifdef CONFIG_SMP
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
    peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
    vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 8/9] sched/fair: move task sched_avg attach to enqueue_task_fair()
Date: Thu, 18 Aug 2022 20:48:04 +0800
Message-Id: <20220818124805.601-9-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>

In wake_up_new_task(), we use post_init_entity_util_avg() to initialize
util_avg/runnable_avg based on the cpu's util_avg at that time, and to
attach the task sched_avg to the cfs_rq.
Since the enqueue_task_fair() -> enqueue_entity() -> update_load_avg()
path will do the attach anyway, we can move this work into
update_load_avg():

  wake_up_new_task(p)
    post_init_entity_util_avg(p)
      attach_entity_cfs_rq()   --> (1)
    activate_task(rq, p)
      enqueue_task() := enqueue_task_fair()
        enqueue_entity() loop
          update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
            if (!se->avg.last_update_time && (flags & DO_ATTACH))
              attach_entity_load_avg()   --> (2)

This patch moves the attach from (1) to (2) and updates the related
comments too.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c319b0bd2bc1..93d7c7b110dd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -799,8 +799,6 @@ void init_entity_runnable_average(struct sched_entity *se)
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }

-static void attach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * With new tasks being created, their initial util_avgs are extrapolated
  * based on the cfs_rq's current util_avg:
@@ -863,8 +861,6 @@ void post_init_entity_util_avg(struct task_struct *p)
 		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
 		return;
 	}
-
-	attach_entity_cfs_rq(se);
 }

 #else /* !CONFIG_SMP */
@@ -4002,8 +3998,7 @@ static void migrate_se_pelt_lag(struct sched_entity *se) {}
  * @cfs_rq: cfs_rq to update
  *
  * The cfs_rq avg is the direct sum of all its entities (blocked and runnable)
- * avg. The immediate corollary is that all (fair) tasks must be attached, see
- * post_init_entity_util_avg().
+ * avg. The immediate corollary is that all (fair) tasks must be attached.
  *
  * cfs_rq->avg is used for task_h_load() and update_cfs_share() for example.
 *
@@ -4236,8 +4231,8 @@ static void remove_entity_load_avg(struct sched_entity *se)

 	/*
 	 * tasks cannot exit without having gone through wake_up_new_task() ->
-	 * post_init_entity_util_avg() which will have added things to the
-	 * cfs_rq, so we can remove unconditionally.
+	 * enqueue_task_fair() which will have added things to the cfs_rq,
+	 * so we can remove unconditionally.
 	 */

 	sync_entity_load_avg(se);
-- 
2.37.2

From nobody Fri Apr 10 23:24:03 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: vincent.guittot@linaro.org, dietmar.eggemann@arm.com, mingo@redhat.com,
    peterz@infradead.org, rostedt@goodmis.org, bsegall@google.com,
    vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, tj@kernel.org
Subject: [PATCH v6 9/9] sched/fair: don't init util/runnable_avg for !fair task
Date: Thu, 18 Aug 2022 20:48:05 +0800
Message-Id: <20220818124805.601-10-zhouchengming@bytedance.com>
In-Reply-To: <20220818124805.601-1-zhouchengming@bytedance.com>
post_init_entity_util_avg() initializes the task util_avg according to
the cpu util_avg at the time of fork; that value will just decay when
switched_to_fair() runs some time later, so we'd better not set it at
all in the !fair task case.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 93d7c7b110dd..621bd19e10ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -833,20 +833,6 @@ void post_init_entity_util_avg(struct task_struct *p)
 	long cpu_scale = arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq)));
 	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

-	if (cap > 0) {
-		if (cfs_rq->avg.util_avg != 0) {
-			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
-			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
-
-			if (sa->util_avg > cap)
-				sa->util_avg = cap;
-		} else {
-			sa->util_avg = cap;
-		}
-	}
-
-	sa->runnable_avg = sa->util_avg;
-
 	if (p->sched_class != &fair_sched_class) {
 		/*
 		 * For !fair tasks do:
@@ -861,6 +847,20 @@ void post_init_entity_util_avg(struct task_struct *p)
 		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
 		return;
 	}
+
+	if (cap > 0) {
+		if (cfs_rq->avg.util_avg != 0) {
+			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
+
+			if (sa->util_avg > cap)
+				sa->util_avg = cap;
+		} else {
+			sa->util_avg = cap;
+		}
+	}
+
+	sa->runnable_avg = sa->util_avg;
 }

 #else /* !CONFIG_SMP */
-- 
2.37.2