From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 1/9] sched/fair: maintain task se depth in set_task_rq()
Date: Mon, 8 Aug 2022 20:57:37 +0800
Message-Id: <20220808125745.22566-2-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
References: <20220808125745.22566-1-zhouchengming@bytedance.com>

Previously, task se depth was only maintained in task_move_group_fair();
if a !fair task changed task group, its se depth would not be updated, so
commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR")
fixed the problem by updating se depth in switched_to_fair() too.
Then commit daa59407b558 ("sched/fair: Unify switched_{from,to}_fair() and
task_move_group_fair()") unified these two functions and moved the se.depth
setting into attach_task_cfs_rq(), which later moved into
attach_entity_cfs_rq() with commit df217913e72e ("sched/fair: Factorize
attach/detach entity").

This patch moves task se depth maintenance from attach_entity_cfs_rq() to
set_task_rq(), which is called whenever the task's CPU or cgroup changes,
so its depth will always be correct.

This patch is preparation for the next patch.

Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c  | 8 --------
 kernel/sched/sched.h | 1 +
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da388657d5ac..a3b0f8b1029e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11562,14 +11562,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	/*
-	 * Since the real-depth could have been changed (only FAIR
-	 * class maintain depth value), reset depth properly.
-	 */
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-#endif
-
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3ccd35c22f0f..4c4822141026 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1930,6 +1930,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
+	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 2/9] sched/fair: remove redundant cpu_cgrp_subsys->fork()
Date: Mon, 8 Aug 2022 20:57:38 +0800
Message-Id: <20220808125745.22566-3-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
References: <20220808125745.22566-1-zhouchengming@bytedance.com>

We use cpu_cgrp_subsys->fork() to set the task group for the new fair task
in cgroup_post_fork(). Since commit b1e8206582f9 ("sched: Fix yet more
sched_fork() races") already calls set_task_rq() for the new fair task in
sched_cgroup_fork(), cpu_cgrp_subsys->fork() can be removed.
  cgroup_can_fork()	--> pin parent's sched_task_group
  sched_cgroup_fork()
    __set_task_cpu()
      set_task_rq()
  cgroup_post_fork()
    ss->fork() := cpu_cgroup_fork()
      sched_change_group(..., TASK_SET_GROUP)
        task_set_group_fair()
          set_task_rq()	--> can be removed

After this patch's change, task_change_group_fair() only needs to care
about task cgroup migration, which makes the code much simpler.

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
---
 kernel/sched/core.c  | 27 ++++-----------------------
 kernel/sched/fair.c  | 23 +----------------------
 kernel/sched/sched.h |  5 +----
 3 files changed, 6 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 64c08993221b..e74e79f783af 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -481,8 +481,7 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
  *   p->se.load, p->rt_priority,
  *   p->dl.dl_{runtime, deadline, period, flags, bw, density}
  *  - sched_setnuma():		p->numa_preferred_nid
- *  - sched_move_task()/
- *    cpu_cgroup_fork():	p->sched_task_group
+ *  - sched_move_task():	p->sched_task_group
  *  - uclamp_update_active()	p->uclamp*
  *
  * p->state <- TASK_*:
@@ -10114,7 +10113,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static void sched_change_group(struct task_struct *tsk, int type)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -10130,7 +10129,7 @@ static void sched_change_group(struct task_struct *tsk, int type)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
-		tsk->sched_class->task_change_group(tsk, type);
+		tsk->sched_class->task_change_group(tsk);
 	else
 #endif
 		set_task_rq(tsk, task_cpu(tsk));
@@ -10161,7 +10160,7 @@ void sched_move_task(struct task_struct *tsk)
 	if (running)
 		put_prev_task(rq, tsk);
 
-	sched_change_group(tsk, TASK_MOVE_GROUP);
+	sched_change_group(tsk);
 
 	if (queued)
 		enqueue_task(rq, tsk, queue_flags);
@@ -10239,23 +10238,6 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }
 
-/*
- * This is called before wake_up_new_task(), therefore we really only
- * have to set its group bits, all the other stuff does not apply.
- */
-static void cpu_cgroup_fork(struct task_struct *task)
-{
-	struct rq_flags rf;
-	struct rq *rq;
-
-	rq = task_rq_lock(task, &rf);
-
-	update_rq_clock(rq);
-	sched_change_group(task, TASK_SET_GROUP);
-
-	task_rq_unlock(rq, task, &rf);
-}
-
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
@@ -11121,7 +11103,6 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show	= cpu_extra_stat_show,
-	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a3b0f8b1029e..2c0eb2a4e341 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11657,15 +11657,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static void task_set_group_fair(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	set_task_rq(p, task_cpu(p));
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-}
-
-static void task_move_group_fair(struct task_struct *p)
+static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
@@ -11677,19 +11669,6 @@ static void task_move_group_fair(struct task_struct *p)
 	attach_task_cfs_rq(p);
 }
 
-static void task_change_group_fair(struct task_struct *p, int type)
-{
-	switch (type) {
-	case TASK_SET_GROUP:
-		task_set_group_fair(p);
-		break;
-
-	case TASK_MOVE_GROUP:
-		task_move_group_fair(p);
-		break;
-	}
-}
-
 void free_fair_sched_group(struct task_group *tg)
 {
 	int i;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4c4822141026..74130a69d365 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2193,11 +2193,8 @@ struct sched_class {
 
 	void (*update_curr)(struct rq *rq);
 
-#define TASK_SET_GROUP		0
-#define TASK_MOVE_GROUP		1
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_change_group)(struct task_struct *p, int type);
+	void (*task_change_group)(struct task_struct *p);
 #endif
 };
 
-- 
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 3/9] sched/fair: reset sched_avg last_update_time before set_task_rq()
Date: Mon, 8 Aug 2022 20:57:39 +0800
Message-Id: <20220808125745.22566-4-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
References: <20220808125745.22566-1-zhouchengming@bytedance.com>

set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
task's sched_avg when the task migrates, which is not needed for an
already detached task.

task_change_group_fair() detaches the task's sched_avg from the previous
cfs_rq first, so reset sched_avg last_update_time before set_task_rq()
to avoid that.

Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2c0eb2a4e341..e4c0929a6e71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11660,12 +11660,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }
 
-- 
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 4/9] sched/fair: update comments in enqueue/dequeue_entity()
Date: Mon, 8 Aug 2022 20:57:40 +0800
Message-Id: <20220808125745.22566-5-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
References: <20220808125745.22566-1-zhouchengming@bytedance.com>

When reading the sched_avg related code, I found the comments in
enqueue/dequeue_entity() are out of date with the current code: we don't
add/subtract the entity's runnable_avg from cfs_rq->runnable_avg during
enqueue/dequeue_entity(); that is done only on attach/detach.

This patch updates the comments to reflect how the current code works.
Signed-off-by: Chengming Zhou
Acked-by: Vincent Guittot
---
 kernel/sched/fair.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4c0929a6e71..52de8302b336 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4434,7 +4434,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Add its load to cfs_rq->runnable_avg
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -4519,7 +4520,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Subtract its load from the cfs_rq->runnable_avg.
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
-- 
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 5/9] sched/fair: combine detach into dequeue when migrating task
Date: Mon, 8 Aug 2022 20:57:41 +0800
Message-Id: <20220808125745.22566-6-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
References: <20220808125745.22566-1-zhouchengming@bytedance.com>

When migrating a task off a CPU, we can combine the detach and the
propagation into dequeue_entity(), saving the detach_entity_cfs_rq() call
in migrate_task_rq_fair(). This optimization mirrors the DO_ATTACH case
in enqueue_entity() when migrating a task onto the CPU.
So we don't have to traverse the CFS tree an extra time to do the
detach_entity_cfs_rq() -> propagate_entity_cfs_rq() walk, which would no
longer be called after this patch's change.

detach_task()
  deactivate_task()
    dequeue_task_fair()
      for_each_sched_entity(se)
        dequeue_entity()
          update_load_avg()	/* (1) */
            detach_entity_load_avg()

  set_task_cpu()
    migrate_task_rq_fair()
      detach_entity_cfs_rq()	/* (2) */
        update_load_avg();
        detach_entity_load_avg();
        propagate_entity_cfs_rq();
          for_each_sched_entity()
            update_load_avg()

This patch saves the detach_entity_cfs_rq() call in (2) by doing the
detach_entity_load_avg() for a CPU-migrating task inside (1) (the task
being the first se in the loop).

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 52de8302b336..f52e7dc7f22d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4003,6 +4003,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 #define UPDATE_TG	0x1
 #define SKIP_AGE_LOAD	0x2
 #define DO_ATTACH	0x4
+#define DO_DETACH	0x8
 
 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -4032,6 +4033,13 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 		attach_entity_load_avg(cfs_rq, se);
 		update_tg_load_avg(cfs_rq);
 
+	} else if (flags & DO_DETACH) {
+		/*
+		 * DO_DETACH means we're here from dequeue_entity()
+		 * and we are migrating task out of the CPU.
+		 */
+		detach_entity_load_avg(cfs_rq, se);
+		update_tg_load_avg(cfs_rq);
 	} else if (decayed) {
 		cfs_rq_util_change(cfs_rq, 0);
 
@@ -4292,6 +4300,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 #define UPDATE_TG	0x0
 #define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
+#define DO_DETACH	0x0
 
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
@@ -4512,6 +4521,11 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+	int action = UPDATE_TG;
+
+	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
+		action |= DO_DETACH;
+
 	/*
 	 * Update run-time statistics of the 'current'.
 	 */
@@ -4526,7 +4540,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * - For group entity, update its weight to reflect the new share
 	 *   of its group cfs_rq.
 	 */
-	update_load_avg(cfs_rq, se, UPDATE_TG);
+	update_load_avg(cfs_rq, se, action);
 	se_update_runnable(se);
 
 	update_stats_dequeue_fair(cfs_rq, se, flags);
@@ -7078,8 +7092,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	return new_cpu;
 }
 
-static void detach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * Called immediately before a task is migrated to a new CPU; task_cpu(p) and
  * cfs_rq_of(p) references at time of call are still valid and identify the
@@ -7101,15 +7113,7 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
 	}
 
-	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
-		/*
-		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
-		 * rq->lock and can modify state directly.
-		 */
-		lockdep_assert_rq_held(task_rq(p));
-		detach_entity_cfs_rq(se);
-
-	} else {
+	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);
 
 		/*
-- 
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 6/9] sched/fair: fix another detach on unattached task corner case
Date: Mon, 8 Aug 2022 20:57:42 +0800
Message-Id: <20220808125745.22566-7-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>

commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new tasks") fixed
two load-tracking problems for new tasks, including the detach on an
unattached new task problem.
There is still another detach on an unattached task problem left, for a
task which has been woken up by try_to_wake_up() and is waiting to
actually be woken up by sched_ttwu_pending():

  try_to_wake_up(p)
    cpu = select_task_rq(p)
    if (task_cpu(p) != cpu)
      set_task_cpu(p, cpu)
        migrate_task_rq_fair()
          remove_entity_load_avg()       --> unattached
          se->avg.last_update_time = 0;
        __set_task_cpu()
    ttwu_queue(p, cpu)
      ttwu_queue_wakelist()
        __ttwu_queue_wakelist()

  task_change_group_fair()
    detach_task_cfs_rq()
      detach_entity_cfs_rq()
        detach_entity_load_avg()         --> detach on unattached task
    set_task_rq()
    attach_task_cfs_rq()
      attach_entity_cfs_rq()
        attach_entity_load_avg()

The cause is similar: detach_entity_cfs_rq() should check that
se->avg.last_update_time != 0 before calling detach_entity_load_avg().

This patch also moves the detach/attach_entity_cfs_rq() functions up,
next to the other load-tracking functions, to avoid adding another
#ifdef CONFIG_SMP.

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 132 +++++++++++++++++++++++---------------------
 1 file changed, 68 insertions(+), 64 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f52e7dc7f22d..4bc76d95a99d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -874,9 +874,6 @@ void init_entity_runnable_average(struct sched_entity *se)
 void post_init_entity_util_avg(struct task_struct *p)
 {
 }
-static void update_tg_load_avg(struct cfs_rq *cfs_rq)
-{
-}
 #endif /* CONFIG_SMP */

 /*
@@ -3176,6 +3173,7 @@ void reweight_task(struct task_struct *p, int prio)
 	load->inv_weight = sched_prio_to_wmult[prio];
 }

+static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq);
 static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);

 #ifdef CONFIG_FAIR_GROUP_SCHED
@@ -4086,6 +4084,71 @@ static void remove_entity_load_avg(struct sched_entity *se)
 	raw_spin_unlock_irqrestore(&cfs_rq->removed.lock, flags);
 }

+#ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * Propagate the changes of
the sched_entity across the tg tree to make it
+ * visible to the root
+ */
+static void propagate_entity_cfs_rq(struct sched_entity *se)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+	if (cfs_rq_throttled(cfs_rq))
+		return;
+
+	if (!throttled_hierarchy(cfs_rq))
+		list_add_leaf_cfs_rq(cfs_rq);
+
+	/* Start to propagate at parent */
+	se = se->parent;
+
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+
+		update_load_avg(cfs_rq, se, UPDATE_TG);
+
+		if (cfs_rq_throttled(cfs_rq))
+			break;
+
+		if (!throttled_hierarchy(cfs_rq))
+			list_add_leaf_cfs_rq(cfs_rq);
+	}
+}
+#else
+static void propagate_entity_cfs_rq(struct sched_entity *se) { }
+#endif
+
+static void detach_entity_cfs_rq(struct sched_entity *se)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+	/*
+	 * In case the task sched_avg hasn't been attached:
+	 * - A forked task which hasn't been woken up by wake_up_new_task().
+	 * - A task which has been woken up by try_to_wake_up() but is
+	 *   waiting for actually being woken up by sched_ttwu_pending().
+	 */
+	if (!se->avg.last_update_time)
+		return;
+
+	/* Catch up with the cfs_rq and remove our load when we leave */
+	update_load_avg(cfs_rq, se, 0);
+	detach_entity_load_avg(cfs_rq, se);
+	update_tg_load_avg(cfs_rq);
+	propagate_entity_cfs_rq(se);
+}
+
+static void attach_entity_cfs_rq(struct sched_entity *se)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+	/* Synchronize entity with its cfs_rq */
+	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ?
0 : SKIP_AGE_LOAD);
+	attach_entity_load_avg(cfs_rq, se);
+	update_tg_load_avg(cfs_rq);
+	propagate_entity_cfs_rq(se);
+}
+
 static inline unsigned long cfs_rq_runnable_avg(struct cfs_rq *cfs_rq)
 {
 	return cfs_rq->avg.runnable_avg;
@@ -4308,11 +4371,8 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 }

 static inline void remove_entity_load_avg(struct sched_entity *se) {}
-
-static inline void
-attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline void
-detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+static inline void detach_entity_cfs_rq(struct sched_entity *se) {}
+static inline void attach_entity_cfs_rq(struct sched_entity *se) {}

 static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
@@ -11519,62 +11579,6 @@ static inline bool vruntime_normalized(struct task_struct *p)
 	return false;
 }

-#ifdef CONFIG_FAIR_GROUP_SCHED
-/*
- * Propagate the changes of the sched_entity across the tg tree to make it
- * visible to the root
- */
-static void propagate_entity_cfs_rq(struct sched_entity *se)
-{
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	if (cfs_rq_throttled(cfs_rq))
-		return;
-
-	if (!throttled_hierarchy(cfs_rq))
-		list_add_leaf_cfs_rq(cfs_rq);
-
-	/* Start to propagate at parent */
-	se = se->parent;
-
-	for_each_sched_entity(se) {
-		cfs_rq = cfs_rq_of(se);
-
-		update_load_avg(cfs_rq, se, UPDATE_TG);
-
-		if (cfs_rq_throttled(cfs_rq))
-			break;
-
-		if (!throttled_hierarchy(cfs_rq))
-			list_add_leaf_cfs_rq(cfs_rq);
-	}
-}
-#else
-static void propagate_entity_cfs_rq(struct sched_entity *se) { }
-#endif
-
-static void detach_entity_cfs_rq(struct sched_entity *se)
-{
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_load_avg(cfs_rq, se, 0);
-	detach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
-	propagate_entity_cfs_rq(se);
-}
-
-static void
attach_entity_cfs_rq(struct sched_entity *se)
-{
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	/* Synchronize entity with its cfs_rq */
-	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
-	attach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
-	propagate_entity_cfs_rq(se);
-}
-
 static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
--
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 7/9] sched/fair: allow changing cgroup of new forked task
Date: Mon, 8 Aug 2022 20:57:43 +0800
Message-Id: <20220808125745.22566-8-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>
commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new tasks")
introduced the TASK_NEW state and an unnecessary limitation that makes
changing the cgroup of a newly forked task fail. At the time,
task_change_group_fair() could not handle a newly forked fair task that
had not yet been woken up by wake_up_new_task(), which would cause a
detach on the task's still-unattached sched_avg.

This patch deletes that unnecessary limitation by checking, in
task_change_group_fair(), whether a detach/attach is needed at all. As a
result, cpu_cgrp_subsys.can_attach() has nothing left to do for fair
tasks, so it is only defined under #ifdef CONFIG_RT_GROUP_SCHED.

Signed-off-by: Chengming Zhou
Reported-by: kernel test robot
---
 include/linux/sched.h |  5 ++---
 kernel/sched/core.c   | 30 +++++++-----------------------
 kernel/sched/fair.c   |  7 +++++++
 3 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 88b8817b827d..b504e55bbf7a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -95,10 +95,9 @@ struct task_group;
 #define TASK_WAKEKILL		0x0100
 #define TASK_WAKING		0x0200
 #define TASK_NOLOAD		0x0400
-#define TASK_NEW		0x0800
 /* RT specific auxilliary flag to mark RT lock waiters */
-#define TASK_RTLOCK_WAIT	0x1000
-#define TASK_STATE_MAX		0x2000
+#define TASK_RTLOCK_WAIT	0x0800
+#define TASK_STATE_MAX		0x1000

 /* Convenience macros for the sake of set_current_state: */
 #define TASK_KILLABLE		(TASK_WAKEKILL | TASK_UNINTERRUPTIBLE)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e74e79f783af..d5faa1700bd7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4500,11 +4500,11 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 {
 	__sched_fork(clone_flags, p);
 	/*
-	 * We mark the process as NEW here. This guarantees that
+	 * We mark the process as running here.
+	 * This guarantees that
 	 * nobody will actually run it, and a signal or other external
 	 * event cannot wake it up and insert it on the runqueue either.
 	 */
-	p->__state = TASK_NEW;
+	p->__state = TASK_RUNNING;

 	/*
 	 * Make sure we do not leak PI boosting priority to the child.
@@ -4622,7 +4622,6 @@ void wake_up_new_task(struct task_struct *p)
 	struct rq *rq;

 	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-	WRITE_ONCE(p->__state, TASK_RUNNING);
 #ifdef CONFIG_SMP
 	/*
 	 * Fork balancing, do it here and not earlier because:
@@ -10238,36 +10237,19 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }

+#ifdef CONFIG_RT_GROUP_SCHED
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
 	struct cgroup_subsys_state *css;
-	int ret = 0;

 	cgroup_taskset_for_each(task, css, tset) {
-#ifdef CONFIG_RT_GROUP_SCHED
 		if (!sched_rt_can_attach(css_tg(css), task))
 			return -EINVAL;
-#endif
-		/*
-		 * Serialize against wake_up_new_task() such that if it's
-		 * running, we're sure to observe its full state.
-		 */
-		raw_spin_lock_irq(&task->pi_lock);
-		/*
-		 * Avoid calling sched_move_task() before wake_up_new_task()
-		 * has happened. This would lead to problems with PELT, due to
-		 * move wanting to detach+attach while we're not attached yet.
-		 */
-		if (READ_ONCE(task->__state) == TASK_NEW)
-			ret = -EINVAL;
-		raw_spin_unlock_irq(&task->pi_lock);
-
-		if (ret)
-			break;
 	}
-	return ret;
+	return 0;
 }
+#endif

 static void cpu_cgroup_attach(struct cgroup_taskset *tset)
 {
@@ -11103,7 +11085,9 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show	= cpu_extra_stat_show,
+#ifdef CONFIG_RT_GROUP_SCHED
 	.can_attach	= cpu_cgroup_can_attach,
+#endif
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,
 	.dfl_cftypes	= cpu_files,

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4bc76d95a99d..90aba33a3780 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11669,6 +11669,13 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_change_group_fair(struct task_struct *p)
 {
+	/*
+	 * We couldn't detach or attach a forked task which
+	 * hasn't been woken up by wake_up_new_task().
+	 */
+	if (!p->on_rq && !p->se.sum_exec_runtime)
+		return;
+
 	detach_task_cfs_rq(p);

 #ifdef CONFIG_SMP
--
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 8/9] sched/fair: defer task sched_avg attach to enqueue_entity()
Date: Mon, 8 Aug 2022 20:57:44 +0800
Message-Id: <20220808125745.22566-9-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>

In wake_up_new_task(), post_init_entity_util_avg() is used to initialize
util_avg/runnable_avg based on the CPU's util_avg at that time, after
which the task's sched_avg is attached to the cfs_rq.
Since enqueue_entity() always attaches any unattached task entity, this
work can be deferred to enqueue_entity():

  post_init_entity_util_avg(p)
    attach_entity_cfs_rq()  --> (1)
  activate_task(rq, p)
    enqueue_task() := enqueue_task_fair()
      enqueue_entity()
        update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
          if (!se->avg.last_update_time && (flags & DO_ATTACH))
            attach_entity_load_avg()  --> (2)

This patch defers the attach from (1) to (2).

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 90aba33a3780..2063e30b2a8f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -799,8 +799,6 @@ void init_entity_runnable_average(struct sched_entity *se)
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }

-static void attach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * With new tasks being created, their initial util_avgs are extrapolated
  * based on the cfs_rq's current util_avg:
@@ -863,8 +861,6 @@ void post_init_entity_util_avg(struct task_struct *p)
 		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
 		return;
 	}
-
-	attach_entity_cfs_rq(se);
 }

 #else /* !CONFIG_SMP */
--
2.36.1

From nobody Sat Apr 11 18:38:00 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v4 9/9] sched/fair: don't init util/runnable_avg for !fair task
Date: Mon, 8 Aug 2022 20:57:45 +0800
Message-Id: <20220808125745.22566-10-zhouchengming@bytedance.com>
In-Reply-To: <20220808125745.22566-1-zhouchengming@bytedance.com>

post_init_entity_util_avg() initializes a task's util_avg according to
the CPU's util_avg at fork time, but that value will only decay again
when switched_to_fair() runs some time later. Better not to set it at
all for a !fair task.
Suggested-by: Vincent Guittot
Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2063e30b2a8f..082174cb0e47 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -833,20 +833,6 @@ void post_init_entity_util_avg(struct task_struct *p)
 	long cpu_scale = arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq)));
 	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

-	if (cap > 0) {
-		if (cfs_rq->avg.util_avg != 0) {
-			sa->util_avg = cfs_rq->avg.util_avg * se->load.weight;
-			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
-
-			if (sa->util_avg > cap)
-				sa->util_avg = cap;
-		} else {
-			sa->util_avg = cap;
-		}
-	}
-
-	sa->runnable_avg = sa->util_avg;
-
 	if (p->sched_class != &fair_sched_class) {
 		/*
 		 * For !fair tasks do:
@@ -861,6 +847,20 @@ void post_init_entity_util_avg(struct task_struct *p)
 		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
 		return;
 	}
+
+	if (cap > 0) {
+		if (cfs_rq->avg.util_avg != 0) {
+			sa->util_avg = cfs_rq->avg.util_avg * se->load.weight;
+			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
+
+			if (sa->util_avg > cap)
+				sa->util_avg = cap;
+		} else {
+			sa->util_avg = cap;
+		}
+	}
+
+	sa->runnable_avg = sa->util_avg;
 }

 #else /* !CONFIG_SMP */
--
2.36.1