From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH 1/8] sched/fair: combine detach into dequeue when migrating task
Date: Sat, 9 Jul 2022 23:13:46 +0800
Message-Id: <20220709151353.32883-2-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>
References: <20220709151353.32883-1-zhouchengming@bytedance.com>

When we are migrating a task off a CPU, we can combine the detach and
the propagation into dequeue_entity(), saving the detach_entity_cfs_rq()
call in migrate_task_rq_fair(). This optimization mirrors DO_ATTACH in
enqueue_entity() when migrating a task onto a CPU.
So we don't have to traverse the CFS tree one extra time for the
detach_entity_cfs_rq() -> propagate_entity_cfs_rq() call, which is no
longer needed with this patch's change.

detach_task()
  deactivate_task()
    dequeue_task_fair()
      for_each_sched_entity(se)
        dequeue_entity()
          update_load_avg()		/* (1) */
            detach_entity_load_avg()

  set_task_cpu()
    migrate_task_rq_fair()
      detach_entity_cfs_rq()		/* (2) */
        update_load_avg();
        detach_entity_load_avg();
        propagate_entity_cfs_rq();
          for_each_sched_entity()
            update_load_avg()

This patch saves the detach_entity_cfs_rq() call in (2) by doing the
detach_entity_load_avg() for the CPU-migrating task inside (1), the
task being the first se in the loop.

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a78d2e3b9d49..0689b94ed70b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4003,6 +4003,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 #define UPDATE_TG	0x1
 #define SKIP_AGE_LOAD	0x2
 #define DO_ATTACH	0x4
+#define DO_DETACH	0x8
 
 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -4020,7 +4021,14 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	decayed  = update_cfs_rq_load_avg(now, cfs_rq);
 	decayed |= propagate_entity_load_avg(se);
 
-	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
+	if (flags & DO_DETACH) {
+		/*
+		 * DO_DETACH means we're here from dequeue_entity()
+		 * and we are migrating task out of the CPU.
+		 */
+		detach_entity_load_avg(cfs_rq, se);
+		update_tg_load_avg(cfs_rq);
+	} else if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
 
 		/*
 		 * DO_ATTACH means we're here from enqueue_entity().
@@ -4292,6 +4300,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 #define UPDATE_TG	0x0
 #define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
+#define DO_DETACH	0x0
 
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
@@ -4511,6 +4520,11 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+	int action = UPDATE_TG;
+
+	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
+		action |= DO_DETACH;
+
 	/*
 	 * Update run-time statistics of the 'current'.
 	 */
@@ -4524,7 +4538,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * - For group entity, update its weight to reflect the new share
 	 *   of its group cfs_rq.
 	 */
-	update_load_avg(cfs_rq, se, UPDATE_TG);
+	update_load_avg(cfs_rq, se, action);
 	se_update_runnable(se);
 
 	update_stats_dequeue_fair(cfs_rq, se, flags);
@@ -7076,8 +7090,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	return new_cpu;
 }
 
-static void detach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * Called immediately before a task is migrated to a new CPU; task_cpu(p) and
  * cfs_rq_of(p) references at time of call are still valid and identify the
@@ -7099,15 +7111,7 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
 	}
 
-	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
-		/*
-		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
-		 * rq->lock and can modify state directly.
-		 */
-		lockdep_assert_rq_held(task_rq(p));
-		detach_entity_cfs_rq(se);
-
-	} else {
+	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);
 
 		/*
-- 
2.36.1

From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH 2/8] sched/fair: update comments in enqueue/dequeue_entity()
Date: Sat, 9 Jul 2022 23:13:47 +0800
Message-Id: <20220709151353.32883-3-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>
References: <20220709151353.32883-1-zhouchengming@bytedance.com>

While reading the sched_avg related code, I found that the comments in
enqueue/dequeue_entity() have not been kept up to date with the current
code.
We don't add/subtract the entity's runnable_avg from cfs_rq->runnable_avg
during enqueue/dequeue_entity(); that is done only on attach/detach.

This patch updates the comments to reflect how the current code works.

Signed-off-by: Chengming Zhou
Acked-by: Vincent Guittot
---
 kernel/sched/fair.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0689b94ed70b..2a3e12ead144 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4443,7 +4443,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Add its load to cfs_rq->runnable_avg
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -4533,7 +4534,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Subtract its load from the cfs_rq->runnable_avg.
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
-- 
2.36.1

From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH 3/8] sched/fair: remove redundant cpu_cgrp_subsys->fork()
Date: Sat, 9 Jul 2022 23:13:48 +0800
Message-Id: <20220709151353.32883-4-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>
References: <20220709151353.32883-1-zhouchengming@bytedance.com>

We use cpu_cgrp_subsys->fork() to set the task group for a new fair
task in cgroup_post_fork(). Since commit b1e8206582f9 ("sched: Fix yet
more sched_fork() races") already sets the task group for the new fair
task in sched_cgroup_fork(), cpu_cgrp_subsys->fork() can be removed.
  cgroup_can_fork()		--> pin parent's sched_task_group
  sched_cgroup_fork()
    __set_task_cpu()		--> set task group
  cgroup_post_fork()
    ss->fork() := cpu_cgroup_fork()	--> set again

After this change, task_change_group_fair() only needs to care about
task cgroup migration, which makes the code much simpler.

This patch also moves the task se depth setting into set_task_rq(),
which sets the correct depth for the new task se in sched_cgroup_fork().
The se depth setting in attach_entity_cfs_rq() is removed, since
set_task_rq() is a better place to do this when a task moves across
CPUs/groups.

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
---
 kernel/sched/core.c  | 27 ++++-----------------------
 kernel/sched/fair.c  | 31 +------------------------------
 kernel/sched/sched.h |  6 ++----
 3 files changed, 7 insertions(+), 57 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c215b5adc707..d85fdea51e3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -481,8 +481,7 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
  *				p->se.load, p->rt_priority,
  *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
  *  - sched_setnuma():		p->numa_preferred_nid
- *  - sched_move_task()/
- *    cpu_cgroup_fork():	p->sched_task_group
+ *  - sched_move_task():	p->sched_task_group
  *  - uclamp_update_active()	p->uclamp*
  *
  * p->state <- TASK_*:
@@ -10127,7 +10126,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static void sched_change_group(struct task_struct *tsk, int type)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -10143,7 +10142,7 @@ static void sched_change_group(struct task_struct *tsk, int type)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
-		tsk->sched_class->task_change_group(tsk, type);
+		tsk->sched_class->task_change_group(tsk);
 	else
 #endif
 		set_task_rq(tsk, task_cpu(tsk));
@@ -10174,7 +10173,7 @@ void sched_move_task(struct task_struct *tsk)
 	if (running)
 		put_prev_task(rq, tsk);
 
-	sched_change_group(tsk, TASK_MOVE_GROUP);
+	sched_change_group(tsk);
 
 	if (queued)
 		enqueue_task(rq, tsk, queue_flags);
@@ -10252,23 +10251,6 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }
 
-/*
- * This is called before wake_up_new_task(), therefore we really only
- * have to set its group bits, all the other stuff does not apply.
- */
-static void cpu_cgroup_fork(struct task_struct *task)
-{
-	struct rq_flags rf;
-	struct rq *rq;
-
-	rq = task_rq_lock(task, &rf);
-
-	update_rq_clock(rq);
-	sched_change_group(task, TASK_SET_GROUP);
-
-	task_rq_unlock(rq, task, &rf);
-}
-
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
@@ -11134,7 +11116,6 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show = cpu_extra_stat_show,
-	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2a3e12ead144..8992ce5e73d2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11539,14 +11539,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	/*
-	 * Since the real-depth could have been changed (only FAIR
-	 * class maintain depth value), reset depth properly.
-	 */
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-#endif
-
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
@@ -11642,15 +11634,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static void task_set_group_fair(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	set_task_rq(p, task_cpu(p));
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-}
-
-static void task_move_group_fair(struct task_struct *p)
+static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
@@ -11662,19 +11646,6 @@ static void task_move_group_fair(struct task_struct *p)
 	attach_task_cfs_rq(p);
 }
 
-static void task_change_group_fair(struct task_struct *p, int type)
-{
-	switch (type) {
-	case TASK_SET_GROUP:
-		task_set_group_fair(p);
-		break;
-
-	case TASK_MOVE_GROUP:
-		task_move_group_fair(p);
-		break;
-	}
-}
-
 void free_fair_sched_group(struct task_group *tg)
 {
 	int i;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index aad7f5ee9666..19e0076e4245 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1940,6 +1940,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
+	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
@@ -2202,11 +2203,8 @@ struct sched_class {
 
 	void (*update_curr)(struct rq *rq);
 
-#define TASK_SET_GROUP		0
-#define TASK_MOVE_GROUP		1
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_change_group)(struct task_struct *p, int type);
+	void (*task_change_group)(struct task_struct *p);
 #endif
 };
 
-- 
2.36.1

From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH 4/8] sched/fair: reset sched_avg last_update_time before set_task_rq()
Date: Sat, 9 Jul 2022 23:13:49 +0800
Message-Id: <20220709151353.32883-5-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>
References: <20220709151353.32883-1-zhouchengming@bytedance.com>
set_task_rq() -> set_task_rq_fair() tries to synchronize a blocked
task's sched_avg when the task migrates, which is not needed for an
already detached task.

task_change_group_fair() detaches the task's sched_avg from the previous
cfs_rq first, so reset sched_avg last_update_time before set_task_rq()
to avoid that.

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8992ce5e73d2..171bc22bc142 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11637,12 +11637,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }
 
-- 
2.36.1

From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH 5/8] sched/fair: fix load tracking for new forked !fair task
Date: Sat, 9 Jul 2022 23:13:50 +0800
Message-Id: <20220709151353.32883-6-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>
References: <20220709151353.32883-1-zhouchengming@bytedance.com>

A newly forked !fair task sets its sched_avg last_update_time to the
pelt clock of the cfs_rq; then, some time later in switched_to_fair():

switched_to_fair
  attach_task_cfs_rq
    attach_entity_cfs_rq
      update_load_avg
        __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) wrongly contributes to or decays
the sched_avg, depending on whether the task was running/runnable at
that time.

This patch doesn't set the sched_avg last_update_time of a new forked
!fair task, leaving it at 0. So later, in update_load_avg(), we don't
contribute/decay the wrong delta (now - sa->last_update_time).

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 171bc22bc142..153a2c6c1069 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -849,22 +849,8 @@ void post_init_entity_util_avg(struct task_struct *p)
 
 	sa->runnable_avg = sa->util_avg;
 
-	if (p->sched_class != &fair_sched_class) {
-		/*
-		 * For !fair tasks do:
-		 *
-		update_cfs_rq_load_avg(now, cfs_rq);
-		attach_entity_load_avg(cfs_rq, se);
-		switched_from_fair(rq, p);
-		 *
-		 * such that the next switched_to_fair() has the
-		 * expected state.
-		 */
-		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
-		return;
-	}
-
-	attach_entity_cfs_rq(se);
+	if (p->sched_class == &fair_sched_class)
+		attach_entity_cfs_rq(se);
 }
 
 #else /* !CONFIG_SMP */
-- 
2.36.1

From nobody Sat Apr 18 22:41:42 2026
From: Chengming Zhou <zhouchengming@bytedance.com>
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH 6/8] sched/fair: stop load tracking when task switched_from_fair()
Date: Sat, 9 Jul 2022 23:13:51 +0800
Message-Id: <20220709151353.32883-7-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>

For the same reason as the previous commit: if we don't reset the
sched_avg last_update_time to 0, then after a while, in
switched_to_fair():

  switched_to_fair
    attach_task_cfs_rq
      attach_entity_cfs_rq
        update_load_avg
          __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) will wrongly contribute to or
decay the sched_avg, depending on the task's running/runnable status
over that period.

This patch resets the sched_avg last_update_time to 0 in
switched_from_fair(), stopping load tracking for the !fair task, so
that the later switched_to_fair() -> update_load_avg() can use the
saved sched_avg as-is.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 153a2c6c1069..ca714eedeec5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11563,6 +11563,11 @@ static void attach_task_cfs_rq(struct task_struct *p)
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
+
+#ifdef CONFIG_SMP
+	/* Stop load tracking for !fair task */
+	p->se.avg.last_update_time = 0;
+#endif
 }
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
-- 
2.36.1
From: Chengming Zhou <zhouchengming@bytedance.com>
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH 7/8] sched/fair: delete superfluous set_task_rq_fair()
Date: Sat, 9 Jul 2022 23:13:52 +0800
Message-Id: <20220709151353.32883-8-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>

set_task_rq() is called when moving a task across CPUs/groups to change
its cfs_rq and parent entity, and it calls set_task_rq_fair() to sync
the blocked task's load_avg just before changing its cfs_rq. None of
those cases still need that sync:

1. Task migrating to another CPU: it is detached/removed from the prev
   cfs_rq and its sched_avg last_update_time is reset to 0, so there is
   nothing left to sync.

2. Task migrating to another cgroup: it is detached from the prev
   cfs_rq and its sched_avg last_update_time is reset to 0, likewise.

3. !fair task migrating across CPUs/cgroups: load tracking is stopped
   for !fair tasks, with sched_avg last_update_time reset to 0 in
   switched_from_fair(), so the sync is not needed either.

So set_task_rq_fair() is no longer needed; this patch deletes it.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c  | 31 -------------------------------
 kernel/sched/sched.h |  8 --------
 2 files changed, 39 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca714eedeec5..b0bde895ba96 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3430,37 +3430,6 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
 	}
 }
 
-/*
- * Called within set_task_rq() right before setting a task's CPU. The
- * caller only guarantees p->pi_lock is held; no other assumptions,
- * including the state of rq->lock, should be made.
- */
-void set_task_rq_fair(struct sched_entity *se,
-		      struct cfs_rq *prev, struct cfs_rq *next)
-{
-	u64 p_last_update_time;
-	u64 n_last_update_time;
-
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		return;
-
-	/*
-	 * We are supposed to update the task to "current" time, then its up to
-	 * date and ready to go to new CPU/cfs_rq. But we have difficulty in
-	 * getting what current time is, so simply throw away the out-of-date
-	 * time. This will result in the wakee task is less decayed, but giving
-	 * the wakee more load sounds not bad.
-	 */
-	if (!(se->avg.last_update_time && prev))
-		return;
-
-	p_last_update_time = cfs_rq_last_update_time(prev);
-	n_last_update_time = cfs_rq_last_update_time(next);
-
-	__update_load_avg_blocked_se(p_last_update_time, se);
-	se->avg.last_update_time = n_last_update_time;
-}
-
 /*
  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
  * propagate its contribution. The key to this propagation is the invariant
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 19e0076e4245..a8ec7af4bd51 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -505,13 +505,6 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
 
 extern int sched_group_set_idle(struct task_group *tg, long idle);
 
-#ifdef CONFIG_SMP
-extern void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next);
-#else /* !CONFIG_SMP */
-static inline void set_task_rq_fair(struct sched_entity *se,
-				    struct cfs_rq *prev, struct cfs_rq *next) { }
-#endif /* CONFIG_SMP */
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 #else /* CONFIG_CGROUP_SCHED */
@@ -1937,7 +1930,6 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
 	p->se.depth = tg->se[cpu] ?
tg->se[cpu]->depth + 1 : 0;
-- 
2.36.1

From: Chengming Zhou <zhouchengming@bytedance.com>
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH 8/8] sched/fair: delete superfluous SKIP_AGE_LOAD
Date: Sat, 9 Jul 2022 23:13:53 +0800
Message-Id: <20220709151353.32883-9-zhouchengming@bytedance.com>
In-Reply-To: <20220709151353.32883-1-zhouchengming@bytedance.com>

All three attach_entity_cfs_rq() cases:

1. task migrating to another CPU
2. task migrating to another cgroup
3. task switching to fair

reach it with the sched_avg last_update_time already reset to 0 by the
time attach_entity_cfs_rq() -> update_load_avg() runs, so it makes no
difference whether SKIP_AGE_LOAD is set or not.

This patch deletes the superfluous SKIP_AGE_LOAD, together with the now
unused feature ATTACH_AGE_LOAD.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c     | 18 ++++++------------
 kernel/sched/features.h |  1 -
 2 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b0bde895ba96..b91643a2143e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3956,9 +3956,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  * Optional action to be done while updating the load average
  */
 #define UPDATE_TG	0x1
-#define SKIP_AGE_LOAD	0x2
-#define DO_ATTACH	0x4
-#define DO_DETACH	0x8
+#define DO_ATTACH	0x2
+#define DO_DETACH	0x4
 
 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -3970,7 +3969,7 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+	if (se->avg.last_update_time)
 		__update_load_avg_se(now, cfs_rq, se);
 
 	decayed  = update_cfs_rq_load_avg(now, cfs_rq);
@@ -4253,7 +4252,6 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 }
 
 #define UPDATE_TG	0x0
-#define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
 #define DO_DETACH	0x0
 
@@ -11484,9 +11482,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_load_avg(cfs_rq, se, 0);
-	detach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
+	update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
 	propagate_entity_cfs_rq(se);
 }
 
@@ -11494,10 +11490,8 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-	/* Synchronize entity with its cfs_rq */
-	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
-	attach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
+	/* Synchronize entity with its cfs_rq and attach our load */
+	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
 	propagate_entity_cfs_rq(se);
 }
 
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..fb92431d496f 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -85,7 +85,6 @@ SCHED_FEAT(RT_PUSH_IPI, true)
 
 SCHED_FEAT(RT_RUNTIME_SHARE, false)
 SCHED_FEAT(LB_MIN, false)
-SCHED_FEAT(ATTACH_AGE_LOAD, true)
 
 SCHED_FEAT(WA_IDLE, true)
 SCHED_FEAT(WA_WEIGHT, true)
-- 
2.36.1