From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v2 01/10] sched/fair: combine detach into dequeue when migrating task
Date: Wed, 13 Jul 2022 12:04:21 +0800
Message-Id: <20220713040430.25778-2-zhouchengming@bytedance.com>
In-Reply-To: <20220713040430.25778-1-zhouchengming@bytedance.com>

When we migrate a task off a CPU, we can combine the detach and the load
propagation into dequeue_entity(), saving the detach_entity_cfs_rq() call
in migrate_task_rq_fair(). This optimization mirrors the DO_ATTACH
handling combined into enqueue_entity() when migrating a task onto a CPU.
This way we avoid traversing the CFS tree an extra time for the
detach_entity_cfs_rq() -> propagate_entity_cfs_rq() call chain, which is
no longer needed after this change.

  detach_task()
    deactivate_task()
      dequeue_task_fair()
        for_each_sched_entity(se)
          dequeue_entity()
            update_load_avg()            /* (1) */
              detach_entity_load_avg()
    set_task_cpu()
      migrate_task_rq_fair()
        detach_entity_cfs_rq()           /* (2) */
          update_load_avg();
          detach_entity_load_avg();
          propagate_entity_cfs_rq();
            for_each_sched_entity()
              update_load_avg()

This patch saves the detach_entity_cfs_rq() call in (2) by doing the
detach_entity_load_avg() for a CPU-migrating task inside (1) (the task
being the first se in the loop).

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a78d2e3b9d49..0689b94ed70b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4003,6 +4003,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 #define UPDATE_TG	0x1
 #define SKIP_AGE_LOAD	0x2
 #define DO_ATTACH	0x4
+#define DO_DETACH	0x8

 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -4020,7 +4021,14 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	decayed  = update_cfs_rq_load_avg(now, cfs_rq);
 	decayed |= propagate_entity_load_avg(se);

-	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
+	if (flags & DO_DETACH) {
+		/*
+		 * DO_DETACH means we're here from dequeue_entity()
+		 * and we are migrating task out of the CPU.
+		 */
+		detach_entity_load_avg(cfs_rq, se);
+		update_tg_load_avg(cfs_rq);
+	} else if (!se->avg.last_update_time && (flags & DO_ATTACH)) {

 		/*
 		 * DO_ATTACH means we're here from enqueue_entity().
@@ -4292,6 +4300,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 #define UPDATE_TG	0x0
 #define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
+#define DO_DETACH	0x0

 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
@@ -4511,6 +4520,11 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+	int action = UPDATE_TG;
+
+	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
+		action |= DO_DETACH;
+
 	/*
 	 * Update run-time statistics of the 'current'.
 	 */
@@ -4524,7 +4538,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
 	 */
-	update_load_avg(cfs_rq, se, UPDATE_TG);
+	update_load_avg(cfs_rq, se, action);
 	se_update_runnable(se);

 	update_stats_dequeue_fair(cfs_rq, se, flags);
@@ -7076,8 +7090,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	return new_cpu;
 }

-static void detach_entity_cfs_rq(struct sched_entity *se);
-
 /*
  * Called immediately before a task is migrated to a new CPU; task_cpu(p) and
  * cfs_rq_of(p) references at time of call are still valid and identify the
@@ -7099,15 +7111,7 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
 	}

-	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
-		/*
-		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
-		 * rq->lock and can modify state directly.
-		 */
-		lockdep_assert_rq_held(task_rq(p));
-		detach_entity_cfs_rq(se);
-
-	} else {
+	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);

 		/*
--
2.36.1
From: Chengming Zhou
Subject: [PATCH v2 02/10] sched/fair: update comments in enqueue/dequeue_entity()
Date: Wed, 13 Jul 2022 12:04:22 +0800
Message-Id: <20220713040430.25778-3-zhouchengming@bytedance.com>

While reading the sched_avg related code, I found that the comments in
enqueue/dequeue_entity() have not been kept up to date with the code.
We don't add/subtract the entity's runnable_avg to/from
cfs_rq->runnable_avg in enqueue/dequeue_entity(); that is done only on
attach/detach. Update the comments to reflect how the code actually
works.

Signed-off-by: Chengming Zhou
Acked-by: Vincent Guittot
---
 kernel/sched/fair.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0689b94ed70b..2a3e12ead144 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4443,7 +4443,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Add its load to cfs_rq->runnable_avg
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -4533,7 +4534,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/*
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
-	 *   - Subtract its load from the cfs_rq->runnable_avg.
+	 *   - For group_entity, update its runnable_weight to reflect the new
+	 *     h_nr_running of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
--
2.36.1
From: Chengming Zhou
Subject: [PATCH v2 03/10] sched/fair: maintain task se depth in set_task_rq()
Date: Wed, 13 Jul 2022 12:04:23 +0800
Message-Id: <20220713040430.25778-4-zhouchengming@bytedance.com>

Previously we only maintained the task se depth in task_move_group_fair();
if a !fair task changed task group, its se depth was not updated, so
commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR")
fixed the problem by updating the se depth in switched_to_fair() too.
This patch moves the task se depth maintenance into set_task_rq(), which
is called whenever the CPU or cgroup changes, so the depth will always
be correct. This is preparation for the next patch.

Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c  | 8 --------
 kernel/sched/sched.h | 1 +
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2a3e12ead144..bf595b622656 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11539,14 +11539,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);

-#ifdef CONFIG_FAIR_GROUP_SCHED
-	/*
-	 * Since the real-depth could have been changed (only FAIR
-	 * class maintain depth value), reset depth properly.
-	 */
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-#endif
-
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index aad7f5ee9666..8cc3eb7b86cd 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1940,6 +1940,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
+	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
 #endif

 #ifdef CONFIG_RT_GROUP_SCHED
--
2.36.1
From: Chengming Zhou
Subject: [PATCH v2 04/10] sched/fair: remove redundant cpu_cgrp_subsys->fork()
Date: Wed, 13 Jul 2022 12:04:24 +0800
Message-Id: <20220713040430.25778-5-zhouchengming@bytedance.com>

We use cpu_cgrp_subsys->fork() to set the task group for the new fair
task in cgroup_post_fork().
Since commit b1e8206582f9 ("sched: Fix yet more sched_fork() races")
already sets the task group for the new fair task in sched_cgroup_fork(),
cpu_cgrp_subsys->fork() can be removed.

  cgroup_can_fork()	--> pin parent's sched_task_group
  sched_cgroup_fork()
    __set_task_cpu()	--> set task group
  cgroup_post_fork()
    ss->fork() := cpu_cgroup_fork()	--> set again

After this change, task_change_group_fair() only needs to care about
task cgroup migration, which makes the code much simpler.

Signed-off-by: Chengming Zhou
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
---
 kernel/sched/core.c  | 27 ++++-----------------------
 kernel/sched/fair.c  | 23 +----------------------
 kernel/sched/sched.h |  5 +----
 3 files changed, 6 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c215b5adc707..d85fdea51e3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -481,8 +481,7 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
  *   p->se.load, p->rt_priority,
  *   p->dl.dl_{runtime, deadline, period, flags, bw, density}
  *  - sched_setnuma():		p->numa_preferred_nid
- *  - sched_move_task()/
- *    cpu_cgroup_fork():	p->sched_task_group
+ *  - sched_move_task():	p->sched_task_group
  *  - uclamp_update_active()	p->uclamp*
  *
  * p->state <- TASK_*:
@@ -10127,7 +10126,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }

-static void sched_change_group(struct task_struct *tsk, int type)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;

@@ -10143,7 +10142,7 @@ static void sched_change_group(struct task_struct *tsk, int type)

 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
-		tsk->sched_class->task_change_group(tsk, type);
+		tsk->sched_class->task_change_group(tsk);
 	else
 #endif
 		set_task_rq(tsk, task_cpu(tsk));
@@ -10174,7 +10173,7 @@ void sched_move_task(struct task_struct *tsk)
 	if (running)
 		put_prev_task(rq, tsk);

-	sched_change_group(tsk, TASK_MOVE_GROUP);
+	sched_change_group(tsk);

 	if (queued)
 		enqueue_task(rq, tsk, queue_flags);
@@ -10252,23 +10251,6 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
 	sched_unregister_group(tg);
 }

-/*
- * This is called before wake_up_new_task(), therefore we really only
- * have to set its group bits, all the other stuff does not apply.
- */
-static void cpu_cgroup_fork(struct task_struct *task)
-{
-	struct rq_flags rf;
-	struct rq *rq;
-
-	rq = task_rq_lock(task, &rf);
-
-	update_rq_clock(rq);
-	sched_change_group(task, TASK_SET_GROUP);
-
-	task_rq_unlock(rq, task, &rf);
-}
-
 static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
@@ -11134,7 +11116,6 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show	= cpu_extra_stat_show,
-	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.legacy_cftypes	= cpu_legacy_files,

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf595b622656..8992ce5e73d2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11634,15 +11634,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 }

 #ifdef CONFIG_FAIR_GROUP_SCHED
-static void task_set_group_fair(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	set_task_rq(p, task_cpu(p));
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-}
-
-static void task_move_group_fair(struct task_struct *p)
+static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
@@ -11654,19 +11646,6 @@ static void task_move_group_fair(struct task_struct *p)
 	attach_task_cfs_rq(p);
 }

-static void task_change_group_fair(struct task_struct *p, int type)
-{
-	switch (type) {
-	case TASK_SET_GROUP:
-		task_set_group_fair(p);
-		break;
-
-	case TASK_MOVE_GROUP:
-		task_move_group_fair(p);
-		break;
-	}
-}
-
 void free_fair_sched_group(struct task_group *tg)
 {
 	int i;

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8cc3eb7b86cd..19e0076e4245 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2203,11 +2203,8 @@ struct sched_class {

 	void (*update_curr)(struct rq *rq);

-#define TASK_SET_GROUP		0
-#define TASK_MOVE_GROUP		1
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_change_group)(struct task_struct *p, int type);
+	void (*task_change_group)(struct task_struct *p);
 #endif
 };

--
2.36.1
From: Chengming Zhou
Subject: [PATCH v2 05/10] sched/fair: reset sched_avg last_update_time before set_task_rq()
Date: Wed, 13 Jul 2022 12:04:25 +0800
Message-Id: <20220713040430.25778-6-zhouchengming@bytedance.com>

set_task_rq() -> set_task_rq_fair() tries to synchronize the blocked
task's sched_avg on migration, which is not needed for a task that has
already been detached.

task_change_group_fair() detaches the task's sched_avg from the previous
cfs_rq first, so reset sched_avg last_update_time before set_task_rq()
to avoid that.
Signed-off-by: Chengming Zhou
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8992ce5e73d2..171bc22bc142 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11637,12 +11637,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));

 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }

--
2.36.1
0QRzA63ZVkMrDGBz5ixEZ9AQq6FGUuEZhZ2GWAA35HbBdo/cNw626TmZ1R9G8aCZo1x5 AVHdaykY00PZ5f2gzMv1sRXLEupt67gkvzJQvIU/zk3v3TNIUwFL/QufRp8YhVR8bJl0 2Bbzk6fK+L/dBpA5jIc1swpdvw4sBq6eBEGimcGCNlZ0cMCUZ6aUU+N54p0BqT+SqkI7 t4XgrGMtuQXuT9CpwKjCLtnzCb+xnn7R+H8vIfEiYQkxX3FyLsm5EqXz7URdTUiyhpvj L/Mg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=f93VPJMo4dHwbuo4a1UCu4R/br+7n9ckKJLpiQoogmM=; b=HrmClZyo00Oa7kiRmG8P+WW5VQjwJ4eUbTJe5G+WIJc9sp/6yuB+DLjbwUrcLXWoBN 9KJU9RSMaVQyESNeIlDI9k/WuKxKyfDZ2xo9EhQgw5o0I/Q7kGUz+lkjEqRyS1hSazys TLQqcccn/SclVpmf7GTftTPw/cQQVBFKwqk892oqFYqRrC8cJtSMKLZc/0FOY6eBKSxb EjRK2OQzz5kDutBF0rDI478FPyirdAh95RpyiX5IAVTwskb2eGQTkMa1LzoIj5fd9EGq Ko5o6MVF+sgBtyOpuphjTRvzkJLJivrzLRN2E2t28SO+lb/lp9zuC2wRbL4HFJ0h6iD8 H96w== X-Gm-Message-State: AJIora8EYzh3kZKN1tVqmX1vFcSuRjFFmNOPInfTqr16U+NT7TZyIOTo c59YAb9LrY3pPHrA0dhPAGGsOw== X-Google-Smtp-Source: AGRyM1vqOkE19zRkRioANm9tOhYUZ8kXz2U+a4MQaR+istmts1vMjVYnITWAL9WJflUjQ9AGOQLpZg== X-Received: by 2002:a63:8148:0:b0:415:6fba:af3f with SMTP id t69-20020a638148000000b004156fbaaf3fmr1339090pgd.277.1657685109941; Tue, 12 Jul 2022 21:05:09 -0700 (PDT) Received: from localhost.localdomain ([139.177.225.235]) by smtp.gmail.com with ESMTPSA id y12-20020a17090322cc00b0016bd16f8acbsm6858942plg.114.2022.07.12.21.05.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jul 2022 21:05:09 -0700 (PDT) From: Chengming Zhou To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com Cc: linux-kernel@vger.kernel.org, Chengming Zhou Subject: [PATCH v2 06/10] sched/fair: delete superfluous SKIP_AGE_LOAD Date: Wed, 13 Jul 2022 12:04:26 +0800 Message-Id: <20220713040430.25778-7-zhouchengming@bytedance.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: 
<20220713040430.25778-1-zhouchengming@bytedance.com>
References: <20220713040430.25778-1-zhouchengming@bytedance.com>

There are three types of attach_entity_cfs_rq():

1. task migrates to a CPU
2. task moves to a cgroup
3. task switches to fair from !fair

Cases 1 and 2 already have the sched_avg last_update_time reset to 0
when attach_entity_cfs_rq() runs. The following patches will make
case 3 also reset last_update_time to 0 before attach_entity_cfs_rq(),
so it makes no difference whether SKIP_AGE_LOAD is set or not.

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 171bc22bc142..29811869c1fe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4001,9 +4001,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  * Optional action to be done while updating the load average
  */
 #define UPDATE_TG	0x1
-#define SKIP_AGE_LOAD	0x2
-#define DO_ATTACH	0x4
-#define DO_DETACH	0x8
+#define DO_ATTACH	0x2
+#define DO_DETACH	0x4

 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -4015,7 +4014,7 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+	if (se->avg.last_update_time)
 		__update_load_avg_se(now, cfs_rq, se);

 	decayed = update_cfs_rq_load_avg(now, cfs_rq);
@@ -4298,7 +4297,6 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 }

 #define UPDATE_TG	0x0
-#define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
 #define DO_DETACH	0x0

@@ -11540,7 +11538,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);

 	/* Synchronize entity with its cfs_rq */
-	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
+	update_load_avg(cfs_rq, se, 0);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq);
 	propagate_entity_cfs_rq(se);
-- 
2.36.1

From nobody Sat Apr 18 22:41:05 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v2 07/10] sched/fair: use update_load_avg() to attach/detach entity load_avg
Date: Wed, 13 Jul 2022 12:04:27 +0800
Message-Id: <20220713040430.25778-8-zhouchengming@bytedance.com>
In-Reply-To: <20220713040430.25778-1-zhouchengming@bytedance.com>
References: <20220713040430.25778-1-zhouchengming@bytedance.com>
Since update_load_avg() supports DO_ATTACH and DO_DETACH now, we can use
update_load_avg() to implement attach/detach of the entity load_avg.

Another advantage of using update_load_avg() is that it checks
last_update_time before attaching or detaching, instead of the
unconditional attach/detach in the current code. This avoids some
problematic corner cases of load tracking, like the twice-attach
problem and the detach-of-an-unattached-NEW-task problem.

1. switch to fair class (twice-attach problem)

p->sched_class = fair_class;  --> p.se->avg.last_update_time = 0
if (queued)
  enqueue_task(p);
    ...
      enqueue_entity()
        update_load_avg(UPDATE_TG | DO_ATTACH)
          if (!se->avg.last_update_time && (flags & DO_ATTACH))  --> true
            attach_entity_load_avg()  --> attached, will set last_update_time
check_class_changed()
  switched_from() (!fair)
  switched_to() (fair)
    switched_to_fair()
      attach_entity_load_avg()  --> unconditional attach again!

2. change cgroup of NEW task (detach-unattached-task problem)

sched_move_group(p)
  if (queued)
    dequeue_task()
  task_move_group_fair()
    detach_task_cfs_rq()
      detach_entity_load_avg()  --> detach unattached NEW task
    set_task_rq()
    attach_task_cfs_rq()
      attach_entity_load_avg()
  if (queued)
    enqueue_task()

These problems were fixed in commit 7dc603c9028e ("sched/fair: Fix PELT
integrity for new tasks"), which also brought its own problems. First,
it added a new task state TASK_NEW and an unnecessary limitation that
changing the cgroup of a TASK_NEW task would fail. Second, it attaches
the entity load_avg in post_init_entity_util_avg(), where we only set
the sched_avg last_update_time for !fair tasks, causing a PELT
integrity problem in switched_to_fair().

This patch makes update_load_avg() the only place where attach/detach
happens, and it can handle corner cases like changing the cgroup of a
NEW task by checking last_update_time before attach/detach.
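The guarded attach/detach can be sketched as a user-space toy (invented names, not kernel code — only the `last_update_time`-based conditions mirror the logic described above): attach only when not yet attached, detach only when attached, so a second DO_ATTACH or a DO_DETACH of an unattached NEW task is a no-op.

```c
#include <assert.h>

#define UPDATE_TG  0x1
#define DO_ATTACH  0x2
#define DO_DETACH  0x4

struct toy_se {
	unsigned long long last_update_time; /* 0 == not attached */
	int attach_count;                    /* net attaches; must never exceed 1 */
};

/* Toy version of the guarded attach/detach inside update_load_avg(). */
static void toy_update_load_avg(struct toy_se *se, int flags,
				unsigned long long now)
{
	if (!se->last_update_time && (flags & DO_ATTACH)) {
		/* attach only an entity that is not yet attached */
		se->attach_count++;
		se->last_update_time = now;
	} else if (se->last_update_time && (flags & DO_DETACH)) {
		/* detach only an entity that was actually attached */
		se->attach_count--;
	}
}
```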
Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 29811869c1fe..51fc20c161a3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4307,11 +4307,6 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s

 static inline void remove_entity_load_avg(struct sched_entity *se) {}

-static inline void
-attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline void
-detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-
 static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
@@ -11527,9 +11522,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);

 	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_load_avg(cfs_rq, se, 0);
-	detach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
+	update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
 	propagate_entity_cfs_rq(se);
 }

@@ -11537,10 +11530,8 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);

-	/* Synchronize entity with its cfs_rq */
-	update_load_avg(cfs_rq, se, 0);
-	attach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq);
+	/* Synchronize entity with its cfs_rq and attach our load */
+	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
 	propagate_entity_cfs_rq(se);
 }

-- 
2.36.1

From nobody Sat Apr 18 22:41:05 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v2 08/10] sched/fair: fix load tracking for new forked !fair task
Date: Wed, 13 Jul 2022 12:04:28 +0800
Message-Id: <20220713040430.25778-9-zhouchengming@bytedance.com>
In-Reply-To: <20220713040430.25778-1-zhouchengming@bytedance.com>
References: <20220713040430.25778-1-zhouchengming@bytedance.com>

A new forked !fair task sets its sched_avg last_update_time to the pelt
clock of the cfs_rq. A while later, in switched_to_fair():

switched_to_fair
  attach_task_cfs_rq
    attach_entity_cfs_rq
      update_load_avg
        __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) will contribute to/decay the
sched_avg depending on the task's running/runnable status at that time.

This patch doesn't set the sched_avg last_update_time of a new forked
!fair task, leaving it at 0, so later in update_load_avg() we don't
contribute/decay the wrong delta (now - sa->last_update_time).
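The effect of leaving last_update_time at 0 can be shown with a minimal user-space sketch (the function name is invented; only the `se->avg.last_update_time` guard in update_load_avg() is being modeled): a zero last_update_time means no stale window exists, so no bogus delta is accrued.

```c
#include <assert.h>

/* Toy aging step: only an already-attached entity (last_update_time != 0)
 * accrues the delta (now - last_update_time). Mirrors the
 * se->avg.last_update_time check before __update_load_avg_se(). */
static unsigned long long toy_age_delta(unsigned long long last_update_time,
					unsigned long long now)
{
	if (!last_update_time)
		return 0; /* new forked !fair task: nothing to contribute/decay */
	return now - last_update_time;
}
```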
Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 51fc20c161a3..50f65a2ede32 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -849,22 +849,8 @@ void post_init_entity_util_avg(struct task_struct *p)

 	sa->runnable_avg = sa->util_avg;

-	if (p->sched_class != &fair_sched_class) {
-		/*
-		 * For !fair tasks do:
-		 *
-		update_cfs_rq_load_avg(now, cfs_rq);
-		attach_entity_load_avg(cfs_rq, se);
-		switched_from_fair(rq, p);
-		 *
-		 * such that the next switched_to_fair() has the
-		 * expected state.
-		 */
-		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
-		return;
-	}
-
-	attach_entity_cfs_rq(se);
+	if (p->sched_class == &fair_sched_class)
+		attach_entity_cfs_rq(se);
 }

 #else /* !CONFIG_SMP */

-- 
2.36.1

From nobody Sat Apr 18 22:41:05 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v2
09/10] sched/fair: stop load tracking when task switched_from_fair()
Date: Wed, 13 Jul 2022 12:04:29 +0800
Message-Id: <20220713040430.25778-10-zhouchengming@bytedance.com>
In-Reply-To: <20220713040430.25778-1-zhouchengming@bytedance.com>
References: <20220713040430.25778-1-zhouchengming@bytedance.com>

For the same reason as the previous commit, if we don't reset the
sched_avg last_update_time to 0, then a while later, in
switched_to_fair():

switched_to_fair
  attach_task_cfs_rq
    attach_entity_cfs_rq
      update_load_avg
        __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) will wrongly contribute to/decay
the sched_avg depending on the task's running/runnable status at that
time.

This patch resets the task's sched_avg last_update_time to 0 to stop
load tracking for the !fair task, so later in switched_to_fair() ->
update_load_avg() we can use its saved sched_avg.
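A toy two-step flow (invented names and a crude stand-in for PELT decay, not the kernel implementation) illustrates why the reset preserves the saved sched_avg: with last_update_time zeroed in switched_from_fair(), the later attach reuses the saved average instead of decaying it over the stale !fair window.

```c
#include <assert.h>

struct toy_task {
	unsigned long long last_update_time;
	unsigned long long util_avg; /* "saved" sched_avg */
};

/* switched_from_fair(): stop load tracking for the !fair task. */
static void toy_switched_from_fair(struct toy_task *p)
{
	p->last_update_time = 0;
}

/* Toy attach path: with last_update_time == 0 the saved util_avg is
 * reused as-is instead of being decayed over a bogus (now - lut) delta. */
static unsigned long long toy_attach_util(struct toy_task *p,
					  unsigned long long now)
{
	if (p->last_update_time) {
		unsigned long long delta = now - p->last_update_time;
		/* crude stand-in for PELT decay: halve per 32 time units */
		while (delta >= 32) {
			p->util_avg /= 2;
			delta -= 32;
		}
	}
	p->last_update_time = now;
	return p->util_avg;
}
```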
Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 50f65a2ede32..576028f5a09e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11552,6 +11552,11 @@ static void attach_task_cfs_rq(struct task_struct *p)
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
+
+#ifdef CONFIG_SMP
+	/* Stop load tracking for !fair task */
+	p->se.avg.last_update_time = 0;
+#endif
 }

 static void switched_to_fair(struct rq *rq, struct task_struct *p)

-- 
2.36.1

From nobody Sat Apr 18 22:41:05 2026
From: Chengming Zhou
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, vschneid@redhat.com
Cc: linux-kernel@vger.kernel.org, Chengming Zhou
Subject: [PATCH v2 10/10] sched/fair: delete superfluous set_task_rq_fair()
Date: Wed, 13 Jul 2022 12:04:30 +0800
Message-Id: <20220713040430.25778-11-zhouchengming@bytedance.com>
In-Reply-To: <20220713040430.25778-1-zhouchengming@bytedance.com>
References:
<20220713040430.25778-1-zhouchengming@bytedance.com>

set_task_rq() is used when moving a task across CPUs/groups to change
its cfs_rq and parent entity, and it calls set_task_rq_fair() to sync
the blocked task's load_avg just before changing its cfs_rq.

1. task migrates between CPUs: it is detached/removed from the previous
   cfs_rq and its sched_avg last_update_time is reset to 0, so there is
   no need to sync again.
2. task migrates between cgroups: it is detached from the previous
   cfs_rq and its sched_avg last_update_time is reset to 0, so no need
   to sync either.
3. !fair task migrates between CPUs/cgroups: we stop load tracking for
   !fair tasks and reset the sched_avg last_update_time to 0 in
   switched_from_fair(), so no sync is needed there too.

So set_task_rq_fair() is not needed anymore; this patch deletes it,
together with the now-unused ATTACH_AGE_LOAD feature.

Signed-off-by: Chengming Zhou
---
 kernel/sched/fair.c     | 31 -------------------------------
 kernel/sched/features.h |  1 -
 kernel/sched/sched.h    |  8 --------
 3 files changed, 40 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 576028f5a09e..b435eda88468 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3430,37 +3430,6 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
 	}
 }

-/*
- * Called within set_task_rq() right before setting a task's CPU. The
- * caller only guarantees p->pi_lock is held; no other assumptions,
- * including the state of rq->lock, should be made.
- */
-void set_task_rq_fair(struct sched_entity *se,
-		      struct cfs_rq *prev, struct cfs_rq *next)
-{
-	u64 p_last_update_time;
-	u64 n_last_update_time;
-
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		return;
-
-	/*
-	 * We are supposed to update the task to "current" time, then its up to
-	 * date and ready to go to new CPU/cfs_rq.
-	 * But we have difficulty in getting what current time is, so simply
-	 * throw away the out-of-date time. This will result in the wakee task
-	 * is less decayed, but giving the wakee more load sounds not bad.
-	 */
-	if (!(se->avg.last_update_time && prev))
-		return;
-
-	p_last_update_time = cfs_rq_last_update_time(prev);
-	n_last_update_time = cfs_rq_last_update_time(next);
-
-	__update_load_avg_blocked_se(p_last_update_time, se);
-	se->avg.last_update_time = n_last_update_time;
-}
-
 /*
  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
  * propagate its contribution. The key to this propagation is the invariant

diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..fb92431d496f 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -85,7 +85,6 @@ SCHED_FEAT(RT_PUSH_IPI, true)

 SCHED_FEAT(RT_RUNTIME_SHARE, false)
 SCHED_FEAT(LB_MIN, false)
-SCHED_FEAT(ATTACH_AGE_LOAD, true)

 SCHED_FEAT(WA_IDLE, true)
 SCHED_FEAT(WA_WEIGHT, true)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 19e0076e4245..a8ec7af4bd51 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -505,13 +505,6 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

 extern int sched_group_set_idle(struct task_group *tg, long idle);

-#ifdef CONFIG_SMP
-extern void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next);
-#else /* !CONFIG_SMP */
-static inline void set_task_rq_fair(struct sched_entity *se,
-				    struct cfs_rq *prev, struct cfs_rq *next) { }
-#endif /* CONFIG_SMP */
 #endif /* CONFIG_FAIR_GROUP_SCHED */

 #else /* CONFIG_CGROUP_SCHED */

@@ -1937,7 +1930,6 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
 	p->se.depth = tg->se[cpu] ?
tg->se[cpu]->depth + 1 : 0;
-- 
2.36.1
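The deletion rests on one invariant, which a trivial user-space sketch can state (invented names; this is a model of the argument, not kernel code): after this series, every path — CPU migration, cgroup migration, and !fair tasks via switched_from_fair() — reaches set_task_rq() with last_update_time already 0, so the guard in the deleted set_task_rq_fair() could never fire and the function was dead code.

```c
#include <assert.h>

/* Guard of the deleted set_task_rq_fair(): it only had work to do when
 * the entity reached set_task_rq() with a nonzero last_update_time. */
static int toy_would_sync(unsigned long long last_update_time)
{
	return last_update_time != 0;
}

/* last_update_time seen at set_task_rq() for the three cases above:
 * all zeroed beforehand by patches in this series. */
static const unsigned long long lut_at_set_task_rq[3] = { 0, 0, 0 };
```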