From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
 chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
 pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
 joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
 yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
 efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 07/15] sched/smp: Use lag to simplify cross-runqueue placement
Date: Wed, 31 May 2023 13:58:46 +0200
Message-ID: <20230531124604.068911180@infradead.org>
References: <20230531115839.089944915@infradead.org>
User-Agent: quilt/0.66

Using lag is both more correct and simpler when moving between
runqueues.

Notably, min_vruntime() was invented as a cheap approximation of
avg_vruntime() for this very purpose (SMP migration). Since we now
have the real thing, use it.
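For intuition, a minimal user-space sketch of the scheme follows. This
is illustration only, not the kernel code: all toy_* names are invented,
and it omits the load-based rescaling of lag that the real
place_entity() performs. The point is merely that lag -- the distance
from the queue's load-weighted average vruntime -- is recorded at
dequeue and re-established against the destination queue's own average
at enqueue, so a task's relative entitlement survives the move without
any min_vruntime arithmetic.

/* lag_sketch.c -- toy model; build: cc -Wall -o lag_sketch lag_sketch.c */
#include <stdio.h>

struct toy_se {
	long weight;		/* load weight */
	long vruntime;		/* virtual runtime on the current queue */
	long vlag;		/* lag recorded at dequeue */
};

/* load-weighted average vruntime over a queue -- the "real thing" */
static long toy_avg_vruntime(struct toy_se **q, int n)
{
	long wsum = 0, vwsum = 0;
	int i;

	for (i = 0; i < n; i++) {
		wsum  += q[i]->weight;
		vwsum += q[i]->weight * q[i]->vruntime;
	}
	return wsum ? vwsum / wsum : 0;
}

/* dequeue: remember how far from this queue's average the entity was */
static void toy_dequeue(struct toy_se **q, int n, struct toy_se *se)
{
	se->vlag = toy_avg_vruntime(q, n) - se->vruntime;
}

/* enqueue: re-establish the same lag around the destination's average */
static void toy_enqueue(struct toy_se **q, int n, struct toy_se *se)
{
	se->vruntime = toy_avg_vruntime(q, n) - se->vlag;
}

int main(void)
{
	struct toy_se a = { 1024,  100, 0 }, b = { 1024,  140, 0 };
	struct toy_se m = { 1024,   90, 0 };		/* migrating task */
	struct toy_se *src[] = { &a, &m, &b };		/* avg = 110 */
	struct toy_se c = { 1024, 1000, 0 }, d = { 1024, 2000, 0 };
	struct toy_se *dst[] = { &c, &d };		/* avg = 1500 */

	toy_dequeue(src, 3, &m);	/* vlag = 110 - 90 = 20 */
	toy_enqueue(dst, 2, &m);	/* vruntime = 1500 - 20 = 1480 */

	printf("vlag=%ld, new vruntime=%ld\n", m.vlag, m.vruntime);
	return 0;
}

The old scheme approximated the same move by subtracting the source
rq's min_vruntime and adding the destination's; that is the MIGRATION
dance the patch below deletes.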
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c |  145 ++++++-----------------------------------------------
 1 file changed, 19 insertions(+), 126 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5083,7 +5083,7 @@ place_entity(struct cfs_rq *cfs_rq, stru
	 *
	 * EEVDF: placement strategy #1 / #2
	 */
-	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) {
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running) {
		struct sched_entity *curr = cfs_rq->curr;
		unsigned long load;
 
@@ -5171,61 +5171,21 @@ static void check_enqueue_throttle(struc
 
 static inline bool cfs_bandwidth_used(void);
 
-/*
- * MIGRATION
- *
- *	dequeue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way the vruntime transition between RQs is done when both
- * min_vruntime are up-to-date.
- *
- * WAKEUP (remote)
- *
- *	->migrate_task_rq_fair() (p->state == TASK_WAKING)
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way we don't have the most up-to-date min_vruntime on the originating
- * CPU and an up-to-date min_vruntime on the destination CPU.
- */
-
 static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
-	bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATED);
	bool curr = cfs_rq->curr == se;
 
	/*
	 * If we're the current task, we must renormalise before calling
	 * update_curr().
	 */
-	if (renorm && curr)
-		se->vruntime += cfs_rq->min_vruntime;
+	if (curr)
+		place_entity(cfs_rq, se, 0);
 
	update_curr(cfs_rq);
 
	/*
-	 * Otherwise, renormalise after, such that we're placed at the current
-	 * moment in time, instead of some random moment in the past. Being
-	 * placed in the past could significantly boost this task to the
-	 * fairness detriment of existing tasks.
-	 */
-	if (renorm && !curr)
-		se->vruntime += cfs_rq->min_vruntime;
-
-	/*
	 * When enqueuing a sched_entity, we must:
	 * - Update loads to have both entity and cfs_rq synced with now.
	 * - For group_entity, update its runnable_weight to reflect the new
@@ -5236,11 +5196,22 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
	 */
	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
	se_update_runnable(se);
+	/*
+	 * XXX update_load_avg() above will have attached us to the pelt sum;
+	 * but update_cfs_group() here will re-adjust the weight and have to
+	 * undo/redo all that. Seems wasteful.
+	 */
	update_cfs_group(se);
-	account_entity_enqueue(cfs_rq, se);
 
-	if (flags & ENQUEUE_WAKEUP)
+	/*
+	 * XXX now that the entity has been re-weighted, and its lag adjusted,
+	 * we can place the entity.
+	 */
+	if (!curr)
		place_entity(cfs_rq, se, 0);
+
+	account_entity_enqueue(cfs_rq, se);
+
	/* Entity has migrated, no longer consider this task hot */
	if (flags & ENQUEUE_MIGRATED)
		se->exec_start = 0;
@@ -5335,23 +5306,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 
	clear_buddies(cfs_rq, se);
 
-	if (flags & DEQUEUE_SLEEP)
-		update_entity_lag(cfs_rq, se);
-
+	update_entity_lag(cfs_rq, se);
	if (se != cfs_rq->curr)
		__dequeue_entity(cfs_rq, se);
	se->on_rq = 0;
	account_entity_dequeue(cfs_rq, se);
 
-	/*
-	 * Normalize after update_curr(); which will also have moved
-	 * min_vruntime if @se is the one holding it back. But before doing
-	 * update_min_vruntime() again, which will discount @se's position and
-	 * can move min_vruntime forward still more.
-	 */
-	if (!(flags & DEQUEUE_SLEEP))
-		se->vruntime -= cfs_rq->min_vruntime;
-
	/* return excess runtime on last dequeue */
	return_cfs_rq_runtime(cfs_rq);
 
@@ -8102,18 +8062,6 @@ static void migrate_task_rq_fair(struct
 {
	struct sched_entity *se = &p->se;
 
-	/*
-	 * As blocked tasks retain absolute vruntime the migration needs to
-	 * deal with this by subtracting the old and adding the new
-	 * min_vruntime -- the latter is done by enqueue_entity() when placing
-	 * the task on the new runqueue.
-	 */
-	if (READ_ONCE(p->__state) == TASK_WAKING) {
-		struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
-	}
-
	if (!task_on_rq_migrating(p)) {
		remove_entity_load_avg(se);
 
@@ -12482,8 +12430,8 @@ static void task_tick_fair(struct rq *rq
  */
 static void task_fork_fair(struct task_struct *p)
 {
-	struct cfs_rq *cfs_rq;
	struct sched_entity *se = &p->se, *curr;
+	struct cfs_rq *cfs_rq;
	struct rq *rq = this_rq();
	struct rq_flags rf;
 
@@ -12492,22 +12440,9 @@ static void task_fork_fair(struct task_s
 
	cfs_rq = task_cfs_rq(current);
	curr = cfs_rq->curr;
-	if (curr) {
+	if (curr)
		update_curr(cfs_rq);
-		se->vruntime = curr->vruntime;
-	}
	place_entity(cfs_rq, se, 1);
-
-	if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
-		/*
-		 * Upon rescheduling, sched_class::put_prev_task() will place
-		 * 'current' within the tree based on its new key value.
-		 */
-		swap(curr->vruntime, se->vruntime);
-		resched_curr(rq);
-	}
-
-	se->vruntime -= cfs_rq->min_vruntime;
	rq_unlock(rq, &rf);
 }
 
@@ -12536,34 +12471,6 @@ prio_changed_fair(struct rq *rq, struct
		check_preempt_curr(rq, p, 0);
 }
 
-static inline bool vruntime_normalized(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	/*
-	 * In both the TASK_ON_RQ_QUEUED and TASK_ON_RQ_MIGRATING cases,
-	 * the dequeue_entity(.flags=0) will already have normalized the
-	 * vruntime.
-	 */
-	if (p->on_rq)
-		return true;
-
-	/*
-	 * When !on_rq, vruntime of the task has usually NOT been normalized.
-	 * But there are some cases where it has already been normalized:
-	 *
-	 * - A forked child which is waiting for being woken up by
-	 *   wake_up_new_task().
-	 * - A task which has been woken up by try_to_wake_up() and
-	 *   waiting for actually being woken up by sched_ttwu_pending().
-	 */
-	if (!se->sum_exec_runtime ||
-	    (READ_ONCE(p->__state) == TASK_WAKING && p->sched_remote_wakeup))
-		return true;
-
-	return false;
-}
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /*
  * Propagate the changes of the sched_entity across the tg tree to make it
@@ -12634,16 +12541,6 @@ static void attach_entity_cfs_rq(struct
 static void detach_task_cfs_rq(struct task_struct *p)
 {
	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	if (!vruntime_normalized(p)) {
-		/*
-		 * Fix up our vruntime so that the current sleep doesn't
-		 * cause 'unlimited' sleep bonus.
-		 */
-		place_entity(cfs_rq, se, 0);
-		se->vruntime -= cfs_rq->min_vruntime;
-	}
 
	detach_entity_cfs_rq(se);
 }
@@ -12651,12 +12548,8 @@ static void detach_task_cfs_rq(struct ta
 static void attach_task_cfs_rq(struct task_struct *p)
 {
	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
	attach_entity_cfs_rq(se);
-
-	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
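
[ Trailing note, not part of the patch: the avg_vruntime() relied on
  above does not rescan the tree; an earlier patch in this series
  maintains it incrementally. A toy version of that bookkeeping, again
  with invented toy_* names, using plain sums where the kernel keys
  each vruntime against min_vruntime to keep the products small: ]

/* toy O(1) bookkeeping for the load-weighted average vruntime */
struct toy_rq {
	long avg_vruntime;	/* \Sum weight_i * vruntime_i */
	long avg_load;		/* \Sum weight_i */
};

static void toy_rq_add(struct toy_rq *rq, long weight, long vruntime)
{
	rq->avg_vruntime += weight * vruntime;
	rq->avg_load     += weight;
}

static void toy_rq_del(struct toy_rq *rq, long weight, long vruntime)
{
	rq->avg_vruntime -= weight * vruntime;
	rq->avg_load     -= weight;
}

/* O(1), vs the O(n) scan in the sketch above the diff */
static long toy_avg_vruntime(const struct toy_rq *rq)
{
	return rq->avg_load ? rq->avg_vruntime / rq->avg_load : 0;
}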