Message-ID: <20260219080624.438854780@infradead.org>
User-Agent: quilt/0.68
Date: Thu, 19 Feb 2026 08:58:41 +0100
From: Peter Zijlstra
To: mingo@kernel.org
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org,
    wangtao554@huawei.com, quzicheng@huawei.com, kprateek.nayak@amd.com,
    dsmythies@telus.net, shubhang@os.amperecomputing.com
Subject: [PATCH v2 1/7] sched/fair: Fix zero_vruntime tracking
References: <20260219075840.162631716@infradead.org>

It turns out that zero_vruntime tracking is broken when there is but a
single task running. The current update paths are through
__{en,de}queue_entity(), and when there is but a single task,
pick_next_task() will always return that one task, so
put_prev_set_next_task() ends up calling neither function.

This can cause entity_key() to grow indefinitely large and cause
overflows, leading to much pain and suffering.
Furthermore, doing update_zero_vruntime() from __{de,en}queue_entity(),
which are called from {set_next,put_prev}_entity(), has problems because:

 - set_next_entity() calls __dequeue_entity() before it does
   cfs_rq->curr = se. This means avg_vruntime() will see the removal
   but not current, missing the entity for accounting.

 - put_prev_entity() calls __enqueue_entity() before it does
   cfs_rq->curr = NULL. This means avg_vruntime() will see the addition
   *and* current, leading to double accounting.

Both cases are incorrect/inconsistent.

Noting that avg_vruntime() is already called on each {en,de}queue, remove
the explicit update_zero_vruntime() calls (which removes an extra 64bit
division for each {en,de}queue) and have avg_vruntime() update
zero_vruntime itself.

Additionally, have the tick call avg_vruntime() -- discarding the result,
but for the side-effect of updating zero_vruntime.

While there, optimize avg_vruntime() by noting that the average of one
value is rather trivial to compute.
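The resulting flow can be sketched in plain userspace C. This is a
simplified model, not the kernel code: the names rq_sketch and
avg_vruntime_sketch are invented, weights are plain longs, and
scale_load_down(), the wrap-safe vruntime_op()/vruntime_cmp() helpers,
div_s64() and all locking are omitted.

```c
#include <assert.h>
#include <stdint.h>

struct rq_sketch {
	int64_t zero_vruntime;   /* approximate zero-lag point */
	int64_t sum_w_vruntime;  /* sum of weight * key over queued entities */
	long    sum_weight;      /* sum of weights over queued entities */
};

/* Shift zero_vruntime by delta; all keys shrink by delta in response. */
static void update_zero_vruntime(struct rq_sketch *rq, int64_t delta)
{
	/* v' = v + d  ==>  sum_w_vruntime' = sum_w_vruntime - d*sum_weight */
	rq->sum_w_vruntime -= rq->sum_weight * delta;
	rq->zero_vruntime += delta;
}

/* curr_* model cfs_rq->curr; pass curr_weight == 0 for "no current". */
static int64_t avg_vruntime_sketch(struct rq_sketch *rq,
				   int64_t curr_vruntime, long curr_weight)
{
	long weight = rq->sum_weight;
	int64_t delta = 0;

	if (weight) {
		int64_t runtime = rq->sum_w_vruntime;

		if (curr_weight) {
			/* fold the current task into the average */
			runtime += (curr_vruntime - rq->zero_vruntime) * curr_weight;
			weight += curr_weight;
		}

		/* bias division toward -inf so avg + 0 stays eligible */
		if (runtime < 0)
			runtime -= (weight - 1);
		delta = runtime / weight;
	} else if (curr_weight) {
		/* a single element is its own average -- the fixed case */
		delta = curr_vruntime - rq->zero_vruntime;
	}

	/* the side-effect: pull zero_vruntime along on every call */
	update_zero_vruntime(rq, delta);
	return rq->zero_vruntime;
}
```

With a lone running task (empty tree), zero_vruntime now snaps to that
task's vruntime on every call instead of standing still, which is what
keeps entity_key() bounded.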
Test case:

  # taskset -c -p 1 $$
  # taskset -c 2 bash -c 'while :; do :; done&'
  # cat /sys/kernel/debug/sched/debug | \
      awk '/^cpu#/ {P=0} /^cpu#2,/ {P=1} {if (P) print $0}' | \
      grep -e zero_vruntime -e "^>"

PRE:

  .zero_vruntime                 : 31316.407903
  >R bash  487  50787.345112 E  50789.145972  2.800000  50780.298364  16  120  0.000000  0.000000  0.000000 /

  .zero_vruntime                 : 382548.253179
  >R bash  487 427275.204288 E 427276.003584  2.800000 427268.157540  23  120  0.000000  0.000000  0.000000 /

POST:

  .zero_vruntime                 : 17259.709467
  >R bash  526  17259.709467 E  17262.509467  2.800000  16915.031624   9  120  0.000000  0.000000  0.000000 /

  .zero_vruntime                 : 18702.723356
  >R bash  526  18702.723356 E  18705.523356  2.800000  18358.045513   9  120  0.000000  0.000000  0.000000 /

Fixes: 79f3f9bedd14 ("sched/eevdf: Fix min_vruntime vs avg_vruntime")
Reported-by: K Prateek Nayak
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: K Prateek Nayak
Tested-by: Shubhang Kaushik
Reviewed-by: Vincent Guittot
Tested-by: John Stultz
---
 kernel/sched/fair.c |   84 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 57 insertions(+), 27 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -589,6 +589,21 @@ static inline bool entity_before(const s
 	return vruntime_cmp(a->deadline, "<", b->deadline);
 }
 
+/*
+ * Per avg_vruntime() below, cfs_rq::zero_vruntime is only slightly stale
+ * and this value should be no more than two lag bounds. Which puts it in the
+ * general order of:
+ *
+ *   (slice + TICK_NSEC) << NICE_0_LOAD_SHIFT
+ *
+ * which is around 44 bits in size (on 64bit); that is 20 for
+ * NICE_0_LOAD_SHIFT, another 20 for NSEC_PER_MSEC and then a handful for
+ * however many msec the actual slice+tick ends up being.
+ *
+ * (disregarding the actual divide-by-weight part makes for the worst case
+ * weight of 2, which nicely cancels vs the fuzz in zero_vruntime not actually
+ * being the zero-lag point).
+ */
 static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	return vruntime_op(se->vruntime, "-", cfs_rq->zero_vruntime);
@@ -676,39 +691,61 @@ sum_w_vruntime_sub(struct cfs_rq *cfs_rq
 }
 
 static inline
-void sum_w_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
+void update_zero_vruntime(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> sum_w_vruntime' = sum_runtime - d*sum_weight
+	 * v' = v + d ==> sum_w_vruntime' = sum_w_vruntime - d*sum_weight
 	 */
 	cfs_rq->sum_w_vruntime -= cfs_rq->sum_weight * delta;
+	cfs_rq->zero_vruntime += delta;
 }
 
 /*
- * Specifically: avg_runtime() + 0 must result in entity_eligible() := true
+ * Specifically: avg_vruntime() + 0 must result in entity_eligible() := true
  * For this to be so, the result of this function must have a left bias.
+ *
+ * Called in:
+ *  - place_entity() -- before enqueue
+ *  - update_entity_lag() -- before dequeue
+ *  - entity_tick()
+ *
+ * This means it is one entry 'behind' but that puts it close enough to where
+ * the bound on entity_key() is at most two lag bounds.
  */
 u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->sum_w_vruntime;
-	long load = cfs_rq->sum_weight;
+	long weight = cfs_rq->sum_weight;
+	s64 delta = 0;
 
-	if (curr && curr->on_rq) {
-		unsigned long weight = scale_load_down(curr->load.weight);
+	if (curr && !curr->on_rq)
+		curr = NULL;
 
-		avg += entity_key(cfs_rq, curr) * weight;
-		load += weight;
-	}
+	if (weight) {
+		s64 runtime = cfs_rq->sum_w_vruntime;
+
+		if (curr) {
+			unsigned long w = scale_load_down(curr->load.weight);
+
+			runtime += entity_key(cfs_rq, curr) * w;
+			weight += w;
+		}
 
-	if (load) {
 		/* sign flips effective floor / ceiling */
-		if (avg < 0)
-			avg -= (load - 1);
-		avg = div_s64(avg, load);
+		if (runtime < 0)
+			runtime -= (weight - 1);
+
+		delta = div_s64(runtime, weight);
+	} else if (curr) {
+		/*
+		 * When there is but one element, it is the average.
+		 */
+		delta = curr->vruntime - cfs_rq->zero_vruntime;
 	}
 
-	return cfs_rq->zero_vruntime + avg;
+	update_zero_vruntime(cfs_rq, delta);
+
+	return cfs_rq->zero_vruntime;
 }
 
 /*
@@ -777,16 +814,6 @@ int entity_eligible(struct cfs_r
 	return vruntime_eligible(cfs_rq, se->vruntime);
 }
 
-static void update_zero_vruntime(struct cfs_rq *cfs_rq)
-{
-	u64 vruntime = avg_vruntime(cfs_rq);
-	s64 delta = vruntime_op(vruntime, "-", cfs_rq->zero_vruntime);
-
-	sum_w_vruntime_update(cfs_rq, delta);
-
-	cfs_rq->zero_vruntime = vruntime;
-}
-
 static inline u64 cfs_rq_min_slice(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *root = __pick_root_entity(cfs_rq);
@@ -856,7 +883,6 @@ RB_DECLARE_CALLBACKS(static, min_vruntim
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	sum_w_vruntime_add(cfs_rq, se);
-	update_zero_vruntime(cfs_rq);
 	se->min_vruntime = se->vruntime;
 	se->min_slice = se->slice;
 	rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
@@ -868,7 +894,6 @@ static void __dequeue_entity(struct cfs_
 	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				  &min_vruntime_cb);
 	sum_w_vruntime_sub(cfs_rq, se);
-	update_zero_vruntime(cfs_rq);
 }
 
 struct sched_entity *__pick_root_entity(struct cfs_rq *cfs_rq)
@@ -5524,6 +5549,11 @@ entity_tick(struct cfs_rq *cfs_rq, struc
 	update_load_avg(cfs_rq, curr, UPDATE_TG);
 	update_cfs_group(curr);
 
+	/*
+	 * Pulls along cfs_rq::zero_vruntime.
+	 */
+	avg_vruntime(cfs_rq);
+
 #ifdef CONFIG_SCHED_HRTICK
 	/*
 	 * queued ticks are scheduled to match the slice, so don't bother