From: wangtao
Subject: [PATCH] sched: fair: make V move forward only
Date: Fri, 28 Nov 2025 16:11:18 +0800
Message-ID: <20251128081118.20025-1-tao.wangtao@honor.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

V is the load-weighted average of the queued entities' vruntimes.
Adding a task with positive lag, or removing a task with negative lag,
can move V backward. This results in unfair scheduling: previously
eligible tasks become ineligible, run for shorter times and incur more
task switches.

For example, when tasks a, x and b are added in that order, where a and
b have zero lag and x has positive lag, task b (added later) may be
scheduled before task a.

Making V move forward only resolves such issues and also simplifies the
placement code for tasks with positive lag.
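The a/x/b case above can be reproduced with a minimal userspace sketch.
This is not kernel code: avg_vruntime() is reduced to a plain weighted
mean, eligibility is simplified to vruntime <= V, and all weights and
vruntimes are made-up numbers.

	#include <stdio.h>

	struct entity {
		long long vruntime;
		unsigned long weight;
	};

	/* Load-weighted average vruntime over the n queued entities. */
	static long long avg_vruntime(const struct entity *se, int n)
	{
		long long sum = 0, load = 0;

		for (int i = 0; i < n; i++) {
			sum  += (long long)se[i].weight * se[i].vruntime;
			load += se[i].weight;
		}
		return load ? sum / load : 0;
	}

	int main(void)
	{
		struct entity rq[3];
		long long V, forward_V;
		int n = 0;

		/* a added with zero lag: vruntime == V == 100 */
		rq[n++] = (struct entity){ .vruntime = 100, .weight = 1024 };
		forward_V = V = avg_vruntime(rq, n);

		/* x added with positive lag (vruntime < V): V moves backward */
		rq[n++] = (struct entity){ .vruntime = 40, .weight = 1024 };
		V = avg_vruntime(rq, n);	/* (100 + 40) / 2 = 70 */
		if (V > forward_V)		/* forward-only clamp */
			forward_V = V;		/* still 100 */

		/* b added with zero lag, i.e. at the (backward-moved) V */
		rq[n++] = (struct entity){ .vruntime = V, .weight = 1024 };

		/*
		 * Raw V: a (vruntime 100) is ineligible, so b (70) runs
		 * first even though it was added later.  Clamped V: a
		 * stays eligible.
		 */
		printf("V=%lld forward_V=%lld, a eligible: raw=%d clamped=%d\n",
		       V, forward_V,
		       rq[0].vruntime <= V, rq[0].vruntime <= forward_V);
		return 0;
	}

With the clamp, the later-added b cannot jump ahead of a merely because
x was placed with positive lag.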
hackbench tests show that with this patch, execution time is
significantly reduced due to fewer task switches:

---------------------------------------------------
hackbench test           base(s)  patch(s)     opt
---------------------------------------------------
process 1 group:           0.141     0.100  -29.3%
process 4 group:           0.375     0.295  -21.2%
process 16 group:          1.495     1.204  -19.5%
thread 1 group:            0.090     0.068  -25.1%
thread 4 group:            0.244     0.211  -13.4%
thread 16 group:           0.860     0.795   -7.6%
pipe process 1 group:      0.124     0.090  -27.8%
pipe process 4 group:      0.340     0.289  -15.2%
pipe process 16 group:     1.401     1.144  -18.3%
pipe thread 1 group:       0.081     0.071  -11.7%
pipe thread 4 group:       0.241     0.181  -24.7%
pipe thread 16 group:      0.787     0.706  -10.2%

Signed-off-by: wangtao
---
 kernel/sched/fair.c  | 16 ++++++++++++----
 kernel/sched/sched.h |  1 +
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b752324270b..889ee8d4c9bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -671,7 +671,11 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 		avg = div_s64(avg, load);
 	}
 
-	return cfs_rq->min_vruntime + avg;
+	avg += cfs_rq->min_vruntime;
+	if ((s64)(cfs_rq->forward_avg_vruntime - avg) < 0)
+		cfs_rq->forward_avg_vruntime = avg;
+
+	return cfs_rq->forward_avg_vruntime;
 }
 
 /*
@@ -725,6 +729,9 @@ static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 	s64 avg = cfs_rq->avg_vruntime;
 	long load = cfs_rq->avg_load;
 
+	if ((s64)(cfs_rq->forward_avg_vruntime - vruntime) >= 0)
+		return 1;
+
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
 
@@ -5139,12 +5146,13 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 *
 	 * EEVDF: placement strategy #1 / #2
 	 */
-	if (sched_feat(PLACE_LAG) && cfs_rq->nr_queued && se->vlag) {
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_queued && se->vlag)
+		lag = se->vlag;
+	/* positive lag does not evaporate with forward_avg_vruntime */
+	if (lag < 0) {
 		struct sched_entity *curr = cfs_rq->curr;
 		unsigned long load;
 
-		lag = se->vlag;
-
 		/*
 		 * If we want to place a task and preserve lag, we have to
 		 * consider the effect of the new entity on the weighted
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index adfb6e3409d7..2691d5e8a0ab 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -681,6 +681,7 @@ struct cfs_rq {
 
 	s64 avg_vruntime;
 	u64 avg_load;
+	u64 forward_avg_vruntime;
 
 	u64 min_vruntime;
 #ifdef CONFIG_SCHED_CORE
-- 
2.17.1