From: Hongyan Xia
To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Dietmar Eggemann, Juri Lelli, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider
Cc: Morten Rasmussen, Lukasz Luba, Christian Loehle, Pierre Gondois, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/8] sched/uclamp: Add util_est_uclamp
Date: Tue, 4 Mar 2025 14:23:10 +0000
Message-Id: <723859b17ea463f91e04c87696b6d38ea2839deb.1741091349.git.hongyan.xia2@arm.com>

The new util_est_uclamp is essentially clamp(util_est, uclamp_min, uclamp_max)
and is maintained the same way util_est is: the per-task value is refreshed in
util_est_update(), and the per-runqueue sum is adjusted on enqueue and dequeue.
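
As a rough illustration of the intended semantics (this is not the kernel
code itself; struct and function names below such as task_model, rq_model,
task_update, rq_enqueue and rq_dequeue are hypothetical stand-ins for
struct sched_avg, util_est_update() and the enqueue/dequeue helpers this
patch touches), a minimal userspace model could look like:

/* Illustrative-only sketch of the util_est_uclamp bookkeeping. */
#include <stdio.h>

#define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

struct task_model {
	unsigned int util_est;        /* plain EWMA-based estimate           */
	unsigned int uclamp_min;      /* effective UCLAMP_MIN of the task    */
	unsigned int uclamp_max;      /* effective UCLAMP_MAX of the task    */
	unsigned int util_est_uclamp; /* new field: clamped view of util_est */
};

struct rq_model {
	unsigned int util_est;        /* sum over enqueued tasks             */
	unsigned int util_est_uclamp; /* sum of the clamped per-task values  */
};

/* Per-task update, mirroring the clamp() at the end of util_est_update(). */
static void task_update(struct task_model *p, unsigned int new_ewma)
{
	p->util_est = new_ewma;
	p->util_est_uclamp = CLAMP(new_ewma, p->uclamp_min, p->uclamp_max);
}

/* Runqueue aggregation, mirroring util_est_enqueue()/util_est_dequeue(). */
static void rq_enqueue(struct rq_model *rq, const struct task_model *p)
{
	rq->util_est += p->util_est;
	rq->util_est_uclamp += p->util_est_uclamp;
}

static void rq_dequeue(struct rq_model *rq, const struct task_model *p)
{
	/* The unclamped sum is floored at 0, as in util_est_dequeue(). */
	rq->util_est -= (rq->util_est < p->util_est ? rq->util_est : p->util_est);
	rq->util_est_uclamp -= p->util_est_uclamp;
}

int main(void)
{
	struct task_model p = { .uclamp_min = 0, .uclamp_max = 512 };
	struct rq_model rq = { 0 };

	task_update(&p, 800);	/* raw estimate 800, capped at uclamp_max 512 */
	rq_enqueue(&rq, &p);
	printf("util_est=%u util_est_uclamp=%u\n", rq.util_est, rq.util_est_uclamp);
	rq_dequeue(&rq, &p);
	return 0;
}

The clamped per-task value and its runqueue sum are kept alongside the
existing util_est, so consumers can read either the raw or the uclamp-aware
view without re-clamping each task.
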
Signed-off-by: Hongyan Xia
---
 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1f3b06aa024d..a4bdfa1d6be1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -490,6 +490,7 @@ struct sched_avg {
 	unsigned int		util_avg;
 	int			util_avg_bias;
 	unsigned int		util_est;
+	unsigned int		util_est_uclamp;
 } ____cacheline_aligned;
 
 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 438755f55624..e9aa93f99a4e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4867,6 +4867,16 @@ static inline unsigned long task_util_uclamp(struct task_struct *p)
 
 	return max(ret, 0L);
 }
+
+static inline unsigned long _task_util_est_uclamp(struct task_struct *p)
+{
+	return READ_ONCE(p->se.avg.util_est_uclamp);
+}
+
+static inline unsigned long task_util_est_uclamp(struct task_struct *p)
+{
+	return max(task_util_uclamp(p), _task_util_est_uclamp(p));
+}
 #else
 static inline long task_util_bias(struct task_struct *p)
 {
@@ -4877,6 +4887,16 @@ static inline unsigned long task_util_uclamp(struct task_struct *p)
 {
 	return task_util(p);
 }
+
+static inline unsigned long _task_util_est_uclamp(struct task_struct *p)
+{
+	return _task_util_est(p);
+}
+
+static inline unsigned long task_util_est_uclamp(struct task_struct *p)
+{
+	return task_util_est(p);
+}
 #endif
 
 static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
@@ -4891,6 +4911,9 @@ static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
 	enqueued  = cfs_rq->avg.util_est;
 	enqueued += _task_util_est(p);
 	WRITE_ONCE(cfs_rq->avg.util_est, enqueued);
+	enqueued  = cfs_rq->avg.util_est_uclamp;
+	enqueued += _task_util_est_uclamp(p);
+	WRITE_ONCE(cfs_rq->avg.util_est_uclamp, enqueued);
 
 	trace_sched_util_est_cfs_tp(cfs_rq);
 }
@@ -4907,6 +4930,9 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
 	enqueued  = cfs_rq->avg.util_est;
 	enqueued -= min_t(unsigned int, enqueued, _task_util_est(p));
 	WRITE_ONCE(cfs_rq->avg.util_est, enqueued);
+	enqueued  = cfs_rq->avg.util_est_uclamp;
+	enqueued -= _task_util_est_uclamp(p);
+	WRITE_ONCE(cfs_rq->avg.util_est_uclamp, enqueued);
 
 	trace_sched_util_est_cfs_tp(cfs_rq);
 }
@@ -4994,6 +5020,10 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	ewma  -= last_ewma_diff;
 	ewma >>= UTIL_EST_WEIGHT_SHIFT;
 done:
+	WRITE_ONCE(p->se.avg.util_est_uclamp,
+		   clamp(ewma,
+			 (unsigned int)uclamp_eff_value(p, UCLAMP_MIN),
+			 (unsigned int)uclamp_eff_value(p, UCLAMP_MAX)));
 	ewma |= UTIL_AVG_UNCHANGED;
 	WRITE_ONCE(p->se.avg.util_est, ewma);
 
-- 
2.34.1