From nobody Wed Feb 11 06:28:18 2026
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni, Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai, Shuah Khan, bristot@kernel.org, Phil Auld, Suleiman Souhlal, Youssef Esmat
Subject: [PATCH V6 1/6] sched/fair: Add trivial fair server
Date: Fri, 5 Apr 2024 19:28:00 +0200

From: Peter Zijlstra

Use deadline servers to service fair tasks.

This patch adds a fair_server deadline entity which acts as a container
for fair entities and can be used to fix starvation when higher priority
(wrt fair) tasks are monopolizing CPU(s).
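In short: each rq gets a dl_server entity that is not a task but a container. It exposes two callbacks to the deadline class, one saying whether fair tasks are queued and one picking the task to run when the server's bandwidth kicks in, and it is started when the first fair task is enqueued and stopped when the last one leaves. The user-space sketch below only illustrates that shape; every name in it (toy_rq, dl_server, toy_enqueue_fair) is made up for the example and none of it is the kernel code added by this patch.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins only; these are not the kernel's structures. */
struct task { const char *name; };

struct dl_server {
	bool started;
	int (*has_tasks)(void *rq);
	struct task *(*pick)(void *rq);
	void *rq;
};

struct toy_rq {
	int h_nr_running;		/* number of queued "fair" tasks */
	struct task *next;		/* whatever the fair class would pick */
	struct dl_server fair_server;
};

static int fair_has_tasks(void *rq) { return ((struct toy_rq *)rq)->h_nr_running; }
static struct task *fair_pick(void *rq) { return ((struct toy_rq *)rq)->next; }

/* Plays the role of fair_server_init(): wire the fair callbacks in. */
static void toy_fair_server_init(struct toy_rq *rq)
{
	rq->fair_server = (struct dl_server){
		.has_tasks = fair_has_tasks, .pick = fair_pick, .rq = rq,
	};
}

/* The server is started when the first fair task shows up. */
static void toy_enqueue_fair(struct toy_rq *rq, struct task *t)
{
	if (!rq->h_nr_running)
		rq->fair_server.started = true;
	rq->h_nr_running++;
	rq->next = t;
}

int main(void)
{
	struct toy_rq rq = { 0 };
	struct task t = { "fair-task" };

	toy_fair_server_init(&rq);
	toy_enqueue_fair(&rq, &t);

	if (rq.fair_server.started && rq.fair_server.has_tasks(rq.fair_server.rq))
		printf("server would hand the CPU to: %s\n",
		       rq.fair_server.pick(rq.fair_server.rq)->name);
	return 0;
}

The diff below does the real wiring: fair_server_init() registers fair_server_has_tasks()/fair_server_pick() with the per-rq server, and enqueue_task_fair()/dequeue_task_fair() start and stop it as cfs.h_nr_running transitions through zero.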
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Daniel Bristot de Oliveira --- kernel/sched/core.c | 24 ++++++++++++++++-------- kernel/sched/deadline.c | 23 +++++++++++++++++++++++ kernel/sched/fair.c | 25 +++++++++++++++++++++++++ kernel/sched/sched.h | 4 ++++ 4 files changed, 68 insertions(+), 8 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 7019a40457a6..04e2270487b7 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6007,6 +6007,14 @@ static void put_prev_task_balance(struct rq *rq, str= uct task_struct *prev, #endif =20 put_prev_task(rq, prev); + + /* + * We've updated @prev and no longer need the server link, clear it. + * Must be done before ->pick_next_task() because that can (re)set + * ->dl_server. + */ + if (prev->dl_server) + prev->dl_server =3D NULL; } =20 /* @@ -6037,6 +6045,13 @@ __pick_next_task(struct rq *rq, struct task_struct *= prev, struct rq_flags *rf) p =3D pick_next_task_idle(rq); } =20 + /* + * This is a normal CFS pick, but the previous could be a DL pick. + * Clear it as previous is no longer picked. + */ + if (prev->dl_server) + prev->dl_server =3D NULL; + /* * This is the fast path; it cannot be a DL server pick; * therefore even if @p =3D=3D @prev, ->dl_server must be NULL. @@ -6050,14 +6065,6 @@ __pick_next_task(struct rq *rq, struct task_struct *= prev, struct rq_flags *rf) restart: put_prev_task_balance(rq, prev, rf); =20 - /* - * We've updated @prev and no longer need the server link, clear it. - * Must be done before ->pick_next_task() because that can (re)set - * ->dl_server. - */ - if (prev->dl_server) - prev->dl_server =3D NULL; - for_each_class(class) { p =3D class->pick_next_task(rq); if (p) @@ -10051,6 +10058,7 @@ void __init sched_init(void) #endif /* CONFIG_SMP */ hrtick_rq_init(rq); atomic_set(&rq->nr_iowait, 0); + fair_server_init(rq); =20 #ifdef CONFIG_SCHED_CORE rq->core =3D rq; diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index a04a436af8cc..db5dc5c09106 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1382,6 +1382,13 @@ static void update_curr_dl_se(struct rq *rq, struct = sched_dl_entity *dl_se, s64 resched_curr(rq); } =20 + /* + * The fair server (sole dl_server) does not account for real-time + * workload because it is running fair work. 
+ */ + if (dl_se =3D=3D &rq->fair_server) + return; + /* * Because -- for now -- we share the rt bandwidth, we need to * account our runtime there too, otherwise actual rt tasks @@ -1415,15 +1422,31 @@ void dl_server_update(struct sched_dl_entity *dl_se= , s64 delta_exec) =20 void dl_server_start(struct sched_dl_entity *dl_se) { + struct rq *rq =3D dl_se->rq; + if (!dl_server(dl_se)) { + /* Disabled */ + dl_se->dl_runtime =3D 0; + dl_se->dl_deadline =3D 1000 * NSEC_PER_MSEC; + dl_se->dl_period =3D 1000 * NSEC_PER_MSEC; + dl_se->dl_server =3D 1; setup_new_dl_entity(dl_se); } + + if (!dl_se->dl_runtime) + return; + enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP); + if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl)) + resched_curr(dl_se->rq); } =20 void dl_server_stop(struct sched_dl_entity *dl_se) { + if (!dl_se->dl_runtime) + return; + dequeue_dl_entity(dl_se, DEQUEUE_SLEEP); } =20 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 03be0d1330a6..304697a80e9e 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6722,6 +6722,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *= p, int flags) */ util_est_enqueue(&rq->cfs, p); =20 + if (!rq->cfs.h_nr_running) + dl_server_start(&rq->fair_server); + /* * If in_iowait is set, the code below may not trigger any cpufreq * utilization updates, so do it here explicitly with the IOWAIT flag @@ -6866,6 +6869,9 @@ static void dequeue_task_fair(struct rq *rq, struct t= ask_struct *p, int flags) rq->next_balance =3D jiffies; =20 dequeue_throttle: + if (!rq->cfs.h_nr_running) + dl_server_stop(&rq->fair_server); + util_est_update(&rq->cfs, p, task_sleep); hrtick_update(rq); } @@ -8538,6 +8544,25 @@ static struct task_struct *__pick_next_task_fair(str= uct rq *rq) return pick_next_task_fair(rq, NULL, NULL); } =20 +static bool fair_server_has_tasks(struct sched_dl_entity *dl_se) +{ + return !!dl_se->rq->cfs.nr_running; +} + +static struct task_struct *fair_server_pick(struct sched_dl_entity *dl_se) +{ + return pick_next_task_fair(dl_se->rq, NULL, NULL); +} + +void fair_server_init(struct rq *rq) +{ + struct sched_dl_entity *dl_se =3D &rq->fair_server; + + init_dl_entity(dl_se); + + dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick); +} + /* * Account for a descheduled task: */ diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index d2242679239e..205e56929e15 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -340,6 +340,8 @@ extern void dl_server_init(struct sched_dl_entity *dl_s= e, struct rq *rq, dl_server_has_tasks_f has_tasks, dl_server_pick_f pick); =20 +extern void fair_server_init(struct rq *rq); + #ifdef CONFIG_CGROUP_SCHED =20 struct cfs_rq; @@ -1016,6 +1018,8 @@ struct rq { struct rt_rq rt; struct dl_rq dl; =20 + struct sched_dl_entity fair_server; + #ifdef CONFIG_FAIR_GROUP_SCHED /* list of leaf cfs_rq on this CPU: */ struct list_head leaf_cfs_rq_list; --=20 2.44.0 From nobody Wed Feb 11 06:28:18 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9C476171675 for ; Fri, 5 Apr 2024 17:28:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338113; cv=none; 
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni, Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai, Shuah Khan, bristot@kernel.org, Phil Auld, Suleiman Souhlal, Youssef Esmat
Subject: [PATCH V6 2/6] sched/deadline: Deferrable dl server
Date: Fri, 5 Apr 2024 19:28:01 +0200
Message-ID: <7b9c206e914ef257a2534199f25938ffafa3e59e.1712337227.git.bristot@kernel.org>

Among the motivations for the DL servers is the real-time throttling
mechanism. This mechanism works by throttling the rt_rq after it has run
for a long period without leaving space for fair tasks.

The base dl server avoids this problem by boosting fair tasks instead of
throttling the rt_rq. The catch is that it boosts without waiting for
actual starvation, which causes some non-intuitive cases. For example, an
IRQ dispatches two tasks on an idle system, a fair one and an RT one. The
DL server will be activated, running the fair task before the RT one.

This problem can be avoided by deferring the dl server activation. By
setting the defer option, the dl_server will dispatch a SCHED_DEADLINE
reservation with replenished runtime, but throttled. The dl_timer will be
set to fire at the defer time, (period - runtime) ns from the start time,
thus boosting the fair rq at the defer time.

If the fair scheduler has the opportunity to run while waiting for the
defer time, the dl server runtime will be consumed. If the runtime is
completely consumed before the defer time, the server will be replenished
while still in a throttled state.
Then, the dl_timer will be reset to the new defer time If the fair server reaches the defer time without consuming its runtime, the server will start running, following CBS rules (thus without breaking SCHED_DEADLINE). Then the server will continue the running state (without deferring) until it fair tasks are able to execute as regular fair scheduler (end of the starvation). Signed-off-by: Daniel Bristot de Oliveira --- include/linux/sched.h | 3 + kernel/sched/deadline.c | 296 ++++++++++++++++++++++++++++++++++------ kernel/sched/fair.c | 24 +++- kernel/sched/idle.c | 2 + kernel/sched/sched.h | 4 +- 5 files changed, 284 insertions(+), 45 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 3c2abbc587b4..4a405f0e64f8 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -643,6 +643,9 @@ struct sched_dl_entity { unsigned int dl_non_contending : 1; unsigned int dl_overrun : 1; unsigned int dl_server : 1; + unsigned int dl_defer : 1; + unsigned int dl_defer_armed : 1; + unsigned int dl_defer_running : 1; =20 /* * Bandwidth enforcement timer. Each -deadline task has its diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index db5dc5c09106..6ea9c05711ce 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -772,6 +772,15 @@ static inline void replenish_dl_new_period(struct sche= d_dl_entity *dl_se, /* for non-boosted task, pi_of(dl_se) =3D=3D dl_se */ dl_se->deadline =3D rq_clock(rq) + pi_of(dl_se)->dl_deadline; dl_se->runtime =3D pi_of(dl_se)->dl_runtime; + + /* + * If it is a deferred reservation, and the server + * is not handling an starvation case, defer it. + */ + if (dl_se->dl_defer & !dl_se->dl_defer_running) { + dl_se->dl_throttled =3D 1; + dl_se->dl_defer_armed =3D 1; + } } =20 /* @@ -810,6 +819,9 @@ static inline void setup_new_dl_entity(struct sched_dl_= entity *dl_se) replenish_dl_new_period(dl_se, rq); } =20 +static int start_dl_timer(struct sched_dl_entity *dl_se); +static bool dl_entity_overflow(struct sched_dl_entity *dl_se, u64 t); + /* * Pure Earliest Deadline First (EDF) scheduling does not deal with the * possibility of a entity lasting more than what it declared, and thus @@ -838,9 +850,18 @@ static void replenish_dl_entity(struct sched_dl_entity= *dl_se) /* * This could be the case for a !-dl task that is boosted. * Just go with full inherited parameters. + * + * Or, it could be the case of a deferred reservation that + * was not able to consume its runtime in background and + * reached this point with current u > U. + * + * In both cases, set a new period. */ - if (dl_se->dl_deadline =3D=3D 0) - replenish_dl_new_period(dl_se, rq); + if (dl_se->dl_deadline =3D=3D 0 || + (dl_se->dl_defer_armed && dl_entity_overflow(dl_se, rq_clock(rq)))) { + dl_se->deadline =3D rq_clock(rq) + pi_of(dl_se)->dl_deadline; + dl_se->runtime =3D pi_of(dl_se)->dl_runtime; + } =20 if (dl_se->dl_yielded && dl_se->runtime > 0) dl_se->runtime =3D 0; @@ -874,6 +895,37 @@ static void replenish_dl_entity(struct sched_dl_entity= *dl_se) dl_se->dl_yielded =3D 0; if (dl_se->dl_throttled) dl_se->dl_throttled =3D 0; + + /* + * If this is the replenishment of a deferred reservation, + * clear the flag and return. + */ + if (dl_se->dl_defer_armed) { + dl_se->dl_defer_armed =3D 0; + return; + } + + /* + * A this point, if the deferred server is not armed, and the deadline + * is in the future, if it is not running already, throttle the server + * and arm the defer timer. 
+ */ + if (dl_se->dl_defer && !dl_se->dl_defer_running && + dl_time_before(rq_clock(dl_se->rq), dl_se->deadline - dl_se->runtime)= ) { + if (!is_dl_boosted(dl_se) && dl_se->server_has_tasks(dl_se)) { + dl_se->dl_defer_armed =3D 1; + dl_se->dl_throttled =3D 1; + if (!start_dl_timer(dl_se)) { + /* + * If for whatever reason (delays), if a previous timer was + * queued but not serviced, cancel it. + */ + hrtimer_try_to_cancel(&dl_se->dl_timer); + dl_se->dl_defer_armed =3D 0; + dl_se->dl_throttled =3D 0; + } + } + } } =20 /* @@ -1024,6 +1076,15 @@ static void update_dl_entity(struct sched_dl_entity = *dl_se) } =20 replenish_dl_new_period(dl_se, rq); + } else if (dl_server(dl_se) && dl_se->dl_defer) { + /* + * The server can still use its previous deadline, so check if + * it left the dl_defer_running state. + */ + if (!dl_se->dl_defer_running) { + dl_se->dl_defer_armed =3D 1; + dl_se->dl_throttled =3D 1; + } } } =20 @@ -1056,8 +1117,20 @@ static int start_dl_timer(struct sched_dl_entity *dl= _se) * We want the timer to fire at the deadline, but considering * that it is actually coming from rq->clock and not from * hrtimer's time base reading. + * + * The deferred reservation will have its timer set to + * (deadline - runtime). At that point, the CBS rule will decide + * if the current deadline can be used, or if a replenishment is + * required to avoid add too much pressure on the system + * (current u > U). */ - act =3D ns_to_ktime(dl_next_period(dl_se)); + if (dl_se->dl_defer_armed) { + WARN_ON_ONCE(!dl_se->dl_throttled); + act =3D ns_to_ktime(dl_se->deadline - dl_se->runtime); + } else { + act =3D ns_to_ktime(dl_next_period(dl_se)); + } + now =3D hrtimer_cb_get_time(timer); delta =3D ktime_to_ns(now) - rq_clock(rq); act =3D ktime_add_ns(act, delta); @@ -1107,6 +1180,64 @@ static void __push_dl_task(struct rq *rq, struct rq_= flags *rf) #endif } =20 +/* a defer timer will not be reset if the runtime consumed was < dl_server= _min_res */ +static const u64 dl_server_min_res =3D 1 * NSEC_PER_MSEC; + +static enum hrtimer_restart dl_server_timer(struct hrtimer *timer, struct = sched_dl_entity *dl_se) +{ + struct rq *rq =3D rq_of_dl_se(dl_se); + enum hrtimer_restart restart =3D 0; + struct rq_flags rf; + u64 fw; + + rq_lock(rq, &rf); + if (dl_se->dl_throttled) { + sched_clock_tick(); + update_rq_clock(rq); + + if (!dl_se->dl_runtime) + goto unlock; + + if (!dl_se->server_has_tasks(dl_se)) { + replenish_dl_entity(dl_se); + goto unlock; + } + + if (dl_se->dl_defer_armed) { + /* + * First check if the server could consume runtime in background. + * If so, it is possible to push the defer timer for this amount + * of time. The dl_server_min_res serves as a limit to avoid + * forwarding the timer for a too small amount of time. + */ + if (dl_time_before(rq_clock(dl_se->rq), + (dl_se->deadline - dl_se->runtime - dl_server_min_res))) { + + /* reset the defer timer */ + fw =3D dl_se->deadline - rq_clock(dl_se->rq) - dl_se->runtime; + + hrtimer_forward_now(timer, ns_to_ktime(fw)); + restart =3D 1; + goto unlock; + } + + dl_se->dl_defer_running =3D 1; + } + + enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH); + + if (!dl_task(dl_se->rq->curr) || + dl_entity_preempt(dl_se, &dl_se->rq->curr->dl)) + resched_curr(rq); + + __push_dl_task(rq, &rf); + } +unlock: + rq_unlock(rq, &rf); + + return restart ? HRTIMER_RESTART : HRTIMER_NORESTART; +} + /* * This is the bandwidth enforcement timer callback. 
If here, we know * a task is not on its dl_rq, since the fact that the timer was running @@ -1129,28 +1260,8 @@ static enum hrtimer_restart dl_task_timer(struct hrt= imer *timer) struct rq_flags rf; struct rq *rq; =20 - if (dl_server(dl_se)) { - struct rq *rq =3D rq_of_dl_se(dl_se); - struct rq_flags rf; - - rq_lock(rq, &rf); - if (dl_se->dl_throttled) { - sched_clock_tick(); - update_rq_clock(rq); - - if (dl_se->server_has_tasks(dl_se)) { - enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH); - resched_curr(rq); - __push_dl_task(rq, &rf); - } else { - replenish_dl_entity(dl_se); - } - - } - rq_unlock(rq, &rf); - - return HRTIMER_NORESTART; - } + if (dl_server(dl_se)) + return dl_server_timer(timer, dl_se); =20 p =3D dl_task_of(dl_se); rq =3D task_rq_lock(p, &rf); @@ -1320,22 +1431,10 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, s= truct sched_dl_entity *dl_se) return (delta * u_act) >> BW_SHIFT; } =20 -static inline void -update_stats_dequeue_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se, - int flags); -static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se= , s64 delta_exec) +s64 dl_scalled_delta_exec(struct rq *rq, struct sched_dl_entity *dl_se, s6= 4 delta_exec) { s64 scaled_delta_exec; =20 - if (unlikely(delta_exec <=3D 0)) { - if (unlikely(dl_se->dl_yielded)) - goto throttle; - return; - } - - if (dl_entity_is_special(dl_se)) - return; - /* * For tasks that participate in GRUB, we implement GRUB-PA: the * spare reclaimed bandwidth is used to clock down frequency. @@ -1354,8 +1453,64 @@ static void update_curr_dl_se(struct rq *rq, struct = sched_dl_entity *dl_se, s64 scaled_delta_exec =3D cap_scale(scaled_delta_exec, scale_cpu); } =20 + return scaled_delta_exec; +} + +static inline void +update_stats_dequeue_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se, + int flags); +static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se= , s64 delta_exec) +{ + s64 scaled_delta_exec; + + if (unlikely(delta_exec <=3D 0)) { + if (unlikely(dl_se->dl_yielded)) + goto throttle; + return; + } + + if (dl_server(dl_se) && dl_se->dl_throttled && !dl_se->dl_defer) + return; + + if (dl_entity_is_special(dl_se)) + return; + + scaled_delta_exec =3D dl_scalled_delta_exec(rq, dl_se, delta_exec); + dl_se->runtime -=3D scaled_delta_exec; =20 + /* + * The fair server can consume its runtime while throttled (not queued/ + * running as regular CFS). + * + * If the server consumes its entire runtime in this state. The server + * is not required for the current period. Thus, reset the server by + * starting a new period, pushing the activation. + */ + if (dl_se->dl_defer && dl_se->dl_throttled && dl_runtime_exceeded(dl_se))= { + /* + * If the server was previously activated - the starving condition + * took place, it this point it went away because the fair scheduler + * was able to get runtime in background. So return to the initial + * state. + */ + dl_se->dl_defer_running =3D 0; + + hrtimer_try_to_cancel(&dl_se->dl_timer); + + replenish_dl_new_period(dl_se, dl_se->rq); + + /* + * Not being able to start the timer seems problematic. If it could not + * be started for whatever reason, we need to "unthrottle" the DL server + * and queue right away. Otherwise nothing might queue it. That's similar + * to what enqueue_dl_entity() does on start_dl_timer=3D=3D0. For now, j= ust warn. 
+ */ + WARN_ON_ONCE(!start_dl_timer(dl_se)); + + return; + } + throttle: if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) { dl_se->dl_throttled =3D 1; @@ -1415,9 +1570,47 @@ static void update_curr_dl_se(struct rq *rq, struct = sched_dl_entity *dl_se, s64 } } =20 +/* + * In the non-defer mode, the idle time is not accounted, as the + * server provides a guarantee. + * + * If the dl_server is in defer mode, the idle time is also considered + * as time available for the fair server. This avoids creating a + * regression with the rt throttling behavior where the idle time did + * not create a penalty to the rt schedulers. + */ +void dl_server_update_idle_time(struct rq *rq, struct task_struct *p) +{ + s64 delta_exec, scaled_delta_exec; + + if (!rq->fair_server.dl_defer) + return; + + /* no need to discount more */ + if (rq->fair_server.runtime < 0) + return; + + delta_exec =3D rq_clock_task(rq) - p->se.exec_start; + if (delta_exec < 0) + return; + + scaled_delta_exec =3D dl_scalled_delta_exec(rq, &rq->fair_server, delta_e= xec); + + rq->fair_server.runtime -=3D scaled_delta_exec; + + if (rq->fair_server.runtime < 0) { + rq->fair_server.dl_defer_running =3D 0; + rq->fair_server.runtime =3D 0; + } + + p->se.exec_start =3D rq_clock_task(rq); +} + void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec) { - update_curr_dl_se(dl_se->rq, dl_se, delta_exec); + /* 0 runtime =3D fair server disabled */ + if (dl_se->dl_runtime) + update_curr_dl_se(dl_se->rq, dl_se, delta_exec); } =20 void dl_server_start(struct sched_dl_entity *dl_se) @@ -1431,6 +1624,7 @@ void dl_server_start(struct sched_dl_entity *dl_se) dl_se->dl_period =3D 1000 * NSEC_PER_MSEC; =20 dl_se->dl_server =3D 1; + dl_se->dl_defer =3D 1; setup_new_dl_entity(dl_se); } =20 @@ -1448,6 +1642,9 @@ void dl_server_stop(struct sched_dl_entity *dl_se) return; =20 dequeue_dl_entity(dl_se, DEQUEUE_SLEEP); + hrtimer_try_to_cancel(&dl_se->dl_timer); + dl_se->dl_defer_armed =3D 0; + dl_se->dl_throttled =3D 0; } =20 void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq, @@ -1759,7 +1956,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int = flags) * be counted in the active utilization; hence, we need to call * add_running_bw(). */ - if (dl_se->dl_throttled && !(flags & ENQUEUE_REPLENISH)) { + if (!dl_se->dl_defer && dl_se->dl_throttled && !(flags & ENQUEUE_REPLENIS= H)) { if (flags & ENQUEUE_WAKEUP) task_contending(dl_se, flags); =20 @@ -1781,6 +1978,25 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int= flags) setup_new_dl_entity(dl_se); } =20 + /* + * If the reservation is still throttled, e.g., it got replenished but is= a + * deferred task and still got to wait, don't enqueue. + */ + if (dl_se->dl_throttled && start_dl_timer(dl_se)) + return; + + /* + * We're about to enqueue, make sure we're not ->dl_throttled! + * In case the timer was not started, say because the defer time + * has passed, mark as not throttled and mark unarmed. + * Also cancel earlier timers, since letting those run is pointless. 
+ */ + if (dl_se->dl_throttled) { + hrtimer_try_to_cancel(&dl_se->dl_timer); + dl_se->dl_defer_armed =3D 0; + dl_se->dl_throttled =3D 0; + } + __enqueue_dl_entity(dl_se); } =20 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 304697a80e9e..fdeb4a61575c 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1156,12 +1156,13 @@ s64 update_curr_common(struct rq *rq) static void update_curr(struct cfs_rq *cfs_rq) { struct sched_entity *curr =3D cfs_rq->curr; + struct rq *rq =3D rq_of(cfs_rq); s64 delta_exec; =20 if (unlikely(!curr)) return; =20 - delta_exec =3D update_curr_se(rq_of(cfs_rq), curr); + delta_exec =3D update_curr_se(rq, curr); if (unlikely(delta_exec <=3D 0)) return; =20 @@ -1169,8 +1170,19 @@ static void update_curr(struct cfs_rq *cfs_rq) update_deadline(cfs_rq, curr); update_min_vruntime(cfs_rq); =20 - if (entity_is_task(curr)) - update_curr_task(task_of(curr), delta_exec); + if (entity_is_task(curr)) { + struct task_struct *p =3D task_of(curr); + + update_curr_task(p, delta_exec); + + /* + * Any fair task that runs outside of fair_server should + * account against fair_server such that it can account for + * this time and possibly avoid running this period. + */ + if (p->dl_server !=3D &rq->fair_server) + dl_server_update(&rq->fair_server, delta_exec); + } =20 account_cfs_rq_runtime(cfs_rq, delta_exec); } @@ -6722,8 +6734,12 @@ enqueue_task_fair(struct rq *rq, struct task_struct = *p, int flags) */ util_est_enqueue(&rq->cfs, p); =20 - if (!rq->cfs.h_nr_running) + if (!rq->cfs.h_nr_running) { + /* Account for idle runtime */ + if (!rq->nr_running) + dl_server_update_idle_time(rq, rq->curr); dl_server_start(&rq->fair_server); + } =20 /* * If in_iowait is set, the code below may not trigger any cpufreq diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c index 6135fbe83d68..5f8806bc6924 100644 --- a/kernel/sched/idle.c +++ b/kernel/sched/idle.c @@ -458,12 +458,14 @@ static void wakeup_preempt_idle(struct rq *rq, struct= task_struct *p, int flags) =20 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev) { + dl_server_update_idle_time(rq, prev); } =20 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bo= ol first) { update_idle_core(rq); schedstat_inc(rq->sched_goidle); + next->se.exec_start =3D rq_clock_task(rq); } =20 #ifdef CONFIG_SMP diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 205e56929e15..e70e17be83c3 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -312,7 +312,7 @@ extern bool __checkparam_dl(const struct sched_attr *at= tr); extern bool dl_param_changed(struct task_struct *p, const struct sched_att= r *attr); extern int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const = struct cpumask *trial); extern int dl_bw_check_overflow(int cpu); - +extern s64 dl_scalled_delta_exec(struct rq *rq, struct sched_dl_entity *dl= _se, s64 delta_exec); /* * SCHED_DEADLINE supports servers (nested scheduling) with the following * interface: @@ -340,6 +340,8 @@ extern void dl_server_init(struct sched_dl_entity *dl_s= e, struct rq *rq, dl_server_has_tasks_f has_tasks, dl_server_pick_f pick); =20 +extern void dl_server_update_idle_time(struct rq *rq, + struct task_struct *p); extern void fair_server_init(struct rq *rq); =20 #ifdef CONFIG_CGROUP_SCHED --=20 2.44.0 From nobody Wed Feb 11 06:28:18 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni, Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai, Shuah Khan, bristot@kernel.org, Phil Auld, Suleiman Souhlal, Youssef Esmat
Subject: [PATCH V6 3/6] sched/fair: Fair server interface
Date: Fri, 5 Apr 2024 19:28:02 +0200
Message-ID: <1abba9e7f47ad4a5dfd8b2dfb59aa607983cdce4.1712337227.git.bristot@kernel.org>

Add an interface for fair server setup on debugfs.

Each CPU has three files under /debug/sched/fair_server/cpu{ID}:

 - runtime: set runtime in ns
 - period:  set period in ns
 - defer:   on/off for the defer mechanism

This then leaves /proc/sys/kernel/sched_rt_{period,runtime}_us to set
bounds on admission control.

The interface also adds the server to the dl bandwidth accounting.
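As a usage illustration only (not part of the patch): assuming debugfs is mounted at the usual /sys/kernel/debug, the per-CPU knobs end up under /sys/kernel/debug/sched/fair_server/cpu<N>/ and can be driven by a small helper like the one below. The 50 ms / 1 s / defer=1 values are arbitrary examples, and the helper itself is made up for the sketch.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Write a decimal value into one of the fair_server debugfs files. */
static int write_knob(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, strlen(val));
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	const char *base = "/sys/kernel/debug/sched/fair_server/cpu0";
	char path[128];

	snprintf(path, sizeof(path), "%s/runtime", base);
	if (write_knob(path, "50000000"))	/* 50 ms, in ns */
		perror("runtime");

	snprintf(path, sizeof(path), "%s/period", base);
	if (write_knob(path, "1000000000"))	/* 1 s, in ns */
		perror("period");

	snprintf(path, sizeof(path), "%s/defer", base);
	if (write_knob(path, "1"))		/* defer mechanism on */
		perror("defer");

	return 0;
}

Internally, each write takes the rq lock, stops the server if fair tasks are queued, validates the new parameters (runtime > period, out-of-range period, or defer > 1 return -EINVAL; a bandwidth overflow in dl_server_apply_params() returns -EBUSY), and then restarts the server.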
Signed-off-by: Daniel Bristot de Oliveira --- kernel/sched/deadline.c | 111 ++++++++++++++++++---- kernel/sched/debug.c | 206 ++++++++++++++++++++++++++++++++++++++++ kernel/sched/sched.h | 3 + kernel/sched/topology.c | 8 ++ 4 files changed, 311 insertions(+), 17 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 6ea9c05711ce..dd38370aa276 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -321,19 +321,12 @@ void sub_running_bw(struct sched_dl_entity *dl_se, st= ruct dl_rq *dl_rq) __sub_running_bw(dl_se->dl_bw, dl_rq); } =20 -static void dl_change_utilization(struct task_struct *p, u64 new_bw) +static void dl_rq_change_utilization(struct rq *rq, struct sched_dl_entity= *dl_se, u64 new_bw) { - struct rq *rq; - - WARN_ON_ONCE(p->dl.flags & SCHED_FLAG_SUGOV); - - if (task_on_rq_queued(p)) - return; + if (dl_se->dl_non_contending) { + sub_running_bw(dl_se, &rq->dl); + dl_se->dl_non_contending =3D 0; =20 - rq =3D task_rq(p); - if (p->dl.dl_non_contending) { - sub_running_bw(&p->dl, &rq->dl); - p->dl.dl_non_contending =3D 0; /* * If the timer handler is currently running and the * timer cannot be canceled, inactive_task_timer() @@ -341,13 +334,25 @@ static void dl_change_utilization(struct task_struct = *p, u64 new_bw) * will not touch the rq's active utilization, * so we are still safe. */ - if (hrtimer_try_to_cancel(&p->dl.inactive_timer) =3D=3D 1) - put_task_struct(p); + if (hrtimer_try_to_cancel(&dl_se->inactive_timer) =3D=3D 1) { + if (!dl_server(dl_se)) + put_task_struct(dl_task_of(dl_se)); + } } - __sub_rq_bw(p->dl.dl_bw, &rq->dl); + __sub_rq_bw(dl_se->dl_bw, &rq->dl); __add_rq_bw(new_bw, &rq->dl); } =20 +static void dl_change_utilization(struct task_struct *p, u64 new_bw) +{ + WARN_ON_ONCE(p->dl.flags & SCHED_FLAG_SUGOV); + + if (task_on_rq_queued(p)) + return; + + dl_rq_change_utilization(task_rq(p), &p->dl, new_bw); +} + static void __dl_clear_params(struct sched_dl_entity *dl_se); =20 /* @@ -1191,6 +1196,11 @@ static enum hrtimer_restart dl_server_timer(struct h= rtimer *timer, struct sched_ u64 fw; =20 rq_lock(rq, &rf); + + if (!dl_se->dl_runtime) { + goto unlock; + } + if (dl_se->dl_throttled) { sched_clock_tick(); update_rq_clock(rq); @@ -1617,11 +1627,17 @@ void dl_server_start(struct sched_dl_entity *dl_se) { struct rq *rq =3D dl_se->rq; =20 + /* + * XXX: the apply do not work fine at the init phase for the + * fair server because things are not yet set. We need to improve + * this before getting generic. + */ if (!dl_server(dl_se)) { /* Disabled */ - dl_se->dl_runtime =3D 0; - dl_se->dl_deadline =3D 1000 * NSEC_PER_MSEC; - dl_se->dl_period =3D 1000 * NSEC_PER_MSEC; + u64 runtime =3D 0; + u64 period =3D 1000 * NSEC_PER_MSEC; + + dl_server_apply_params(dl_se, runtime, period, 1); =20 dl_se->dl_server =3D 1; dl_se->dl_defer =3D 1; @@ -1656,6 +1672,67 @@ void dl_server_init(struct sched_dl_entity *dl_se, s= truct rq *rq, dl_se->server_pick =3D pick; } =20 +void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq) +{ + u64 new_bw =3D dl_se->dl_bw; + struct dl_bw *dl_b; + int cpu =3D cpu_of(rq); + + dl_b =3D dl_bw_of(cpu_of(rq)); + raw_spin_lock(&dl_b->lock); + + __dl_add(dl_b, new_bw, dl_bw_cpus(cpu)); + + raw_spin_unlock(&dl_b->lock); +} + +int dl_server_apply_params(struct sched_dl_entity *dl_se, u64 runtime, u64= period, bool init) +{ + u64 old_bw =3D init ? 
0 : to_ratio(dl_se->dl_period, dl_se->dl_runtime); + u64 new_bw =3D to_ratio(period, runtime); + struct rq *rq =3D dl_se->rq; + int cpu =3D cpu_of(rq); + struct dl_bw *dl_b; + unsigned long cap; + int retval =3D 0; + int cpus; + + dl_b =3D dl_bw_of(cpu); + raw_spin_lock(&dl_b->lock); + cpus =3D dl_bw_cpus(cpu); + cap =3D dl_bw_capacity(cpu); + + if (__dl_overflow(dl_b, cap, old_bw, new_bw)) { + retval =3D -EBUSY; + goto out; + } + + if (init) { + __add_rq_bw(new_bw, &rq->dl); + __dl_add(dl_b, new_bw, cpus); + } else { + __dl_sub(dl_b, dl_se->dl_bw, cpus); + __dl_add(dl_b, new_bw, cpus); + + dl_rq_change_utilization(rq, dl_se, new_bw); + } + + dl_se->dl_runtime =3D runtime; + dl_se->dl_deadline =3D period; + dl_se->dl_period =3D period; + + dl_se->runtime =3D 0; + dl_se->deadline =3D 0; + + dl_se->dl_bw =3D to_ratio(dl_se->dl_period, dl_se->dl_runtime); + dl_se->dl_density =3D to_ratio(dl_se->dl_deadline, dl_se->dl_runtime); + +out: + raw_spin_unlock(&dl_b->lock); + + return retval; +} + /* * Update the current task's runtime statistics (provided it is still * a -deadline task and has not been removed from the dl_rq). diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 8d5d98a5834d..5da3297270cd 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -333,8 +333,212 @@ static const struct file_operations sched_debug_fops = =3D { .release =3D seq_release, }; =20 +enum dl_param { + DL_RUNTIME =3D 0, + DL_PERIOD, + DL_DEFER +}; + +static unsigned long fair_server_period_max =3D (1 << 22) * NSEC_PER_USEC;= /* ~4 seconds */ +static unsigned long fair_server_period_min =3D (100) * NSEC_PER_USEC; = /* 100 us */ + +static ssize_t sched_fair_server_write(struct file *filp, const char __use= r *ubuf, + size_t cnt, loff_t *ppos, enum dl_param param) +{ + long cpu =3D (long) ((struct seq_file *) filp->private_data)->private; + u64 runtime, period, defer; + struct rq *rq =3D cpu_rq(cpu); + size_t err; + int retval; + u64 value; + + err =3D kstrtoull_from_user(ubuf, cnt, 10, &value); + if (err) + return err; + + scoped_guard (rq_lock_irqsave, rq) { + + runtime =3D rq->fair_server.dl_runtime; + period =3D rq->fair_server.dl_period; + defer =3D rq->fair_server.dl_defer; + + switch (param) { + case DL_RUNTIME: + if (runtime =3D=3D value) + goto out; + runtime =3D value; + break; + case DL_PERIOD: + if (value =3D=3D period) + goto out; + period =3D value; + break; + case DL_DEFER: + if (defer =3D=3D value) + goto out; + defer =3D value; + break; + } + + if (runtime > period || + period > fair_server_period_max || + period < fair_server_period_min || + defer > 1) { + cnt =3D -EINVAL; + goto out; + } + + if (rq->cfs.h_nr_running) { + update_rq_clock(rq); + dl_server_stop(&rq->fair_server); + } + + /* + * The defer does not change utilization, so just + * setting it is enough. 
+ */ + if (rq->fair_server.dl_defer !=3D defer) { + rq->fair_server.dl_defer =3D defer; + } else { + retval =3D dl_server_apply_params(&rq->fair_server, runtime, period, 0); + if (retval) + cnt =3D retval; + } + + if (!runtime) + printk_deferred("Fair server disabled in CPU %d, system may crash due t= o starvation.\n", + cpu_of(rq)); + + if (rq->cfs.h_nr_running) + dl_server_start(&rq->fair_server); + } + +out: + *ppos +=3D cnt; + return cnt; +} + +static size_t sched_fair_server_show(struct seq_file *m, void *v, enum dl_= param param) +{ + unsigned long cpu =3D (unsigned long) m->private; + struct rq *rq =3D cpu_rq(cpu); + u64 value; + + switch (param) { + case DL_RUNTIME: + value =3D rq->fair_server.dl_runtime; + break; + case DL_PERIOD: + value =3D rq->fair_server.dl_period; + break; + case DL_DEFER: + value =3D rq->fair_server.dl_defer; + } + + seq_printf(m, "%llu\n", value); + return 0; + +} + +static ssize_t +sched_fair_server_runtime_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + return sched_fair_server_write(filp, ubuf, cnt, ppos, DL_RUNTIME); +} + +static int sched_fair_server_runtime_show(struct seq_file *m, void *v) +{ + return sched_fair_server_show(m, v, DL_RUNTIME); +} + +static int sched_fair_server_runtime_open(struct inode *inode, struct file= *filp) +{ + return single_open(filp, sched_fair_server_runtime_show, inode->i_private= ); +} + +static const struct file_operations fair_server_runtime_fops =3D { + .open =3D sched_fair_server_runtime_open, + .write =3D sched_fair_server_runtime_write, + .read =3D seq_read, + .llseek =3D seq_lseek, + .release =3D single_release, +}; + +static ssize_t +sched_fair_server_period_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + return sched_fair_server_write(filp, ubuf, cnt, ppos, DL_PERIOD); +} + +static int sched_fair_server_period_show(struct seq_file *m, void *v) +{ + return sched_fair_server_show(m, v, DL_PERIOD); +} + +static int sched_fair_server_period_open(struct inode *inode, struct file = *filp) +{ + return single_open(filp, sched_fair_server_period_show, inode->i_private); +} + +static const struct file_operations fair_server_period_fops =3D { + .open =3D sched_fair_server_period_open, + .write =3D sched_fair_server_period_write, + .read =3D seq_read, + .llseek =3D seq_lseek, + .release =3D single_release, +}; + +static ssize_t +sched_fair_server_defer_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + return sched_fair_server_write(filp, ubuf, cnt, ppos, DL_DEFER); +} + +static int sched_fair_server_defer_show(struct seq_file *m, void *v) +{ + return sched_fair_server_show(m, v, DL_DEFER); +} + +static int sched_fair_server_defer_open(struct inode *inode, struct file *= filp) +{ + return single_open(filp, sched_fair_server_defer_show, inode->i_private); +} + +static const struct file_operations fair_server_defer_fops =3D { + .open =3D sched_fair_server_defer_open, + .write =3D sched_fair_server_defer_write, + .read =3D seq_read, + .llseek =3D seq_lseek, + .release =3D single_release, +}; + static struct dentry *debugfs_sched; =20 +static void debugfs_fair_server_init(void) +{ + struct dentry *d_fair; + unsigned long cpu; + + d_fair =3D debugfs_create_dir("fair_server", debugfs_sched); + if (!d_fair) + return; + + for_each_possible_cpu(cpu) { + struct dentry *d_cpu; + char buf[32]; + + snprintf(buf, sizeof(buf), "cpu%lu", cpu); + d_cpu =3D debugfs_create_dir(buf, d_fair); + + debugfs_create_file("runtime", 0644, d_cpu, (void 
*) cpu, &fair_server_r= untime_fops); + debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_pe= riod_fops); + debugfs_create_file("defer", 0644, d_cpu, (void *) cpu, &fair_server_def= er_fops); + } +} + static __init int sched_init_debug(void) { struct dentry __maybe_unused *numa; @@ -374,6 +578,8 @@ static __init int sched_init_debug(void) =20 debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops= ); =20 + debugfs_fair_server_init(); + return 0; } late_initcall(sched_init_debug); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index e70e17be83c3..a80a236da57c 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -343,6 +343,9 @@ extern void dl_server_init(struct sched_dl_entity *dl_s= e, struct rq *rq, extern void dl_server_update_idle_time(struct rq *rq, struct task_struct *p); extern void fair_server_init(struct rq *rq); +extern void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct = rq *rq); +extern int dl_server_apply_params(struct sched_dl_entity *dl_se, + u64 runtime, u64 period, bool init); =20 #ifdef CONFIG_CGROUP_SCHED =20 diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c index 99ea5986038c..ecb089c4967f 100644 --- a/kernel/sched/topology.c +++ b/kernel/sched/topology.c @@ -517,6 +517,14 @@ void rq_attach_root(struct rq *rq, struct root_domain = *rd) if (cpumask_test_cpu(rq->cpu, cpu_active_mask)) set_rq_online(rq); =20 + /* + * Because the rq is not a task, dl_add_task_root_domain() did not + * move the fair server bw to the rd if it already started. + * Add it now. + */ + if (rq->fair_server.dl_server) + __dl_server_attach_root(&rq->fair_server, rq); + rq_unlock_irqrestore(rq, &rf); =20 if (old_rd) --=20 2.44.0 From nobody Wed Feb 11 06:28:18 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5EC571CFBC for ; Fri, 5 Apr 2024 17:33:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338389; cv=none; b=hgHVdmJFPrRJE7jg/K2NPEknXU5wmzbQoXVbwiBpilOmvAQHA33O/sXJ9YXCidlHYqDTL52Tukwvy96waEpZOncxs/ZjSpka0QEFA0DGyFaTsswUJaLlcfdGRIXD2LfmTsM+IHa5qNw9zwxzAwvhemWpsLUjYo1WtilrNzwIgso= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338389; c=relaxed/simple; bh=Sw94HTzOtmMY+TFVC6IZuLqXE+LH2LZl0+TfstCUCW4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=TJr/Q9GgClrxxf6f03EsCeLu5Poq/LoVDqghzUEB2fWBhUIaV+BWWc8a4ScrtuU8B3DjJueZiWcv7MkkP6lMWaOrgQug0cO1TTVVs1YSzyFAyLDB3eXfoyRO4PWS8yeVXq8UKxOlUZ/hO1iR8Q/VMu3GHBuFTeHV3Rh2GA+Z/vE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=hMUYFlk5; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="hMUYFlk5" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 98008C433C7; Fri, 5 Apr 2024 17:33:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1712338388; bh=Sw94HTzOtmMY+TFVC6IZuLqXE+LH2LZl0+TfstCUCW4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hMUYFlk5OwSfGVgl7biDNqdtdgTG+oKBSqTqsjTiqPfI567nkrVa8hblslsgHHMyh 
xGYe85tCN4zJEwwOTxnc1FinHJy7LtNecOhJ3i6FgtfH4+G/hYdl5e+QuZGpTZ3wbx dO0AtNVG7SC6C8GO4M7+kRIza6ihxOI9CI+CYiiKkTE/0L0aO0CCO0pv8rJgXdUrxA Nkz8eW7dmo+7rPclOgzyKoU00WW7cZ7z5zuTpyb2jd0qzciVcAbi1z95A90uRhEoD1 eJHDrp6ou9VS7YaFVbW5Vt/H2+1proeqJk3zhI+aEd3yb32Y+qgBO4HXVGCvdVQTBE A6oQq9sMXKQrw== From: Daniel Bristot de Oliveira To: Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Valentin Schneider , linux-kernel@vger.kernel.org, Luca Abeni , Tommaso Cucinotta , Thomas Gleixner , Joel Fernandes , Vineeth Pillai , Shuah Khan , bristot@kernel.org, Phil Auld , Suleiman Souhlal , Youssef Esmat Subject: [PATCH V6 4/6] sched/core: Fix priority checking for DL server picks Date: Fri, 5 Apr 2024 19:32:52 +0200 Message-ID: <5c199284e572a65e71f445be3c26d2711834d910.1712337227.git.bristot@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: "Joel Fernandes (Google)" In core scheduling, a DL server pick (which is CFS task) should be given higher priority than tasks in other classes. Not doing so causes CFS starvation. A kselftest is added later to demonstrate this. A CFS task that is competing with RT tasks can be completely starved without this and the DL server's boosting completely ignored. Fix these problems. Reviewed-by: Vineeth Pillai Reported-by: Suleiman Souhlal Signed-off-by: Joel Fernandes (Google) Signed-off-by: Daniel Bristot de Oliveira --- kernel/sched/core.c | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 04e2270487b7..4881e797ae07 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -162,6 +162,9 @@ static inline int __task_prio(const struct task_struct = *p) if (p->sched_class =3D=3D &stop_sched_class) /* trumps deadline */ return -2; =20 + if (p->dl_server) + return -1; /* deadline */ + if (rt_prio(p->prio)) /* includes deadline */ return p->prio; /* [-1, 99] */ =20 @@ -191,8 +194,24 @@ static inline bool prio_less(const struct task_struct = *a, if (-pb < -pa) return false; =20 - if (pa =3D=3D -1) /* dl_prio() doesn't work because of stop_class above */ - return !dl_time_before(a->dl.deadline, b->dl.deadline); + if (pa =3D=3D -1) { /* dl_prio() doesn't work because of stop_class above= */ + const struct sched_dl_entity *a_dl, *b_dl; + + a_dl =3D &a->dl; + /* + * Since,'a' and 'b' can be CFS tasks served by DL server, + * __task_prio() can return -1 (for DL) even for those. In that + * case, get to the dl_server's DL entity. 
+ */ + if (a->dl_server) + a_dl =3D a->dl_server; + + b_dl =3D &b->dl; + if (b->dl_server) + b_dl =3D b->dl_server; + + return !dl_time_before(a_dl->deadline, b_dl->deadline); + } =20 if (pa =3D=3D MAX_RT_PRIO + MAX_NICE) /* fair */ return cfs_prio_less(a, b, in_fi); --=20 2.44.0 From nobody Wed Feb 11 06:28:18 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D953F1CFBC for ; Fri, 5 Apr 2024 17:33:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338430; cv=none; b=X4O0S5oTMfxLDU2IuoHRfZF+UfTZoMbajTOa8Ex9MLDqh3A0w9iidY5ZvZiEl5AhDiJ0U0ctr8N/zxICMMlt4G+0Ne8q9oilV+dlglclLvPF2xavY3vE+ALvhldSbXeK10UZqk4BCWDJktq5f5FZ4LQqW67Jdaprfpfr5788OPM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338430; c=relaxed/simple; bh=1BpkzgEXIu2Ef0PumiErjzWEX1nn+L3uV3QP5SZbExI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bWp3uXSfH+NA5/8XE6IBv9KJdSZMCjfFllkJy6thiIti1WIl/2rf1finefhDgdkDuj+B6IfBHegllGYfevk+MvsF8vp666OeglZ2u/X2DOIuYbtjyeLvKmxKm6C0UtJ+hb3yraNxs4g8woXzW3Eg5MVqLfMwcNGE9Zk36qMSVAQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=f1Hy0+Hr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="f1Hy0+Hr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45423C433F1; Fri, 5 Apr 2024 17:33:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1712338430; bh=1BpkzgEXIu2Ef0PumiErjzWEX1nn+L3uV3QP5SZbExI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=f1Hy0+Hr5YzJGuPpDrwJQO3y3Kj/6hfUCuAjrYQro2f/8g4g29XIlGUqGP3RUtRX2 5gdAOtxBOQ6qf1rD0VvMbT+glG4AAlJzuMx76y+50DqbikknMefTKrPRgJTkvGUzWq cF5HUoqDr0SIJJjmunlFIUrg7TkTtDs8PGJRzqduhnvLh2S8JWg9yY8SbkNc70md6m xIqm2kIfviYOVdBtOfcUV5z1swJdDzpsHB3HMeDX7PimbSDh4RqtPkLGIirhPM5AUa AHsZ4aaKiFhWCFDflnc2DZLfWYq56cXJr6FUOTHnYtooKsi9Vri/JRX862Cnd7tyRT LXWueo0sM+uZQ== From: Daniel Bristot de Oliveira To: Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Valentin Schneider , linux-kernel@vger.kernel.org, Luca Abeni , Tommaso Cucinotta , Thomas Gleixner , Joel Fernandes , Vineeth Pillai , Shuah Khan , bristot@kernel.org, Phil Auld , Suleiman Souhlal , Youssef Esmat Subject: [PATCH V6 5/6] sched/core: Fix picking of tasks for core scheduling with DL server Date: Fri, 5 Apr 2024 19:33:39 +0200 Message-ID: <527a56dd5190a88da9135992d37285caa15024b3.1712337227.git.bristot@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: "Joel Fernandes (Google)" * Use simple CFS pick_task for DL pick_task DL server's pick_task calls CFS's pick_next_task_fair(), this is wrong because core scheduling's pick_task only calls CFS's pick_task() for evaluation / checking of the CFS task (comparing across CPUs), not for actually 
affirmatively picking the next task. This causes RB tree corruption issues in CFS that were found by syzbot. * Make pick_task_fair clear DL server A DL task pick might set ->dl_server, but it is possible the task will never run (say the other HT has a stop task). If the CFS task is picked in the future directly (say without DL server), ->dl_server will be set. So clear it in pick_task_fair(). This fixes the KASAN issue reported by syzbot in set_next_entity(). (DL refactoring suggestions by Vineeth Pillai). Reviewed-by: Vineeth Pillai Reported-by: Suleiman Souhlal Signed-off-by: Joel Fernandes (Google) Signed-off-by: Daniel Bristot de Oliveira --- include/linux/sched.h | 3 ++- kernel/sched/deadline.c | 27 ++++++++++++++++++++++----- kernel/sched/fair.c | 23 +++++++++++++++++++++-- kernel/sched/sched.h | 3 ++- 4 files changed, 47 insertions(+), 9 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 4a405f0e64f8..b0a5983cf3d1 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -673,7 +673,8 @@ struct sched_dl_entity { */ struct rq *rq; dl_server_has_tasks_f server_has_tasks; - dl_server_pick_f server_pick; + dl_server_pick_f server_pick_next; + dl_server_pick_f server_pick_task; =20 #ifdef CONFIG_RT_MUTEXES /* diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index dd38370aa276..45fde2fd3a1b 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1665,11 +1665,13 @@ void dl_server_stop(struct sched_dl_entity *dl_se) =20 void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq, dl_server_has_tasks_f has_tasks, - dl_server_pick_f pick) + dl_server_pick_f pick_next, + dl_server_pick_f pick_task) { dl_se->rq =3D rq; dl_se->server_has_tasks =3D has_tasks; - dl_se->server_pick =3D pick; + dl_se->server_pick_next =3D pick_next; + dl_se->server_pick_task =3D pick_task; } =20 void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq) @@ -2398,7 +2400,12 @@ static struct sched_dl_entity *pick_next_dl_entity(s= truct dl_rq *dl_rq) return __node_2_dle(left); } =20 -static struct task_struct *pick_task_dl(struct rq *rq) +/* + * __pick_next_task_dl - Helper to pick the next -deadline task to run. + * @rq: The runqueue to pick the next task from. + * @peek: If true, just peek at the next task. Only relevant for dlserver. 
+ */ +static struct task_struct *__pick_next_task_dl(struct rq *rq, bool peek) { struct sched_dl_entity *dl_se; struct dl_rq *dl_rq =3D &rq->dl; @@ -2412,7 +2419,10 @@ static struct task_struct *pick_task_dl(struct rq *r= q) WARN_ON_ONCE(!dl_se); =20 if (dl_server(dl_se)) { - p =3D dl_se->server_pick(dl_se); + if (IS_ENABLED(CONFIG_SMP) && peek) + p =3D dl_se->server_pick_task(dl_se); + else + p =3D dl_se->server_pick_next(dl_se); if (!p) { WARN_ON_ONCE(1); dl_se->dl_yielded =3D 1; @@ -2427,11 +2437,18 @@ static struct task_struct *pick_task_dl(struct rq *= rq) return p; } =20 +#ifdef CONFIG_SMP +static struct task_struct *pick_task_dl(struct rq *rq) +{ + return __pick_next_task_dl(rq, true); +} +#endif + static struct task_struct *pick_next_task_dl(struct rq *rq) { struct task_struct *p; =20 - p =3D pick_task_dl(rq); + p =3D __pick_next_task_dl(rq, false); if (!p) return p; =20 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fdeb4a61575c..b86bb3f23fb2 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8406,6 +8406,14 @@ static struct task_struct *pick_task_fair(struct rq = *rq) cfs_rq =3D group_cfs_rq(se); } while (cfs_rq); =20 + /* + * This can be called from directly from CFS's ->pick_task() or indirectly + * from DL's ->pick_task when fair server is enabled. In the indirect cas= e, + * DL will set ->dl_server just after this function is called, so its Ok = to + * clear. In the direct case, we are picking directly so we must clear it. + */ + task_of(se)->dl_server =3D NULL; + return task_of(se); } #endif @@ -8565,7 +8573,16 @@ static bool fair_server_has_tasks(struct sched_dl_en= tity *dl_se) return !!dl_se->rq->cfs.nr_running; } =20 -static struct task_struct *fair_server_pick(struct sched_dl_entity *dl_se) +static struct task_struct *fair_server_pick_task(struct sched_dl_entity *d= l_se) +{ +#ifdef CONFIG_SMP + return pick_task_fair(dl_se->rq); +#else + return NULL; +#endif +} + +static struct task_struct *fair_server_pick_next(struct sched_dl_entity *d= l_se) { return pick_next_task_fair(dl_se->rq, NULL, NULL); } @@ -8576,7 +8593,9 @@ void fair_server_init(struct rq *rq) =20 init_dl_entity(dl_se); =20 - dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick); + dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick_next, + fair_server_pick_task); + } =20 /* diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index a80a236da57c..b200f09038db 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -338,7 +338,8 @@ extern void dl_server_start(struct sched_dl_entity *dl_= se); extern void dl_server_stop(struct sched_dl_entity *dl_se); extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq, dl_server_has_tasks_f has_tasks, - dl_server_pick_f pick); + dl_server_pick_f pick_next, + dl_server_pick_f pick_task); =20 extern void dl_server_update_idle_time(struct rq *rq, struct task_struct *p); --=20 2.44.0 From nobody Wed Feb 11 06:28:18 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F150A171667 for ; Fri, 5 Apr 2024 17:34:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712338467; cv=none; 
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni, Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai, Shuah Khan, bristot@kernel.org, Phil Auld, Suleiman Souhlal, Youssef Esmat
Subject: [PATCH V6 6/6] sched/rt: Remove default bandwidth control
Date: Fri, 5 Apr 2024 19:33:57 +0200
Message-ID: <03100a344f14806e2e965fd79319b2bd8615601b.1712337227.git.bristot@kernel.org>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Peter Zijlstra

Now that fair_server exists, we no longer need RT bandwidth control
unless RT_GROUP_SCHED.

Enable fair_server with parameters equivalent to RT throttling.
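For context, "parameters equivalent to RT throttling" means the fair server
is handed the share that the stock throttling defaults (sched_rt_runtime_us
= 950000 out of sched_rt_period_us = 1000000) used to leave for fair tasks,
i.e. 50 ms of every 1 s, which is what the dl_server_start() hunk below sets
(runtime = 50 * NSEC_PER_MSEC, period = 1000 * NSEC_PER_MSEC). A minimal,
purely illustrative userspace sketch of that arithmetic -- not part of the
patch, and assuming those default sysctl values:

	/* Illustrative only: fair-server defaults vs. old RT throttling defaults. */
	#include <stdio.h>

	#define NSEC_PER_USEC 1000ULL

	int main(void)
	{
		unsigned long long rt_period_us  = 1000000ULL; /* sched_rt_period_us default  */
		unsigned long long rt_runtime_us =  950000ULL; /* sched_rt_runtime_us default */

		/* What RT throttling used to leave for fair tasks per period ... */
		unsigned long long fair_runtime_ns = (rt_period_us - rt_runtime_us) * NSEC_PER_USEC;
		unsigned long long fair_period_ns  = rt_period_us * NSEC_PER_USEC;

		/* ... matches dl_server_start(): 50 * NSEC_PER_MSEC every 1000 * NSEC_PER_MSEC. */
		printf("fair server: %llu ns every %llu ns (~%.0f%% of CPU)\n",
		       fair_runtime_ns, fair_period_ns,
		       100.0 * fair_runtime_ns / fair_period_ns);
		return 0;
	}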
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Daniel Bristot de Oliveira --- kernel/sched/core.c | 9 +- kernel/sched/deadline.c | 5 +- kernel/sched/debug.c | 3 + kernel/sched/rt.c | 242 ++++++++++++++++++---------------------- kernel/sched/sched.h | 3 +- 5 files changed, 120 insertions(+), 142 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 4881e797ae07..d70bfc7e3e7b 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -9988,8 +9988,6 @@ void __init sched_init(void) #endif /* CONFIG_RT_GROUP_SCHED */ } =20 - init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtim= e()); - #ifdef CONFIG_SMP init_defrootdomain(); #endif @@ -10044,8 +10042,13 @@ void __init sched_init(void) init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL); #endif /* CONFIG_FAIR_GROUP_SCHED */ =20 - rq->rt.rt_runtime =3D def_rt_bandwidth.rt_runtime; #ifdef CONFIG_RT_GROUP_SCHED + /* + * This is required for init cpu because rt.c:__enable_runtime() + * starts working after scheduler_running, which is not the case + * yet. + */ + rq->rt.rt_runtime =3D global_rt_runtime(); init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL); #endif #ifdef CONFIG_SMP diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 45fde2fd3a1b..f0a7f0e43ff0 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1554,6 +1554,7 @@ static void update_curr_dl_se(struct rq *rq, struct s= ched_dl_entity *dl_se, s64 if (dl_se =3D=3D &rq->fair_server) return; =20 +#ifdef CONFIG_RT_GROUP_SCHED /* * Because -- for now -- we share the rt bandwidth, we need to * account our runtime there too, otherwise actual rt tasks @@ -1578,6 +1579,7 @@ static void update_curr_dl_se(struct rq *rq, struct s= ched_dl_entity *dl_se, s64 rt_rq->rt_time +=3D delta_exec; raw_spin_unlock(&rt_rq->rt_runtime_lock); } +#endif } =20 /* @@ -1633,8 +1635,7 @@ void dl_server_start(struct sched_dl_entity *dl_se) * this before getting generic. */ if (!dl_server(dl_se)) { - /* Disabled */ - u64 runtime =3D 0; + u64 runtime =3D 50 * NSEC_PER_MSEC; u64 period =3D 1000 * NSEC_PER_MSEC; =20 dl_server_apply_params(dl_se, runtime, period, 1); diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 5da3297270cd..2e1f0ecdde38 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -935,9 +935,12 @@ void print_rt_rq(struct seq_file *m, int cpu, struct r= t_rq *rt_rq) SEQ_printf(m, " .%-30s: %Ld.%06ld\n", #x, SPLIT_NS(rt_rq->x)) =20 PU(rt_nr_running); + +#ifdef CONFIG_RT_GROUP_SCHED P(rt_throttled); PN(rt_time); PN(rt_runtime); +#endif =20 #undef PN #undef PU diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 3261b067b67e..d3065fe35c61 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -8,10 +8,6 @@ int sched_rr_timeslice =3D RR_TIMESLICE; /* More than 4 hours if BW_SHIFT equals 20. */ static const u64 max_rt_runtime =3D MAX_BW; =20 -static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun= ); - -struct rt_bandwidth def_rt_bandwidth; - /* * period over which we measure -rt task CPU usage in us. 
* default: 1s @@ -67,6 +63,40 @@ static int __init sched_rt_sysctl_init(void) late_initcall(sched_rt_sysctl_init); #endif =20 +void init_rt_rq(struct rt_rq *rt_rq) +{ + struct rt_prio_array *array; + int i; + + array =3D &rt_rq->active; + for (i =3D 0; i < MAX_RT_PRIO; i++) { + INIT_LIST_HEAD(array->queue + i); + __clear_bit(i, array->bitmap); + } + /* delimiter for bitsearch: */ + __set_bit(MAX_RT_PRIO, array->bitmap); + +#if defined CONFIG_SMP + rt_rq->highest_prio.curr =3D MAX_RT_PRIO-1; + rt_rq->highest_prio.next =3D MAX_RT_PRIO-1; + rt_rq->overloaded =3D 0; + plist_head_init(&rt_rq->pushable_tasks); +#endif /* CONFIG_SMP */ + /* We start is dequeued state, because no RT tasks are queued */ + rt_rq->rt_queued =3D 0; + +#ifdef CONFIG_RT_GROUP_SCHED + rt_rq->rt_time =3D 0; + rt_rq->rt_throttled =3D 0; + rt_rq->rt_runtime =3D 0; + raw_spin_lock_init(&rt_rq->rt_runtime_lock); +#endif +} + +#ifdef CONFIG_RT_GROUP_SCHED + +static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun= ); + static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer) { struct rt_bandwidth *rt_b =3D @@ -131,35 +161,6 @@ static void start_rt_bandwidth(struct rt_bandwidth *rt= _b) do_start_rt_bandwidth(rt_b); } =20 -void init_rt_rq(struct rt_rq *rt_rq) -{ - struct rt_prio_array *array; - int i; - - array =3D &rt_rq->active; - for (i =3D 0; i < MAX_RT_PRIO; i++) { - INIT_LIST_HEAD(array->queue + i); - __clear_bit(i, array->bitmap); - } - /* delimiter for bitsearch: */ - __set_bit(MAX_RT_PRIO, array->bitmap); - -#if defined CONFIG_SMP - rt_rq->highest_prio.curr =3D MAX_RT_PRIO-1; - rt_rq->highest_prio.next =3D MAX_RT_PRIO-1; - rt_rq->overloaded =3D 0; - plist_head_init(&rt_rq->pushable_tasks); -#endif /* CONFIG_SMP */ - /* We start is dequeued state, because no RT tasks are queued */ - rt_rq->rt_queued =3D 0; - - rt_rq->rt_time =3D 0; - rt_rq->rt_throttled =3D 0; - rt_rq->rt_runtime =3D 0; - raw_spin_lock_init(&rt_rq->rt_runtime_lock); -} - -#ifdef CONFIG_RT_GROUP_SCHED static void destroy_rt_bandwidth(struct rt_bandwidth *rt_b) { hrtimer_cancel(&rt_b->rt_period_timer); @@ -196,7 +197,6 @@ void unregister_rt_sched_group(struct task_group *tg) { if (tg->rt_se) destroy_rt_bandwidth(&tg->rt_bandwidth); - } =20 void free_rt_sched_group(struct task_group *tg) @@ -254,8 +254,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct = task_group *parent) if (!tg->rt_se) goto err; =20 - init_rt_bandwidth(&tg->rt_bandwidth, - ktime_to_ns(def_rt_bandwidth.rt_period), 0); + init_rt_bandwidth(&tg->rt_bandwidth, ktime_to_ns(global_rt_period()), 0); =20 for_each_possible_cpu(i) { rt_rq =3D kzalloc_node(sizeof(struct rt_rq), @@ -605,70 +604,6 @@ static inline struct rt_bandwidth *sched_rt_bandwidth(= struct rt_rq *rt_rq) return &rt_rq->tg->rt_bandwidth; } =20 -#else /* !CONFIG_RT_GROUP_SCHED */ - -static inline u64 sched_rt_runtime(struct rt_rq *rt_rq) -{ - return rt_rq->rt_runtime; -} - -static inline u64 sched_rt_period(struct rt_rq *rt_rq) -{ - return ktime_to_ns(def_rt_bandwidth.rt_period); -} - -typedef struct rt_rq *rt_rq_iter_t; - -#define for_each_rt_rq(rt_rq, iter, rq) \ - for ((void) iter, rt_rq =3D &rq->rt; rt_rq; rt_rq =3D NULL) - -#define for_each_sched_rt_entity(rt_se) \ - for (; rt_se; rt_se =3D NULL) - -static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se) -{ - return NULL; -} - -static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq) -{ - struct rq *rq =3D rq_of_rt_rq(rt_rq); - - if (!rt_rq->rt_nr_running) - return; - - enqueue_top_rt_rq(rt_rq); - resched_curr(rq); 
-} - -static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq) -{ - dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running); -} - -static inline int rt_rq_throttled(struct rt_rq *rt_rq) -{ - return rt_rq->rt_throttled; -} - -static inline const struct cpumask *sched_rt_period_mask(void) -{ - return cpu_online_mask; -} - -static inline -struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu) -{ - return &cpu_rq(cpu)->rt; -} - -static inline struct rt_bandwidth *sched_rt_bandwidth(struct rt_rq *rt_rq) -{ - return &def_rt_bandwidth; -} - -#endif /* CONFIG_RT_GROUP_SCHED */ - bool sched_rt_bandwidth_account(struct rt_rq *rt_rq) { struct rt_bandwidth *rt_b =3D sched_rt_bandwidth(rt_rq); @@ -860,7 +795,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth= *rt_b, int overrun) const struct cpumask *span; =20 span =3D sched_rt_period_mask(); -#ifdef CONFIG_RT_GROUP_SCHED + /* * FIXME: isolated CPUs should really leave the root task group, * whether they are isolcpus or were isolated via cpusets, lest @@ -872,7 +807,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth= *rt_b, int overrun) */ if (rt_b =3D=3D &root_task_group.rt_bandwidth) span =3D cpu_online_mask; -#endif + for_each_cpu(i, span) { int enqueue =3D 0; struct rt_rq *rt_rq =3D sched_rt_period_rt_rq(rt_b, i); @@ -939,18 +874,6 @@ static int do_sched_rt_period_timer(struct rt_bandwidt= h *rt_b, int overrun) return idle; } =20 -static inline int rt_se_prio(struct sched_rt_entity *rt_se) -{ -#ifdef CONFIG_RT_GROUP_SCHED - struct rt_rq *rt_rq =3D group_rt_rq(rt_se); - - if (rt_rq) - return rt_rq->highest_prio.curr; -#endif - - return rt_task_of(rt_se)->prio; -} - static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq) { u64 runtime =3D sched_rt_runtime(rt_rq); @@ -994,6 +917,72 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_= rq) return 0; } =20 +#else /* !CONFIG_RT_GROUP_SCHED */ + +typedef struct rt_rq *rt_rq_iter_t; + +#define for_each_rt_rq(rt_rq, iter, rq) \ + for ((void) iter, rt_rq =3D &rq->rt; rt_rq; rt_rq =3D NULL) + +#define for_each_sched_rt_entity(rt_se) \ + for (; rt_se; rt_se =3D NULL) + +static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se) +{ + return NULL; +} + +static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq) +{ + struct rq *rq =3D rq_of_rt_rq(rt_rq); + + if (!rt_rq->rt_nr_running) + return; + + enqueue_top_rt_rq(rt_rq); + resched_curr(rq); +} + +static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq) +{ + dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running); +} + +static inline int rt_rq_throttled(struct rt_rq *rt_rq) +{ + return false; +} + +static inline const struct cpumask *sched_rt_period_mask(void) +{ + return cpu_online_mask; +} + +static inline +struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu) +{ + return &cpu_rq(cpu)->rt; +} + +#ifdef CONFIG_SMP +static void __enable_runtime(struct rq *rq) { } +static void __disable_runtime(struct rq *rq) { } +#endif + +#endif /* CONFIG_RT_GROUP_SCHED */ + +static inline int rt_se_prio(struct sched_rt_entity *rt_se) +{ +#ifdef CONFIG_RT_GROUP_SCHED + struct rt_rq *rt_rq =3D group_rt_rq(rt_se); + + if (rt_rq) + return rt_rq->highest_prio.curr; +#endif + + return rt_task_of(rt_se)->prio; +} + /* * Update the current task's runtime statistics. Skip current tasks that * are not in our scheduling class. 
@@ -1001,7 +990,6 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_= rq) static void update_curr_rt(struct rq *rq) { struct task_struct *curr =3D rq->curr; - struct sched_rt_entity *rt_se =3D &curr->rt; s64 delta_exec; =20 if (curr->sched_class !=3D &rt_sched_class) @@ -1011,6 +999,9 @@ static void update_curr_rt(struct rq *rq) if (unlikely(delta_exec <=3D 0)) return; =20 +#ifdef CONFIG_RT_GROUP_SCHED + struct sched_rt_entity *rt_se =3D &curr->rt; + if (!rt_bandwidth_enabled()) return; =20 @@ -1029,6 +1020,7 @@ static void update_curr_rt(struct rq *rq) do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq)); } } +#endif } =20 static void @@ -1185,7 +1177,6 @@ dec_rt_group(struct sched_rt_entity *rt_se, struct rt= _rq *rt_rq) static void inc_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) { - start_rt_bandwidth(&def_rt_bandwidth); } =20 static inline @@ -2913,19 +2904,6 @@ int sched_rt_can_attach(struct task_group *tg, struc= t task_struct *tsk) #ifdef CONFIG_SYSCTL static int sched_rt_global_constraints(void) { - unsigned long flags; - int i; - - raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags); - for_each_possible_cpu(i) { - struct rt_rq *rt_rq =3D &cpu_rq(i)->rt; - - raw_spin_lock(&rt_rq->rt_runtime_lock); - rt_rq->rt_runtime =3D global_rt_runtime(); - raw_spin_unlock(&rt_rq->rt_runtime_lock); - } - raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags); - return 0; } #endif /* CONFIG_SYSCTL */ @@ -2945,12 +2923,6 @@ static int sched_rt_global_validate(void) =20 static void sched_rt_do_global(void) { - unsigned long flags; - - raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags); - def_rt_bandwidth.rt_runtime =3D global_rt_runtime(); - def_rt_bandwidth.rt_period =3D ns_to_ktime(global_rt_period()); - raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags); } =20 static int sched_rt_handler(struct ctl_table *table, int write, void *buff= er, diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index b200f09038db..fb8826cf80a9 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -713,13 +713,13 @@ struct rt_rq { #endif /* CONFIG_SMP */ int rt_queued; =20 +#ifdef CONFIG_RT_GROUP_SCHED int rt_throttled; u64 rt_time; u64 rt_runtime; /* Nests inside the rq lock: */ raw_spinlock_t rt_runtime_lock; =20 -#ifdef CONFIG_RT_GROUP_SCHED unsigned int rt_nr_boosted; =20 struct rq *rq; @@ -2475,7 +2475,6 @@ extern void reweight_task(struct task_struct *p, int = prio); extern void resched_curr(struct rq *rq); extern void resched_cpu(int cpu); =20 -extern struct rt_bandwidth def_rt_bandwidth; extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 r= untime); extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq); =20 --=20 2.44.0
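Looking back at the earlier patch in this series that splits the dl_server
pick callback: server_pick_task() only peeks at the next task (used by core
scheduling's pick_task path, where the picked task may never actually run),
while server_pick_next() commits the pick. A simplified, self-contained
sketch of that dispatch, mirroring __pick_next_task_dl() -- illustrative
only, not kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	struct task { const char *name; };      /* stand-in for struct task_struct */
	struct dl_server;

	typedef struct task *(*dl_server_pick_f)(struct dl_server *);

	struct dl_server {
		struct task *queued;
		dl_server_pick_f server_pick_next; /* commits the pick */
		dl_server_pick_f server_pick_task; /* peek only; must not disturb rq state */
	};

	static struct task *fair_server_pick_task(struct dl_server *s)
	{
		return s->queued;               /* peek: no side effects */
	}

	static struct task *fair_server_pick_next(struct dl_server *s)
	{
		/* in the kernel this path also performs set_next_entity() etc. */
		return s->queued;
	}

	/* Mirrors __pick_next_task_dl(rq, peek): peek when the task may not run. */
	static struct task *server_pick(struct dl_server *s, bool peek)
	{
		return peek ? s->server_pick_task(s) : s->server_pick_next(s);
	}

	int main(void)
	{
		struct task t = { "cfs task" };
		struct dl_server srv = { &t, fair_server_pick_next, fair_server_pick_task };

		printf("peek -> %s\n", server_pick(&srv, true)->name);
		printf("pick -> %s\n", server_pick(&srv, false)->name);
		return 0;
	}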