From nobody Tue Sep 9 22:19:44 2025
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org, parth@linux.ibm.com, lizefan.x@bytedance.com,
 hannes@cmpxchg.org, cgroups@vger.kernel.org, corbet@lwn.net,
 linux-doc@vger.kernel.org
Cc: tj@kernel.org, qyousef@layalina.io, chris.hyser@oracle.com,
 patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com,
 pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
 joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
 yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
 Vincent Guittot
Subject: [PATCH v11 1/8] sched/fair: fix unfairness at wakeup
Date: Thu, 23 Feb 2023 20:10:34 +0100
Message-Id: <20230223191041.577305-2-vincent.guittot@linaro.org>
In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

At wakeup, the vruntime of a task is updated so that it is no more than
a sched_latency period behind min_vruntime. This prevents a
long-sleeping task from getting unlimited credit at wakeup. Such a
waking task should preempt the current one to use its share of CPU
bandwidth, but wakeup_gran() can be larger than sched_latency, filter
out the wakeup preemption and, as a result, steal some CPU bandwidth
from the waking task.

Make sure that a task whose vruntime has been capped will preempt the
current task and use its CPU bandwidth even if wakeup_gran() is in the
same range as sched_latency. If the waking task fails to preempt
current, it could have to wait up to sysctl_sched_min_granularity before
preempting it at the next tick.

Strictly speaking, we should use cfs->min_vruntime instead of
curr->vruntime, but it is not worth the additional overhead and
complexity, as the vruntime of current should be close to min_vruntime,
if not equal to it.

Reported-by: Youssef Esmat
Signed-off-by: Vincent Guittot
Reviewed-by: Joel Fernandes (Google)
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
 kernel/sched/sched.h | 34 +++++++++++++++++++++++++++++++-
 2 files changed, 54 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff4dbbae3b10..81bef11eb660 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4654,33 +4654,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 	u64 vruntime = cfs_rq->min_vruntime;
 	u64 sleep_time;
 
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
-
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
-
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
-
+	if (!initial)
+		/* sleeps up to a single latency don't count. */
+		vruntime -= get_sleep_latency(se_is_idle(se));
+	else if (sched_feat(START_DEBIT))
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * The 'current' period is already promised to the current tasks,
+		 * however the extra weight of the new task will slow them down a
+		 * little, place the new task so that it fits in the slot that
+		 * stays open at the end.
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
-
-		vruntime -= thresh;
-	}
+		vruntime += sched_vslice(cfs_rq, se);
 
 	/*
 	 * Pull vruntime of the entity being placed to the base level of
@@ -7721,6 +7705,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 		return -1;
 
 	gran = wakeup_gran(se);
+
+	/*
+	 * At wake up, the vruntime of a task is capped to not be older than
+	 * a sched_latency period compared to min_vruntime. This prevents long
+	 * sleeping task to get unlimited credit at wakeup. Such waking up task
+	 * has to preempt current in order to not lose its share of CPU
+	 * bandwidth but wakeup_gran() can become higher than scheduling period
+	 * for low priority task. Make sure that long sleeping task will get a
+	 * chance to preempt current.
+	 */
+	gran = min_t(s64, gran, get_latency_max());
+
 	if (vdiff > gran)
 		return 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3e8df6d31c1e..51ba0af7fb27 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2458,9 +2458,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
+#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_idle_min_granularity;
 extern unsigned int sysctl_sched_wakeup_granularity;
 extern int sysctl_resched_latency_warn_ms;
@@ -2475,6 +2475,38 @@ extern unsigned int sysctl_numa_balancing_scan_size;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif
 
+static inline unsigned long get_sleep_latency(bool idle)
+{
+	unsigned long thresh;
+
+	if (idle)
+		thresh = sysctl_sched_min_granularity;
+	else
+		thresh = sysctl_sched_latency;
+
+	/*
+	 * Halve their sleep time's effect, to allow
+	 * for a gentler effect of sleepers:
+	 */
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	return thresh;
+}
+
+static inline unsigned long get_latency_max(void)
+{
+	unsigned long thresh = get_sleep_latency(false);
+
+	/*
+	 * If the waking task failed to preempt current it could to wait up to
+	 * sysctl_sched_min_granularity before preempting it during next tick.
+	 */
+	thresh -= sysctl_sched_min_granularity;
+
+	return thresh;
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
-- 
2.34.1

From nobody Tue Sep 9 22:19:44 2025
From: Vincent Guittot
Subject: [PATCH v8 2/8] sched: Introduce latency-nice as a per-task attribute
Date: Thu, 23 Feb 2023 20:10:35 +0100
Message-Id: <20230223191041.577305-3-vincent.guittot@linaro.org>
In-Reply-To:
 <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

From: Parth Shah

Latency-nice indicates the latency requirements of a task with respect
to the other tasks in the system. The attribute accepts values in the
range [-20, 19], both inclusive, in line with task nice values.
latency_nice = -20 indicates that the task should get the lowest
latency, as compared to tasks with latency_nice = +19.

latency_nice affects only the CFS scheduling class, taking latency
requirements from userspace.

Additionally, add debugging bits for the newly added latency_nice
attribute.

Signed-off-by: Parth Shah
[rebase, move defines in sched/prio.h]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/linux/sched.h      |  1 +
 include/linux/sched/prio.h | 18 ++++++++++++++++++
 kernel/sched/debug.c       |  1 +
 3 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4df2b3e76b30..6c61bde49152 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -784,6 +784,7 @@ struct task_struct {
 	int static_prio;
 	int normal_prio;
 	unsigned int rt_priority;
+	int latency_nice;
 
 	struct sched_entity se;
 	struct sched_rt_entity rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index ab83d85e1183..bfcd7f1d1e11 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -42,4 +42,22 @@ static inline long rlimit_to_nice(long prio)
 	return (MAX_NICE - prio + 1);
 }
 
+/*
+ * Latency nice is meant to provide scheduler hints about the relative
+ * latency requirements of a task with respect to other tasks.
+ * Thus a task with latency_nice == 19 can be hinted as the task with no
+ * latency requirements, in contrast to the task with latency_nice == -20
+ * which should be given priority in terms of lower latency.
+ */
+#define MAX_LATENCY_NICE	19
+#define MIN_LATENCY_NICE	-20
+
+#define LATENCY_NICE_WIDTH	\
+	(MAX_LATENCY_NICE - MIN_LATENCY_NICE + 1)
+
+/*
+ * Default tasks should be treated as a task with latency_nice = 0.
+ */
+#define DEFAULT_LATENCY_NICE	0
+
 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1637b65ba07a..68be7a3e42a3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,6 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
+	P(latency_nice);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
-- 
2.34.1

From nobody Tue Sep 9 22:19:44 2025
From: Vincent Guittot
Subject: [PATCH v11 3/8] sched/core: Propagate parent task's latency requirements to the child task
Date: Thu, 23 Feb 2023 20:10:36 +0100
Message-Id: <20230223191041.577305-4-vincent.guittot@linaro.org>
In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

From: Parth Shah

Clone the parent task's latency_nice attribute to the forked child
task. Reset the latency_nice value to the default when the child task
has sched_reset_on_fork set.
Also, initialize init_task.latency_nice with the DEFAULT_LATENCY_NICE
value.

Signed-off-by: Parth Shah
[rebase]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 init/init_task.c    | 1 +
 kernel/sched/core.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..7dd71dd2d261 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_nice	= DEFAULT_LATENCY_NICE,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4580fe3e1d0c..28b397f9698b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4681,6 +4681,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_nice = DEFAULT_LATENCY_NICE;
 		/*
 		 * We don't need the reset flag anymore after the fork.
		 * It has fulfilled its duty:
-- 
2.34.1

From nobody Tue Sep 9 22:19:44 2025
From: Vincent Guittot
Subject: [PATCH v11 4/8] sched: Allow sched_{get,set}attr to change latency_nice of the task
Date: Thu, 23 Feb 2023 20:10:37 +0100
Message-Id: <20230223191041.577305-5-vincent.guittot@linaro.org>
In-Reply-To:
 <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

From: Parth Shah

Introduce the latency_nice attribute to sched_attr and provide a
mechanism to change the value via the sched_setattr()/sched_getattr()
syscalls.

Also add a new flag, SCHED_FLAG_LATENCY_NICE, to signal a change of the
task's latency_nice on a sched_setattr() call.

Signed-off-by: Parth Shah
[rebase and add a dedicated __setscheduler_latency ]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/uapi/linux/sched.h       |  4 +++-
 include/uapi/linux/sched/types.h | 19 +++++++++++++++++++
 kernel/sched/core.c              | 24 ++++++++++++++++++++++++
 tools/include/uapi/linux/sched.h |  4 +++-
 4 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h
index f2c4589d4dbf..db1e8199e8c8 100644
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ============================
+ *
+ * A subset of sched_attr attributes allows to specify the relative latency
+ * requirements of a task with respect to the other tasks running/queued in the
+ * system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with latency_nice with the value of LATENCY_NICE_MIN can be
+ * taken for a task requiring a lower latency as opposed to the task with
+ * higher latency_nice.
 */
 struct sched_attr {
 	__u32 size;
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 28b397f9698b..d327614c70b0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7443,6 +7443,14 @@ static void __setscheduler_params(struct task_struct *p,
 	p->rt_priority = attr->sched_priority;
 	p->normal_prio = normal_prio(p);
 	set_load_weight(p, true);
+
+}
+
+static void __setscheduler_latency(struct task_struct *p,
+				   const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		p->latency_nice = attr->sched_latency_nice;
 }
 
 /*
@@ -7585,6 +7593,13 @@ static int __sched_setscheduler(struct task_struct *p,
 		return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_LATENCY_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
 
@@ -7619,6 +7634,9 @@ static int __sched_setscheduler(struct task_struct *p,
 		goto change;
 	if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 		goto change;
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+	    attr->sched_latency_nice != p->latency_nice)
+		goto change;
 
 	p->sched_reset_on_fork = reset_on_fork;
 	retval = 0;
@@ -7707,6 +7725,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7917,6 +7936,9 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
 	 * XXX: Do we want to be lenient like existing syscalls; or do we want
 	 * to be strict and return an error on out-of-bounds values?
@@ -8154,6 +8176,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = p->latency_nice;
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
diff --git a/tools/include/uapi/linux/sched.h b/tools/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
-- 
2.34.1

From nobody Tue Sep 9 22:19:44 2025
11:10:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NluA/bvTWT8jzIvRWktAUoRtEbCr/07FC+kKT5m9N3I=; b=kw78y2KVnNJoUZnD4zI6GVsCHvyVdY93lLGKA1NCdTdt3FT+3bkurVHBIe+RmbmIa1 5bROHr/BHmYMte0ASjklXWEdCr0zKO1sBwq+3Iu0/eU3wpX1nqKWM7XCEasOWZDUvXB7 ULp8AnWIvDJ/kw6MXyRdxGFycji40Pqy2TRfoA1HmhwcDm+rJV5lFU2/m2hTSWcwWMRy NA62YUJJxM910FoDCWST7drK6e9d3J3xqf6M9JrQ5iaQEtU4mnXmIF43T16rGcux+IrW FSzISb/STAGsBAZk5GElagk9AS9V4QXoJeaP6O1PLiYvjMxC/n9jrFInEZmmebmKBNvC Hmwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NluA/bvTWT8jzIvRWktAUoRtEbCr/07FC+kKT5m9N3I=; b=Zkh+suk8l6IE/efxSZIqwhrSxZR+XUAzoO/yCbybFAevwulLjWC0xeXHMQlplcFpGP tuoLHtnsdU+nsT7N7GRjWAzDKzVOjbEVAVb0O414mp6tXN1AdzIgsORpmNUq0FJR+Opg p3kqL7By2FYn9DWJzo+ZYGwmO6s92Cdpa/Zta/IrXkCyxiRDLCeJFTqvhJCtwteXzss0 1HwtcyFY0PRRnh5q3gaoIzhR5HQS/s7Gm5pLKSZpUxRNY+YZeBiy8oWvMXjCMRkrbWdS vGNznug/dTRDoidAKm1lNE95GEILsBeNWVISb/Q6IM5+qvTMkL7BSoCAhlFGocqYovpe ZSOQ== X-Gm-Message-State: AO0yUKWG1IdoYJQ5X7VyZ2N40257/lRWDnaLmdPBRGFOJtGWCmUm0aep mN97i6VrEgAwx+ZRR+egg3E1Xg== X-Google-Smtp-Source: AK7set/n+sL8It9KafnyEsOKB0eJyeZI2LF+IQduoh/8+pmqP2Zo5lSEY5JkRcF8ViClilPm3BkyZg== X-Received: by 2002:a5d:4692:0:b0:2c7:16c3:1756 with SMTP id u18-20020a5d4692000000b002c716c31756mr1581580wrq.61.1677179452287; Thu, 23 Feb 2023 11:10:52 -0800 (PST) Received: from vingu-book.. 
([2a01:e0a:f:6020:a6f0:4ee9:c103:44cb]) by smtp.gmail.com with ESMTPSA id k2-20020adff282000000b002c6e8cb612fsm9844481wro.92.2023.02.23.11.10.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Feb 2023 11:10:51 -0800 (PST) From: Vincent Guittot To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org, parth@linux.ibm.com, lizefan.x@bytedance.com, hannes@cmpxchg.org, cgroups@vger.kernel.org, corbet@lwn.net, linux-doc@vger.kernel.org Cc: tj@kernel.org, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, Vincent Guittot Subject: [PATCH v11 5/8] sched/fair: Take into account latency priority at wakeup Date: Thu, 23 Feb 2023 20:10:38 +0100 Message-Id: <20230223191041.577305-6-vincent.guittot@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org> References: <20230223191041.577305-1-vincent.guittot@linaro.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Take into account the latency priority of a thread when deciding to preempt the current running thread. We don't want to provide more CPU bandwidth to a thread but reorder the scheduling to run latency sensitive task first whenever possible. As long as a thread didn't use its bandwidth, it will be able to preempt the current thread. At the opposite, a thread with a low latency priority will preempt current thread at wakeup only to keep fair CPU bandwidth sharing. 
Otherwise it will wait for the tick to get its sched slice.

                   curr vruntime
                                  |
                     sysctl_sched_wakeup_granularity
                                  <-->
----------------------------------|----|-----------------------|---------------
                                  |    |<--------------------->
                                  |    .  sysctl_sched_latency
                                  |    .
default/current latency entity    |    .
                                  |    .
1111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-
se preempts curr at wakeup ------>|<- se doesn't preempt curr ------------------
                                  |    .
                                  |    .
                                  |    .
low latency entity                |    .
                                   ---------------------->|
                               % of sysctl_sched_latency  |
1111111111111111111111111111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-
preempt ------------------------------------------------->|<- do not preempt --
                                  |    .
                                  |    .
                                  |    .
high latency entity               |    .
         |<-----------------------|----.
         | % of sysctl_sched_latency   .
111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
preempt->|<- se doesn't preempt curr -------------------------------------------

Tests results of nice latency impact on heavy load like hackbench:

hackbench -l (2560 / group) -g group
group        latency 0             latency 19
1            1.378(+/- 1%)         1.337(+/- 1%) + 3%
4            1.393(+/- 3%)         1.312(+/- 3%) + 6%
8            1.308(+/- 2%)         1.279(+/- 1%) + 2%
16           1.347(+/- 1%)         1.317(+/- 1%) + 2%

hackbench -p -l (2560 / group) -g group
group
1            1.836(+/- 17%)        1.148(+/- 5%)  +37%
4            1.586(+/- 6%)         1.109(+/- 8%)  +30%
8            1.209(+/- 4%)         0.780(+/- 4%)  +35%
16           0.805(+/- 5%)         0.728(+/- 4%)  +10%

By decreasing the latency priority, we reduce the number of preemptions
at wakeup and help hackbench make progress.

Test results of nice latency impact on short live load like cyclictest
while competing with heavy load like hackbench:

hackbench -l 10000 -g $group &
cyclictest --policy other -D 5 -q -n
          latency 0            latency -20
group   min  avg    max     min  avg    max
0        16   19     29      17   18     29
1        43  299   7359      63   84   3422
4        56  449  14806      45   83    284
8        63  820  51123      63   83    283
16       64 1326  70684      41  157  26852

group = 0 means that hackbench is not running.
The avg is significantly improved with nice latency -20, especially with
a large number of groups, but min and max remain quite similar. If we add
the histogram parameters to get the latency details, we have:

hackbench -l 10000 -g 16 &
cyclictest --policy other -D 5 -q -n -H 20000 --histfile data.txt
                 latency 0    latency -20
Min Latencies:      64           62
Avg Latencies:    1170          107
Max Latencies:   88069        10417
50% latencies:     122           86
75% latencies:     614           91
85% latencies:     961           94
90% latencies:    1225           97
95% latencies:    6120          102
99% latencies:   18328          159

With the percentile details, we see the benefit of nice latency -20, as
only 1% of the latencies are above 159us, whereas the default latency has
15% of latencies around ~1ms or above and 5% over 6ms.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched.h      |  4 +++-
 include/linux/sched/prio.h |  9 +++++++++
 init/init_task.c           |  2 +-
 kernel/sched/core.c        | 19 ++++++++++++++-----
 kernel/sched/debug.c       |  2 +-
 kernel/sched/fair.c        | 32 +++++++++++++++++++++++++++-----
 kernel/sched/sched.h       | 11 +++++++++++
 7 files changed, 66 insertions(+), 13 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6c61bde49152..38decae3e156 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -568,6 +568,8 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long			runnable_weight;
 #endif
+	/* preemption offset in ns */
+	long				latency_offset;

 #ifdef CONFIG_SMP
 	/*
@@ -784,7 +786,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
-	int				latency_nice;
+	int				latency_prio;

 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index bfcd7f1d1e11..be79503d86af 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -59,5 +59,14 @@ static inline long rlimit_to_nice(long prio)
  * Default tasks should be treated as a task with latency_nice = 0.
 */
 #define DEFAULT_LATENCY_NICE	0
+#define DEFAULT_LATENCY_PRIO	(DEFAULT_LATENCY_NICE + LATENCY_NICE_WIDTH/2)
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static latency [ 0..39 ],
+ * and back.
+ */
+#define NICE_TO_LATENCY(nice)	((nice) + DEFAULT_LATENCY_PRIO)
+#define LATENCY_TO_NICE(prio)	((prio) - DEFAULT_LATENCY_PRIO)

 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/init/init_task.c b/init/init_task.c
index 7dd71dd2d261..071deff8dbd1 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,7 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
-	.latency_nice	= DEFAULT_LATENCY_NICE,
+	.latency_prio	= DEFAULT_LATENCY_PRIO,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d327614c70b0..d5b7e237d79b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1285,6 +1285,11 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	}
 }

+static void set_latency_offset(struct task_struct *p)
+{
+	p->se.latency_offset = calc_latency_offset(p->latency_prio);
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4681,7 +4686,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);

-		p->latency_nice = DEFAULT_LATENCY_NICE;
+		p->latency_prio = NICE_TO_LATENCY(0);
+		set_latency_offset(p);
+
 		/*
 		 * We don't need the reset flag anymore after the fork.
It has
		 * fulfilled its duty:
@@ -7449,8 +7456,10 @@ static void __setscheduler_params(struct task_struct *p,
 static void __setscheduler_latency(struct task_struct *p,
				    const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
-		p->latency_nice = attr->sched_latency_nice;
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		p->latency_prio = NICE_TO_LATENCY(attr->sched_latency_nice);
+		set_latency_offset(p);
+	}
 }

 /*
@@ -7635,7 +7644,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
-		    attr->sched_latency_nice != p->latency_nice)
+		    attr->sched_latency_nice != LATENCY_TO_NICE(p->latency_prio))
 			goto change;

 		p->sched_reset_on_fork = reset_on_fork;
@@ -8176,7 +8185,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;

-	kattr.sched_latency_nice = p->latency_nice;
+	kattr.sched_latency_nice = LATENCY_TO_NICE(p->latency_prio);

 #ifdef CONFIG_UCLAMP_TASK
 	/*
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 68be7a3e42a3..b3922184af91 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,7 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
-	P(latency_nice);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81bef11eb660..414b6243208b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4877,6 +4877,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	update_idle_cfs_rq_clock_pelt(cfs_rq);
 }

+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4885,7 +4887,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	unsigned long ideal_runtime, delta_exec;
 	struct sched_entity *se;
-	s64 delta;
+	s64 delta, offset;

 	/*
 	 * When many tasks blow up the sched_period; it is possible that
@@ -4916,10 +4918,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	se = __pick_first_entity(cfs_rq);
 	delta = curr->vruntime - se->vruntime;

-	if (delta < 0)
+	offset = wakeup_latency_gran(curr, se);
+	if (delta < offset)
 		return;

-	if (delta > ideal_runtime)
+	if ((delta > ideal_runtime) ||
+	    (delta > get_latency_max()))
 		resched_curr(rq_of(cfs_rq));
 }

@@ -7662,6 +7666,23 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 }
 #endif /* CONFIG_SMP */

+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se)
+{
+	long latency_offset = se->latency_offset;
+
+	/*
+	 * A negative latency offset means that the sched_entity has a latency
+	 * requirement that needs to be evaluated against the other entity.
+	 * Otherwise, use the latency weight to evaluate how much scheduling
+	 * delay is acceptable by se.
+	 */
+	if ((latency_offset < 0) || (curr->latency_offset < 0))
+		latency_offset -= curr->latency_offset;
+	latency_offset = min_t(long, latency_offset, get_latency_max());
+
+	return latency_offset;
+}
+
 static unsigned long wakeup_gran(struct sched_entity *se)
 {
 	unsigned long gran = sysctl_sched_wakeup_granularity;
@@ -7700,11 +7721,12 @@ static int
 wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 {
 	s64 gran, vdiff = curr->vruntime - se->vruntime;
+	s64 offset = wakeup_latency_gran(curr, se);

-	if (vdiff <= 0)
+	if (vdiff < offset)
 		return -1;

-	gran = wakeup_gran(se);
+	gran = offset + wakeup_gran(se);

 	/*
 	 * At wake up, the vruntime of a task is capped to not be older than
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 51ba0af7fb27..3f42f86105d4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2494,6 +2494,17 @@ static inline unsigned long get_sleep_latency(bool idle)
 	return thresh;
 }

+/*
+ * Calculate the latency offset for a priority level.
+ * We use a linear mapping of the priority in the range:
+ *   [-sysctl_sched_latency:sysctl_sched_latency]
+ */
+static inline long calc_latency_offset(int prio)
+{
+	return (long)get_sleep_latency(false) * LATENCY_TO_NICE(prio) /
+	       (LATENCY_NICE_WIDTH/2);
+}
+
 static inline unsigned long get_latency_max(void)
 {
 	unsigned long thresh = get_sleep_latency(false);
-- 
2.34.1

From nobody Tue Sep  9 22:19:44 2025
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v11 6/8] sched/fair: Add sched group latency support
Date: Thu, 23 Feb 2023 20:10:39 +0100
Message-Id: <20230223191041.577305-7-vincent.guittot@linaro.org>
In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

A task can set its latency priority with sched_setattr(), which is then
used to set the latency offset of its sched_entity, but sched group
entities still have the default latency offset value.

Add a latency.nice field in the cpu cgroup controller to set the latency
priority of the group, similarly to sched_setattr(). The latency priority
is then used to set the offset of the sched_entities of the group.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 10 ++++++++
 kernel/sched/core.c                     | 30 +++++++++++++++++++++++
 kernel/sched/fair.c                     | 32 +++++++++++++++++++++++++
 kernel/sched/sched.h                    |  4 ++++
 4 files changed, 76 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 1b3ed1c3b3f1..c08424593e4a 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1121,6 +1121,16 @@ All time durations are in microseconds.
	values similar to the sched_setattr(2). This maximum utilization
	value is used to clamp the task specific maximum utilization clamp.

+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2). The latency_nice of a group is
+	used to limit the impact of the latency_nice of a task outside the
+	group.


   Memory
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d5b7e237d79b..093cc1af73dc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11059,6 +11059,25 @@ static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
 {
	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	return LATENCY_TO_NICE(css_tg(css)->latency_prio);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 nice)
+{
+	int prio;
+
+	if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
+		return -ERANGE;
+
+	prio = NICE_TO_LATENCY(nice);
+
+	return sched_group_set_latency(css_tg(css), prio);
+}
 #endif

 static struct cftype cpu_legacy_files[] = {
@@ -11073,6 +11092,11 @@ static struct cftype cpu_legacy_files[] = {
		.read_s64 = cpu_idle_read_s64,
		.write_s64 = cpu_idle_write_s64,
	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
	{
@@ -11290,6 +11314,12 @@ static struct cftype cpu_files[] = {
		.read_s64 = cpu_idle_read_s64,
		.write_s64 = cpu_idle_write_s64,
	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
	{
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 414b6243208b..dc7570f43ebe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12274,6 +12274,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
		goto err;

	tg->shares = NICE_0_LOAD;
+	tg->latency_prio = DEFAULT_LATENCY_PRIO;

	init_cfs_bandwidth(tg_cfs_bandwidth(tg));

@@ -12372,6 +12373,9 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
	}

	se->my_q = cfs_rq;
+
+	se->latency_offset = calc_latency_offset(tg->latency_prio);
+
	/* guarantee group entities always have weight */
	update_load_set(&se->load, NICE_0_LOAD);
	se->parent = parent;
@@ -12502,6 +12506,34 @@ int sched_group_set_idle(struct task_group *tg, long idle)
	return 0;
 }

+int sched_group_set_latency(struct task_group *tg, int prio)
+{
+	long latency_offset;
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_prio == prio) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_prio = prio;
+	latency_offset = calc_latency_offset(prio);
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_offset, latency_offset);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
+}
+
 #else /* CONFIG_FAIR_GROUP_SCHED */

 void free_fair_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3f42f86105d4..9a2e71231083 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -378,6 +378,8 @@ struct task_group {

	/* A positive value indicates that this is a SCHED_IDLE group. */
	int			idle;
+	/* latency priority of the group.
 */
+	int			latency_prio;

 #ifdef CONFIG_SMP
	/*
@@ -488,6 +490,8 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

 extern int sched_group_set_idle(struct task_group *tg, long idle);

+extern int sched_group_set_latency(struct task_group *tg, int prio);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next);
-- 
2.34.1

From nobody Tue Sep  9 22:19:44 2025
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v11 7/8] sched/core: Support latency priority with sched core
Date: Thu, 23 Feb 2023 20:10:40 +0100
Message-Id: <20230223191041.577305-8-vincent.guittot@linaro.org>
In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

Take into account wakeup_latency_gran() when ordering the cfs threads.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dc7570f43ebe..125a6ff53378 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11949,6 +11949,9 @@ bool cfs_prio_less(const struct task_struct *a, const struct task_struct *b,
	delta = (s64)(sea->vruntime - seb->vruntime) +
		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);

+	/* Take into account latency offset */
+	delta -= wakeup_latency_gran(sea, seb);
+
	return delta > 0;
 }
 #else
-- 
2.34.1

From nobody Tue Sep  9 22:19:44 2025
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v11 8/8] sched/fair: Add latency list
Date: Thu, 23 Feb 2023 20:10:41 +0100
Message-Id: <20230223191041.577305-9-vincent.guittot@linaro.org>
In-Reply-To: <20230223191041.577305-1-vincent.guittot@linaro.org>
References: <20230223191041.577305-1-vincent.guittot@linaro.org>

Add an rb tree for latency-sensitive entities, so we can schedule the
most sensitive one first even when it failed to preempt current at
wakeup, or when it got quickly preempted by another entity of higher
priority.

In order to keep fairness, the latency priority is used once at wakeup
to get a minimum slice, and not during the following scheduling slices,
to prevent a long-running entity from getting more running time than
allocated by its nice priority.
The rb tree covers the remaining corner case, where a latency-sensitive
entity cannot get scheduled quickly after wakeup.

Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/linux/sched.h |   1 +
 kernel/sched/core.c   |   1 +
 kernel/sched/fair.c   | 108 ++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  |   1 +
 4 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 38decae3e156..41bb92be5ecc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -548,6 +548,7 @@ struct sched_entity {
 	/* For load-balancing: */
 	struct load_weight		load;
 	struct rb_node			run_node;
+	struct rb_node			latency_node;
 	struct list_head		group_node;
 	unsigned int			on_rq;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 093cc1af73dc..752fd364216c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4434,6 +4434,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.nr_migrations		= 0;
 	p->se.vruntime			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
+	RB_CLEAR_NODE(&p->se.latency_node);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq			= NULL;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 125a6ff53378..9fb0461197bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -680,7 +680,85 @@ struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 
 	return __node_2_se(last);
 }
+#endif
 
+/**************************************************************
+ * Scheduling class tree data structure manipulation methods:
+ * for latency
+ */
+
+static inline bool latency_before(struct sched_entity *a,
+				  struct sched_entity *b)
+{
+	return (s64)(a->vruntime + a->latency_offset - b->vruntime - b->latency_offset) < 0;
+}
+
+#define __latency_node_2_se(node) \
+	rb_entry((node), struct sched_entity, latency_node)
+
+static inline bool __latency_less(struct rb_node *a, const struct rb_node *b)
+{
+	return latency_before(__latency_node_2_se(a), __latency_node_2_se(b));
+}
+
+/*
+ * Enqueue an entity into the latency rb-tree:
+ */
+static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+{
+
+	/* Only a latency-sensitive entity can be added to the list */
+	if (se->latency_offset >= 0)
+		return;
+
+	if (!RB_EMPTY_NODE(&se->latency_node))
+		return;
+
+	/*
+	 * The entity is always added to the latency list at wakeup.
+	 * Otherwise, an entity that is put back in the list after an
+	 * execution time of less than sysctl_sched_min_granularity has been
+	 * preempted by a higher sched class or by an entity with a tighter
+	 * latency constraint. In this case, the entity is also put back in
+	 * the latency list so it gets a chance to run first during the
+	 * next slice.
+	 */
+	if (!(flags & ENQUEUE_WAKEUP)) {
+		u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+
+		if (delta_exec >= sysctl_sched_min_granularity)
+			return;
+	}
+
+	rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
+}
+
+/*
+ * Dequeue an entity from the latency rb-tree and return true if it was
+ * really part of the rb-tree:
+ */
+static bool __dequeue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	if (!RB_EMPTY_NODE(&se->latency_node)) {
+		rb_erase_cached(&se->latency_node, &cfs_rq->latency_timeline);
+		RB_CLEAR_NODE(&se->latency_node);
+		return true;
+	}
+
+	return false;
+}
+
+static struct sched_entity *__pick_first_latency(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = rb_first_cached(&cfs_rq->latency_timeline);
+
+	if (!left)
+		return NULL;
+
+	return __latency_node_2_se(left);
+}
+
+#ifdef CONFIG_SCHED_DEBUG
 /**************************************************************
  * Scheduling class statistics methods:
  */
@@ -4758,8 +4836,10 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	check_schedstat_required();
 	update_stats_enqueue_fair(cfs_rq, se, flags);
 	check_spread(cfs_rq, se);
-	if (!curr)
+	if (!curr) {
 		__enqueue_entity(cfs_rq, se);
+		__enqueue_latency(cfs_rq, se, flags);
+	}
 	se->on_rq = 1;
 
 	if (cfs_rq->nr_running == 1) {
@@ -4845,8 +4925,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 	clear_buddies(cfs_rq, se);
 
-	if (se != cfs_rq->curr)
+	if (se != cfs_rq->curr) {
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
+	}
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
@@ -4941,6 +5023,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end_fair(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 	}
 
@@ -4979,7 +5062,7 @@ static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
-	struct sched_entity *se;
+	struct sched_entity *latency, *se;
 
 	/*
 	 * If curr is set we have to see if its left of the leftmost entity
@@ -5021,6 +5104,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 		se = cfs_rq->last;
 	}
 
+	/* Check for a latency-sensitive entity waiting to run */
+	latency = __pick_first_latency(cfs_rq);
+	if (latency && (latency != se) &&
+	    wakeup_preempt_entity(latency, se) < 1)
+		se = latency;
+
 	return se;
 }
 
@@ -5044,6 +5133,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
 		update_stats_wait_start_fair(cfs_rq, prev);
 		/* Put 'current' back into the tree. */
 		__enqueue_entity(cfs_rq, prev);
+		__enqueue_latency(cfs_rq, prev, 0);
 		/* in !on_rq case, update occurred at dequeue */
 		update_load_avg(cfs_rq, prev, 0);
 	}
@@ -12222,6 +12312,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 void init_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
+	cfs_rq->latency_timeline = RB_ROOT_CACHED;
 	u64_u32_store(cfs_rq->min_vruntime, (u64)(-(1LL << 20)));
 #ifdef CONFIG_SMP
 	raw_spin_lock_init(&cfs_rq->removed.lock);
@@ -12529,8 +12620,19 @@ int sched_group_set_latency(struct task_group *tg, int prio)
 
 	for_each_possible_cpu(i) {
 		struct sched_entity *se = tg->se[i];
+		struct rq *rq = cpu_rq(i);
+		struct rq_flags rf;
+		bool queued;
+
+		rq_lock_irqsave(rq, &rf);
 
+		queued = __dequeue_latency(se->cfs_rq, se);
 		WRITE_ONCE(se->latency_offset, latency_offset);
+		if (queued)
+			__enqueue_latency(se->cfs_rq, se, ENQUEUE_WAKEUP);
+
+
+		rq_unlock_irqrestore(rq, &rf);
 	}
 
 	mutex_unlock(&shares_mutex);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9a2e71231083..21dd309e98a9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -570,6 +570,7 @@ struct cfs_rq {
 #endif
 
 	struct rb_root_cached tasks_timeline;
+	struct rb_root_cached latency_timeline;
 
 	/*
 	 * 'curr' points to currently running entity on this cfs_rq.
-- 
2.34.1