From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
    linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net,
    David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org,
    qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com,
    timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com,
    youssefesmat@chromium.org, joel@joelfernandes.org, Vincent Guittot
Subject: [PATCH v9 1/9] sched/fair: fix unfairness at wakeup
Date: Tue, 15 Nov 2022 18:18:43 +0100
Message-Id: <20221115171851.835-2-vincent.guittot@linaro.org>
In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org>
References: <20221115171851.835-1-vincent.guittot@linaro.org>

At wake up, the vruntime of
a task is updated to be no older than a sched_latency period behind
min_vruntime. This prevents a long-sleeping task from getting unlimited
credit at wakeup. Such a waking task should preempt the current one to
use its CPU bandwidth, but wakeup_gran() can be larger than
sched_latency, filter out the wakeup preemption and, as a result, steal
some CPU bandwidth from the waking task.

Make sure that a task whose vruntime has been capped will preempt the
current task and use its CPU bandwidth even if wakeup_gran() is in the
same range as sched_latency. If the waking task fails to preempt
current, it may have to wait up to sysctl_sched_min_granularity before
preempting it at the next tick.

Strictly speaking, we should use cfs->min_vruntime instead of
curr->vruntime, but it is not worth the additional overhead and
complexity, as the vruntime of current should be close to min_vruntime,
if not equal.

Reported-by: Youssef Esmat
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c  | 46 ++++++++++++++++++++--------------------------
 kernel/sched/sched.h | 34 +++++++++++++++++++++++++++++++-
 2 files changed, 54 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4cc56c91e06e..c8a697f8db88 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4645,33 +4645,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
 	u64 vruntime = cfs_rq->min_vruntime;
 
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
-
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
-
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
-
+	if (!initial)
+		/* sleeps up to a single latency don't count. */
+		vruntime -= get_sleep_latency(se_is_idle(se));
+	else if (sched_feat(START_DEBIT))
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * The 'current' period is already promised to the current tasks,
+		 * however the extra weight of the new task will slow them down a
+		 * little, place the new task so that it fits in the slot that
+		 * stays open at the end.
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
-
-		vruntime -= thresh;
-	}
+		vruntime += sched_vslice(cfs_rq, se);
 
 	/* ensure we never gain time by being placed backwards. */
 	se->vruntime = max_vruntime(se->vruntime, vruntime);
@@ -7520,6 +7504,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 		return -1;
 
 	gran = wakeup_gran(se);
+
+	/*
+	 * At wake up, the vruntime of a task is capped to not be older than
+	 * a sched_latency period compared to min_vruntime. This prevents long
+	 * sleeping task to get unlimited credit at wakeup. Such waking up task
+	 * has to preempt current in order to not lose its share of CPU
+	 * bandwidth but wakeup_gran() can become higher than scheduling period
+	 * for low priority task. Make sure that long sleeping task will get a
+	 * chance to preempt current.
+	 */
+	gran = min_t(s64, gran, get_latency_max());
+
 	if (vdiff > gran)
 		return 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 771f8ddb7053..842ce0094d9c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2453,9 +2453,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
 
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
+#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_idle_min_granularity;
 extern unsigned int sysctl_sched_wakeup_granularity;
 extern int sysctl_resched_latency_warn_ms;
@@ -2470,6 +2470,38 @@ extern unsigned int sysctl_numa_balancing_scan_size;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif
 
+static inline unsigned long get_sleep_latency(bool idle)
+{
+	unsigned long thresh;
+
+	if (idle)
+		thresh = sysctl_sched_min_granularity;
+	else
+		thresh = sysctl_sched_latency;
+
+	/*
+	 * Halve their sleep time's effect, to allow
+	 * for a gentler effect of sleepers:
+	 */
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	return thresh;
+}
+
+static inline unsigned long get_latency_max(void)
+{
+	unsigned long thresh = get_sleep_latency(false);
+
+	/*
+	 * If the waking task failed to preempt current it could to wait up to
+	 * sysctl_sched_min_granularity before preempting it during next tick.
+	 */
+	thresh -= sysctl_sched_min_granularity;
+
+	return thresh;
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 2/9] sched: Introduce latency-nice as a per-task attribute
Date: Tue, 15 Nov 2022 18:18:44 +0100
Message-Id: <20221115171851.835-3-vincent.guittot@linaro.org>
In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org>
From: Parth Shah

Latency-nice indicates the latency requirements of a task with respect
to the other tasks in the system. The value of the attribute can be
within the range [-20, 19], both inclusive, in line with task nice
values. latency_nice = -20 indicates that the task requires the least
latency, as compared to tasks having latency_nice = +19. The
latency_nice attribute only affects the CFS sched class, taking latency
requirements from userspace.

Additionally, add debugging bits for the newly added latency_nice
attribute.

Signed-off-by: Parth Shah
[rebase, move defines in sched/prio.h]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/linux/sched.h      |  1 +
 include/linux/sched/prio.h | 18 ++++++++++++++++++
 kernel/sched/debug.c       |  1 +
 3 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 23de7fe86cc4..856240573300 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -784,6 +784,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
+	int				latency_nice;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index ab83d85e1183..bfcd7f1d1e11 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -42,4 +42,22 @@ static inline long rlimit_to_nice(long prio)
 	return (MAX_NICE - prio + 1);
 }
 
+/*
+ * Latency nice is meant to provide scheduler hints about the relative
+ * latency requirements of a task with respect to other tasks.
+ * Thus a task with latency_nice == 19 can be hinted as the task with no
+ * latency requirements, in contrast to the task with latency_nice == -20
+ * which should be given priority in terms of lower latency.
+ */
+#define MAX_LATENCY_NICE	19
+#define MIN_LATENCY_NICE	-20
+
+#define LATENCY_NICE_WIDTH	\
+	(MAX_LATENCY_NICE - MIN_LATENCY_NICE + 1)
+
+/*
+ * Default tasks should be treated as a task with latency_nice = 0.
+ */
+#define DEFAULT_LATENCY_NICE	0
+
 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1637b65ba07a..68be7a3e42a3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,6 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
+	P(latency_nice);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 3/9] sched/core: Propagate parent task's latency requirements to the child task
Date: Tue, 15 Nov 2022 18:18:45 +0100
Message-Id: <20221115171851.835-4-vincent.guittot@linaro.org>
In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org>

From: Parth Shah

Clone the parent task's latency_nice attribute to the forked child
task. Reset latency_nice to the default value when the child task has
sched_reset_on_fork set. Also, initialize init_task.latency_nice with
DEFAULT_LATENCY_NICE.

Signed-off-by: Parth Shah
[rebase]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 init/init_task.c    | 1 +
 kernel/sched/core.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..7dd71dd2d261 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_nice	= DEFAULT_LATENCY_NICE,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 07ac08caf019..8c84c652853b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4592,6 +4592,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_nice = DEFAULT_LATENCY_NICE;
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task
Date: Tue, 15 Nov 2022 18:18:46 +0100
Message-Id: <20221115171851.835-5-vincent.guittot@linaro.org>
In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org>
From: Parth Shah

Introduce the latency_nice attribute to sched_attr and provide a
mechanism to change the value via the sched_setattr/sched_getattr
syscalls. Also add the new flag SCHED_FLAG_LATENCY_NICE to signal a
change of the task's latency_nice on a sched_setattr syscall.

Signed-off-by: Parth Shah
[rebase and add a dedicated __setscheduler_latency ]
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/uapi/linux/sched.h       |  4 +++-
 include/uapi/linux/sched/types.h | 19 +++++++++++++++++++
 kernel/sched/core.c              | 24 ++++++++++++++++++++++++
 tools/include/uapi/linux/sched.h |  4 +++-
 4 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h
index f2c4589d4dbf..db1e8199e8c8 100644
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ============================
+ *
+ * A subset of sched_attr attributes allows to specify the relative latency
+ * requirements of a task with respect to the other tasks running/queued in the
+ * system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with latency_nice with the value of LATENCY_NICE_MIN can be
+ * taken for a task requiring a lower latency as opposed to the task with
+ * higher latency_nice.
  */
 struct sched_attr {
 	__u32 size;
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8c84c652853b..18c31a68eb18 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7352,6 +7352,14 @@ static void __setscheduler_params(struct task_struct *p,
 	p->rt_priority = attr->sched_priority;
 	p->normal_prio = normal_prio(p);
 	set_load_weight(p, true);
+
+}
+
+static void __setscheduler_latency(struct task_struct *p,
+				   const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		p->latency_nice = attr->sched_latency_nice;
 }
 
 /*
@@ -7494,6 +7502,13 @@ static int __sched_setscheduler(struct task_struct *p,
 			return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_LATENCY_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
 
@@ -7528,6 +7543,9 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
+		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+		    attr->sched_latency_nice != p->latency_nice)
+			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
 		retval = 0;
@@ -7616,6 +7634,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7826,6 +7845,9 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
 	 * XXX: Do we want to be lenient like existing syscalls; or do we want
 	 * to be strict and return an error on out-of-bounds values?
@@ -8063,6 +8085,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = p->latency_nice;
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
diff --git a/tools/include/uapi/linux/sched.h b/tools/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
DSGgHUWa1h5rJIp1GwmeHAk6vbO+jFup1Tgs9qV0jGI84iagIppAVkcHRrZ03Sm5quQe lMUebMjk5aIqAgWp8lZKh4DCosXUoVIN+9FyFzE7fJfvZnUrO9y6M9/al9xnTRIcQZt7 iM/w== X-Gm-Message-State: ANoB5pn++8eH3w2kKqn6U+MefsDKDHYtx93nzYsk+h1qAbHKFCdP8xyN KUzuad9YnPN13rlv2UfERpFBNw== X-Google-Smtp-Source: AA0mqf4+Me2Tf3GjMOvSgXlDs5lx8qoaGHnaTSxjnCgoRkpdu5hJz3iFGzug3yz1kVR7BvdNb0pcdA== X-Received: by 2002:a5d:5186:0:b0:236:5486:421f with SMTP id k6-20020a5d5186000000b002365486421fmr11478642wrv.310.1668532749219; Tue, 15 Nov 2022 09:19:09 -0800 (PST) Received: from localhost.localdomain ([2a01:e0a:f:6020:91c8:7496:8b73:811f]) by smtp.gmail.com with ESMTPSA id bg40-20020a05600c3ca800b003cf78aafdd7sm16846461wmb.39.2022.11.15.09.19.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 15 Nov 2022 09:19:08 -0800 (PST) From: Vincent Guittot To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org, parth@linux.ibm.com Cc: qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, Vincent Guittot Subject: [PATCH 5/9] sched/fair: Take into account latency priority at wakeup Date: Tue, 15 Nov 2022 18:18:47 +0100 Message-Id: <20221115171851.835-6-vincent.guittot@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org> References: <20221115171851.835-1-vincent.guittot@linaro.org> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Take into account the latency priority of a thread when deciding to preempt the 
current running thread. We don't want to provide more CPU bandwidth to a
thread, but to reorder the scheduling so that latency-sensitive tasks run
first whenever possible.

As long as a thread has not used up its bandwidth, it will be able to
preempt the current thread. Conversely, a thread with a low latency
priority will preempt the current thread at wakeup only to keep CPU
bandwidth sharing fair; otherwise it will wait for the tick to get its
sched slice.

                   curr vruntime |
      sysctl_sched_wakeup_granularity
                            <-->
----------------------------------|----|-----------------------|---------------
                                  |    |<--------------------->
                                  |    .  sysctl_sched_latency
                                  |    .
default/current latency entity    |    .
                                  |    .
1111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-
se preempts curr at wakeup ------>|<- se doesn't preempt curr -------------------
                                  |    .
                                  |    .
                                  |    .
low latency entity                |    .
                         ---------------------->|
                 % of sysctl_sched_latency      |
1111111111111111111111111111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-
preempt ------------------------------------------------->|<- do not preempt --
                                  |    .
                                  |    .
                                  |    .
high latency entity               |    .
         |<-----------------------|----.
         | % of sysctl_sched_latency   .
111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
preempt->|<- se doesn't preempt curr -------------------------------------------

Test results of nice latency impact on a heavy load like hackbench:

hackbench -l (2560 / group) -g group
group        latency 0              latency 19
1            1.378(+/- 1%)          1.337(+/- 1%)          + 3%
4            1.393(+/- 3%)          1.312(+/- 3%)          + 6%
8            1.308(+/- 2%)          1.279(+/- 1%)          + 2%
16           1.347(+/- 1%)          1.317(+/- 1%)          + 2%

hackbench -p -l (2560 / group) -g group
group
1            1.836(+/- 17%)         1.148(+/- 5%)          +37%
4            1.586(+/- 6%)          1.109(+/- 8%)          +30%
8            1.209(+/- 4%)          0.780(+/- 4%)          +35%
16           0.805(+/- 5%)          0.728(+/- 4%)          +10%

By decreasing the latency prio, we reduce the number of preemptions at
wakeup and help hackbench make progress.

Test results of nice latency impact on a short-lived load like cyclictest
while competing with a heavy load like hackbench:

hackbench -l 10000 -g $group &
cyclictest --policy other -D 5 -q -n
        latency 0              latency -20
group   min  avg    max        min  avg    max
0       16   19     29         17   18     29
1       43   299    7359       63   84     3422
4       56   449    14806      45   83     284
8       63   820    51123      63   83     283
16      64   1326   70684      41   157    26852

group = 0 means that hackbench is not running.

The avg is significantly improved with nice latency -20, especially with a
large number of groups, but min and max remain quite similar. If we add the
histogram parameter to get details of latency, we have:

hackbench -l 10000 -g 16 &
cyclictest --policy other -D 5 -q -n -H 20000 --histfile data.txt
                 latency 0    latency -20
Min Latencies:      64            62
Avg Latencies:    1170           107
Max Latencies:   88069         10417
50% latencies:     122            86
75% latencies:     614            91
85% latencies:     961            94
90% latencies:    1225            97
95% latencies:    6120           102
99% latencies:   18328           159

With percentile details, we see the benefit of nice latency -20, as only
1% of the latencies are above 159us, whereas with the default latency 15%
of the latencies are around ~1ms or above and 5% are over 6ms.

Reviewed-by: Joel Fernandes (Google)
Tested-by: K Prateek Nayak
Signed-off-by: Vincent Guittot
---
 include/linux/sched.h      |  4 ++-
 include/linux/sched/prio.h |  9 ++++++
 init/init_task.c           |  2 +-
 kernel/sched/core.c        | 38 +++++++++++++++++++---
 kernel/sched/debug.c       |  2 +-
 kernel/sched/fair.c        | 66 ++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h       |  6 ++++
 7 files changed, 112 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 856240573300..2f33326adb8d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -568,6 +568,8 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long			runnable_weight;
 #endif
+	/* preemption offset in ns */
+	long				latency_offset;
 
 #ifdef CONFIG_SMP
 	/*
@@ -784,7 +786,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
-	int				latency_nice;
+	int				latency_prio;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index bfcd7f1d1e11..be79503d86af 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -59,5 +59,14 @@ static inline long rlimit_to_nice(long prio)
  * Default tasks should be treated as a task with latency_nice = 0.
  */
 #define DEFAULT_LATENCY_NICE	0
+#define DEFAULT_LATENCY_PRIO	(DEFAULT_LATENCY_NICE + LATENCY_NICE_WIDTH/2)
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static latency [ 0..39 ],
+ * and back.
+ */
+#define NICE_TO_LATENCY(nice)	((nice) + DEFAULT_LATENCY_PRIO)
+#define LATENCY_TO_NICE(prio)	((prio) - DEFAULT_LATENCY_PRIO)
 
 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/init/init_task.c b/init/init_task.c
index 7dd71dd2d261..071deff8dbd1 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,7 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
-	.latency_nice	= DEFAULT_LATENCY_NICE,
+	.latency_prio	= DEFAULT_LATENCY_PRIO,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 18c31a68eb18..b2b8cb6c08cd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1283,6 +1283,16 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	}
 }
 
+static void set_latency_offset(struct task_struct *p)
+{
+	long weight = sched_latency_to_weight[p->latency_prio];
+	s64 offset;
+
+	offset = weight * get_sleep_latency(false);
+	offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
+	p->se.latency_offset = (long)offset;
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4592,7 +4602,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
-		p->latency_nice = DEFAULT_LATENCY_NICE;
+		p->latency_prio = NICE_TO_LATENCY(0);
+		set_latency_offset(p);
+
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
@@ -7358,8 +7370,10 @@ static void __setscheduler_params(struct task_struct *p,
 static void __setscheduler_latency(struct task_struct *p,
 				   const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
-		p->latency_nice = attr->sched_latency_nice;
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		p->latency_prio = NICE_TO_LATENCY(attr->sched_latency_nice);
+		set_latency_offset(p);
+	}
 }
 
 /*
@@ -7544,7 +7558,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
-		    attr->sched_latency_nice != p->latency_nice)
+		    attr->sched_latency_nice != LATENCY_TO_NICE(p->latency_prio))
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
@@ -8085,7 +8099,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
-	kattr.sched_latency_nice = p->latency_nice;
+	kattr.sched_latency_nice = LATENCY_TO_NICE(p->latency_prio);
 
 #ifdef CONFIG_UCLAMP_TASK
 	/*
@@ -11294,6 +11308,20 @@ const u32 sched_prio_to_wmult[40] = {
 /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };
 
+/*
+ * latency weight for wakeup preemption
+ */
+const int sched_latency_to_weight[40] = {
+ /* -20 */      -1024,     -973,     -922,     -870,     -819,
+ /* -15 */       -768,     -717,     -666,     -614,     -563,
+ /* -10 */       -512,     -461,     -410,     -358,     -307,
+ /*  -5 */       -256,     -205,     -154,     -102,      -51,
+ /*   0 */          0,       51,      102,      154,      205,
+ /*   5 */        256,      307,      358,      410,      461,
+ /*  10 */        512,      563,      614,      666,      717,
+ /*  15 */        768,      819,      870,      922,      973,
+};
+
 void call_trace_sched_update_nr_running(struct rq *rq, int count)
 {
 	trace_sched_update_nr_running_tp(rq, count);
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 68be7a3e42a3..b3922184af91 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,7 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
-	P(latency_nice);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c8a697f8db88..0e80e65113bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4858,6 +4858,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		update_idle_cfs_rq_clock_pelt(cfs_rq);
 }
 
+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4866,7 +4868,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	unsigned long ideal_runtime, delta_exec;
 	struct sched_entity *se;
-	s64 delta;
+	s64 delta, offset;
 
 	ideal_runtime = sched_slice(cfs_rq, curr);
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
@@ -4891,10 +4893,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	se = __pick_first_entity(cfs_rq);
 	delta = curr->vruntime - se->vruntime;
 
-	if (delta < 0)
+	offset = wakeup_latency_gran(curr, se);
+	if (delta < offset)
 		return;
 
-	if (delta > ideal_runtime)
+	if ((delta > ideal_runtime) ||
+	    (delta > get_latency_max()))
 		resched_curr(rq_of(cfs_rq));
 }
 
@@ -6019,6 +6023,35 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
+static void set_next_buddy(struct sched_entity *se);
+
+static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
+{
+	struct sched_entity *next;
+
+	if (se->latency_offset >= 0)
+		return;
+
+	if (cfs->nr_running <= 1)
+		return;
+	/*
+	 * When waking from another class, we don't need to check to preempt at
+	 * wakeup and don't set next buddy as a candidate for being picked in
+	 * priority.
+	 * In case of simultaneous wakeup when current is another class, the
+	 * latency sensitive tasks lost opportunity to preempt non sensitive
+	 * tasks which woke up simultaneously.
+	 */
+
+	if (cfs->next)
+		next = cfs->next;
+	else
+		next = __pick_first_entity(cfs);
+
+	if (next && wakeup_preempt_entity(next, se) == 1)
+		set_next_buddy(se);
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -6105,14 +6138,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);
 
+	if (rq->curr->sched_class != &fair_sched_class)
+		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
+
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
 }
 
-static void set_next_buddy(struct sched_entity *se);
-
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
@@ -7461,6 +7495,23 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 }
 #endif /* CONFIG_SMP */
 
+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se)
+{
+	long latency_offset = se->latency_offset;
+
+	/*
+	 * A negative latency offset means that the sched_entity has latency
+	 * requirement that needs to be evaluated versus other entity.
+	 * Otherwise, use the latency weight to evaluate how much scheduling
+	 * delay is acceptable by se.
+	 */
+	if ((latency_offset < 0) || (curr->latency_offset < 0))
+		latency_offset -= curr->latency_offset;
+	latency_offset = min_t(long, latency_offset, get_latency_max());
+
+	return latency_offset;
+}
+
 static unsigned long wakeup_gran(struct sched_entity *se)
 {
 	unsigned long gran = sysctl_sched_wakeup_granularity;
@@ -7499,11 +7550,12 @@ static int
 wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 {
 	s64 gran, vdiff = curr->vruntime - se->vruntime;
+	s64 offset = wakeup_latency_gran(curr, se);
 
-	if (vdiff <= 0)
+	if (vdiff < offset)
 		return -1;
 
-	gran = wakeup_gran(se);
+	gran = offset + wakeup_gran(se);
 
 	/*
 	 * At wake up, the vruntime of a task is capped to not be older than
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 842ce0094d9c..7292652731d0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -125,6 +125,11 @@ extern int sched_rr_timeslice;
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/* Maximum nice latency weight used to scale the latency_offset */
+
+#define NICE_LATENCY_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+#define NICE_LATENCY_WEIGHT_MAX	(1L << NICE_LATENCY_SHIFT)
+
 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
 * The extra resolution improves shares distribution and load balancing of
@@ -2115,6 +2120,7 @@ static_assert(WF_TTWU == SD_BALANCE_WAKE);
 
 extern const int		sched_prio_to_weight[40];
 extern const u32		sched_prio_to_wmult[40];
+extern const int		sched_latency_to_weight[40];
 
 /*
  * {de,en}queue flags:
-- 
2.17.1

From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 6/9] sched/fair: Add sched group latency support
Date: Tue, 15 Nov 2022 18:18:48 +0100
Message-Id: <20221115171851.835-7-vincent.guittot@linaro.org>
A task can set its latency priority with sched_setattr(), which is then
used to set the latency offset of its sched_entity, but sched group
entities still have the default latency offset value.

Add a latency.nice field in the cpu cgroup controller to set the latency
priority of the group, similarly to sched_setattr(). The latency priority
is then used to set the offset of the sched_entities of the group.

Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 Documentation/admin-guide/cgroup-v2.rst | 10 +++++
 kernel/sched/core.c                     | 52 +++++++++++++++++++++++++
 kernel/sched/fair.c                     | 33 ++++++++++++++++
 kernel/sched/sched.h                    |  4 ++
 4 files changed, 99 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index dc254a3cb956..93a73663a5f7 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1118,6 +1118,16 @@ All time durations are in microseconds.
 	values similar to the sched_setattr(2). This maximum utilization
 	value is used to clamp the task specific maximum utilization clamp.
 
+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2). The latency_nice of a group is
+	used to limit the impact of the latency_nice of a task outside the
+	group.
 
 
 Memory
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b2b8cb6c08cd..9f6700f812ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10967,6 +10967,47 @@ static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	int prio, delta, last_delta = INT_MAX;
+	s64 weight;
+
+	weight = css_tg(css)->latency_offset * NICE_LATENCY_WEIGHT_MAX;
+	weight = div_s64(weight, get_sleep_latency(false));
+
+	/* Find the closest nice value to the current weight */
+	for (prio = 0; prio < ARRAY_SIZE(sched_latency_to_weight); prio++) {
+		delta = abs(sched_latency_to_weight[prio] - weight);
+		if (delta >= last_delta)
+			break;
+		last_delta = delta;
+	}
+
+	return LATENCY_TO_NICE(prio-1);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 nice)
+{
+	s64 latency_offset;
+	long weight;
+	int idx;
+
+	if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
+		return -ERANGE;
+
+	idx = NICE_TO_LATENCY(nice);
+	idx = array_index_nospec(idx, LATENCY_NICE_WIDTH);
+	weight = sched_latency_to_weight[idx];
+
+	latency_offset = weight * get_sleep_latency(false);
+	latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);
+
+	return sched_group_set_latency(css_tg(css), latency_offset);
+}
+
 #endif
 
 static struct cftype cpu_legacy_files[] = {
@@ -10981,6 +11022,11 @@ static struct cftype cpu_legacy_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11198,6 +11244,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0e80e65113bd..75c0a8d203c3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12158,6 +12158,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		goto err;
 
 	tg->shares = NICE_0_LOAD;
+	tg->latency_offset = 0;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
@@ -12256,6 +12257,9 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	}
 
 	se->my_q = cfs_rq;
+
+	se->latency_offset = tg->latency_offset;
+
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -12386,6 +12390,35 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	return 0;
 }
 
+int sched_group_set_latency(struct task_group *tg, s64 latency)
+{
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	if (abs(latency) > sysctl_sched_latency)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_offset == latency) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_offset = latency;
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_offset, latency);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
+}
+
 #else /* CONFIG_FAIR_GROUP_SCHED */
 
 void free_fair_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7292652731d0..c3735a34d394 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -383,6 +383,8 @@ struct task_group {
 
 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;
+	/* latency constraint of the group. */
+	int			latency_offset;
 
 #ifdef CONFIG_SMP
 	/*
@@ -493,6 +495,8 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
 
 extern int sched_group_set_idle(struct task_group *tg, long idle);
 
+extern int sched_group_set_latency(struct task_group *tg, s64 latency);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
 			     struct cfs_rq *prev, struct cfs_rq *next);
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 7/9] sched/core: Support latency priority with sched core
Date: Tue, 15 Nov 2022 18:18:49 +0100
Message-Id: <20221115171851.835-8-vincent.guittot@linaro.org>

Take into account wakeup_latency_gran() when ordering the cfs threads.

Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 75c0a8d203c3..be446dc58be7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11833,6 +11833,9 @@ bool cfs_prio_less(struct task_struct *a, struct task_struct *b, bool in_fi)
 	delta = (s64)(sea->vruntime - seb->vruntime) +
 		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);
 
+	/* Take into account latency prio */
+	delta -= wakeup_latency_gran(sea, seb);
+
 	return delta > 0;
 }
 #else
-- 
2.17.1
From nobody Mon Apr 13 17:14:05 2026
From: Vincent Guittot
Subject: [PATCH v9 8/9] sched/fair: Add latency list
Date: Tue, 15 Nov 2022 18:18:50 +0100
Message-Id: <20221115171851.835-9-vincent.guittot@linaro.org>

Add an rb tree for latency-sensitive entities so we can schedule the most
sensitive one first, even when it failed to preempt current at wakeup or
when it got quickly preempted by another entity of higher priority.

In order to keep fairness, the latency is used once at wakeup to get a
minimum slice, and not during the following scheduling slices, to prevent
a long-running entity from getting more running time than allocated by its
nice priority.

The rb tree covers the last corner case, where a latency-sensitive entity
cannot get scheduled quickly after the wakeup.
Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   |  1 +
 kernel/sched/fair.c   | 95 +++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  |  1 +
 4 files changed, 95 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2f33326adb8d..5187114a9920 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -548,6 +548,7 @@ struct sched_entity {
 	/* For load-balancing: */
 	struct load_weight load;
 	struct rb_node run_node;
+	struct rb_node latency_node;
 	struct list_head group_node;
 	unsigned int on_rq;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9f6700f812ea..eaca0e34ab58 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4361,6 +4361,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.nr_migrations = 0;
 	p->se.vruntime = 0;
 	INIT_LIST_HEAD(&p->se.group_node);
+	RB_CLEAR_NODE(&p->se.latency_node);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq = NULL;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index be446dc58be7..76da7c7a13ab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -665,7 +665,76 @@ struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 
 	return __node_2_se(last);
 }
+#endif
 
+/**************************************************************
+ * Scheduling class tree data structure manipulation methods:
+ * for latency
+ */
+
+static inline bool latency_before(struct sched_entity *a,
+				  struct sched_entity *b)
+{
+	return (s64)(a->vruntime + a->latency_offset - b->vruntime - b->latency_offset) < 0;
+}
+
+#define __latency_node_2_se(node) \
+	rb_entry((node), struct sched_entity, latency_node)
+
+static inline bool __latency_less(struct rb_node *a, const struct rb_node *b)
+{
+	return latency_before(__latency_node_2_se(a), __latency_node_2_se(b));
+}
+
+/*
+ * Enqueue an entity into the latency rb-tree:
+ */
+static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+{
+
+	/* Only latency sensitive entity can be added to the list */
+	if (se->latency_offset >= 0)
+		return;
+
+	if (!RB_EMPTY_NODE(&se->latency_node))
+		return;
+
+	/*
+	 * An execution time less than sysctl_sched_min_granularity means that
+	 * the entity has been preempted by a higher sched class or an entity
+	 * with higher latency constraint.
+	 * Put it back in the list so it gets a chance to run 1st during the
+	 * next slice.
+	 */
+	if (!(flags & ENQUEUE_WAKEUP)) {
+		u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+
+		if (delta_exec >= sysctl_sched_min_granularity)
+			return;
+	}
+
+	rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
+}
+
+static void __dequeue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	if (!RB_EMPTY_NODE(&se->latency_node)) {
+		rb_erase_cached(&se->latency_node, &cfs_rq->latency_timeline);
+		RB_CLEAR_NODE(&se->latency_node);
+	}
+}
+
+static struct sched_entity *__pick_first_latency(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = rb_first_cached(&cfs_rq->latency_timeline);
+
+	if (!left)
+		return NULL;
+
+	return __latency_node_2_se(left);
+}
+
+#ifdef CONFIG_SCHED_DEBUG
 /**************************************************************
  * Scheduling class statistics methods:
  */
@@ -4739,8 +4808,10 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	check_schedstat_required();
 	update_stats_enqueue_fair(cfs_rq, se, flags);
 	check_spread(cfs_rq, se);
-	if (!curr)
+	if (!curr) {
 		__enqueue_entity(cfs_rq, se);
+		__enqueue_latency(cfs_rq, se, flags);
+	}
 	se->on_rq = 1;
 
 	if (cfs_rq->nr_running == 1) {
@@ -4826,8 +4897,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 	clear_buddies(cfs_rq, se);
 
-	if (se != cfs_rq->curr)
+	if (se != cfs_rq->curr) {
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
+	}
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
@@ -4916,6 +4989,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	 */
 	update_stats_wait_end_fair(cfs_rq, se);
 	__dequeue_entity(cfs_rq, se);
+	__dequeue_latency(cfs_rq, se);
 	update_load_avg(cfs_rq, se, UPDATE_TG);
 }
 
@@ -4954,7 +5028,7 @@ static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
-	struct sched_entity *se;
+	struct sched_entity *latency, *se;
 
 	/*
 	 * If curr is set we have to see if its left of the leftmost entity
@@ -4996,6 +5070,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 		se = cfs_rq->last;
 	}
 
+	/* Check for latency sensitive entity waiting for running */
+	latency = __pick_first_latency(cfs_rq);
+	if (latency && (latency != se) &&
+	    wakeup_preempt_entity(latency, se) < 1)
+		se = latency;
+
 	return se;
 }
 
@@ -5019,6 +5099,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
 		update_stats_wait_start_fair(cfs_rq, prev);
 		/* Put 'current' back into the tree. */
 		__enqueue_entity(cfs_rq, prev);
+		__enqueue_latency(cfs_rq, prev, 0);
 		/* in !on_rq case, update occurred at dequeue */
 		update_load_avg(cfs_rq, prev, 0);
 	}
@@ -12106,6 +12187,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 void init_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
+	cfs_rq->latency_timeline = RB_ROOT_CACHED;
 	u64_u32_store(cfs_rq->min_vruntime, (u64)(-(1LL << 20)));
 #ifdef CONFIG_SMP
 	raw_spin_lock_init(&cfs_rq->removed.lock);
@@ -12414,8 +12496,15 @@ int sched_group_set_latency(struct task_group *tg, s64 latency)
 
 	for_each_possible_cpu(i) {
 		struct sched_entity *se = tg->se[i];
+		struct rq *rq = cpu_rq(i);
+		struct rq_flags rf;
+
+		rq_lock_irqsave(rq, &rf);
 
+		__dequeue_latency(se->cfs_rq, se);
 		WRITE_ONCE(se->latency_offset, latency);
+
+		rq_unlock_irqrestore(rq, &rf);
 	}
 
 	mutex_unlock(&shares_mutex);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c3735a34d394..b81179c512e6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -575,6 +575,7 @@ struct cfs_rq {
 #endif
 
 	struct rb_root_cached tasks_timeline;
+	struct rb_root_cached latency_timeline;
 
 	/*
	 * 'curr' points to currently running entity on this cfs_rq.
-- 
2.17.1

From: Vincent Guittot
Subject: [PATCH v9 9/9] sched/fair: remove check_preempt_from_others
Date: Tue, 15 Nov 2022 18:18:51 +0100
Message-Id: <20221115171851.835-10-vincent.guittot@linaro.org>
In-Reply-To: <20221115171851.835-1-vincent.guittot@linaro.org>

With the
dedicated latency list, we no longer have to handle this special case,
as pick_next_entity() now checks for a runnable latency-sensitive task.

Signed-off-by: Vincent Guittot
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c | 34 ++--------------------------------
 1 file changed, 2 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76da7c7a13ab..466a2fee1592 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6104,35 +6104,6 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
-static void set_next_buddy(struct sched_entity *se);
-
-static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
-{
-	struct sched_entity *next;
-
-	if (se->latency_offset >= 0)
-		return;
-
-	if (cfs->nr_running <= 1)
-		return;
-	/*
-	 * When waking from another class, we don't need to check to preempt at
-	 * wakeup and don't set next buddy as a candidate for being picked in
-	 * priority.
-	 * In case of simultaneous wakeup when current is another class, the
-	 * latency sensitive tasks lost opportunity to preempt non sensitive
-	 * tasks which woke up simultaneously.
-	 */
-
-	if (cfs->next)
-		next = cfs->next;
-	else
-		next = __pick_first_entity(cfs);
-
-	if (next && wakeup_preempt_entity(next, se) == 1)
-		set_next_buddy(se);
-}
-
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -6219,15 +6190,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);
 
-	if (rq->curr->sched_class != &fair_sched_class)
-		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
-
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
 }
 
+static void set_next_buddy(struct sched_entity *se);
+
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
-- 
2.17.1