From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qais.yousef@arm.com, chris.hyser@oracle.com, patrick.bellasi@matbug.net,
 David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org,
 qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com,
 timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com,
 youssefesmat@chromium.org, joel@joelfernandes.org, Vincent Guittot
Subject: [PATCH v7 1/9] sched/fair: fix unfairness at wakeup
Date: Fri, 28 Oct 2022 11:33:55 +0200
Message-Id: <20221028093403.6673-2-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>
At wakeup, the vruntime of a task is updated so that it is not older than
min_vruntime by more than a sched_latency period. This prevents a long
sleeping task from getting unlimited credit at wakeup. Such a waking task
should preempt the current one to use its share of CPU bandwidth, but
wakeup_gran() can be larger than sched_latency, filtering out the wakeup
preemption and, as a result, stealing some CPU bandwidth from the waking
task.

Make sure that a task whose vruntime has been capped will preempt the
current task and use its CPU bandwidth even if wakeup_gran() is in the
same range as sched_latency. If the waking task fails to preempt current,
it may have to wait up to sysctl_sched_min_granularity before preempting
it at the next tick.

Strictly speaking, we should use cfs->min_vruntime instead of
curr->vruntime, but it isn't worth the additional overhead and complexity,
as the vruntime of current should be close to min_vruntime, if not equal.

Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
 kernel/sched/sched.h | 30 ++++++++++++++++++++++++++++-
 2 files changed, 50 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5ffec4370602..eb04c83112a0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4345,33 +4345,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
 	u64 vruntime = cfs_rq->min_vruntime;
 
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
-
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
-
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
-
+	if (!initial)
+		/* sleeps up to a single latency don't count. */
+		vruntime -= get_sched_latency(se_is_idle(se));
+	else if (sched_feat(START_DEBIT))
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * The 'current' period is already promised to the current tasks,
+		 * however the extra weight of the new task will slow them down a
+		 * little, place the new task so that it fits in the slot that
+		 * stays open at the end.
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
-
-		vruntime -= thresh;
-	}
+		vruntime += sched_vslice(cfs_rq, se);
 
 	/* ensure we never gain time by being placed backwards. */
 	se->vruntime = max_vruntime(se->vruntime, vruntime);
@@ -7187,6 +7171,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 		return -1;
 
 	gran = wakeup_gran(se);
+
+	/*
+	 * At wake up, the vruntime of a task is capped to not be older than
+	 * a sched_latency period compared to min_vruntime. This prevents long
+	 * sleeping task to get unlimited credit at wakeup. Such waking up task
+	 * has to preempt current in order to not lose its share of CPU
+	 * bandwidth but wakeup_gran() can become higher than scheduling period
+	 * for low priority task. Make sure that long sleeping task will get a
+	 * chance to preempt current.
+	 */
+	gran = min_t(s64, gran, get_latency_max());
+
 	if (vdiff > gran)
 		return 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1fc198be1ffd..14879d429919 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2432,9 +2432,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
+#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_idle_min_granularity;
 extern unsigned int sysctl_sched_wakeup_granularity;
 extern int sysctl_resched_latency_warn_ms;
@@ -2448,6 +2448,34 @@ extern unsigned int sysctl_numa_balancing_scan_period_max;
 extern unsigned int sysctl_numa_balancing_scan_size;
 #endif
 
+static inline unsigned long get_sched_latency(bool idle)
+{
+	unsigned long thresh;
+
+	if (idle)
+		thresh = sysctl_sched_min_granularity;
+	else
+		thresh = sysctl_sched_latency;
+
+	/*
+	 * Halve their sleep time's effect, to allow
+	 * for a gentler effect of sleepers:
+	 */
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	return thresh;
+}
+
+static inline unsigned long get_latency_max(void)
+{
+	unsigned long thresh = get_sched_latency(false);
+
+	thresh -= sysctl_sched_min_granularity;
+
+	return thresh;
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
-- 
2.17.1
From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 2/9] sched: Introduce latency-nice as a per-task attribute
Date: Fri, 28 Oct 2022 11:33:56 +0200
Message-Id: <20221028093403.6673-3-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>

From: Parth Shah

Latency-nice indicates the latency requirements of a task with respect to
the other tasks in the system. The value of the attribute can be within
the range [-20, 19], both inclusive, in line with task nice values.

latency_nice = -20 indicates that the task should get the least latency,
compared to tasks with latency_nice = +19. latency_nice affects only the
CFS sched class, taking latency requirements from userspace.
Additionally, add debugging bits for the newly added latency_nice
attribute.

Signed-off-by: Parth Shah
[rebase]
Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 include/linux/sched.h |  1 +
 kernel/sched/debug.c  |  1 +
 kernel/sched/sched.h  | 18 ++++++++++++++++++
 3 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 15e3bd96e4ce..6805f378a9c3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -783,6 +783,7 @@ struct task_struct {
 	int static_prio;
 	int normal_prio;
 	unsigned int rt_priority;
+	int latency_nice;
 
 	struct sched_entity se;
 	struct sched_rt_entity rt;
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index bb3d63bdf4ae..a3f7876217a6 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1042,6 +1042,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
+	P(latency_nice);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 14879d429919..4bf9d7777f99 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -125,6 +125,24 @@ extern int sched_rr_timeslice;
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/*
+ * Latency nice is meant to provide scheduler hints about the relative
+ * latency requirements of a task with respect to other tasks.
+ * Thus a task with latency_nice == 19 can be hinted as the task with no
+ * latency requirements, in contrast to the task with latency_nice == -20
+ * which should be given priority in terms of lower latency.
+ */
+#define MAX_LATENCY_NICE	19
+#define MIN_LATENCY_NICE	-20
+
+#define LATENCY_NICE_WIDTH	\
+	(MAX_LATENCY_NICE - MIN_LATENCY_NICE + 1)
+
+/*
+ * Default tasks should be treated as a task with latency_nice = 0.
+ */
+#define DEFAULT_LATENCY_NICE	0
+
 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
  * The extra resolution improves shares distribution and load balancing of
-- 
2.17.1
From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 3/9] sched/core: Propagate parent task's latency requirements to the child task
Date: Fri, 28 Oct 2022 11:33:57 +0200
Message-Id: <20221028093403.6673-4-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>

From: Parth Shah

Clone the parent task's latency_nice attribute to the forked child task.
Reset the latency_nice value to the default when the child task has
sched_reset_on_fork set. Also, initialize init_task.latency_nice with
DEFAULT_LATENCY_NICE.

Signed-off-by: Parth Shah
[rebase]
Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 init/init_task.c    | 1 +
 kernel/sched/core.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..7dd71dd2d261 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_nice	= DEFAULT_LATENCY_NICE,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 02dc1b8e3cb6..54544353025b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4559,6 +4559,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_nice = DEFAULT_LATENCY_NICE;
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
-- 
2.17.1
From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task
Date: Fri, 28 Oct 2022 11:33:58 +0200
Message-Id: <20221028093403.6673-5-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>

From: Parth Shah

Introduce the latency_nice attribute to sched_attr and provide a
mechanism to change the value via the sched_setattr/sched_getattr
syscalls. Also add a new flag, SCHED_FLAG_LATENCY_NICE, to signal a
change of the task's latency_nice on a sched_setattr syscall.

Signed-off-by: Parth Shah
[rebase and add a dedicated __setscheduler_latency ]
Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 include/uapi/linux/sched.h       |  4 +++-
 include/uapi/linux/sched/types.h | 19 +++++++++++++++++++
 kernel/sched/core.c              | 24 ++++++++++++++++++++++++
 tools/include/uapi/linux/sched.h |  4 +++-
 4 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h
index f2c4589d4dbf..db1e8199e8c8 100644
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ============================
+ *
+ * A subset of sched_attr attributes allows to specify the relative latency
+ * requirements of a task with respect to the other tasks running/queued in the
+ * system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with latency_nice with the value of LATENCY_NICE_MIN can be
+ * taken for a task requiring a lower latency as opposed to the task with
+ * higher latency_nice.
  */
 struct sched_attr {
 	__u32 size;
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 54544353025b..b2accc9da4fe 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7318,6 +7318,14 @@ static void __setscheduler_params(struct task_struct *p,
 	p->rt_priority = attr->sched_priority;
 	p->normal_prio = normal_prio(p);
 	set_load_weight(p, true);
+
+}
+
+static void __setscheduler_latency(struct task_struct *p,
+				   const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		p->latency_nice = attr->sched_latency_nice;
 }
 
 /*
@@ -7460,6 +7468,13 @@ static int __sched_setscheduler(struct task_struct *p,
 			return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_LATENCY_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
 
@@ -7494,6 +7509,9 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
+		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+		    attr->sched_latency_nice != p->latency_nice)
+			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
 		retval = 0;
@@ -7582,6 +7600,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7792,6 +7811,9 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
 	 * XXX: Do we want to be lenient like existing syscalls; or do we want
 	 * to be strict and return an error on out-of-bounds values?
@@ -8029,6 +8051,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = p->latency_nice;
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
diff --git a/tools/include/uapi/linux/sched.h b/tools/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
-- 
2.17.1
From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 5/9] sched/fair: Take into account latency priority at wakeup
Date: Fri, 28 Oct 2022 11:33:59 +0200
Message-Id: <20221028093403.6673-6-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>

Take into account the latency priority of a thread when deciding to preempt
the current running thread. We don't want to provide more CPU bandwidth to
a thread, but reorder the scheduling to run latency-sensitive tasks first
whenever possible.

As long as a thread didn't use its bandwidth, it will be able to preempt
the current thread. Conversely, a thread with a low latency priority will
preempt the current thread at wakeup only to keep fair CPU bandwidth
sharing; otherwise it will wait for the tick to get its sched slice.

                                   curr vruntime
                                       |      sysctl_sched_wakeup_granularity
                                       <-->
----------------------------------|----|-----------------------|---------------
                                       |  |<--------------------->
                                       |  .  sysctl_sched_latency
                                       |  .
default/current latency entity         |  .
                                       |  .
1111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-
se preempts curr at wakeup ------>|<- se doesn't preempt curr ------------------
                                       |  .
                                       |  .
                                       |  .
low latency entity                     |  .
                        ---------------------->|
                        % of sysctl_sched_latency  |
1111111111111111111111111111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-
preempt ------------------------------------------------->|<- do not preempt --
                                       |  .
                                       |  .
                                       |  .
high latency entity                    |  .
         |<-----------------------|----.
         | % of sysctl_sched_latency   .
111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
preempt->|<- se doesn't preempt curr -------------------------------------------

Tests results of nice latency impact on heavy load like hackbench:

hackbench -l (2560 / group) -g group
group        latency 0             latency 19
1            1.378(+/- 1%)         1.337(+/- 1%)  + 3%
4            1.393(+/- 3%)         1.312(+/- 3%)  + 6%
8            1.308(+/- 2%)         1.279(+/- 1%)  + 2%
16           1.347(+/- 1%)         1.317(+/- 1%)  + 2%

hackbench -p -l (2560 / group) -g group
group
1            1.836(+/- 17%)        1.148(+/- 5%)  +37%
4            1.586(+/- 6%)         1.109(+/- 8%)  +30%
8            1.209(+/- 4%)         0.780(+/- 4%)  +35%
16           0.805(+/- 5%)         0.728(+/- 4%)  +10%

By decreasing the latency prio, we reduce the number of preemptions at
wakeup and help hackbench make progress.

Tested-by: shrikanth Hegde
Test results of nice latency impact on a short-lived load like cyclictest
while competing with a heavy load like hackbench:

hackbench -l 10000 -g $group &
cyclictest --policy other -D 5 -q -n

         latency 0             latency -20
group    min  avg    max       min  avg    max
0        16   19     29        17   18     29
1        43   299    7359      63   84     3422
4        56   449    14806     45   83     284
8        63   820    51123     63   83     283
16       64   1326   70684     41   157    26852

group = 0 means that hackbench is not running.

The avg is significantly improved with nice latency -20, especially with a
large number of groups, but min and max remain quite similar. If we add the
histogram parameter to get details of the latency, we have:

hackbench -l 10000 -g 16 &
cyclictest --policy other -D 5 -q -n -H 20000 --histfile data.txt

                 latency 0    latency -20
Min Latencies:   64           62
Avg Latencies:   1170         107
Max Latencies:   88069        10417
50% latencies:   122          86
75% latencies:   614          91
85% latencies:   961          94
90% latencies:   1225         97
95% latencies:   6120         102
99% latencies:   18328        159

The percentile details show the benefit of nice latency -20: only 1% of the
latencies are above 159 us, whereas the default latency has around 15% at
~1 ms or above and 5% over 6 ms.
Signed-off-by: Vincent Guittot
---
 include/linux/sched.h |  4 ++-
 init/init_task.c      |  2 +-
 kernel/sched/core.c   | 38 +++++++++++++++++++++----
 kernel/sched/debug.c  |  2 +-
 kernel/sched/fair.c   | 66 ++++++++++++++++++++++++++++++++++++++-----
 kernel/sched/sched.h  | 12 ++++++++
 6 files changed, 109 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6805f378a9c3..a74cad08e91e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -567,6 +567,8 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long			runnable_weight;
 #endif
+	/* preemption offset in ns */
+	long				latency_offset;

 #ifdef CONFIG_SMP
 	/*
@@ -783,7 +785,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
-	int				latency_nice;
+	int				latency_prio;

 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/init/init_task.c b/init/init_task.c
index 7dd71dd2d261..b8ddf403bc62 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,7 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
-	.latency_nice	= DEFAULT_LATENCY_NICE,
+	.latency_prio	= NICE_WIDTH - 20,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b2accc9da4fe..caf54e54a74f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1284,6 +1284,16 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	}
 }

+static void set_latency_offset(struct task_struct *p)
+{
+	long weight = sched_latency_to_weight[p->latency_prio];
+	s64 offset;
+
+	offset = weight * get_sched_latency(false);
+	offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
+	p->se.latency_offset = (long)offset;
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4559,7 +4569,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->prio = p->normal_prio = p->static_prio;
 	set_load_weight(p, false);

-	p->latency_nice = DEFAULT_LATENCY_NICE;
+	p->latency_prio = NICE_TO_LATENCY(0);
+	set_latency_offset(p);
+
 	/*
 	 * We don't need the reset flag anymore after the fork. It has
 	 * fulfilled its duty:
@@ -7324,8 +7336,10 @@ static void __setscheduler_params(struct task_struct *p,
 static void __setscheduler_latency(struct task_struct *p,
 				   const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
-		p->latency_nice = attr->sched_latency_nice;
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		p->latency_prio = NICE_TO_LATENCY(attr->sched_latency_nice);
+		set_latency_offset(p);
+	}
 }

 /*
@@ -7510,7 +7524,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
-		    attr->sched_latency_nice != p->latency_nice)
+		    attr->sched_latency_nice != LATENCY_TO_NICE(p->latency_prio))
 			goto change;

 		p->sched_reset_on_fork = reset_on_fork;
@@ -8051,7 +8065,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;

-	kattr.sched_latency_nice = p->latency_nice;
+	kattr.sched_latency_nice = LATENCY_TO_NICE(p->latency_prio);

 #ifdef CONFIG_UCLAMP_TASK
 	/*
@@ -11204,6 +11218,20 @@ const u32 sched_prio_to_wmult[40] = {
  /* 15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };

+/*
+ * latency weight for wakeup preemption
+ */
+const int sched_latency_to_weight[40] = {
+ /* -20 */     -1024,      -973,      -922,      -870,      -819,
+ /* -15 */      -768,      -717,      -666,      -614,      -563,
+ /* -10 */      -512,      -461,      -410,      -358,      -307,
+ /*  -5 */      -256,      -205,      -154,      -102,       -51,
+ /*   0 */         0,        51,       102,       154,       205,
+ /*   5 */       256,       307,       358,       410,       461,
+ /*  10 */       512,       563,       614,       666,       717,
+ /*  15 */       768,       819,       870,       922,       973,
+};
+
 void call_trace_sched_update_nr_running(struct rq *rq, int count)
 {
 	trace_sched_update_nr_running_tp(rq, count);
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a3f7876217a6..06aaa0c81d4b 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1042,7 +1042,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
-	P(latency_nice);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eb04c83112a0..4299d5108dc7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4558,6 +4558,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	update_idle_cfs_rq_clock_pelt(cfs_rq);
 }

+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4566,7 +4568,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	unsigned long ideal_runtime, delta_exec;
 	struct sched_entity *se;
-	s64 delta;
+	s64 delta, offset;

 	ideal_runtime = sched_slice(cfs_rq, curr);
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
@@ -4591,10 +4593,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	se = __pick_first_entity(cfs_rq);
 	delta = curr->vruntime - se->vruntime;

-	if (delta < 0)
+	offset = wakeup_latency_gran(curr, se);
+	if (delta < offset)
 		return;

-	if (delta > ideal_runtime)
+	if ((delta > ideal_runtime) ||
+	    (delta > get_latency_max()))
 		resched_curr(rq_of(cfs_rq));
 }

@@ -5716,6 +5720,35 @@ static int sched_idle_cpu(int cpu)
 }
 #endif

+static void set_next_buddy(struct sched_entity *se);
+
+static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
+{
+	struct sched_entity *next;
+
+	if (se->latency_offset >= 0)
+		return;
+
+	if (cfs->nr_running <= 1)
+		return;
+	/*
+	 * When waking from another class, we don't need to check to preempt at
+	 * wakeup and don't set next buddy as a candidate for being picked in
+	 * priority.
+	 * In case of simultaneous wakeup when current is another class, the
+	 * latency sensitive tasks lost opportunity to preempt non sensitive
+	 * tasks which woke up simultaneously.
+	 */
+
+	if (cfs->next)
+		next = cfs->next;
+	else
+		next = __pick_first_entity(cfs);
+
+	if (next && wakeup_preempt_entity(next, se) == 1)
+		set_next_buddy(se);
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -5802,14 +5835,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);

+	if (rq->curr->sched_class != &fair_sched_class)
+		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
+
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);

 	hrtick_update(rq);
 }

-static void set_next_buddy(struct sched_entity *se);
-
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
@@ -7128,6 +7162,23 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 }
 #endif /* CONFIG_SMP */

+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se)
+{
+	long latency_offset = se->latency_offset;
+
+	/*
+	 * A negative latency offset means that the sched_entity has latency
+	 * requirement that needs to be evaluated versus other entity.
+	 * Otherwise, use the latency weight to evaluate how much scheduling
+	 * delay is acceptable by se.
+	 */
+	if ((latency_offset < 0) || (curr->latency_offset < 0))
+		latency_offset -= curr->latency_offset;
+	latency_offset = min_t(long, latency_offset, get_latency_max());
+
+	return latency_offset;
+}
+
 static unsigned long wakeup_gran(struct sched_entity *se)
 {
 	unsigned long gran = sysctl_sched_wakeup_granularity;
@@ -7166,11 +7217,12 @@ static int
 wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 {
 	s64 gran, vdiff = curr->vruntime - se->vruntime;
+	s64 offset = wakeup_latency_gran(curr, se);

-	if (vdiff <= 0)
+	if (vdiff < offset)
 		return -1;

-	gran = wakeup_gran(se);
+	gran = offset + wakeup_gran(se);

 	/*
 	 * At wake up, the vruntime of a task is capped to not be older than
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4bf9d7777f99..99f10b4dc230 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -142,6 +142,17 @@ extern int sched_rr_timeslice;
  * Default tasks should be treated as a task with latency_nice = 0.
  */
 #define DEFAULT_LATENCY_NICE	0
+#define DEFAULT_LATENCY_PRIO	(DEFAULT_LATENCY_NICE + LATENCY_NICE_WIDTH/2)
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static latency [ 0..39 ],
+ * and back.
+ */
+#define NICE_TO_LATENCY(nice)	((nice) + DEFAULT_LATENCY_PRIO)
+#define LATENCY_TO_NICE(prio)	((prio) - DEFAULT_LATENCY_PRIO)
+#define NICE_LATENCY_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+#define NICE_LATENCY_WEIGHT_MAX	(1L << NICE_LATENCY_SHIFT)

 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
@@ -2116,6 +2127,7 @@ static_assert(WF_TTWU == SD_BALANCE_WAKE);

 extern const int sched_prio_to_weight[40];
 extern const u32 sched_prio_to_wmult[40];
+extern const int sched_latency_to_weight[40];

 /*
  * {de,en}queue flags:
-- 
2.17.1

From nobody Wed Apr  8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 6/9] sched/fair: Add sched group latency support
Date: Fri, 28 Oct 2022 11:34:00 +0200
Message-Id: <20221028093403.6673-7-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>

A task can set its latency priority with sched_setattr(), which is then
used to set the latency offset of its sched_entity, but sched group
entities still have the default latency offset value.

Add a latency.nice field to the cpu cgroup controller to set the latency
priority of the group, similarly to sched_setattr(). The latency priority
is then used to set the offset of the sched_entities of the group.

Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 Documentation/admin-guide/cgroup-v2.rst |  8 ++++
 kernel/sched/core.c                     | 52 +++++++++++++++++++++++++
 kernel/sched/fair.c                     | 33 ++++++++++++++++
 kernel/sched/sched.h                    |  4 ++
 4 files changed, 97 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index be4a77baf784..d8ae7e411f9c 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1095,6 +1095,14 @@ All time durations are in microseconds.
 	values similar to the sched_setattr(2). This maximum utilization
 	value is used to clamp the task specific maximum utilization clamp.

+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups. The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2).


 Memory
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index caf54e54a74f..3f42b1f61a7e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10890,6 +10890,47 @@ static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	int prio, delta, last_delta = INT_MAX;
+	s64 weight;
+
+	weight = css_tg(css)->latency_offset * NICE_LATENCY_WEIGHT_MAX;
+	weight = div_s64(weight, get_sched_latency(false));
+
+	/* Find the closest nice value to the current weight */
+	for (prio = 0; prio < ARRAY_SIZE(sched_latency_to_weight); prio++) {
+		delta = abs(sched_latency_to_weight[prio] - weight);
+		if (delta >= last_delta)
+			break;
+		last_delta = delta;
+	}
+
+	return LATENCY_TO_NICE(prio-1);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 nice)
+{
+	s64 latency_offset;
+	long weight;
+	int idx;
+
+	if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
+		return -ERANGE;
+
+	idx = NICE_TO_LATENCY(nice);
+	idx = array_index_nospec(idx, LATENCY_NICE_WIDTH);
+	weight = sched_latency_to_weight[idx];
+
+	latency_offset = weight * get_sched_latency(false);
+	latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);
+
+	return sched_group_set_latency(css_tg(css), latency_offset);
+}
+
 #endif

 static struct cftype cpu_legacy_files[] = {
@@ -10904,6 +10945,11 @@ static struct cftype cpu_legacy_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11121,6 +11167,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4299d5108dc7..9583936ce30c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11764,6 +11764,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		goto err;

 	tg->shares = NICE_0_LOAD;
+	tg->latency_offset = 0;

 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));

@@ -11862,6 +11863,9 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	}

 	se->my_q = cfs_rq;
+
+	se->latency_offset = tg->latency_offset;
+
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -11992,6 +11996,35 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	return 0;
 }

+int sched_group_set_latency(struct task_group *tg, s64 latency)
+{
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	if (abs(latency) > sysctl_sched_latency)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_offset == latency) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_offset = latency;
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_offset, latency);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
+}
+
 #else /* CONFIG_FAIR_GROUP_SCHED */

 void free_fair_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 99f10b4dc230..95d4be4f3af6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -407,6 +407,8 @@ struct task_group {

 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;
+	/* latency constraint of the group. */
+	int			latency_offset;

 #ifdef CONFIG_SMP
 	/*
@@ -517,6 +519,8 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

 extern int sched_group_set_idle(struct task_group *tg, long idle);

+extern int sched_group_set_latency(struct task_group *tg, s64 latency);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
 			     struct cfs_rq *prev, struct cfs_rq *next);
-- 
2.17.1

From nobody Wed Apr  8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 7/9] sched/core: Support latency priority with sched core
Date: Fri, 28 Oct 2022 11:34:01 +0200
Message-Id: <20221028093403.6673-8-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>

Take into account wakeup_latency_gran() when ordering the cfs threads.

Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9583936ce30c..a7372f80b1ea 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11439,6 +11439,9 @@ bool cfs_prio_less(struct task_struct *a, struct task_struct *b, bool in_fi)
 	delta = (s64)(sea->vruntime - seb->vruntime) +
 		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);

+	/* Take into account latency prio */
+	delta -= wakeup_latency_gran(sea, seb);
+
 	return delta > 0;
 }
 #else
-- 
2.17.1

From nobody Wed Apr  8 17:12:47 2026
From: Vincent Guittot
Subject: [PATCH v7 8/9] sched/fair: Add latency list
Date: Fri, 28 Oct 2022 11:34:02 +0200
Message-Id: <20221028093403.6673-9-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>

Add an rb tree for latency-sensitive entities so we can schedule the most
sensitive one first, even when it failed to preempt the current task at
wakeup or when it got quickly preempted by another entity of higher
priority.

In order to keep fairness, the latency priority is used once at wakeup to
get a minimum slice, and not during the following scheduling slices, to
prevent a long-running entity from getting more running time than
allocated by its nice priority.

The rb tree covers the remaining corner case where a latency-sensitive
entity cannot get scheduled quickly after wakeup.
Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 include/linux/sched.h |  2 +
 kernel/sched/fair.c   | 96 +++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  |  1 +
 3 files changed, 96 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a74cad08e91e..0b92674e3664 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -547,6 +547,8 @@ struct sched_entity {
 	/* For load-balancing: */
 	struct load_weight		load;
 	struct rb_node			run_node;
+	struct rb_node			latency_node;
+	unsigned int			on_latency;
 	struct list_head		group_node;
 	unsigned int			on_rq;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a7372f80b1ea..c28992b7d1a6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -664,7 +664,77 @@ struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 
 	return __node_2_se(last);
 }
+#endif
 
+/**************************************************************
+ * Scheduling class tree data structure manipulation methods:
+ * for latency
+ */
+
+static inline bool latency_before(struct sched_entity *a,
+				  struct sched_entity *b)
+{
+	return (s64)(a->vruntime + a->latency_offset - b->vruntime - b->latency_offset) < 0;
+}
+
+#define __latency_node_2_se(node) \
+	rb_entry((node), struct sched_entity, latency_node)
+
+static inline bool __latency_less(struct rb_node *a, const struct rb_node *b)
+{
+	return latency_before(__latency_node_2_se(a), __latency_node_2_se(b));
+}
+
+/*
+ * Enqueue an entity into the latency rb-tree:
+ */
+static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+{
+
+	/* Only latency sensitive entity can be added to the list */
+	if (se->latency_offset >= 0)
+		return;
+
+	if (se->on_latency)
+		return;
+
+	/*
+	 * An execution time less than sysctl_sched_min_granularity means that
+	 * the entity has been preempted by a higher sched class or an entity
+	 * with higher latency constraint.
+	 * Put it back in the list so it gets a chance to run 1st during the
+	 * next slice.
+	 */
+	if (!(flags & ENQUEUE_WAKEUP)) {
+		u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+
+		if (delta_exec >= sysctl_sched_min_granularity)
+			return;
+	}
+
+	rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
+	se->on_latency = 1;
+}
+
+static void __dequeue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	if (se->on_latency) {
+		rb_erase_cached(&se->latency_node, &cfs_rq->latency_timeline);
+		se->on_latency = 0;
+	}
+}
+
+static struct sched_entity *__pick_first_latency(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = rb_first_cached(&cfs_rq->latency_timeline);
+
+	if (!left)
+		return NULL;
+
+	return __latency_node_2_se(left);
+}
+
+#ifdef CONFIG_SCHED_DEBUG
 /**************************************************************
  * Scheduling class statistics methods:
  */
@@ -4439,8 +4509,10 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	check_schedstat_required();
 	update_stats_enqueue_fair(cfs_rq, se, flags);
 	check_spread(cfs_rq, se);
-	if (!curr)
+	if (!curr) {
 		__enqueue_entity(cfs_rq, se);
+		__enqueue_latency(cfs_rq, se, flags);
+	}
 	se->on_rq = 1;
 
 	if (cfs_rq->nr_running == 1) {
@@ -4526,8 +4598,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 	clear_buddies(cfs_rq, se);
 
-	if (se != cfs_rq->curr)
+	if (se != cfs_rq->curr) {
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
+	}
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
@@ -4616,6 +4690,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	 */
 	update_stats_wait_end_fair(cfs_rq, se);
 	__dequeue_entity(cfs_rq, se);
+	__dequeue_latency(cfs_rq, se);
 	update_load_avg(cfs_rq, se, UPDATE_TG);
 }
 
@@ -4654,7 +4729,7 @@ static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
-	struct sched_entity *se;
+	struct sched_entity *latency, *se;
 
 	/*
 	 * If curr is set we have to see if its left of the leftmost entity
@@ -4696,6 +4771,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 		se = cfs_rq->last;
 	}
 
+	/* Check for latency sensitive entity waiting for running */
+	latency = __pick_first_latency(cfs_rq);
+	if (latency && (latency != se) &&
+	    wakeup_preempt_entity(latency, se) < 1)
+		se = latency;
+
 	return se;
 }
 
@@ -4719,6 +4800,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
 		update_stats_wait_start_fair(cfs_rq, prev);
 		/* Put 'current' back into the tree. */
 		__enqueue_entity(cfs_rq, prev);
+		__enqueue_latency(cfs_rq, prev, 0);
 		/* in !on_rq case, update occurred at dequeue */
 		update_load_avg(cfs_rq, prev, 0);
 	}
@@ -11712,6 +11794,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 void init_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
+	cfs_rq->latency_timeline = RB_ROOT_CACHED;
 	u64_u32_store(cfs_rq->min_vruntime, (u64)(-(1LL << 20)));
 #ifdef CONFIG_SMP
 	raw_spin_lock_init(&cfs_rq->removed.lock);
@@ -12020,8 +12103,15 @@ int sched_group_set_latency(struct task_group *tg, s64 latency)
 
 	for_each_possible_cpu(i) {
 		struct sched_entity *se = tg->se[i];
+		struct rq *rq = cpu_rq(i);
+		struct rq_flags rf;
+
+		rq_lock_irqsave(rq, &rf);
 
+		__dequeue_latency(se->cfs_rq, se);
 		WRITE_ONCE(se->latency_offset, latency);
+
+		rq_unlock_irqrestore(rq, &rf);
 	}
 
 	mutex_unlock(&shares_mutex);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 95d4be4f3af6..91ec36c1158b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -599,6 +599,7 @@ struct cfs_rq {
 #endif
 
 	struct rb_root_cached		tasks_timeline;
+	struct rb_root_cached		latency_timeline;
 
 	/*
	 * 'curr' points to currently running entity on this cfs_rq.
-- 
2.17.1

From nobody Wed Apr 8 17:12:47 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qais.yousef@arm.com, chris.hyser@oracle.com, patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, Vincent Guittot
Subject: [PATCH v7 9/9] sched/fair: remove check_preempt_from_others
Date: Fri, 28 Oct 2022 11:34:03 +0200
Message-Id: <20221028093403.6673-10-vincent.guittot@linaro.org>
In-Reply-To: <20221028093403.6673-1-vincent.guittot@linaro.org>
References: <20221028093403.6673-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

With the dedicated latency list, we no longer have to handle this
special case: pick_next_entity() already checks for a runnable
latency-sensitive task.

Signed-off-by: Vincent Guittot
Tested-by: shrikanth Hegde
---
 kernel/sched/fair.c | 34 ++--------------------------------
 1 file changed, 2 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c28992b7d1a6..9a421b49dbfd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5802,35 +5802,6 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
-static void set_next_buddy(struct sched_entity *se);
-
-static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
-{
-	struct sched_entity *next;
-
-	if (se->latency_offset >= 0)
-		return;
-
-	if (cfs->nr_running <= 1)
-		return;
-	/*
-	 * When waking from another class, we don't need to check to preempt at
-	 * wakeup and don't set next buddy as a candidate for being picked in
-	 * priority.
-	 * In case of simultaneous wakeup when current is another class, the
-	 * latency sensitive tasks lost opportunity to preempt non sensitive
-	 * tasks which woke up simultaneously.
-	 */
-
-	if (cfs->next)
-		next = cfs->next;
-	else
-		next = __pick_first_entity(cfs);
-
-	if (next && wakeup_preempt_entity(next, se) == 1)
-		set_next_buddy(se);
-}
-
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -5917,15 +5888,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);
 
-	if (rq->curr->sched_class != &fair_sched_class)
-		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
-
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
 }
 
+static void set_next_buddy(struct sched_entity *se);
+
 /*
  * The dequeue_task method is called before nr_running is
  * decreased.
  * We remove the task from the rbtree and
-- 
2.17.1