From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qais.yousef@arm.com, chris.hyser@oracle.com, valentin.schneider@arm.com,
 patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com,
 pavel@ucw.cz, tj@kernel.org, qperret@google.com, tim.c.chen@linux.intel.com,
 joshdon@google.com
Subject: [PATCH v2 1/7] sched: Introduce latency-nice as a per-task attribute
Date: Thu, 12 May 2022 18:35:28 +0200
Message-Id: <20220512163534.2572-2-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>

From: Parth Shah <parth@linux.ibm.com>

Latency-nice indicates the latency requirements of a task with respect
to the other tasks in the system.
The attribute accepts values in the range [-20, 19], both inclusive, in
line with task nice values. A task with latency_nice = -20 has the
tightest latency requirement, compared with a task having
latency_nice = +19. For now, latency_nice only affects the CFS sched
class, which gets the latency requirements from userspace.

Additionally, add debugging bits for the newly added latency_nice
attribute.

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase - minor fixes]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/sched.h |  1 +
 kernel/sched/debug.c  |  1 +
 kernel/sched/sched.h  | 18 ++++++++++++++++++
 3 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a27316f5f737..34c6c9c2797c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -775,6 +775,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
+	int				latency_nice;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index bb3d63bdf4ae..a3f7876217a6 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1042,6 +1042,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
+	P(latency_nice);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4784898e8f83..271ecd37c13d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -117,6 +117,24 @@ extern void call_trace_sched_update_nr_running(struct rq *rq, int count);
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/*
+ * Latency nice is meant to provide scheduler hints about the relative
+ * latency requirements of a task with respect to other tasks.
+ * Thus a task with latency_nice == 19 can be hinted as the task with no
+ * latency requirements, in contrast to the task with latency_nice == -20
+ * which should be given priority in terms of lower latency.
+ */
+#define MAX_LATENCY_NICE	19
+#define MIN_LATENCY_NICE	-20
+
+#define LATENCY_NICE_WIDTH	\
+	(MAX_LATENCY_NICE - MIN_LATENCY_NICE + 1)
+
+/*
+ * Default tasks should be treated as a task with latency_nice = 0.
+ */
+#define DEFAULT_LATENCY_NICE	0
+
 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
  * The extra resolution improves shares distribution and load balancing of
-- 
2.17.1
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v2 2/7] sched/core: Propagate parent task's latency requirements to the child task
Date: Thu, 12 May 2022 18:35:29 +0200
Message-Id: <20220512163534.2572-3-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>

From: Parth Shah <parth@linux.ibm.com>

Clone the parent task's latency_nice attribute to the forked child task,
and reset it to the default value when the child task has
sched_reset_on_fork set. Also, initialize init_task.latency_nice with
DEFAULT_LATENCY_NICE.

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase - minor fixes]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 init/init_task.c    | 1 +
 kernel/sched/core.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/init/init_task.c b/init/init_task.c
index 73cc8f03511a..225d11a39bc9 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_nice	= DEFAULT_LATENCY_NICE,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 07bacb050198..1f04b815b588 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4473,6 +4473,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 	 */
 	p->prio = current->normal_prio;
 
+	/* Propagate the parent's latency requirements to the child as well */
+	p->latency_nice = current->latency_nice;
+
 	uclamp_fork(p);
 
 	/*
@@ -4489,6 +4492,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_nice = DEFAULT_LATENCY_NICE;
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
-- 
2.17.1
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v2 3/7] sched: Allow sched_{get,set}attr to change latency_nice of the task
Date: Thu, 12 May 2022 18:35:30 +0200
Message-Id: <20220512163534.2572-4-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>

From: Parth Shah <parth@linux.ibm.com>

Introduce the latency_nice attribute to sched_attr and provide a
mechanism to change the value via the sched_setattr/sched_getattr
syscalls. Also add a new flag, SCHED_FLAG_LATENCY_NICE, with which a
sched_setattr call signals a change of the task's latency_nice.

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase and add a dedicated __setscheduler_latency]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/uapi/linux/sched.h       |  4 +++-
 include/uapi/linux/sched/types.h | 19 +++++++++++++++++++
 kernel/sched/core.c              | 25 +++++++++++++++++++++++++
 tools/include/uapi/linux/sched.h |  4 +++-
 4 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h
index f2c4589d4dbf..db1e8199e8c8 100644
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ============================
+ *
+ * A subset of sched_attr attributes allows to specify the relative latency
+ * requirements of a task with respect to the other tasks running/queued in the
+ * system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with latency_nice with the value of LATENCY_NICE_MIN can be
+ * taken for a task requiring a lower latency as opposed to the task with
+ * higher latency_nice.
  */
 struct sched_attr {
 	__u32 size;
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1f04b815b588..036bd9ff66e9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7200,6 +7200,15 @@ static void __setscheduler_params(struct task_struct *p,
 	p->rt_priority = attr->sched_priority;
 	p->normal_prio = normal_prio(p);
 	set_load_weight(p, true);
+
+}
+
+static void __setscheduler_latency(struct task_struct *p,
+				   const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		p->latency_nice = attr->sched_latency_nice;
+	}
 }
 
 /*
@@ -7326,6 +7335,13 @@ static int __sched_setscheduler(struct task_struct *p,
 			return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_LATENCY_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
 
@@ -7360,6 +7376,9 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
+		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+		    attr->sched_latency_nice != p->latency_nice)
+			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
 		retval = 0;
@@ -7448,6 +7467,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7658,6 +7678,9 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
 	 * XXX: Do we want to be lenient like existing syscalls; or do we want
 	 * to be strict and return an error on out-of-bounds values?
@@ -7895,6 +7918,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = p->latency_nice;
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
diff --git a/tools/include/uapi/linux/sched.h b/tools/include/uapi/linux/sched.h
index 3bac0a8ceab2..ecc4884bfe4b 100644
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
-- 
2.17.1
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v2 4/7] sched/core: Add permission checks for setting the latency_nice value
Date: Thu, 12 May 2022 18:35:31 +0200
Message-Id: <20220512163534.2572-5-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>

From: Parth Shah <parth@linux.ibm.com>

Since latency_nice uses infrastructure similar to NICE, reuse the
existing CAP_SYS_NICE security checks for latency_nice. A non-root user
trying to set a task's latency_nice to any value lower than the current
one now gets -EPERM.
Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 036bd9ff66e9..2c0f782a9089 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7340,6 +7340,10 @@ static int __sched_setscheduler(struct task_struct *p,
 			return -EINVAL;
 		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
 			return -EINVAL;
+		/* Use the same security checks as NICE */
+		if (attr->sched_latency_nice < p->latency_nice &&
+		    !capable(CAP_SYS_NICE))
+			return -EPERM;
 	}
 
 	if (pi)
-- 
2.17.1
From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v2 5/7] sched/fair: Take into account latency nice at wakeup
Date: Thu, 12 May 2022 18:35:32 +0200
Message-Id: <20220512163534.2572-6-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>

Take the latency nice priority of a thread into account when deciding
whether to preempt the currently running thread. The goal is not to give
a thread more CPU bandwidth, but to reorder scheduling so that a
latency-sensitive task runs first whenever possible.

As long as a latency-sensitive thread has not used up its bandwidth, it
will be able to preempt the current thread. Conversely, a thread with a
low latency priority will preempt the current thread at wakeup only to
keep fair CPU bandwidth sharing; otherwise it waits for the tick to get
its sched slice.

                   curr vruntime
                       |      sysctl_sched_wakeup_granularity
                       |                <-->
----------------------------------|----|-----------------------|---------------
                                  |    |<--------------------->
                                  |    . sysctl_sched_latency
                                  |    .
default/current latency entity    |    .
                                  |    .
1111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-
se preempts curr at wakeup ------>|<- se doesn't preempt curr ------------------
                                  |    .
                                  |    .
                                  |    .
low latency entity                |    .
                   ---------------------->|
            % of sysctl_sched_latency     |
1111111111111111111111111111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-
preempt ------------------------------------------------->|<- do not preempt --
                                  |    .
                                  |    .
                                  |    .
high latency entity               |    .
         |<-----------------------|    .
         | % of sysctl_sched_latency   .
111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
preempt->|<- se doesn't preempt curr ------------------------------------------

Test results of the latency nice impact on a heavy load like hackbench:

hackbench -l (2560 / group) -g group
group   latency 0             latency 19
1       1.399(+/- 1.27%)      1.357(+/- 1.48%)   + 3%
4       1.422(+/- 4.69%)      1.267(+/- 2.39%)   +11%
8       1.334(+/- 2.92%)      1.290(+/- 0.92%)   + 3%
16      1.353(+/- 1.37%)      1.310(+/- 0.35%)   + 3%

hackbench -p -l (2560 / group) -g group
group
1       1.450(+/- 9.14%)      0.853(+/- 3.23%)   +41%
4       1.539(+/- 6.41%)      0.754(+/- 3.96%)   +51%
8       1.380(+/- 8.04%)      0.687(+/- 5.30%)   +50%
16      0.774(+/- 6.30%)      0.688(+/- 3.11%)   +11%

By decreasing the latency prio, we reduce the number of preemptions at
wakeup and help hackbench make progress.

Test results of the latency nice impact on a short-lived load like
cyclictest while competing with a heavy load like hackbench:

hackbench -l 10000 -g group &
cyclictest --policy other -D 5 -q -n
        latency 0              latency -20
group   min  avg    max        min  avg    max
0       15   18     28         17   17     27
1       65   386    9154       62   92     6268
4       63   447    14623      54   93     7655
8       63   847    43705      49   124    26500
16      53   1081   66523      44   199    30185

group = 0 means that hackbench is not running.

The avg is significantly improved with latency nice -20, especially with
a large number of groups, while min and max remain quite similar. Adding
the histogram parameters to get the latency details gives:

hackbench -l 10000 -g 16 &
cyclictest --policy other -D 5 -q -n -H 20000 --histfile data.txt

                latency 0    latency -20
Min Latencies:     63            62
Avg Latencies:   1129           132
Max Latencies:  71331         16762
50% latencies:     92            85
75% latencies:    622            90
85% latencies:   1038            93
90% latencies:   1371            96
95% latencies:   5304           100
99% latencies:  17329           137

The percentile details show the benefit of latency nice -20: only 1% of
the latencies stay above 137us, whereas with the default latency nice,
25% are above 322us and 15% over 1ms.
Signed-off-by: Vincent Guittot

---

For v1, the opportunity to take latency_prio into account in places other
than check_preempt_wakeup() was discussed.

The fast wakeup path is mainly about quickly looking for an idle CPU, and I
don't see any place where the added complexity would provide an obvious
benefit. The only candidate could be wake_affine_weight(), where we already
compare the load; there we could also check the latency_nice prio of the
current tasks.

The slow path is mainly/only used for exec and fork, and I wonder whether
there are cases where we would want a newly forked task to preempt the
current one as soon as possible, for example. So I haven't added any new
place that takes latency_prio into account for now.

 include/linux/sched.h |  4 ++-
 init/init_task.c      |  2 +-
 kernel/sched/core.c   | 34 +++++++++++++++++----
 kernel/sched/debug.c  |  2 +-
 kernel/sched/fair.c   | 69 +++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  | 12 ++++++++
 6 files changed, 112 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 34c6c9c2797c..d991cbd972ea 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -560,6 +560,8 @@ struct sched_entity {
 	unsigned long			runnable_weight;
 #endif

+	int				latency_weight;
+
 #ifdef CONFIG_SMP
 	/*
 	 * Per entity load average tracking.
@@ -775,7 +777,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
-	int				latency_nice;
+	int				latency_prio;

 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/init/init_task.c b/init/init_task.c
index 225d11a39bc9..e98c71f24981 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,7 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
-	.latency_nice	= DEFAULT_LATENCY_NICE,
+	.latency_prio	= NICE_WIDTH - 20,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2c0f782a9089..ff020b99625c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1308,6 +1308,11 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	}
 }

+static void set_latency_weight(struct task_struct *p)
+{
+	p->se.latency_weight = sched_latency_to_weight[p->latency_prio];
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4474,7 +4479,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = current->normal_prio;

 	/* Propagate the parent's latency requirements to the child as well */
-	p->latency_nice = current->latency_nice;
+	p->latency_prio = current->latency_prio;

 	uclamp_fork(p);

@@ -4492,7 +4497,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);

-		p->latency_nice = DEFAULT_LATENCY_NICE;
+		p->latency_prio = NICE_TO_LATENCY(0);
+		set_latency_weight(p);
+
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
@@ -7207,7 +7214,8 @@ static void __setscheduler_latency(struct task_struct *p,
 				   const struct sched_attr *attr)
 {
 	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
-		p->latency_nice = attr->sched_latency_nice;
+		p->latency_prio = NICE_TO_LATENCY(attr->sched_latency_nice);
+		set_latency_weight(p);
 	}
 }

@@ -7341,7 +7349,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
 			return -EINVAL;
 		/* Use the same security checks as NICE */
-		if (attr->sched_latency_nice < p->latency_nice &&
+		if (attr->sched_latency_nice < LATENCY_TO_NICE(p->latency_prio) &&
 		    !capable(CAP_SYS_NICE))
 			return -EPERM;
 	}
@@ -7381,7 +7389,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
-		    attr->sched_latency_nice != p->latency_nice)
+		    attr->sched_latency_nice != LATENCY_TO_NICE(p->latency_prio))
 			goto change;

 		p->sched_reset_on_fork = reset_on_fork;
@@ -7922,7 +7930,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;

-	kattr.sched_latency_nice = p->latency_nice;
+	kattr.sched_latency_nice = LATENCY_TO_NICE(p->latency_prio);

 #ifdef CONFIG_UCLAMP_TASK
 	/*
@@ -11117,6 +11125,20 @@ const u32 sched_prio_to_wmult[40] = {
  /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };

+/*
+ * latency weight for wakeup preemption
+ */
+const int sched_latency_to_weight[40] = {
+ /* -20 */      1024,       973,       922,       870,       819,
+ /* -15 */       768,       717,       666,       614,       563,
+ /* -10 */       512,       461,       410,       358,       307,
+ /*  -5 */       256,       205,       154,       102,        51,
+ /*   0 */         0,       -51,      -102,      -154,      -205,
+ /*   5 */      -256,      -307,      -358,      -410,      -461,
+ /*  10 */      -512,      -563,      -614,      -666,      -717,
+ /*  15 */      -768,      -819,      -870,      -922,      -973,
+};
+
 void call_trace_sched_update_nr_running(struct rq *rq, int count)
 {
 	trace_sched_update_nr_running_tp(rq, count);
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a3f7876217a6..06aaa0c81d4b 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1042,7 +1042,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
-	P(latency_nice);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc9f6e94c84e..3af74f1a79ca 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5619,6 +5619,35 @@ static int sched_idle_cpu(int cpu)
 }
 #endif

+static void set_next_buddy(struct sched_entity *se);
+
+static void check_preempt_from_idle(struct cfs_rq *cfs, struct sched_entity *se)
+{
+	struct sched_entity *next;
+
+	if (se->latency_weight <= 0)
+		return;
+
+	if (cfs->nr_running <= 1)
+		return;
+	/*
+	 * When waking from idle, we don't need to check to preempt at wakeup
+	 * the idle thread and don't set next buddy as a candidate for being
+	 * picked in priority.
+	 * In case of simultaneous wakeup from idle, the latency sensitive tasks
+	 * lost opportunity to preempt non sensitive tasks which woke up
+	 * simultaneously.
+	 */
+
+	if (cfs->next)
+		next = cfs->next;
+	else
+		next = __pick_first_entity(cfs);
+
+	if (next && wakeup_preempt_entity(next, se) == 1)
+		set_next_buddy(se);
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -5712,6 +5741,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);

+	if (rq->curr == rq->idle)
+		check_preempt_from_idle(cfs_rq_of(&p->se), &p->se);
+
 enqueue_throttle:
 	if (cfs_bandwidth_used()) {
 		/*
@@ -5733,8 +5765,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	hrtick_update(rq);
 }

-static void set_next_buddy(struct sched_entity *se);
-
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
@@ -6991,6 +7021,37 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 }
 #endif /* CONFIG_SMP */

+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se)
+{
+	int latency_weight = se->latency_weight;
+	long thresh = sysctl_sched_latency;
+
+	/*
+	 * A positive latency weight means that the sched_entity has latency
+	 * requirement that needs to be evaluated versus other entity.
+	 * Otherwise, use the latency weight to evaluate how much scheduling
+	 * delay is acceptable by se.
+	 */
+	if ((se->latency_weight > 0) || (curr->latency_weight > 0))
+		latency_weight -= curr->latency_weight;
+
+	if (!latency_weight)
+		return 0;
+
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	/*
+	 * Clamp the delta to stay in the scheduler period range
+	 * [-sysctl_sched_latency:sysctl_sched_latency]
+	 */
+	latency_weight = clamp_t(long, latency_weight,
+				 -1 * NICE_LATENCY_WEIGHT_MAX,
+				 NICE_LATENCY_WEIGHT_MAX);
+
+	return (thresh * latency_weight) >> NICE_LATENCY_SHIFT;
+}
+
 static unsigned long wakeup_gran(struct sched_entity *se)
 {
 	unsigned long gran = sysctl_sched_wakeup_granularity;
@@ -7030,6 +7091,9 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 {
 	s64 gran, vdiff = curr->vruntime - se->vruntime;

+	/* Take into account latency priority */
+	vdiff += wakeup_latency_gran(curr, se);
+
 	if (vdiff <= 0)
 		return -1;

@@ -7138,6 +7202,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
 		return;

 	update_curr(cfs_rq_of(se));
+
 	if (wakeup_preempt_entity(se, pse) == 1) {
 		/*
 		 * Bias pick_next to pick the sched entity that is
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 271ecd37c13d..831b2c8feff1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -134,6 +134,17 @@ extern void call_trace_sched_update_nr_running(struct rq *rq, int count);
  * Default tasks should be treated as a task with latency_nice = 0.
  */
 #define DEFAULT_LATENCY_NICE	0
+#define DEFAULT_LATENCY_PRIO	(DEFAULT_LATENCY_NICE + LATENCY_NICE_WIDTH/2)
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static latency [ 0..39 ],
+ * and back.
+ */
+#define NICE_TO_LATENCY(nice)	((nice) + DEFAULT_LATENCY_PRIO)
+#define LATENCY_TO_NICE(prio)	((prio) - DEFAULT_LATENCY_PRIO)
+#define NICE_LATENCY_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+#define NICE_LATENCY_WEIGHT_MAX	(1L << NICE_LATENCY_SHIFT)

 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
@@ -2078,6 +2089,7 @@ static_assert(WF_TTWU == SD_BALANCE_WAKE);

 extern const int sched_prio_to_weight[40];
 extern const u32 sched_prio_to_wmult[40];
+extern const int sched_latency_to_weight[40];

 /*
  * {de,en}queue flags:
--
2.17.1

From nobody Wed May 13 20:16:20 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qais.yousef@arm.com, chris.hyser@oracle.com, valentin.schneider@arm.com, patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, Vincent Guittot
Subject: [PATCH v2 6/7] sched/fair: Add sched group latency support
Date: Thu, 12 May 2022 18:35:33 +0200
Message-Id: <20220512163534.2572-7-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>
References: <20220512163534.2572-1-vincent.guittot@linaro.org>
Content-Type: text/plain; charset="utf-8"

A task can set its latency priority, which is then used to decide whether
to preempt the current running entity of the cfs, but sched group entities
still have the default latency priority.

Add a latency field to the task group to set the latency priority of the
group, which will be used against other entities in the parent cfs.

Signed-off-by: Vincent Guittot

---

For v1, there were discussions about the best interface to express some
latency constraints for a cfs group. The weight seems to be specific to the
wakeup path and can't easily be reused elsewhere, e.g. in EAS or the idle
CPU selection path.

The current proposal of a latency prio ranging over [0:40] seems the
simplest interface, but on the other hand it removes the notion of either
having a latency constraint with a negative value or relaxing the latency
with a positive value; this is an important and easy-to-understand
difference. So I tend to think that keeping the latency nice is the easiest
way to express whether a group has a latency constraint (negative value) or
doesn't care about scheduling latency (positive value). Using a range will
then help to order groups' constraints.

I have studied how we could use a duration instead, but this would mainly
confuse the user because, whatever the value, we will never be able to
guarantee it.
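For illustration, the cgroup interface added by this patch could be driven as below. This is a hypothetical usage sketch: the group name is made up, the mount point assumes a cgroup v2 hierarchy, and it only works with this series applied.

```shell
# Create a group and mark it latency sensitive via the cpu.latency file
# added by this patch (it takes a latency-nice value: negative means
# latency sensitive, positive means latency tolerant).
mkdir /sys/fs/cgroup/rt-app                 # "rt-app" is an example name
echo -20 > /sys/fs/cgroup/rt-app/cpu.latency
cat /sys/fs/cgroup/rt-app/cpu.latency       # reads back the nice value
```

Writes outside the accepted range are rejected with -EINVAL by sched_group_set_latency(), and the file is not exposed on the root group (CFTYPE_NOT_ON_ROOT).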
 kernel/sched/core.c  | 41 +++++++++++++++++++++++++++++++++++++++++
 kernel/sched/fair.c  | 32 ++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  4 ++++
 3 files changed, 71 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ff020b99625c..95f3ef54447e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10798,6 +10798,30 @@ static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_read_s64(struct cgroup_subsys_state *css,
+				struct cftype *cft)
+{
+	return css_tg(css)->latency_prio;
+}
+
+static int cpu_latency_write_s64(struct cgroup_subsys_state *css,
+				 struct cftype *cft, s64 latency_prio)
+{
+	return sched_group_set_latency(css_tg(css), latency_prio);
+}
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	return LATENCY_TO_NICE(css_tg(css)->latency_prio);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 latency_nice)
+{
+	return sched_group_set_latency(css_tg(css), NICE_TO_LATENCY(latency_nice));
+}
 #endif

 static struct cftype cpu_legacy_files[] = {
@@ -10812,6 +10836,11 @@ static struct cftype cpu_legacy_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11029,6 +11058,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3af74f1a79ca..71c0762491c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11529,6 +11529,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		goto err;

 	tg->shares = NICE_0_LOAD;
+	tg->latency_prio = DEFAULT_LATENCY_PRIO;

 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));

@@ -11627,6 +11628,7 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	}

 	se->my_q = cfs_rq;
+	se->latency_weight = sched_latency_to_weight[tg->latency_prio];
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -11757,6 +11759,36 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	return 0;
 }

+int sched_group_set_latency(struct task_group *tg, long latency_prio)
+{
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	if (latency_prio < 0 ||
+	    latency_prio > LATENCY_NICE_WIDTH)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_prio == latency_prio) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_prio = latency_prio;
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_weight, sched_latency_to_weight[latency_prio]);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
+}
+
 #else /* CONFIG_FAIR_GROUP_SCHED */

 void free_fair_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 831b2c8feff1..0c26bb5a742e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -414,6 +414,8 @@ struct task_group {

 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;
+	/* latency priority of the group. */
+	int			latency_prio;

 #ifdef CONFIG_SMP
 	/*
@@ -527,6 +529,8 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

 extern int sched_group_set_idle(struct task_group *tg, long idle);

+extern int sched_group_set_latency(struct task_group *tg, long latency);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
 			     struct cfs_rq *prev, struct cfs_rq *next);
--
2.17.1

From nobody Wed May 13 20:16:20 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org, parth@linux.ibm.com
Cc: qais.yousef@arm.com, chris.hyser@oracle.com, valentin.schneider@arm.com, patrick.bellasi@matbug.net, David.Laight@aculab.com, pjt@google.com, pavel@ucw.cz, tj@kernel.org, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, Vincent Guittot
Subject: [PATCH v2 7/7] sched/core: support latency nice with sched core
Date: Thu, 12 May 2022 18:35:34 +0200
Message-Id: <20220512163534.2572-8-vincent.guittot@linaro.org>
In-Reply-To: <20220512163534.2572-1-vincent.guittot@linaro.org>
References:
<20220512163534.2572-1-vincent.guittot@linaro.org>
Content-Type: text/plain; charset="utf-8"

Take into account wakeup_latency_gran() when ordering the cfs threads.

Signed-off-by: Vincent Guittot

---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 71c0762491c5..063e9a3c7e51 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11193,6 +11193,10 @@ bool cfs_prio_less(struct task_struct *a, struct task_struct *b, bool in_fi)
 	delta = (s64)(sea->vruntime - seb->vruntime) +
 		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);

+	/* Take into account latency prio */
+	delta += wakeup_latency_gran(sea, seb);
+
+
 	return delta > 0;
 }
 #else
--
2.17.1