From nobody Sun Feb 8 01:31:10 2026
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
    linux-kernel@vger.kernel.org
Date: Wed, 09 Aug 2023 22:24:41 +0200
Message-ID: <20230809204200.103286845@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 1/8] sched: Simplify set_user_nice()

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c  | 13 ++++++-------
 kernel/sched/sched.h |  5 +++++
 2 files changed, 11 insertions(+), 7 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7119,9 +7119,8 @@ static inline int rt_effective_prio(stru
 void set_user_nice(struct task_struct *p, long nice)
 {
 	bool queued, running;
-	int old_prio;
-	struct rq_flags rf;
 	struct rq *rq;
+	int old_prio;
 
 	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
 		return;
@@ -7129,7 +7128,9 @@ void set_user_nice(struct task_struct *p
 	 * We have to be careful, if called from sys_setpriority(),
 	 * the task might be in the middle of scheduling on another CPU.
 	 */
-	rq = task_rq_lock(p, &rf);
+	CLASS(task_rq_lock, rq_guard)(p);
+	rq = rq_guard.rq;
+
 	update_rq_clock(rq);
 
 	/*
@@ -7140,8 +7141,9 @@ void set_user_nice(struct task_struct *p
 	 */
 	if (task_has_dl_policy(p) || task_has_rt_policy(p)) {
 		p->static_prio = NICE_TO_PRIO(nice);
-		goto out_unlock;
+		return;
 	}
+
 	queued = task_on_rq_queued(p);
 	running = task_current(rq, p);
 	if (queued)
@@ -7164,9 +7166,6 @@ void set_user_nice(struct task_struct *p
 	 * lowered its priority, then reschedule its CPU:
 	 */
 	p->sched_class->prio_changed(rq, p, old_prio);
-
-out_unlock:
-	task_rq_unlock(rq, p, &rf);
 }
 EXPORT_SYMBOL(set_user_nice);
 
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1630,6 +1630,11 @@ task_rq_unlock(struct rq *rq, struct tas
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
 }
 
+DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
+		    _T->rq = task_rq_lock(_T->lock, &_T->rf),
+		    task_rq_unlock(_T->rq, _T->lock, &_T->rf),
+		    struct rq *rq; struct rq_flags rf)
+
 static inline void
 rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
 	__acquires(rq->lock)
From: Peter Zijlstra
Date: Wed, 09 Aug 2023 22:24:42 +0200
Message-ID: <20230809204200.173165884@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 2/8] sched: Simplify syscalls

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c | 154 ++++++++++++++++++++++------------------------
 1 file changed, 68 insertions(+), 86 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7425,6 +7425,21 @@ static struct task_struct *find_process_
 	return pid ? find_task_by_vpid(pid) : current;
 }
 
+static struct task_struct *find_get_task(pid_t pid)
+{
+	struct task_struct *p;
+	guard(rcu)();
+
+	p = find_process_by_pid(pid);
+	if (likely(p))
+		get_task_struct(p);
+
+	return p;
+}
+
+DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
+	     find_get_task(pid), pid_t pid)
+
 /*
  * sched_setparam() passes in -1 for its policy, to let the functions
  * it calls know not to change it.
@@ -7462,14 +7477,11 @@ static void __setscheduler_params(struct
 static bool check_same_owner(struct task_struct *p)
 {
 	const struct cred *cred = current_cred(), *pcred;
-	bool match;
+	guard(rcu)();
 
-	rcu_read_lock();
 	pcred = __task_cred(p);
-	match = (uid_eq(cred->euid, pcred->euid) ||
-		 uid_eq(cred->euid, pcred->uid));
-	rcu_read_unlock();
-	return match;
+	return (uid_eq(cred->euid, pcred->euid) ||
+		uid_eq(cred->euid, pcred->uid));
 }
 
 /*
@@ -7873,27 +7885,17 @@ static int
 do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
 {
 	struct sched_param lparam;
-	struct task_struct *p;
-	int retval;
 
 	if (!param || pid < 0)
 		return -EINVAL;
 	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
 		return -EFAULT;
 
-	rcu_read_lock();
-	retval = -ESRCH;
-	p = find_process_by_pid(pid);
-	if (likely(p))
-		get_task_struct(p);
-	rcu_read_unlock();
-
-	if (likely(p)) {
-		retval = sched_setscheduler(p, policy, &lparam);
-		put_task_struct(p);
-	}
+	CLASS(find_get_task, p)(pid);
+	if (!p)
+		return -ESRCH;
 
-	return retval;
+	return sched_setscheduler(p, policy, &lparam);
 }
 
 /*
@@ -7989,7 +7991,6 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pi
 			       unsigned int, flags)
 {
 	struct sched_attr attr;
-	struct task_struct *p;
 	int retval;
 
 	if (!uattr || pid < 0 || flags)
@@ -8004,21 +8005,14 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pi
 	if (attr.sched_flags & SCHED_FLAG_KEEP_POLICY)
 		attr.sched_policy = SETPARAM_POLICY;
 
-	rcu_read_lock();
-	retval = -ESRCH;
-	p = find_process_by_pid(pid);
-	if (likely(p))
-		get_task_struct(p);
-	rcu_read_unlock();
+	CLASS(find_get_task, p)(pid);
+	if (!p)
+		return -ESRCH;
 
-	if (likely(p)) {
-		if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
-			get_params(p, &attr);
-		retval = sched_setattr(p, &attr);
-		put_task_struct(p);
-	}
+	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
+		get_params(p, &attr);
 
-	return retval;
+	return sched_setattr(p, &attr);
 }
 
 /**
@@ -8036,16 +8030,17 @@ SYSCALL_DEFINE1(sched_getscheduler, pid_
 	if (pid < 0)
 		return -EINVAL;
 
-	retval = -ESRCH;
-	rcu_read_lock();
+	guard(rcu)();
 	p = find_process_by_pid(pid);
-	if (p) {
-		retval = security_task_getscheduler(p);
-		if (!retval)
-			retval = p->policy
-				| (p->sched_reset_on_fork ? SCHED_RESET_ON_FORK : 0);
+	if (!p)
+		return -ESRCH;
+
+	retval = security_task_getscheduler(p);
+	if (!retval) {
+		retval = p->policy;
+		if (p->sched_reset_on_fork)
+			retval |= SCHED_RESET_ON_FORK;
 	}
-	rcu_read_unlock();
 	return retval;
 }
 
@@ -8066,30 +8061,23 @@ SYSCALL_DEFINE2(sched_getparam, pid_t, p
 	if (!param || pid < 0)
 		return -EINVAL;
 
-	rcu_read_lock();
-	p = find_process_by_pid(pid);
-	retval = -ESRCH;
-	if (!p)
-		goto out_unlock;
+	scoped_guard (rcu) {
+		p = find_process_by_pid(pid);
+		if (!p)
+			return -ESRCH;
 
-	retval = security_task_getscheduler(p);
-	if (retval)
-		goto out_unlock;
+		retval = security_task_getscheduler(p);
+		if (retval)
+			return retval;
 
-	if (task_has_rt_policy(p))
-		lp.sched_priority = p->rt_priority;
-	rcu_read_unlock();
+		if (task_has_rt_policy(p))
+			lp.sched_priority = p->rt_priority;
+	}
 
 	/*
	 * This one might sleep, we cannot do it with a spinlock held ...
	 */
-	retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
-
-	return retval;
-
-out_unlock:
-	rcu_read_unlock();
-	return retval;
+	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
 }
 
 /*
@@ -8149,39 +8137,33 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi
 	    usize < SCHED_ATTR_SIZE_VER0 || flags)
 		return -EINVAL;
 
-	rcu_read_lock();
-	p = find_process_by_pid(pid);
-	retval = -ESRCH;
-	if (!p)
-		goto out_unlock;
+	scoped_guard (rcu) {
+		p = find_process_by_pid(pid);
+		if (!p)
+			return -ESRCH;
 
-	retval = security_task_getscheduler(p);
-	if (retval)
-		goto out_unlock;
+		retval = security_task_getscheduler(p);
+		if (retval)
+			return retval;
 
-	kattr.sched_policy = p->policy;
-	if (p->sched_reset_on_fork)
-		kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-	get_params(p, &kattr);
-	kattr.sched_flags &= SCHED_FLAG_ALL;
+		kattr.sched_policy = p->policy;
+		if (p->sched_reset_on_fork)
+			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
+		get_params(p, &kattr);
+		kattr.sched_flags &= SCHED_FLAG_ALL;
 
 #ifdef CONFIG_UCLAMP_TASK
-	/*
-	 * This could race with another potential updater, but this is fine
-	 * because it'll correctly read the old or the new value. We don't need
-	 * to guarantee who wins the race as long as it doesn't return garbage.
-	 */
-	kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
-	kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
+		/*
+		 * This could race with another potential updater, but this is fine
+		 * because it'll correctly read the old or the new value. We don't need
+		 * to guarantee who wins the race as long as it doesn't return garbage.
+		 */
+		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
+		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
 #endif
-
-	rcu_read_unlock();
+	}
 
 	return sched_attr_copy_to_user(uattr, &kattr, usize);
-
-out_unlock:
-	rcu_read_unlock();
-	return retval;
 }
 
 #ifdef CONFIG_SMP
From: Peter Zijlstra
Date: Wed, 09 Aug 2023 22:24:43 +0200
Message-ID: <20230809204200.241599953@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 3/8] sched: Simplify sched_{set,get}affinity()

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c | 53 +++++++++++++---------------------------------------
 1 file changed, 14 insertions(+), 39 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8258,39 +8258,24 @@ long sched_setaffinity(pid_t pid, const
 {
 	struct affinity_context ac;
 	struct cpumask *user_mask;
-	struct task_struct *p;
 	int retval;
 
-	rcu_read_lock();
-
-	p = find_process_by_pid(pid);
-	if (!p) {
-		rcu_read_unlock();
+	CLASS(find_get_task, p)(pid);
+	if (!p)
 		return -ESRCH;
-	}
 
-	/* Prevent p going away */
-	get_task_struct(p);
-	rcu_read_unlock();
-
-	if (p->flags & PF_NO_SETAFFINITY) {
-		retval = -EINVAL;
-		goto out_put_task;
-	}
+	if (p->flags & PF_NO_SETAFFINITY)
+		return -EINVAL;
 
 	if (!check_same_owner(p)) {
-		rcu_read_lock();
-		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
-			rcu_read_unlock();
-			retval = -EPERM;
-			goto out_put_task;
-		}
-		rcu_read_unlock();
+		guard(rcu)();
+		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
+			return -EPERM;
 	}
 
 	retval = security_task_setscheduler(p);
 	if (retval)
-		goto out_put_task;
+		return retval;
 
 	/*
	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
@@ -8300,8 +8285,7 @@ long sched_setaffinity(pid_t pid, const
 	if (user_mask) {
 		cpumask_copy(user_mask, in_mask);
 	} else if (IS_ENABLED(CONFIG_SMP)) {
-		retval = -ENOMEM;
-		goto out_put_task;
+		return -ENOMEM;
 	}
 
 	ac = (struct affinity_context){
@@ -8313,8 +8297,6 @@ long sched_setaffinity(pid_t pid, const
 	retval = __sched_setaffinity(p, &ac);
 	kfree(ac.user_mask);
 
-out_put_task:
-	put_task_struct(p);
 	return retval;
 }
 
@@ -8356,28 +8338,21 @@ SYSCALL_DEFINE3(sched_setaffinity, pid_t
 long sched_getaffinity(pid_t pid, struct cpumask *mask)
 {
 	struct task_struct *p;
-	unsigned long flags;
 	int retval;
 
-	rcu_read_lock();
-
-	retval = -ESRCH;
+	guard(rcu)();
 	p = find_process_by_pid(pid);
 	if (!p)
-		goto out_unlock;
+		return -ESRCH;
 
 	retval = security_task_getscheduler(p);
 	if (retval)
-		goto out_unlock;
+		return retval;
 
-	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	guard(raw_spinlock_irqsave)(&p->pi_lock);
 	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 
-out_unlock:
-	rcu_read_unlock();
-
-	return retval;
+	return 0;
 }
 
 /**
From: Peter Zijlstra
Date: Wed, 09 Aug 2023 22:24:44 +0200
Message-ID: <20230809204200.310688520@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 4/8] sched: Simplify yield_to()

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c | 73 ++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 41 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8799,55 +8799,46 @@ int __sched yield_to(struct task_struct
 {
 	struct task_struct *curr = current;
 	struct rq *rq, *p_rq;
-	unsigned long flags;
 	int yielded = 0;
 
-	local_irq_save(flags);
-	rq = this_rq();
+	scoped_guard (irqsave) {
+		rq = this_rq();
 
 again:
-	p_rq = task_rq(p);
-	/*
-	 * If we're the only runnable task on the rq and target rq also
-	 * has only one task, there's absolutely no point in yielding.
-	 */
-	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
-		yielded = -ESRCH;
-		goto out_irq;
-	}
-
-	double_rq_lock(rq, p_rq);
-	if (task_rq(p) != p_rq) {
-		double_rq_unlock(rq, p_rq);
-		goto again;
-	}
-
-	if (!curr->sched_class->yield_to_task)
-		goto out_unlock;
-
-	if (curr->sched_class != p->sched_class)
-		goto out_unlock;
-
-	if (task_on_cpu(p_rq, p) || !task_is_running(p))
-		goto out_unlock;
-
-	yielded = curr->sched_class->yield_to_task(rq, p);
-	if (yielded) {
-		schedstat_inc(rq->yld_count);
+		p_rq = task_rq(p);
 		/*
-		 * Make p's CPU reschedule; pick_next_entity takes care of
-		 * fairness.
+		 * If we're the only runnable task on the rq and target rq also
+		 * has only one task, there's absolutely no point in yielding.
 		 */
-		if (preempt && rq != p_rq)
-			resched_curr(p_rq);
-	}
+		if (rq->nr_running == 1 && p_rq->nr_running == 1)
+			return -ESRCH;
 
-out_unlock:
-	double_rq_unlock(rq, p_rq);
-out_irq:
-	local_irq_restore(flags);
+		guard(double_rq_lock)(rq, p_rq);
+		if (task_rq(p) != p_rq)
+			goto again;
+
+		if (!curr->sched_class->yield_to_task)
+			return 0;
+
+		if (curr->sched_class != p->sched_class)
+			return 0;
+
+		if (task_on_cpu(p_rq, p) || !task_is_running(p))
+			return 0;
+
+		yielded = curr->sched_class->yield_to_task(rq, p);
+		if (yielded) {
+			schedstat_inc(rq->yld_count);
+			/*
+			 * Make p's CPU reschedule; pick_next_entity
+			 * takes care of fairness.
+			 */
+			if (preempt && rq != p_rq)
+				resched_curr(p_rq);
+		}
+	}
 
-	if (yielded > 0)
+	if (yielded)
 		schedule();
 
 	return yielded;
From: Peter Zijlstra
Date: Wed, 09 Aug 2023 22:24:45 +0200
Message-ID: <20230809204200.378325194@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 5/8] sched: Simplify sched_rr_get_interval()

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c | 40 ++++++++++++++++------------------------
 1 file changed, 16 insertions(+), 24 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8941,38 +8941,30 @@ SYSCALL_DEFINE1(sched_get_priority_min,
 
 static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
 {
-	struct task_struct *p;
-	unsigned int time_slice;
-	struct rq_flags rf;
-	struct rq *rq;
+	unsigned int time_slice = 0;
 	int retval;
 
 	if (pid < 0)
 		return -EINVAL;
 
-	retval = -ESRCH;
-	rcu_read_lock();
-	p = find_process_by_pid(pid);
-	if (!p)
-		goto out_unlock;
-
-	retval = security_task_getscheduler(p);
-	if (retval)
-		goto out_unlock;
-
-	rq = task_rq_lock(p, &rf);
-	time_slice = 0;
-	if (p->sched_class->get_rr_interval)
-		time_slice = p->sched_class->get_rr_interval(rq, p);
-	task_rq_unlock(rq, p, &rf);
+	scoped_guard (rcu) {
+		struct task_struct *p = find_process_by_pid(pid);
+		if (!p)
+			return -ESRCH;
+
+		retval = security_task_getscheduler(p);
+		if (retval)
+			return retval;
+
+		scoped_guard (task_rq_lock, p) {
+			struct rq *rq = scope.rq;
+			if (p->sched_class->get_rr_interval)
+				time_slice = p->sched_class->get_rr_interval(rq, p);
+		}
+	}
 
-	rcu_read_unlock();
 	jiffies_to_timespec64(time_slice, t);
 	return 0;
-
-out_unlock:
-	rcu_read_unlock();
-	return retval;
 }
 
 /**
From: Peter Zijlstra
Date: Wed, 09 Aug 2023 22:24:46 +0200
Message-ID: <20230809204200.445269001@infradead.org>
References: <20230809202440.012625269@infradead.org>
Subject: [PATCH 6/8] sched: Simplify sched_move_task()

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10361,17 +10361,18 @@ void sched_move_task(struct task_struct
 	int queued, running, queue_flags =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
 	struct task_group *group;
-	struct rq_flags rf;
 	struct rq *rq;
 
-	rq = task_rq_lock(tsk, &rf);
+	CLASS(task_rq_lock, rq_guard)(tsk);
+	rq = rq_guard.rq;
+
 	/*
	 * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous
	 * group changes.
	 */
 	group = sched_get_task_group(tsk);
 	if (group == tsk->sched_task_group)
-		goto unlock;
+		return;
 
 	update_rq_clock(rq);
 
@@ -10396,9 +10397,6 @@ void sched_move_task(struct task_struct
 		 */
 		resched_curr(rq);
 	}
-
-unlock:
-	task_rq_unlock(rq, tsk, &rf);
 }
 
 static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
[IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EE3761AA for ; Wed, 9 Aug 2023 13:43:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=oJyDWnF7S8lcaog4Xmhyr6vhDojPiq7m8dz3r+ZsZG4=; b=FL+Tnhlx4eoL+EwLOTOjeFbtNJ KxhEb2/MbMqyAwLRirLMSF5cUsk6M5QY/Yq1OiVeY87YYssk92XJwFvEzZycmSFGJIV7WCug+R5nZ /l3nq69PGGFKGsS3QCJsK4D8aJylYMzd8xOpY51S9bMr3ZsH9tV8mejsAO82lXSm6CKe11EHXyObU qaYr4+rfUDT9TlwUaYoTScAU0kBVpPw760+9RNHRKPFvmsDLuCa9Ab9lwaM9fJjV/HAVPon2m6mT1 rGmNCnIqJNsPMs/gVpkx72d/5eHlXCq98j4bWfgflizkKkEiYCLJK0n40S2GzCqN/r/jumgfxUuoS owhaBi5A==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qTq1n-005twO-1C; Wed, 09 Aug 2023 20:43:35 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 141B33006F1; Wed, 9 Aug 2023 22:43:34 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 6A358206EC337; Wed, 9 Aug 2023 22:43:33 +0200 (CEST) Message-ID: <20230809204200.512332208@infradead.org> User-Agent: quilt/0.66 Date: Wed, 09 Aug 2023 22:24:47 +0200 From: Peter Zijlstra To: mingo@redhat.com Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org Subject: [PATCH 7/8] sched: Simplify tg_set_cfs_bandwidth() References: 
<20230809202440.012625269@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/cpu.h |    2 ++
 kernel/sched/core.c |   42 +++++++++++++++++++++---------------------
 2 files changed, 23 insertions(+), 21 deletions(-)

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -148,6 +148,8 @@ static inline int remove_cpu(unsigned in
 static inline void smp_shutdown_nonboot_cpus(unsigned int primary_cpu) { }
 #endif /* !CONFIG_HOTPLUG_CPU */
 
+DEFINE_LOCK_GUARD_0(cpus_read_lock, cpus_read_lock(), cpus_read_unlock())
+
 #ifdef CONFIG_PM_SLEEP_SMP
 extern int freeze_secondary_cpus(int primary);
 extern void thaw_secondary_cpus(void);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10726,11 +10726,12 @@ static int tg_set_cfs_bandwidth(struct t
 	 * Prevent race between setting of cfs_rq->runtime_enabled and
 	 * unthrottle_offline_cfs_rqs().
 	 */
-	cpus_read_lock();
-	mutex_lock(&cfs_constraints_mutex);
+	guard(cpus_read_lock)();
+	guard(mutex)(&cfs_constraints_mutex);
+
 	ret = __cfs_schedulable(tg, period, quota);
 	if (ret)
-		goto out_unlock;
+		return ret;
 
 	runtime_enabled = quota != RUNTIME_INF;
 	runtime_was_enabled = cfs_b->quota != RUNTIME_INF;
@@ -10740,39 +10741,38 @@ static int tg_set_cfs_bandwidth(struct t
 	 */
 	if (runtime_enabled && !runtime_was_enabled)
 		cfs_bandwidth_usage_inc();
-	raw_spin_lock_irq(&cfs_b->lock);
-	cfs_b->period = ns_to_ktime(period);
-	cfs_b->quota = quota;
-	cfs_b->burst = burst;
-
-	__refill_cfs_bandwidth_runtime(cfs_b);
-
-	/* Restart the period timer (if active) to handle new period expiry: */
-	if (runtime_enabled)
-		start_cfs_bandwidth(cfs_b);
 
-	raw_spin_unlock_irq(&cfs_b->lock);
+	scoped_guard (raw_spinlock_irq, &cfs_b->lock) {
+		cfs_b->period = ns_to_ktime(period);
+		cfs_b->quota = quota;
+		cfs_b->burst = burst;
+
+		__refill_cfs_bandwidth_runtime(cfs_b);
+
+		/*
+		 * Restart the period timer (if active) to handle new
+		 * period expiry:
+		 */
+		if (runtime_enabled)
+			start_cfs_bandwidth(cfs_b);
+	}
 
 	for_each_online_cpu(i) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
 		struct rq *rq = cfs_rq->rq;
-		struct rq_flags rf;
 
-		rq_lock_irq(rq, &rf);
+		guard(rq_lock_irq)(rq);
 		cfs_rq->runtime_enabled = runtime_enabled;
 		cfs_rq->runtime_remaining = 0;
 
 		if (cfs_rq->throttled)
 			unthrottle_cfs_rq(cfs_rq);
-		rq_unlock_irq(rq, &rf);
 	}
+
 	if (runtime_was_enabled && !runtime_enabled)
 		cfs_bandwidth_usage_dec();
-out_unlock:
-	mutex_unlock(&cfs_constraints_mutex);
-	cpus_read_unlock();
 
-	return ret;
+	return 0;
 }
 
 static int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us)

From nobody Sun Feb 8 01:31:10 2026
Message-ID:
<20230809204200.580725323@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 09 Aug 2023 22:24:48 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 8/8] sched: Misc cleanups
References: <20230809202440.012625269@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Random remaining guard use...

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c |  167 +++++++++++++++++++----------------------------
 1 file changed, 63 insertions(+), 104 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1480,16 +1480,12 @@ static void __uclamp_update_util_min_rt_
 
 static void uclamp_update_util_min_rt_default(struct task_struct *p)
 {
-	struct rq_flags rf;
-	struct rq *rq;
-
 	if (!rt_task(p))
 		return;
 
 	/* Protect updates to p->uclamp_* */
-	rq = task_rq_lock(p, &rf);
+	guard(task_rq_lock)(p);
 	__uclamp_update_util_min_rt_default(p);
-	task_rq_unlock(rq, p, &rf);
 }
 
 static inline struct uclamp_se
@@ -1785,9 +1781,8 @@ static void uclamp_update_root_tg(void)
 	uclamp_se_set(&tg->uclamp_req[UCLAMP_MAX],
 		      sysctl_sched_uclamp_util_max, false);
 
-	rcu_read_lock();
+	guard(rcu)();
 	cpu_util_update_eff(&root_task_group.css);
-	rcu_read_unlock();
 }
 #else
 static void uclamp_update_root_tg(void) { }
@@ -1814,10 +1809,9 @@ static void uclamp_sync_util_min_rt_defa
 	smp_mb__after_spinlock();
 	read_unlock(&tasklist_lock);
 
-	rcu_read_lock();
+	guard(rcu)();
 	for_each_process_thread(g, p)
 		uclamp_update_util_min_rt_default(p);
-	rcu_read_unlock();
 }
 
 static int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
@@ -2250,20 +2244,13 @@ static __always_inline int
task_state_match(struct task_struct *p, unsigned int state)
 {
 #ifdef CONFIG_PREEMPT_RT
-	int match;
-
 	/*
 	 * Serialize against current_save_and_set_rtlock_wait_state() and
 	 * current_restore_rtlock_saved_state().
 	 */
-	raw_spin_lock_irq(&p->pi_lock);
-	match = __task_state_match(p, state);
-	raw_spin_unlock_irq(&p->pi_lock);
-
-	return match;
-#else
-	return __task_state_match(p, state);
+	guard(raw_spinlock_irq)(&p->pi_lock);
 #endif
+	return __task_state_match(p, state);
 }
 
 /*
@@ -2417,10 +2404,9 @@ void migrate_disable(void)
 		return;
 	}
 
-	preempt_disable();
+	guard(preempt)();
 	this_rq()->nr_pinned++;
 	p->migration_disabled = 1;
-	preempt_enable();
 }
 EXPORT_SYMBOL_GPL(migrate_disable);
 
@@ -2444,7 +2430,7 @@ void migrate_enable(void)
 	 * Ensure stop_task runs either before or after this, and that
 	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
 	 */
-	preempt_disable();
+	guard(preempt)();
 	if (p->cpus_ptr != &p->cpus_mask)
 		__set_cpus_allowed_ptr(p, &ac);
 	/*
@@ -2455,7 +2441,6 @@ void migrate_enable(void)
 	barrier();
 	p->migration_disabled = 0;
 	this_rq()->nr_pinned--;
-	preempt_enable();
 }
 EXPORT_SYMBOL_GPL(migrate_enable);
 
@@ -3516,13 +3501,11 @@ int migrate_swap(struct task_struct *cur
 	 */
 void kick_process(struct task_struct *p)
 {
-	int cpu;
+	guard(preempt)();
+	int cpu = task_cpu(p);
 
-	preempt_disable();
-	cpu = task_cpu(p);
 	if ((cpu != smp_processor_id()) && task_curr(p))
 		smp_send_reschedule(cpu);
-	preempt_enable();
 }
 EXPORT_SYMBOL_GPL(kick_process);
 
@@ -6367,8 +6350,9 @@ static void sched_core_balance(struct rq
 	struct sched_domain *sd;
 	int cpu = cpu_of(rq);
 
-	preempt_disable();
-	rcu_read_lock();
+	guard(preempt)();
+	guard(rcu)();
+
 	raw_spin_rq_unlock_irq(rq);
 	for_each_domain(cpu, sd) {
 		if (need_resched())
@@ -6378,8 +6362,6 @@ static void sched_core_balance(struct rq
 			break;
 	}
 	raw_spin_rq_lock_irq(rq);
-	rcu_read_unlock();
-	preempt_enable();
 }
 
 static DEFINE_PER_CPU(struct balance_callback,
core_balance_head);
 
@@ -8257,8 +8239,6 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi
 #ifdef CONFIG_SMP
 int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 {
-	int ret = 0;
-
 	/*
 	 * If the task isn't a deadline task or admission control is
 	 * disabled then we don't care about affinity changes.
@@ -8272,11 +8252,11 @@ int dl_task_check_affinity(struct task_s
 	 * tasks allowed to run on all the CPUs in the task's
 	 * root_domain.
 	 */
-	rcu_read_lock();
+	guard(rcu)();
 	if (!cpumask_subset(task_rq(p)->rd->span, mask))
-		ret = -EBUSY;
-	rcu_read_unlock();
-	return ret;
+		return -EBUSY;
+
+	return 0;
 }
 #endif
 
@@ -10508,11 +10488,9 @@ static int cpu_cgroup_css_online(struct
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 	/* Propagate the effective uclamp value for the new group */
-	mutex_lock(&uclamp_mutex);
-	rcu_read_lock();
+	guard(mutex)(&uclamp_mutex);
+	guard(rcu)();
 	cpu_util_update_eff(css);
-	rcu_read_unlock();
-	mutex_unlock(&uclamp_mutex);
 #endif
 
 	return 0;
@@ -10663,8 +10641,8 @@ static ssize_t cpu_uclamp_write(struct k
 
 	static_branch_enable(&sched_uclamp_used);
 
-	mutex_lock(&uclamp_mutex);
-	rcu_read_lock();
+	guard(mutex)(&uclamp_mutex);
+	guard(rcu)();
 
 	tg = css_tg(of_css(of));
 	if (tg->uclamp_req[clamp_id].value != req.util)
@@ -10679,9 +10657,6 @@ static ssize_t cpu_uclamp_write(struct k
 	/* Update effective clamps to track the most restrictive value */
 	cpu_util_update_eff(of_css(of));
 
-	rcu_read_unlock();
-	mutex_unlock(&uclamp_mutex);
-
 	return nbytes;
 }
 
@@ -10707,10 +10682,10 @@ static inline void cpu_uclamp_print(stru
 	u64 percent;
 	u32 rem;
 
-	rcu_read_lock();
-	tg = css_tg(seq_css(sf));
-	util_clamp = tg->uclamp_req[clamp_id].value;
-	rcu_read_unlock();
+	scoped_guard (rcu) {
+		tg = css_tg(seq_css(sf));
+		util_clamp = tg->uclamp_req[clamp_id].value;
+	}
 
 	if (util_clamp == SCHED_CAPACITY_SCALE) {
 		seq_puts(sf, "max\n");
@@ -11032,7 +11007,6 @@ static int tg_cfs_schedulable_down(struc
 
 static int
__cfs_schedulable(struct task_group *tg, u64 period, u64 quota)
 {
-	int ret;
 	struct cfs_schedulable_data data = {
 		.tg = tg,
 		.period = period,
@@ -11044,11 +11018,8 @@ static int __cfs_schedulable(struct task
 		do_div(data.quota, NSEC_PER_USEC);
 	}
 
-	rcu_read_lock();
-	ret = walk_tg_tree(tg_cfs_schedulable_down, tg_nop, &data);
-	rcu_read_unlock();
-
-	return ret;
+	guard(rcu)();
+	return walk_tg_tree(tg_cfs_schedulable_down, tg_nop, &data);
 }
 
 static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
@@ -11653,14 +11624,12 @@ int __sched_mm_cid_migrate_from_fetch_ci
 	 * are not the last task to be migrated from this cpu for this mm, so
 	 * there is no need to move src_cid to the destination cpu.
 	 */
-	rcu_read_lock();
+	guard(rcu)();
 	src_task = rcu_dereference(src_rq->curr);
 	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-		rcu_read_unlock();
 		t->last_mm_cid = -1;
 		return -1;
 	}
-	rcu_read_unlock();
 
 	return src_cid;
 }
@@ -11704,18 +11673,17 @@ int __sched_mm_cid_migrate_from_try_stea
 	 * the lazy-put flag, this task will be responsible for transitioning
 	 * from lazy-put flag set to MM_CID_UNSET.
 	 */
-	rcu_read_lock();
-	src_task = rcu_dereference(src_rq->curr);
-	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-		rcu_read_unlock();
-		/*
-		 * We observed an active task for this mm, there is therefore
-		 * no point in moving this cid to the destination cpu.
-		 */
-		t->last_mm_cid = -1;
-		return -1;
+	scoped_guard (rcu) {
+		src_task = rcu_dereference(src_rq->curr);
+		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
+			/*
+			 * We observed an active task for this mm, there is therefore
+			 * no point in moving this cid to the destination cpu.
+			 */
+			t->last_mm_cid = -1;
+			return -1;
+		}
 	}
-	rcu_read_unlock();
 
 	/*
 	 * The src_cid is unused, so it can be unset.
@@ -11788,7 +11756,6 @@ static void sched_mm_cid_remote_clear(st
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct task_struct *t;
-	unsigned long flags;
 	int cid, lazy_cid;
 
 	cid = READ_ONCE(pcpu_cid->cid);
@@ -11823,23 +11790,21 @@ static void sched_mm_cid_remote_clear(st
 	 * the lazy-put flag, that task will be responsible for transitioning
 	 * from lazy-put flag set to MM_CID_UNSET.
 	 */
-	rcu_read_lock();
-	t = rcu_dereference(rq->curr);
-	if (READ_ONCE(t->mm_cid_active) && t->mm == mm) {
-		rcu_read_unlock();
-		return;
+	scoped_guard (rcu) {
+		t = rcu_dereference(rq->curr);
+		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
+			return;
 	}
-	rcu_read_unlock();
 
 	/*
 	 * The cid is unused, so it can be unset.
 	 * Disable interrupts to keep the window of cid ownership without rq
 	 * lock small.
	 */
-	local_irq_save(flags);
-	if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-		__mm_cid_put(mm, cid);
-	local_irq_restore(flags);
+	scoped_guard (irqsave) {
+		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
+			__mm_cid_put(mm, cid);
+	}
 }
 
 static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
@@ -11861,14 +11826,13 @@ static void sched_mm_cid_remote_clear_ol
 	 * snapshot associated with this cid if an active task using the mm is
 	 * observed on this rq.
 	 */
-	rcu_read_lock();
-	curr = rcu_dereference(rq->curr);
-	if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
-		WRITE_ONCE(pcpu_cid->time, rq_clock);
-		rcu_read_unlock();
-		return;
+	scoped_guard (rcu) {
+		curr = rcu_dereference(rq->curr);
+		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
+			WRITE_ONCE(pcpu_cid->time, rq_clock);
+			return;
+		}
 	}
-	rcu_read_unlock();
 
 	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
 		return;
@@ -11962,7 +11926,6 @@ void task_tick_mm_cid(struct rq *rq, str
 void sched_mm_cid_exit_signals(struct task_struct *t)
 {
 	struct mm_struct *mm = t->mm;
-	struct rq_flags rf;
 	struct rq *rq;
 
 	if (!mm)
@@ -11970,7 +11933,7 @@ void sched_mm_cid_exit_signals(struct ta
 
 	preempt_disable();
 	rq = this_rq();
-	rq_lock_irqsave(rq, &rf);
+	guard(rq_lock_irqsave)(rq);
 	preempt_enable_no_resched();	/* holding spinlock */
 	WRITE_ONCE(t->mm_cid_active, 0);
 	/*
@@ -11980,13 +11943,11 @@ void sched_mm_cid_exit_signals(struct ta
 	smp_mb();
 	mm_cid_put(mm);
 	t->last_mm_cid = t->mm_cid = -1;
-	rq_unlock_irqrestore(rq, &rf);
 }
 
 void sched_mm_cid_before_execve(struct task_struct *t)
 {
 	struct mm_struct *mm = t->mm;
-	struct rq_flags rf;
 	struct rq *rq;
 
 	if (!mm)
@@ -11994,7 +11955,7 @@ void sched_mm_cid_before_execve(struct t
 
 	preempt_disable();
 	rq = this_rq();
-	rq_lock_irqsave(rq, &rf);
+	guard(rq_lock_irqsave)(rq);
 	preempt_enable_no_resched();	/* holding spinlock */
 	WRITE_ONCE(t->mm_cid_active, 0);
 	/*
@@ -12004,13 +11965,11 @@ void sched_mm_cid_after_execve(struct t
 	smp_mb();
 	mm_cid_put(mm);
 	t->last_mm_cid = t->mm_cid = -1;
-	rq_unlock_irqrestore(rq, &rf);
 }
 
 void sched_mm_cid_after_execve(struct task_struct *t)
 {
 	struct mm_struct *mm = t->mm;
-	struct rq_flags rf;
 	struct rq *rq;
 
 	if (!mm)
@@ -12018,16 +11977,16 @@ void sched_mm_cid_after_execve(struct ta
 
 	preempt_disable();
 	rq = this_rq();
-	rq_lock_irqsave(rq, &rf);
-	preempt_enable_no_resched();	/* holding spinlock */
-	WRITE_ONCE(t->mm_cid_active, 1);
-	/*
-	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-	 * Matches barrier in sched_mm_cid_remote_clear_old().
-	 */
-	smp_mb();
-	t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
-	rq_unlock_irqrestore(rq, &rf);
+	scoped_guard (rq_lock_irqsave, rq) {
+		preempt_enable_no_resched();	/* holding spinlock */
+		WRITE_ONCE(t->mm_cid_active, 1);
+		/*
+		 * Store t->mm_cid_active before loading per-mm/cpu cid.
+		 * Matches barrier in sched_mm_cid_remote_clear_old().
+		 */
+		smp_mb();
+		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
+	}
 	rseq_set_notify_resume(t);
}