From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211811.828443100@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:22 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 1/9] sched: Simplify get_nohz_timer_target()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Joel Fernandes (Google)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1097,25 +1097,22 @@ int get_nohz_timer_target(void)
 
 	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
 
-	rcu_read_lock();
+	guard(rcu)();
+
 	for_each_domain(cpu, sd) {
 		for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
 			if (cpu == i)
 				continue;
 
-			if (!idle_cpu(i)) {
-				cpu = i;
-				goto unlock;
-			}
+			if (!idle_cpu(i))
+				return i;
 		}
 	}
 
 	if (default_cpu == -1)
 		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
-	cpu = default_cpu;
-unlock:
-	rcu_read_unlock();
-	return cpu;
+
+	return default_cpu;
 }
 
 /*
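The guard() primitive used throughout this series comes from the kernel's cleanup.h infrastructure, which rides on the compiler's cleanup attribute: a local object acquires the lock when constructed and releases it when the variable leaves scope, so every early return path is covered without a goto label. A minimal user-space sketch of the idea, with illustrative names and a pthread mutex standing in for the kernel lock (not the kernel's actual implementation):

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative user-space analogue of guard(mutex)(...) from
 * linux/cleanup.h; names here are hypothetical. */
typedef struct { pthread_mutex_t *lock; } guard_mutex_t;

static inline guard_mutex_t guard_mutex_ctor(pthread_mutex_t *m)
{
	pthread_mutex_lock(m);
	return (guard_mutex_t){ .lock = m };
}

static inline void guard_mutex_dtor(guard_mutex_t *g)
{
	pthread_mutex_unlock(g->lock);
}

#define GUARD_MUTEX(m) \
	guard_mutex_t g_ __attribute__((cleanup(guard_mutex_dtor))) = \
		guard_mutex_ctor(m)

static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static int bump(int err)
{
	GUARD_MUTEX(&lk);	/* released on every return path below */
	if (err)
		return -1;	/* early return: no goto unlock needed */
	counter++;
	return 0;
}
```

The early return in bump() drops the mutex with no unlock label, which is the same shape the patch gives get_nohz_timer_target().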

From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211811.896559109@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:23 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 2/9] sched: Simplify sysctl_sched_uclamp_handler()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1801,7 +1801,8 @@ static int sysctl_sched_uclamp_handler(s
 	int old_min, old_max, old_min_rt;
 	int result;
 
-	mutex_lock(&uclamp_mutex);
+	guard(mutex)(&uclamp_mutex);
+
 	old_min = sysctl_sched_uclamp_util_min;
 	old_max = sysctl_sched_uclamp_util_max;
 	old_min_rt = sysctl_sched_uclamp_util_min_rt_default;
@@ -1810,7 +1811,7 @@ static int sysctl_sched_uclamp_handler(s
 	if (result)
 		goto undo;
 	if (!write)
-		goto done;
+		return 0;
 
 	if (sysctl_sched_uclamp_util_min > sysctl_sched_uclamp_util_max ||
 	    sysctl_sched_uclamp_util_max > SCHED_CAPACITY_SCALE ||
@@ -1846,16 +1847,12 @@ static int sysctl_sched_uclamp_handler(s
 	 * Otherwise, keep it simple and do just a lazy update at each next
 	 * task enqueue time.
 	 */
-
-	goto done;
+	return 0;
 
 undo:
 	sysctl_sched_uclamp_util_min = old_min;
 	sysctl_sched_uclamp_util_max = old_max;
 	sysctl_sched_uclamp_util_min_rt_default = old_min_rt;
-done:
-	mutex_unlock(&uclamp_mutex);
-
 	return result;
 }
 #endif

From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211811.964370836@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:24 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 3/9] sched: Simplify migrate_swap_stop()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Use guards to reduce gotos and simplify control flow.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c  | 23 +++++++----------------
 kernel/sched/sched.h | 20 ++++++++++++++++++++
 2 files changed, 27 insertions(+), 16 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3258,7 +3258,6 @@ static int migrate_swap_stop(void *data)
 {
 	struct migration_swap_arg *arg = data;
 	struct rq *src_rq, *dst_rq;
-	int ret = -EAGAIN;
 
 	if (!cpu_active(arg->src_cpu) || !cpu_active(arg->dst_cpu))
 		return -EAGAIN;
@@ -3266,33 +3265,25 @@ static int migrate_swap_stop(void *data)
 	src_rq = cpu_rq(arg->src_cpu);
 	dst_rq = cpu_rq(arg->dst_cpu);
 
-	double_raw_lock(&arg->src_task->pi_lock,
-			&arg->dst_task->pi_lock);
-	double_rq_lock(src_rq, dst_rq);
+	guard(double_raw_spinlock)(&arg->src_task->pi_lock, &arg->dst_task->pi_lock);
+	guard(double_rq_lock)(src_rq, dst_rq);
 
 	if (task_cpu(arg->dst_task) != arg->dst_cpu)
-		goto unlock;
+		return -EAGAIN;
 
 	if (task_cpu(arg->src_task) != arg->src_cpu)
-		goto unlock;
+		return -EAGAIN;
 
 	if (!cpumask_test_cpu(arg->dst_cpu, arg->src_task->cpus_ptr))
-		goto unlock;
+		return -EAGAIN;
 
 	if (!cpumask_test_cpu(arg->src_cpu, arg->dst_task->cpus_ptr))
-		goto unlock;
+		return -EAGAIN;
 
 	__migrate_swap_task(arg->src_task, arg->dst_cpu);
 	__migrate_swap_task(arg->dst_task, arg->src_cpu);
 
-	ret = 0;
-
-unlock:
-	double_rq_unlock(src_rq, dst_rq);
-	raw_spin_unlock(&arg->dst_task->pi_lock);
-	raw_spin_unlock(&arg->src_task->pi_lock);
-
-	return ret;
+	return 0;
 }
 
 /*
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2572,6 +2572,12 @@ static inline void double_rq_clock_clear
 static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2) {}
 #endif
 
+#define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...)		\
+__DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__)	\
+static inline class_##name##_t class_##name##_constructor(type *lock, type *lock2) \
+{ class_##name##_t _t = { .lock = lock, .lock2 = lock2 }, *_T = &_t;	\
+  _lock; return _t; }
+
 #ifdef CONFIG_SMP
 
 static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
@@ -2701,6 +2707,16 @@ static inline void double_raw_lock(raw_s
 	raw_spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
 }
 
+static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2)
+{
+	raw_spin_unlock(l1);
+	raw_spin_unlock(l2);
+}
+
+DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t,
+		    double_raw_lock(_T->lock, _T->lock2),
+		    double_raw_unlock(_T->lock, _T->lock2))
+
 /*
  * double_rq_unlock - safely unlock two runqueues
  *
@@ -2758,6 +2774,10 @@ static inline void double_rq_unlock(stru
 
 #endif
 
+DEFINE_LOCK_GUARD_2(double_rq_lock, struct rq,
+		    double_rq_lock(_T->lock, _T->lock2),
+		    double_rq_unlock(_T->lock, _T->lock2))
+
 extern struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq);
 extern struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq);
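DEFINE_LOCK_GUARD_2 in the patch above builds a guard whose single object owns two locks, so one scope exit releases both. A rough user-space analogue with illustrative names and pthread mutexes (note: the kernel's double_raw_lock() additionally orders the two locks consistently to avoid ABBA deadlock; this sketch just uses a fixed order):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical two-lock guard, loosely modeled on the kernel's
 * DEFINE_LOCK_GUARD_2; not the kernel implementation. */
typedef struct {
	pthread_mutex_t *lock;
	pthread_mutex_t *lock2;
} guard2_t;

static guard2_t guard2_ctor(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_lock(a);	/* fixed order; real code sorts locks */
	pthread_mutex_lock(b);
	return (guard2_t){ .lock = a, .lock2 = b };
}

static void guard2_dtor(guard2_t *g)
{
	/* one destructor releases both locks, in reverse order */
	pthread_mutex_unlock(g->lock2);
	pthread_mutex_unlock(g->lock);
}

#define GUARD2(a, b) \
	guard2_t g2_ __attribute__((cleanup(guard2_dtor))) = guard2_ctor(a, b)

static pthread_mutex_t src = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dst = PTHREAD_MUTEX_INITIALIZER;

/* migrate_swap_stop()-shaped flow: several early -EAGAIN-style exits,
 * all of which drop both locks via the guard. */
static int swap_attempt(int ok)
{
	GUARD2(&src, &dst);
	if (!ok)
		return -11;	/* early return, both locks released */
	return 0;
}
```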

From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211812.032678917@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:25 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 4/9] sched: Simplify wake_up_if_idle()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c  | 20 ++++++--------------
 kernel/sched/sched.h | 15 +++++++++++++++
 2 files changed, 21 insertions(+), 14 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3872,21 +3872,13 @@ static void __ttwu_queue_wakelist(struct
 void wake_up_if_idle(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	struct rq_flags rf;
 
-	rcu_read_lock();
-
-	if (!is_idle_task(rcu_dereference(rq->curr)))
-		goto out;
-
-	rq_lock_irqsave(rq, &rf);
-	if (is_idle_task(rq->curr))
-		resched_curr(rq);
-	/* Else CPU is not idle, do nothing here: */
-	rq_unlock_irqrestore(rq, &rf);
-
-out:
-	rcu_read_unlock();
+	guard(rcu)();
+	if (is_idle_task(rcu_dereference(rq->curr))) {
+		guard(rq_lock_irqsave)(rq);
+		if (is_idle_task(rq->curr))
+			resched_curr(rq);
+	}
 }
 
 bool cpus_share_cache(int this_cpu, int that_cpu)
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1678,6 +1678,21 @@ rq_unlock(struct rq *rq, struct rq_flags
 	raw_spin_rq_unlock(rq);
 }
 
+DEFINE_LOCK_GUARD_1(rq_lock, struct rq,
+		    rq_lock(_T->lock, &_T->rf),
+		    rq_unlock(_T->lock, &_T->rf),
+		    struct rq_flags rf)
+
+DEFINE_LOCK_GUARD_1(rq_lock_irq, struct rq,
+		    rq_lock_irq(_T->lock, &_T->rf),
+		    rq_unlock_irq(_T->lock, &_T->rf),
+		    struct rq_flags rf)
+
+DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
+		    rq_lock_irqsave(_T->lock, &_T->rf),
+		    rq_unlock_irqrestore(_T->lock, &_T->rf),
+		    struct rq_flags rf)
+
 static inline struct rq *
 this_rq_lock_irq(struct rq_flags *rf)
 	__acquires(rq->lock)
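The reworked wake_up_if_idle() stacks two guards: the rcu guard spans the function scope and the rq_lock_irqsave guard spans only the inner if. Scope-based cleanup releases them in reverse order of acquisition (LIFO), exactly like nested lock/unlock pairs. A small user-space sketch (hypothetical tracked_t type, not kernel code) that logs acquire/release to show the ordering:

```c
#include <assert.h>
#include <string.h>

/* Log acquisitions as upper-case letters and releases as the matching
 * lower-case letters, to observe cleanup ordering. */
static char log_buf[16];
static void log_ch(char c) { log_buf[strlen(log_buf)] = c; }

typedef struct { char ch; } tracked_t;

static tracked_t acquire(char c)
{
	log_ch(c);
	return (tracked_t){ .ch = c };
}

static void release(tracked_t *t)
{
	log_ch((char)(t->ch - 'A' + 'a'));
}

static void nested(void)
{
	/* outer "guard", e.g. rcu */
	tracked_t outer __attribute__((cleanup(release))) = acquire('R');
	{
		/* inner "guard", e.g. rq_lock_irqsave */
		tracked_t inner __attribute__((cleanup(release))) = acquire('Q');
		/* resched_curr(rq) would go here */
	}
	/* inner already released here; outer released on return */
}
```

Running nested() yields the log "RQqr": the inner guard is released before the outer one, mirroring rq_unlock_irqrestore() before rcu_read_unlock() in the pre-patch code.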

From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211812.101069260@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:26 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 5/9] sched: Simplify ttwu()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 221 +++++++++++++++++++++++++----------------------------
 1 file changed, 109 insertions(+), 112 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3706,14 +3706,14 @@ ttwu_stat(struct task_struct *p, int cpu
 		struct sched_domain *sd;
 
 		__schedstat_inc(p->stats.nr_wakeups_remote);
-		rcu_read_lock();
+
+		guard(rcu)();
 		for_each_domain(rq->cpu, sd) {
 			if (cpumask_test_cpu(cpu, sched_domain_span(sd))) {
 				__schedstat_inc(sd->ttwu_wake_remote);
 				break;
 			}
 		}
-		rcu_read_unlock();
 	}
 
 	if (wake_flags & WF_MIGRATED)
@@ -4172,10 +4172,9 @@ bool ttwu_state_match(struct task_struct
 static int
 try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 {
-	unsigned long flags;
+	guard(preempt)();
 	int cpu, success = 0;
 
-	preempt_disable();
 	if (p == current) {
 		/*
 		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
@@ -4202,129 +4201,127 @@ try_to_wake_up(struct task_struct *p, un
 	 * reordered with p->state check below. This pairs with smp_store_mb()
 	 * in set_current_state() that the waiting thread does.
 	 */
-	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	smp_mb__after_spinlock();
-	if (!ttwu_state_match(p, state, &success))
-		goto unlock;
+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
+		smp_mb__after_spinlock();
+		if (!ttwu_state_match(p, state, &success))
+			break;
 
-	trace_sched_waking(p);
+		trace_sched_waking(p);
 
-	/*
-	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
-	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
-	 * in smp_cond_load_acquire() below.
-	 *
-	 * sched_ttwu_pending()			try_to_wake_up()
-	 *   STORE p->on_rq = 1			  LOAD p->state
-	 *   UNLOCK rq->lock
-	 *
-	 * __schedule() (switch to task 'p')
-	 *   LOCK rq->lock			  smp_rmb();
-	 *   smp_mb__after_spinlock();
-	 *   UNLOCK rq->lock
-	 *
-	 * [task p]
-	 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
-	 *
-	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-	 * __schedule(). See the comment for smp_mb__after_spinlock().
-	 *
-	 * A similar smb_rmb() lives in try_invoke_on_locked_down_task().
-	 */
-	smp_rmb();
-	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-		goto unlock;
+		/*
+		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
+		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
+		 * in smp_cond_load_acquire() below.
+		 *
+		 * sched_ttwu_pending()			try_to_wake_up()
+		 *   STORE p->on_rq = 1			  LOAD p->state
+		 *   UNLOCK rq->lock
+		 *
+		 * __schedule() (switch to task 'p')
+		 *   LOCK rq->lock			  smp_rmb();
+		 *   smp_mb__after_spinlock();
+		 *   UNLOCK rq->lock
+		 *
		 * [task p]
+		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
+		 *
+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
+		 * __schedule(). See the comment for smp_mb__after_spinlock().
+		 *
+		 * A similar smb_rmb() lives in try_invoke_on_locked_down_task().
+		 */
+		smp_rmb();
+		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
+			break;
 
 #ifdef CONFIG_SMP
-	/*
-	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-	 * possible to, falsely, observe p->on_cpu == 0.
-	 *
-	 * One must be running (->on_cpu == 1) in order to remove oneself
-	 * from the runqueue.
-	 *
-	 * __schedule() (switch to task 'p')	try_to_wake_up()
-	 *   STORE p->on_cpu = 1		  LOAD p->on_rq
-	 *   UNLOCK rq->lock
-	 *
-	 * __schedule() (put 'p' to sleep)
-	 *   LOCK rq->lock			  smp_rmb();
-	 *   smp_mb__after_spinlock();
-	 *   STORE p->on_rq = 0			  LOAD p->on_cpu
-	 *
-	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-	 * __schedule(). See the comment for smp_mb__after_spinlock().
-	 *
-	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-	 * schedule()'s deactivate_task() has 'happened' and p will no longer
-	 * care about it's own p->state. See the comment in __schedule().
-	 */
-	smp_acquire__after_ctrl_dep();
+		/*
+		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
+		 * possible to, falsely, observe p->on_cpu == 0.
+		 *
+		 * One must be running (->on_cpu == 1) in order to remove oneself
+		 * from the runqueue.
+		 *
+		 * __schedule() (switch to task 'p')	try_to_wake_up()
+		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
+		 *   UNLOCK rq->lock
+		 *
+		 * __schedule() (put 'p' to sleep)
+		 *   LOCK rq->lock			  smp_rmb();
+		 *   smp_mb__after_spinlock();
+		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
+		 *
+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
+		 * __schedule(). See the comment for smp_mb__after_spinlock().
+		 *
+		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
+		 * schedule()'s deactivate_task() has 'happened' and p will no longer
+		 * care about it's own p->state. See the comment in __schedule().
+		 */
+		smp_acquire__after_ctrl_dep();
 
-	/*
-	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-	 * == 0), which means we need to do an enqueue, change p->state to
-	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
-	 * enqueue, such as ttwu_queue_wakelist().
-	 */
-	WRITE_ONCE(p->__state, TASK_WAKING);
+		/*
+		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
+		 * == 0), which means we need to do an enqueue, change p->state to
+		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
+		 * enqueue, such as ttwu_queue_wakelist().
+		 */
+		WRITE_ONCE(p->__state, TASK_WAKING);
 
-	/*
-	 * If the owning (remote) CPU is still in the middle of schedule() with
-	 * this task as prev, considering queueing p on the remote CPUs wake_list
-	 * which potentially sends an IPI instead of spinning on p->on_cpu to
-	 * let the waker make forward progress. This is safe because IRQs are
-	 * disabled and the IPI will deliver after on_cpu is cleared.
-	 *
-	 * Ensure we load task_cpu(p) after p->on_cpu:
-	 *
-	 * set_task_cpu(p, cpu);
-	 *   STORE p->cpu = @cpu
-	 *
-	 * __schedule() (switch to task 'p')
-	 *   LOCK rq->lock
-	 *   smp_mb__after_spin_lock()		smp_cond_load_acquire(&p->on_cpu)
-	 *   STORE p->on_cpu = 1		LOAD p->cpu
-	 *
-	 * to ensure we observe the correct CPU on which the task is currently
-	 * scheduling.
-	 */
-	if (smp_load_acquire(&p->on_cpu) &&
-	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
-		goto unlock;
+		/*
+		 * If the owning (remote) CPU is still in the middle of schedule() with
+		 * this task as prev, considering queueing p on the remote CPUs wake_list
+		 * which potentially sends an IPI instead of spinning on p->on_cpu to
+		 * let the waker make forward progress. This is safe because IRQs are
+		 * disabled and the IPI will deliver after on_cpu is cleared.
+		 *
+		 * Ensure we load task_cpu(p) after p->on_cpu:
+		 *
+		 * set_task_cpu(p, cpu);
+		 *   STORE p->cpu = @cpu
+		 *
+		 * __schedule() (switch to task 'p')
+		 *   LOCK rq->lock
+		 *   smp_mb__after_spin_lock()		smp_cond_load_acquire(&p->on_cpu)
+		 *   STORE p->on_cpu = 1		LOAD p->cpu
+		 *
+		 * to ensure we observe the correct CPU on which the task is currently
+		 * scheduling.
+		 */
+		if (smp_load_acquire(&p->on_cpu) &&
+		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
+			break;
 
-	/*
-	 * If the owning (remote) CPU is still in the middle of schedule() with
-	 * this task as prev, wait until it's done referencing the task.
-	 *
-	 * Pairs with the smp_store_release() in finish_task().
-	 *
-	 * This ensures that tasks getting woken will be fully ordered against
-	 * their previous state and preserve Program Order.
-	 */
-	smp_cond_load_acquire(&p->on_cpu, !VAL);
+		/*
+		 * If the owning (remote) CPU is still in the middle of schedule() with
+		 * this task as prev, wait until it's done referencing the task.
+		 *
+		 * Pairs with the smp_store_release() in finish_task().
+		 *
+		 * This ensures that tasks getting woken will be fully ordered against
+		 * their previous state and preserve Program Order.
+		 */
+		smp_cond_load_acquire(&p->on_cpu, !VAL);
 
-	cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
-	if (task_cpu(p) != cpu) {
-		if (p->in_iowait) {
-			delayacct_blkio_end(p);
-			atomic_dec(&task_rq(p)->nr_iowait);
-		}
+		cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
+		if (task_cpu(p) != cpu) {
+			if (p->in_iowait) {
+				delayacct_blkio_end(p);
+				atomic_dec(&task_rq(p)->nr_iowait);
+			}
 
-		wake_flags |= WF_MIGRATED;
-		psi_ttwu_dequeue(p);
-		set_task_cpu(p, cpu);
-	}
+			wake_flags |= WF_MIGRATED;
+			psi_ttwu_dequeue(p);
+			set_task_cpu(p, cpu);
+		}
 #else
-	cpu = task_cpu(p);
+		cpu = task_cpu(p);
 #endif /* CONFIG_SMP */
 
-	ttwu_queue(p, cpu, wake_flags);
-unlock:
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+		ttwu_queue(p, cpu, wake_flags);
+	}
 out:
 	if (success)
 		ttwu_stat(p, task_cpu(p), wake_flags);
-	preempt_enable();
 
 	return success;
 }
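scoped_guard, used heavily in the ttwu() rework above, wraps the guard object in a one-iteration for loop so that `break` leaves the critical section early while the cleanup still runs the unlock; that is what lets every `goto unlock` become a plain `break`. A user-space sketch of the mechanism, with an illustrative SCOPED_GUARD macro and a pthread mutex standing in for p->pi_lock (not the kernel implementation):

```c
#include <assert.h>
#include <pthread.h>

typedef struct { pthread_mutex_t *lock; } guard_t;

static guard_t guard_ctor(pthread_mutex_t *m)
{
	pthread_mutex_lock(m);
	return (guard_t){ .lock = m };
}

static void guard_dtor(guard_t *g)
{
	pthread_mutex_unlock(g->lock);
}

/* One-iteration for loop: `scope` owns the lock for the body; `done`
 * terminates the loop after one pass. `break` exits early and still
 * triggers the cleanup -- same trick as the kernel's scoped_guard(). */
#define SCOPED_GUARD(m)							\
	for (guard_t scope __attribute__((cleanup(guard_dtor))) =	\
		guard_ctor(m), *done = NULL; !done; done = (void *)1)

static pthread_mutex_t pi_lock = PTHREAD_MUTEX_INITIALIZER;
static int success;

static int wake(int state_matches)
{
	success = 0;
	SCOPED_GUARD(&pi_lock) {
		if (!state_matches)
			break;		/* early exit, lock still released */
		success = 1;
	}
	/* reached with pi_lock released on both paths */
	return success;
}
```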

From nobody Wed Sep 10 02:46:14 2025
Message-ID: <20230801211812.168490417@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:27 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 6/9] sched: Simplify sched_exec()
References: <20230801204121.929256934@infradead.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Use guards to reduce gotos and simplify control flow.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5431,23 +5431,20 @@ unsigned int nr_iowait(void)
 void sched_exec(void)
 {
 	struct task_struct *p = current;
-	unsigned long flags;
+	struct migration_arg arg;
 	int dest_cpu;

-	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC);
-	if (dest_cpu == smp_processor_id())
-		goto unlock;
+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
+		dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC);
+		if (dest_cpu == smp_processor_id())
+			return;

-	if (likely(cpu_active(dest_cpu))) {
-		struct migration_arg arg = { p, dest_cpu };
+		if (unlikely(!cpu_active(dest_cpu)))
+			return;

-		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
-		return;
+		arg = (struct migration_arg){ p, dest_cpu };
 	}
-unlock:
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
 }

 #endif
Message-ID: <20230801211812.236247952@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:28 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org
Subject: [PATCH 7/9] sched: Simplify sched_tick_remote()
References: <20230801204121.929256934@infradead.org>

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 43 ++++++++++++++++++-------------------------
 1 file changed, 18 insertions(+), 25 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5651,9 +5651,6 @@ static void sched_tick_remote(struct wor
 	struct tick_work *twork = container_of(dwork, struct tick_work, work);
 	int cpu = twork->cpu;
 	struct rq *rq = cpu_rq(cpu);
-	struct task_struct *curr;
-	struct rq_flags rf;
-	u64 delta;
 	int os;

 	/*
@@ -5663,30 +5660,26 @@ static void sched_tick_remote(struct wor
 	 * statistics and checks timeslices in a time-independent way, regardless
 	 * of when exactly it is running.
 	 */
-	if (!tick_nohz_tick_stopped_cpu(cpu))
-		goto out_requeue;
+	if (tick_nohz_tick_stopped_cpu(cpu)) {
+		guard(rq_lock_irq)(rq);
+		struct task_struct *curr = rq->curr;
+
+		if (cpu_online(cpu)) {
+			update_rq_clock(rq);
+
+			if (!is_idle_task(curr)) {
+				/*
+				 * Make sure the next tick runs within a
+				 * reasonable amount of time.
+				 */
+				u64 delta = rq_clock_task(rq) - curr->se.exec_start;
+				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
+			}
+			curr->sched_class->task_tick(rq, curr, 0);

-	rq_lock_irq(rq, &rf);
-	curr = rq->curr;
-	if (cpu_is_offline(cpu))
-		goto out_unlock;
-
-	update_rq_clock(rq);
-
-	if (!is_idle_task(curr)) {
-		/*
-		 * Make sure the next tick runs within a reasonable
-		 * amount of time.
-		 */
-		delta = rq_clock_task(rq) - curr->se.exec_start;
-		WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
+			calc_load_nohz_remote(rq);
+		}
 	}
-	curr->sched_class->task_tick(rq, curr, 0);
-
-	calc_load_nohz_remote(rq);
-out_unlock:
-	rq_unlock_irq(rq, &rf);
-out_requeue:

 	/*
 	 * Run the remote tick once per second (1Hz). This arbitrary
Message-ID: <20230801211812.304154828@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:29 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org
Subject: [PATCH 8/9] sched: Simplify try_steal_cookie()
References: <20230801204121.929256934@infradead.org>

Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6229,19 +6229,19 @@ static bool try_steal_cookie(int this, i
 	unsigned long cookie;
 	bool success = false;

-	local_irq_disable();
-	double_rq_lock(dst, src);
+	guard(irq)();
+	guard(double_rq_lock)(dst, src);

 	cookie = dst->core->core_cookie;
 	if (!cookie)
-		goto unlock;
+		return false;

 	if (dst->curr != dst->idle)
-		goto unlock;
+		return false;

 	p = sched_core_find(src, cookie);
 	if (!p)
-		goto unlock;
+		return false;

 	do {
 		if (p == src->core_pick || p == src->curr)
@@ -6253,9 +6253,10 @@ static bool try_steal_cookie(int this, i
 		if (p->core_occupation > dst->idle->core_occupation)
 			goto next;
 		/*
-		 * sched_core_find() and sched_core_next() will ensure that task @p
-		 * is not throttled now, we also need to check whether the runqueue
-		 * of the destination CPU is being throttled.
+		 * sched_core_find() and sched_core_next() will ensure
+		 * that task @p is not throttled now, we also need to
+		 * check whether the runqueue of the destination CPU is
+		 * being throttled.
 		 */
 		if (sched_task_is_throttled(p, this))
 			goto next;
@@ -6273,10 +6274,6 @@ static bool try_steal_cookie(int this, i
 		p = sched_core_next(p, cookie);
 	} while (p);

-unlock:
-	double_rq_unlock(dst, src);
-	local_irq_enable();
-
 	return success;
 }
Message-ID: <20230801211812.371787909@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 01 Aug 2023 22:41:30 +0200
From: Peter Zijlstra
To: mingo@redhat.com
Cc: peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
 linux-kernel@vger.kernel.org
Subject: [PATCH 9/9] sched: Simplify sched_core_cpu_{starting,deactivate}()
References: <20230801204121.929256934@infradead.org>

Use guards to reduce gotos and simplify control flow.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6331,20 +6331,24 @@ static void queue_core_balance(struct rq
 	queue_balance_callback(rq, &per_cpu(core_balance_head, rq->cpu), sched_core_balance);
 }

+DEFINE_LOCK_GUARD_1(core_lock, int,
+		    sched_core_lock(*_T->lock, &_T->flags),
+		    sched_core_unlock(*_T->lock, &_T->flags),
+		    unsigned long flags)
+
 static void sched_core_cpu_starting(unsigned int cpu)
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
-	unsigned long flags;
 	int t;

-	sched_core_lock(cpu, &flags);
+	guard(core_lock)(&cpu);

 	WARN_ON_ONCE(rq->core != rq);

 	/* if we're the first, we'll be our own leader */
 	if (cpumask_weight(smt_mask) == 1)
-		goto unlock;
+		return;

 	/* find the leader */
 	for_each_cpu(t, smt_mask) {
@@ -6358,7 +6362,7 @@ static void sched_core_cpu_starting(unsi
 	}

 	if (WARN_ON_ONCE(!core_rq)) /* whoopsie */
-		goto unlock;
+		return;

 	/* install and validate core_rq */
 	for_each_cpu(t, smt_mask) {
@@ -6369,29 +6373,25 @@ static void sched_core_cpu_starting(unsi

 		WARN_ON_ONCE(rq->core != core_rq);
 	}
-
-unlock:
-	sched_core_unlock(cpu, &flags);
 }

 static void sched_core_cpu_deactivate(unsigned int cpu)
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
-	unsigned long flags;
 	int t;

-	sched_core_lock(cpu, &flags);
+	guard(core_lock)(&cpu);

 	/* if we're the last man standing, nothing to do */
 	if (cpumask_weight(smt_mask) == 1) {
 		WARN_ON_ONCE(rq->core != rq);
-		goto unlock;
+		return;
 	}

 	/* if we're not the leader, nothing to do */
 	if (rq->core != rq)
-		goto unlock;
+		return;

 	/* find a new leader */
 	for_each_cpu(t, smt_mask) {
@@ -6402,7 +6402,7 @@ static void sched_core_cpu_deactivate(un
 	}

 	if (WARN_ON_ONCE(!core_rq)) /* impossible */
-		goto unlock;
+		return;

 	/* copy the shared state to the new leader */
 	core_rq->core_task_seq = rq->core_task_seq;
@@ -6424,9 +6424,6 @@ static void sched_core_cpu_deactivate(un
 		rq = cpu_rq(t);
 		rq->core = core_rq;
 	}
-
-unlock:
-	sched_core_unlock(cpu, &flags);
 }

 static inline void sched_core_cpu_dying(unsigned int cpu)