From nobody Wed Feb 11 18:10:13 2026 Date: Tue, 11 Apr 2023 04:24:58 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-2-jstultz@google.com> Subject: [PATCH v3 01/14] locking/ww_mutex: Remove wakeups from under mutex::wait_lock From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun
Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra In preparation to nest mutex::wait_lock under rq::lock we need to remove wakeups from under it. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Connor O'Brien Signed-off-by: John Stultz --- v2: * Move wake_q_init() as suggested by Waiman Long --- include/linux/ww_mutex.h | 3 +++ kernel/locking/mutex.c | 8 ++++++++ kernel/locking/ww_mutex.h | 10 ++++++++-- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h index bb763085479a..9335b2202017 100644 --- a/include/linux/ww_mutex.h +++ b/include/linux/ww_mutex.h @@ -19,6 +19,7 @@ =20 #include #include +#include =20 #if defined(CONFIG_DEBUG_MUTEXES) || \ (defined(CONFIG_PREEMPT_RT) && defined(CONFIG_DEBUG_RT_MUTEXES)) @@ -58,6 +59,7 @@ struct ww_acquire_ctx { unsigned int acquired; unsigned short wounded; unsigned short is_wait_die; + struct wake_q_head wake_q; #ifdef DEBUG_WW_MUTEXES unsigned int done_acquire; struct ww_class *ww_class; @@ -137,6 +139,7 @@ static inline void ww_acquire_init(struct ww_acquire_ct= x *ctx, ctx->acquired =3D 0; ctx->wounded =3D false; ctx->is_wait_die =3D ww_class->is_wait_die; + wake_q_init(&ctx->wake_q); #ifdef DEBUG_WW_MUTEXES ctx->ww_class =3D ww_class; ctx->done_acquire =3D 0; diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index d973fe6041bf..1582756914df 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -676,6 +676,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas } =20 raw_spin_unlock(&lock->wait_lock); + if (ww_ctx) + ww_ctx_wake(ww_ctx); schedule_preempt_disabled(); =20 first =3D __mutex_waiter_is_first(lock, &waiter); @@ -725,6 +727,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas ww_mutex_lock_acquired(ww, ww_ctx); =20 raw_spin_unlock(&lock->wait_lock); + if (ww_ctx) + ww_ctx_wake(ww_ctx); preempt_enable(); return 0; =20 @@ -736,6 +740,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas raw_spin_unlock(&lock->wait_lock); debug_mutex_free_waiter(&waiter); mutex_release(&lock->dep_map, ip); + if (ww_ctx) + ww_ctx_wake(ww_ctx); preempt_enable(); return ret; } @@ -946,9 +952,11 @@ static noinline void __sched __mutex_unlock_slowpath(s= truct mutex *lock, unsigne if (owner & MUTEX_FLAG_HANDOFF) __mutex_handoff(lock, next); =20 + preempt_disable(); raw_spin_unlock(&lock->wait_lock); =20 wake_up_q(&wake_q); + preempt_enable(); } =20 #ifndef CONFIG_DEBUG_LOCK_ALLOC diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index 56f139201f24..e49ea5336473 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -161,6 +161,12 @@ static inline void lockdep_assert_wait_lock_held(struc= t rt_mutex *lock) =20 #endif /* WW_RT */ =20 +void ww_ctx_wake(struct ww_acquire_ctx *ww_ctx) +{ + wake_up_q(&ww_ctx->wake_q); + wake_q_init(&ww_ctx->wake_q); +} + /* * Wait-Die: * The newer transactions are killed when: @@ -284,7 +290,7 @@ 
__ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER = *waiter, #ifndef WW_RT debug_mutex_wake_waiter(lock, waiter); #endif - wake_up_process(waiter->task); + wake_q_add(&ww_ctx->wake_q, waiter->task); } =20 return true; @@ -331,7 +337,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock, * wakeup pending to re-read the wounded state. */ if (owner !=3D current) - wake_up_process(owner); + wake_q_add(&ww_ctx->wake_q, owner); =20 return true; } --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9CB0C76196 for ; Tue, 11 Apr 2023 04:25:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230114AbjDKEZd (ORCPT ); Tue, 11 Apr 2023 00:25:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40864 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230088AbjDKEZY (ORCPT ); Tue, 11 Apr 2023 00:25:24 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AE38610E for ; Mon, 10 Apr 2023 21:25:23 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id d2e1a72fcca58-632cf80572bso616520b3a.3 for ; Mon, 10 Apr 2023 21:25:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187123; x=1683779123; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=D11BkT/by/KTrics5kACR4e3C7wu73LGrBohPqnvbDM=; b=a/8h6h0U18RoU6lWNBts1y/Xzg9Z20KAdpvIJ/qhPltdInUHLLAQJQy6ixAXv1mjb1 6RhdONDB2S/hncpRejyEtOMjxGgN9UbZ4Gcj3Rpjs2J7+/qOVGU/vqWKnSz2lIgvnrU7 DQgcer+PFG7VOc6LR5J0jqwzYfqpyrdwGjpYuHF3terfFA/nfSWVvKaSOAFYYimDGr5X TUufRKic2F+HDcm80RvhbNxP6pHVdFJGRtEDjaEgYMw93ET9mCEOHNTnrnX9qCVZE430 kD0Sp7XElrnLAsgHIaBE/mJQ2BCnRLN0FwFAbc3cZPeDRYjtTzRamp7EcC0xQCex3SgY U6PQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187123; x=1683779123; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=D11BkT/by/KTrics5kACR4e3C7wu73LGrBohPqnvbDM=; b=l704giov0w+SuLsxBI12ogpY9ZZeuH7yMHZpt7JtK+/BeMDv0Va2kucE1XLLA8HGMf xT4PNd2yYLzp8zaALRiDtTOY5vjRNbdsmMRqnbd89MSjeJ8N4SyZW/s+8TxgrW5J9DX/ omVqqgnds/ZGXL9PoNItvcMuCDUFzrp11ktt9HGB/hq/+FJ9JQuaGlAKQXFrzpt8y2ca Yoo2zdiXBL5Yu/41wgNK+0uUuNlCI3kId+srwzW2BOTrqZq08Goutm38JfOrQHyRiZHl u+gdLmRrZlOaSIQ20NCBmDMjGl+GU3VR1BzGWZG00TIfy7i7mp2GL0jlKQ5aQbbYUn4g Wypg== X-Gm-Message-State: AAQBX9fQ7nv23S62Grr1Z4eJPhAu8yI6mx8Fdmyvcy6gfCV/vsBHjU4L meusCa25/+GaAYM0GNotyxxsvg+EyqcLzvZNLf42PHILZzbEnyd6+6U2v7Tlkj/Q/u2bc0AyeCY 260p/3KhrVic4DQ1Jj2nw9mhO+KPImeJeTAlFvWU0yItpeK+NHXgV7mKKXZuqFiJM7ZI6uD8= X-Google-Smtp-Source: AKy350bLfk3lZjV3uXu/HKmTd6O9B4gIlEG5TOe1lGSMQDkfVUgDSf42OlNaT7CZ3Jf3m2A9fvf0ixchCeFe X-Received: from jstultz-noogler2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a05:6a00:2387:b0:632:1d87:688b with SMTP id f7-20020a056a00238700b006321d87688bmr4479902pfc.0.1681187122894; Mon, 10 Apr 2023 21:25:22 -0700 (PDT) Date: Tue, 11 Apr 2023 04:24:59 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> 
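[For context on patch 01/14 above: it applies the kernel's standard deferred-wakeup pattern, where tasks to wake are queued on a wake_q while a raw spinlock is held and the actual wakeups happen only after the lock is dropped. A minimal standalone sketch of that pattern follows; the function and lock names are illustrative placeholders, not taken from the patch, while DEFINE_WAKE_Q(), wake_q_add() and wake_up_q() are the real wake_q primitives from <linux/sched/wake_q.h>.]

#include <linux/sched.h>
#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

/* Sketch: wake a waiter without calling wake_up_process() under the lock. */
static void example_release_and_wake(raw_spinlock_t *lock, struct task_struct *waiter)
{
	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock(lock);
	/* ... update whatever state decides who must be woken ... */
	wake_q_add(&wake_q, waiter);	/* defer: only queues the task */
	raw_spin_unlock(lock);

	wake_up_q(&wake_q);		/* actual wakeups, lock no longer held */
}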
X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-3-jstultz@google.com> Subject: [PATCH v3 02/14] locking/mutex: make mutex::wait_lock irq safe From: John Stultz To: LKML Cc: Juri Lelli , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Juri Lelli mutex::wait_lock might be nested under rq->lock. Make it irq safe then. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: Juri Lelli Signed-off-by: Peter Zijlstra (Intel) [rebase & fix {un,}lock_wait_lock helpers in ww_mutex.h] Signed-off-by: Connor O'Brien Signed-off-by: John Stultz --- v3: * Re-added this patch after it was dropped in v2 which caused lockdep warnings to trip. --- kernel/locking/mutex.c | 18 ++++++++++-------- kernel/locking/ww_mutex.h | 22 ++++++++++++---------- 2 files changed, 22 insertions(+), 18 deletions(-) diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 1582756914df..a528e7f42caa 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -572,6 +572,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas { struct mutex_waiter waiter; struct ww_mutex *ww; + unsigned long flags; int ret; =20 if (!use_ww_ctx) @@ -614,7 +615,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas return 0; } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); /* * After waiting to acquire the wait_lock, try again. 
*/ @@ -675,7 +676,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err; } =20 - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); schedule_preempt_disabled(); @@ -698,9 +699,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas trace_contention_begin(lock, LCB_F_MUTEX); } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); } - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); acquired: __set_current_state(TASK_RUNNING); =20 @@ -726,7 +727,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas if (ww_ctx) ww_mutex_lock_acquired(ww, ww_ctx); =20 - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); preempt_enable(); @@ -737,7 +738,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas __mutex_remove_waiter(lock, &waiter); err_early_kill: trace_contention_end(lock, ret); - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); debug_mutex_free_waiter(&waiter); mutex_release(&lock->dep_map, ip); if (ww_ctx) @@ -909,6 +910,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne struct task_struct *next =3D NULL; DEFINE_WAKE_Q(wake_q); unsigned long owner; + unsigned long flags; =20 mutex_release(&lock->dep_map, ip); =20 @@ -935,7 +937,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne } } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); debug_mutex_unlock(lock); if (!list_empty(&lock->wait_list)) { /* get the first entry from the wait-list: */ @@ -953,7 +955,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne __mutex_handoff(lock, next); =20 preempt_disable(); - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); =20 wake_up_q(&wake_q); preempt_enable(); diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index e49ea5336473..984a4e0bff36 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -70,14 +70,14 @@ __ww_mutex_has_waiters(struct mutex *lock) return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS; } =20 -static inline void lock_wait_lock(struct mutex *lock) +static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags) { - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, *flags); } =20 -static inline void unlock_wait_lock(struct mutex *lock) +static inline void unlock_wait_lock(struct mutex *lock, unsigned long flag= s) { - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); } =20 static inline void lockdep_assert_wait_lock_held(struct mutex *lock) @@ -144,14 +144,14 @@ __ww_mutex_has_waiters(struct rt_mutex *lock) return rt_mutex_has_waiters(&lock->rtmutex); } =20 -static inline void lock_wait_lock(struct rt_mutex *lock) +static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *fl= ags) { - raw_spin_lock(&lock->rtmutex.wait_lock); + raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags); } =20 -static inline void unlock_wait_lock(struct rt_mutex *lock) +static inline void unlock_wait_lock(struct rt_mutex *lock, flags) { - raw_spin_unlock(&lock->rtmutex.wait_lock); + 
raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, flags); } =20 static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock) @@ -383,6 +383,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_= acquire_ctx *ww_ctx) static __always_inline void ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx= *ctx) { + unsigned long flags; + ww_mutex_lock_acquired(lock, ctx); =20 /* @@ -410,9 +412,9 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, st= ruct ww_acquire_ctx *ctx) * Uh oh, we raced in fastpath, check if any of the waiters need to * die or wound us. */ - lock_wait_lock(&lock->base); + lock_wait_lock(&lock->base, &flags); __ww_mutex_check_waiters(&lock->base, ctx); - unlock_wait_lock(&lock->base); + unlock_wait_lock(&lock->base, flags); } =20 static __always_inline int --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3ECDEC7619A for ; Tue, 11 Apr 2023 04:25:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230120AbjDKEZh (ORCPT ); Tue, 11 Apr 2023 00:25:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230070AbjDKEZZ (ORCPT ); Tue, 11 Apr 2023 00:25:25 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11B9E26B6 for ; Mon, 10 Apr 2023 21:25:25 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id 41be03b00d2f7-517c573b459so251135a12.3 for ; Mon, 10 Apr 2023 21:25:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187124; x=1683779124; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=M3FRzs4MfhGJXugdp9HAHsz85BzfgozE6U/WQSP3LFk=; b=KB3ZIYaYrwBBvjA3ksjkSpF9ttsxxOVA9OXjhH2R19lbcUMG+7zgvNUTld3cp6yre+ 0FTg2UjI2DwtZ5ddhbh9eDZVc1M8q9dkkIm+ID1fMklwO78Fs0Ju517r0b6doGfLO+z7 fW4zhvTAXFZ0v+MR/beIBl65yY1CmwgbXR3TPfjHgzV7ngV/eaiZ1gHpeILTawW2eeVn BtvCg2ytG62e+EBH7CrGDkzToti0ZFmxNor5G+syuw1BTtgvPYn+Y7bD7FVzMQ+v1svO G9hyMo+Pb+LgSo+iwTwDc+FY4GNXoYKvGnqJpCNdGfLYynpngoRF8QlGxLumJXJdOlys vINA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187124; x=1683779124; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=M3FRzs4MfhGJXugdp9HAHsz85BzfgozE6U/WQSP3LFk=; b=wkD9c/p05pdCzAH5m2fdusBM+uXEgqmCnypHpfYVCQH6Vmhr2F4J+Cv1hz4v7z80rs w0Wz17rJsJ8S915NFa63L0XJBBRpnoOhnJzJhLARvU3FsOgtSjEEcjm0POS/H2bC2V0H KJEIWbj2vfCG49B1Hr4tj8ILGiAcuznMEyQJNqrGhWBUuAIGJBSh+JBIkC6ShP2li0xx aiDeUXCgSG4Gv2yoLV6t97KGGgaEGzks9fmOs1WGgwR4o/ba+74X97kjcqXL45De6C79 o+zIUXu+pfYPwCTKi898s6iFgYRWekEoX0tASYGzGtwcH5svY+joV9KnhkbhCXqOkQXe IhTw== X-Gm-Message-State: AAQBX9ftzMU1LKcFBdCkOOdlyWnthsWjARDYIoJfnargvnrXEUAUioiq FUlothCudrx40AFPeWq/cYTxOgXWcSUZA9HCcNiPh2z52+JW8Viv4sWVJipaaFv3+f8NM0h8OtB EShhClCiB4eJrtLk9k8gH6KjiElNyHocPD7Ww4EbZVvFgcfNiPmmEvwXWG9WfDPi2OYYlwlE= X-Google-Smtp-Source: AKy350ZuWJgZLZlBbnPiFA9eRlMmOrmk/C+ziCWJvtvRhX2bVLrBABSqGIagX44nMmgpvFsY1hXGeTrqQRxE X-Received: from jstultz-noogler2.c.googlers.com 
([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a05:6a00:2388:b0:626:2343:76b0 with SMTP id f8-20020a056a00238800b00626234376b0mr6116748pfc.6.1681187124493; Mon, 10 Apr 2023 21:25:24 -0700 (PDT) Date: Tue, 11 Apr 2023 04:25:00 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-4-jstultz@google.com> Subject: [PATCH v3 03/14] locking/mutex: Rework task_struct::blocked_on From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra Track the blocked-on relation for mutexes, this allows following this relation at schedule time. task | blocked-on v mutex | owner v task Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: Peter Zijlstra (Intel) [minor changes while rebasing] Signed-off-by: Juri Lelli Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Connor O'Brien [jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths] Signed-off-by: John Stultz --- v2: * Fixed blocked_on tracking in error paths that was causing crashes --- include/linux/sched.h | 5 +---- kernel/fork.c | 3 +-- kernel/locking/mutex-debug.c | 9 +++++---- kernel/locking/mutex.c | 7 +++++++ 4 files changed, 14 insertions(+), 10 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 63d242164b1a..6053c7dfb40e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1139,10 +1139,7 @@ struct task_struct { struct rt_mutex_waiter *pi_blocked_on; #endif =20 -#ifdef CONFIG_DEBUG_MUTEXES - /* Mutex deadlock detection: */ - struct mutex_waiter *blocked_on; -#endif + struct mutex *blocked_on; /* lock we're blocked on */ =20 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP int non_block_count; diff --git a/kernel/fork.c b/kernel/fork.c index 0c92f224c68c..933406f5596b 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2221,9 +2221,8 @@ static __latent_entropy struct task_struct *copy_proc= ess( lockdep_init_task(p); #endif =20 -#ifdef CONFIG_DEBUG_MUTEXES p->blocked_on =3D NULL; /* not blocked yet */ -#endif + #ifdef CONFIG_BCACHE p->sequential_io =3D 0; p->sequential_io_avg =3D 0; diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index bc8abb8549d2..7228909c3e62 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -52,17 +52,18 @@ void debug_mutex_add_waiter(struct mutex *lock, struct = mutex_waiter *waiter, { lockdep_assert_held(&lock->wait_lock); =20 - /* Mark the current thread as blocked on the lock: */ - task->blocked_on =3D waiter; + /* Current thread can't be already blocked (since it's executing!) 
*/ + DEBUG_LOCKS_WARN_ON(task->blocked_on); } =20 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *wa= iter, struct task_struct *task) { + struct mutex *blocked_on =3D READ_ONCE(task->blocked_on); + DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list)); DEBUG_LOCKS_WARN_ON(waiter->task !=3D task); - DEBUG_LOCKS_WARN_ON(task->blocked_on !=3D waiter); - task->blocked_on =3D NULL; + DEBUG_LOCKS_WARN_ON(blocked_on && blocked_on !=3D lock); =20 INIT_LIST_HEAD(&waiter->list); waiter->task =3D NULL; diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index a528e7f42caa..d7a202c35ebe 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -646,6 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err_early_kill; } =20 + current->blocked_on =3D lock; set_current_state(state); trace_contention_begin(lock, LCB_F_MUTEX); for (;;) { @@ -683,6 +684,10 @@ __mutex_lock_common(struct mutex *lock, unsigned int s= tate, unsigned int subclas =20 first =3D __mutex_waiter_is_first(lock, &waiter); =20 + /* + * Gets reset by ttwu_runnable(). + */ + current->blocked_on =3D lock; set_current_state(state); /* * Here we order against unlock; we must either see it change @@ -720,6 +725,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas debug_mutex_free_waiter(&waiter); =20 skip_wait: + current->blocked_on =3D NULL; /* got the lock - cleanup and rejoice! */ lock_acquired(&lock->dep_map, ip); trace_contention_end(lock, 0); @@ -734,6 +740,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas return 0; =20 err: + current->blocked_on =3D NULL; __set_current_state(TASK_RUNNING); __mutex_remove_waiter(lock, &waiter); err_early_kill: --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CF43BC7619A for ; Tue, 11 Apr 2023 04:25:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230128AbjDKEZl (ORCPT ); Tue, 11 Apr 2023 00:25:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40932 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230098AbjDKEZ1 (ORCPT ); Tue, 11 Apr 2023 00:25:27 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A582626BE for ; Mon, 10 Apr 2023 21:25:26 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id k7-20020a170902c40700b001a20f75cd40so4415191plk.22 for ; Mon, 10 Apr 2023 21:25:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187126; x=1683779126; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=UHVT3HqFgc9YcSUsEkMLCVBjJkF8RFumVK/cskeGmNA=; b=VbottEcWO41KpD1qtyu9FHlRCSJZdu4FW5Q/IsJbmdGTGRJB2AYl3dhfPelR3e5e02 3P+/UyK4TTP1aQbOkZkNCv7Sif65DF+O9fLt06eC6R3FsmBNHVMwUHRMYahTwaBty3kN xWsy8ND//clDiIwKJBwpOE2JEHFUbZyvqcuMPvjwwE+jwryelkb7Q7cm49E33AlGkQz8 Zb4S52tJGkUOLVhuEFwr2NknZPQx16wAjg2jMnz9B3tHs9KCZD6FiTbPbXym5RGoWPf5 6z/+yAL4MoFkFI2lD+e2A6bKdxEFHabxAuNoHY5GZwMl5O2/QNvgmCwFAF0t27QfhwRT gK1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
Date: Tue, 11 Apr 2023 04:25:01 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-5-jstultz@google.com> Subject: [PATCH v3 05/14] locking/mutex: Add task_struct::blocked_lock to serialize changes to the blocked_on state From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , Valentin Schneider , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra This patch was split out from the later "sched: Add proxy execution" patch. Adds blocked_lock to the task_struct so we can safely keep track of which tasks are blocked on us. This will be used for tracking blocked-task/mutex chains with the proxy-execution patch in a similar fashion to how priority inheritance is done with rt_mutexes. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E .
McKenney" Signed-off-by: Peter Zijlstra (Intel) [rebased, added comments and changelog] Signed-off-by: Juri Lelli [Fixed rebase conflicts] [squashed sched: Ensure blocked_on is always guarded by blocked_lock] Signed-off-by: Valentin Schneider [fix rebase conflicts, various fixes & tweaks commented inline] [squashed sched: Use rq->curr vs rq->proxy checks] Signed-off-by: Connor O'Brien [jstultz: Split out from bigger patch] Signed-off-by: John Stultz --- v2: * Split out into its own patch TODO: Still need to clarify some of the locking changes here --- include/linux/sched.h | 1 + init/init_task.c | 1 + kernel/fork.c | 1 + kernel/locking/mutex.c | 27 +++++++++++++++++++++++---- 4 files changed, 26 insertions(+), 4 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 6053c7dfb40e..2d736b6c44e9 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1140,6 +1140,7 @@ struct task_struct { #endif =20 struct mutex *blocked_on; /* lock we're blocked on */ + raw_spinlock_t blocked_lock; =20 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP int non_block_count; diff --git a/init/init_task.c b/init/init_task.c index ff6c4b9bfe6b..189ce67e9704 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -130,6 +130,7 @@ struct task_struct init_task .journal_info =3D NULL, INIT_CPU_TIMERS(init_task) .pi_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock), + .blocked_lock =3D __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock), .timer_slack_ns =3D 50000, /* 50 usec default slack */ .thread_pid =3D &init_struct_pid, .thread_group =3D LIST_HEAD_INIT(init_task.thread_group), diff --git a/kernel/fork.c b/kernel/fork.c index 933406f5596b..a0ff6d73affc 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2119,6 +2119,7 @@ static __latent_entropy struct task_struct *copy_proc= ess( ftrace_graph_init_task(p); =20 rt_mutex_init_task(p); + raw_spin_lock_init(&p->blocked_lock); =20 lockdep_assert_irqs_enabled(); #ifdef CONFIG_PROVE_LOCKING diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index d7a202c35ebe..9cb2686fb974 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -616,6 +616,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas } =20 raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); /* * After waiting to acquire the wait_lock, try again. */ @@ -677,6 +678,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err; } =20 + raw_spin_unlock(¤t->blocked_lock); raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); @@ -684,6 +686,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas =20 first =3D __mutex_waiter_is_first(lock, &waiter); =20 + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); /* * Gets reset by ttwu_runnable(). */ @@ -698,15 +702,28 @@ __mutex_lock_common(struct mutex *lock, unsigned int = state, unsigned int subclas break; =20 if (first) { + bool acquired; + + /* + * XXX connoro: mutex_optimistic_spin() can schedule, so + * we need to release these locks before calling it. + * This needs refactoring though b/c currently we take + * the locks earlier than necessary when proxy exec is + * disabled and release them unnecessarily when it's + * enabled. At a minimum, need to verify that releasing + * blocked_lock here doesn't create any races. 
+ */ + raw_spin_unlock(¤t->blocked_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN); - if (mutex_optimistic_spin(lock, ww_ctx, &waiter)) + acquired =3D mutex_optimistic_spin(lock, ww_ctx, &waiter); + raw_spin_lock_irqsave(&lock->wait_lock, flags); + raw_spin_lock(¤t->blocked_lock); + if (acquired) break; trace_contention_begin(lock, LCB_F_MUTEX); } - - raw_spin_lock_irqsave(&lock->wait_lock, flags); } - raw_spin_lock_irqsave(&lock->wait_lock, flags); acquired: __set_current_state(TASK_RUNNING); =20 @@ -733,6 +750,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas if (ww_ctx) ww_mutex_lock_acquired(ww, ww_ctx); =20 + raw_spin_unlock(¤t->blocked_lock); raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); @@ -745,6 +763,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas __mutex_remove_waiter(lock, &waiter); err_early_kill: trace_contention_end(lock, ret); + raw_spin_unlock(¤t->blocked_lock); raw_spin_unlock_irqrestore(&lock->wait_lock, flags); debug_mutex_free_waiter(&waiter); mutex_release(&lock->dep_map, ip); --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01A7FC76196 for ; Tue, 11 Apr 2023 04:25:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230153AbjDKEZx (ORCPT ); Tue, 11 Apr 2023 00:25:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230103AbjDKEZ3 (ORCPT ); Tue, 11 Apr 2023 00:25:29 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5021B2D5A for ; Mon, 10 Apr 2023 21:25:28 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id v67-20020a254846000000b00b8189f73e94so30494744yba.12 for ; Mon, 10 Apr 2023 21:25:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187127; x=1683779127; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=MXbqcM8S0eoqY1JXMQQLXkL4JKytwPxCmdsZ17HQXnE=; b=WK9/avoTybboSLoz1SBxCBBJnRPD/Fd/1to1j2hAhGaAsh+wRplwRqS4kx4v9MqJrY 7iSOkK8HXV8a8dc2GnPxpH6SqClEmMxKazCmc4zJbRyGQUb5MruIYt/wzpFfRAf6xrQS il7AC0DqyPgA+fYVr6P/St1k777N/15sA1RGQiR910rWVDV3tinIpXdyBQ8fYpEzqlx/ OUuD6e6aAcO6M/UorjnNeVZ75OZ1mFZjEbToiqb7tiycf3j5UUs1KlJKcUUiydTbkAuB wksAC9/P2sOfBQuqe3JDJylNcebdeV1VkkRKeMMkMxBX88yJseIYg9zjD3s5ks0wEzTl veFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187127; x=1683779127; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=MXbqcM8S0eoqY1JXMQQLXkL4JKytwPxCmdsZ17HQXnE=; b=WqHWyIMhBvQ4Fw/kglN/7C72gzlEmJ586YGTO8uDwN6yoPA/1t+m22ZRIieJ41j6Rt +eWdZMxxscdueS1BbhAyyMqi/w3BDLKMeGZ5IufY1wdH0w6DsioQ0klHHhfsaM1s+Ejp ZJAAjsRbf1YHvyAL0/Pe+dfSqjzUhOIDDQK0doyzf6gQVr+k+ih1F5+WxCPXmdCppkYl y9hGZYXkPNejD/S8rUmrsXGtV7e5SL+BypTVJroe0MdSwEpxAeM2U2XPZGI6rCo++vUf P/xV/tZ+7Kenx1x1Mw+LQDSq3UB2aucbMtX2JStmstRkbDpxuw1b1jDIhIgeREOPDwme zfpA== 
X-Gm-Message-State: AAQBX9fj1369Y0X9yjUtJT159s9Vwn+aYqTyYIebjDWb52LzJlnDgljS UYfLduMj47uGnCv0W2qaqUC/3/m+5UL+JH3OO0qbK0u5OHjWepuKr+QK3Leif+T1NjqKVhVa0xK +BcF+GSr/tYZezaGcWdSPxqUH7XZ0Jg08Jozq7Qtvrmy3nchcHlrT3aqoB4hhTY+dTVYuSV0= X-Google-Smtp-Source: AKy350YTwDNMNFMEMSlEf2un49G3Hi/s1Jg7Eo4F9wi9+MfJgOiKQO66YR3cMhFTxI1jfBmdVFHcBKboPBZ3 X-Received: from jstultz-noogler2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a81:4320:0:b0:541:6941:5aa8 with SMTP id q32-20020a814320000000b0054169415aa8mr5051718ywa.7.1681187127553; Mon, 10 Apr 2023 21:25:27 -0700 (PDT) Date: Tue, 11 Apr 2023 04:25:02 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-6-jstultz@google.com> Subject: [PATCH v3 05/14] locking/mutex: Add p->blocked_on wrappers From: John Stultz To: LKML Cc: Valentin Schneider , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Valentin Schneider This lets us assert p->blocked_lock is held whenever we access p->blocked_on. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: Valentin Schneider [fix conflicts, call in more places] Signed-off-by: Connor O'Brien [jstultz: tweaked commit subject, added get_task_blocked_on() as well] Signed-off-by: John Stultz --- v2: * Added get_task_blocked_on() accessor --- include/linux/sched.h | 14 ++++++++++++++ kernel/locking/mutex-debug.c | 4 ++-- kernel/locking/mutex.c | 8 ++++---- 3 files changed, 20 insertions(+), 6 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 2d736b6c44e9..9d46ca8ac221 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -2222,6 +2222,20 @@ static inline int rwlock_needbreak(rwlock_t *lock) #endif } =20 +static inline void set_task_blocked_on(struct task_struct *p, struct mutex= *m) +{ + lockdep_assert_held(&p->blocked_lock); + + p->blocked_on =3D m; +} + +static inline struct mutex *get_task_blocked_on(struct task_struct *p) +{ + lockdep_assert_held(&p->blocked_lock); + + return p->blocked_on; +} + static __always_inline bool need_resched(void) { return unlikely(tif_need_resched()); diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c index 7228909c3e62..e3cd64ae6ea4 100644 --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -53,13 +53,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct = mutex_waiter *waiter, lockdep_assert_held(&lock->wait_lock); =20 /* Current thread can't be already blocked (since it's executing!) 
*/ - DEBUG_LOCKS_WARN_ON(task->blocked_on); + DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task)); } =20 void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *wa= iter, struct task_struct *task) { - struct mutex *blocked_on =3D READ_ONCE(task->blocked_on); + struct mutex *blocked_on =3D get_task_blocked_on(task); /*XXX jstultz: dr= opped READ_ONCE here, revisit.*/ =20 DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list)); DEBUG_LOCKS_WARN_ON(waiter->task !=3D task); diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 9cb2686fb974..45f1b7519f63 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -647,7 +647,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err_early_kill; } =20 - current->blocked_on =3D lock; + set_task_blocked_on(current, lock); set_current_state(state); trace_contention_begin(lock, LCB_F_MUTEX); for (;;) { @@ -691,7 +691,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas /* * Gets reset by ttwu_runnable(). */ - current->blocked_on =3D lock; + set_task_blocked_on(current, lock); set_current_state(state); /* * Here we order against unlock; we must either see it change @@ -742,7 +742,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas debug_mutex_free_waiter(&waiter); =20 skip_wait: - current->blocked_on =3D NULL; + set_task_blocked_on(current, NULL); /* got the lock - cleanup and rejoice! */ lock_acquired(&lock->dep_map, ip); trace_contention_end(lock, 0); @@ -758,7 +758,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas return 0; =20 err: - current->blocked_on =3D NULL; + set_task_blocked_on(current, NULL); __set_current_state(TASK_RUNNING); __mutex_remove_waiter(lock, &waiter); err_early_kill: --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64B42C76196 for ; Tue, 11 Apr 2023 04:25:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230107AbjDKEZ4 (ORCPT ); Tue, 11 Apr 2023 00:25:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42048 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230129AbjDKEZv (ORCPT ); Tue, 11 Apr 2023 00:25:51 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0024C10E5 for ; Mon, 10 Apr 2023 21:25:29 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id y4-20020a253204000000b00b392ae70300so7594891yby.21 for ; Mon, 10 Apr 2023 21:25:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187129; x=1683779129; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=z7PRxTE/G+TewFJJW+aiwRGrdyvM9ejq86ABZBlhxUk=; b=PRdVbcVQdfIvRjuDqKuLiynVWdcAN7tDcV7TDQEKAG5aVZGSCUbH5Ta/p4ZVugRsVq WfuN9fGmQhTNNbzm7G2sf0T+YL9tjWRwgSFCQGFO9s5L5e5CmJk58BzV21Rm/QMGEfah /CMBT1hG5tiNi0FDGwUnyi/1uKzPFNfX/6Jmx8tVF4CVqg/CeynRI0UihrstmFo23J0l jZI0F2SPu5AOYuH2u/sOCW2mBfKvgsSSnmgFtbmR3/9w0g/SAO/EsPnTe64jmwvEk+55 jpTazCE8KWeokl17MiSSofm95JOyDOiDFw5f2lSo1iUPgg/PEDE5oaFCGeWT3MUIh1cQ ZBLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
Date: Tue, 11 Apr 2023 04:25:03 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-7-jstultz@google.com> Subject: [PATCH v3 06/14] locking/mutex: Expose mutex_owner() From: John Stultz To: LKML Cc: Juri Lelli , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , Valentin Schneider , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Juri Lelli Implementing proxy execution requires that scheduler code be able to identify the current owner of a mutex. Expose a new helper mutex_owner() for this purpose. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E .
McKenney" Signed-off-by: Juri Lelli [Removed the EXPORT_SYMBOL] Signed-off-by: Valentin Schneider Signed-off-by: Connor O'Brien [jstultz: Tweaked subject line] Signed-off-by: John Stultz --- include/linux/mutex.h | 2 ++ kernel/locking/mutex.c | 5 +++++ 2 files changed, 7 insertions(+) diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 8f226d460f51..ebdc59cb0bf6 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -118,6 +118,8 @@ do { \ extern void __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key); =20 +extern struct task_struct *mutex_owner(struct mutex *lock); + /** * mutex_is_locked - is the mutex locked * @lock: the mutex to be queried diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 45f1b7519f63..cbc34d5f4486 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -81,6 +81,11 @@ static inline struct task_struct *__mutex_owner(struct m= utex *lock) return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLA= GS); } =20 +struct task_struct *mutex_owner(struct mutex *lock) +{ + return __mutex_owner(lock); +} + static inline struct task_struct *__owner_task(unsigned long owner) { return (struct task_struct *)(owner & ~MUTEX_FLAGS); --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CAE88C7619A for ; Tue, 11 Apr 2023 04:26:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230143AbjDKE0C (ORCPT ); Tue, 11 Apr 2023 00:26:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230137AbjDKEZw (ORCPT ); Tue, 11 Apr 2023 00:25:52 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEB5426BE for ; Mon, 10 Apr 2023 21:25:31 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-54ed65e1fe0so87127487b3.18 for ; Mon, 10 Apr 2023 21:25:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187131; x=1683779131; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=TZla9n0aM3/MbVHsUXpKu5CiwnUuagyO7/qCT5IzdPE=; b=QgrPUMQ4AA3eVZQ4ghVfek3Gc3xqtPPMbZT2DjmZOsjUgJvusTMnJckJ1dCZOaQ2kF E91JWypF1eri8NsLZUfgMt8MHZLaEqjlL3yCu/oEB0GE2GZjEsH59ojPBbQzbowypI9S cDZWLJE1eOx+k1tBskmh8g6ZRbKvFNZkyHhy4RNY3nanXVOCDjIR9UUAERGLW8ZJNyve S6CEu362lgW5F3Jt9LqlwKBTwaR76lkKFfeGjS+AW6vAIJZKWSaSPrw61g4CZVQxCCVf IM+z6gFBNtSRNzU2azn1ynxSZYt8IhXeqk2a9z49pI1CrThqFtPH6WhJZ5tq3snIVTck SoWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187131; x=1683779131; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TZla9n0aM3/MbVHsUXpKu5CiwnUuagyO7/qCT5IzdPE=; b=y6h8l6MLMl4sHXMJSkmjMt6fZ3Sw1/gxI5w+oXr6n69jA8Xqudnrr0pviHC1p4BUph FOMIYjJ55Bz3pxbHhzsUrccyKEhbdgwf+A/utYH+nCzEvuLLKtQBaNQIkXbbVcyNCSR3 b3UWyJbqqQ3xRI+DXM0xX6LgCA41NPlMD4aHzzXSnU6UmxtjcsAwjD+PFWsyK1nXj923 htKuzpNLJIkEuMZ5LSE13aJlbzQzHYwLOOCSg9EIErhkDMJDdKhy1dM9wpJtHK+krpo9 
Date: Tue, 11 Apr 2023 04:25:04 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-8-jstultz@google.com> Subject: [PATCH v3 07/14] sched: Unify runtime accounting across classes From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra All classes use sched_entity::exec_start to track runtime and have copies of the exact same code around to compute runtime. Collapse all that. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E .
*/ - now =3D rq_clock_task(rq); - delta_exec =3D now - curr->se.exec_start; - if (unlikely((s64)delta_exec <=3D 0)) { + delta_exec =3D update_curr_common(rq); + if (unlikely(delta_exec <=3D 0)) { if (unlikely(dl_se->dl_yielded)) goto throttle; return; } =20 - schedstat_set(curr->stats.exec_max, - max(curr->stats.exec_max, delta_exec)); - trace_sched_stat_runtime(curr, delta_exec, 0); =20 - update_current_exec_runtime(curr, now, delta_exec); - if (dl_entity_is_special(dl_se)) return; =20 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 6986ea31c984..bea9a31c76ff 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -891,23 +891,17 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq) } #endif /* CONFIG_SMP */ =20 -/* - * Update the current task's runtime statistics. - */ -static void update_curr(struct cfs_rq *cfs_rq) +static s64 update_curr_se(struct rq *rq, struct sched_entity *curr) { - struct sched_entity *curr =3D cfs_rq->curr; - u64 now =3D rq_clock_task(rq_of(cfs_rq)); - u64 delta_exec; - - if (unlikely(!curr)) - return; + u64 now =3D rq_clock_task(rq); + s64 delta_exec; =20 delta_exec =3D now - curr->exec_start; - if (unlikely((s64)delta_exec <=3D 0)) - return; + if (unlikely(delta_exec <=3D 0)) + return delta_exec; =20 curr->exec_start =3D now; + curr->sum_exec_runtime +=3D delta_exec; =20 if (schedstat_enabled()) { struct sched_statistics *stats; @@ -917,9 +911,43 @@ static void update_curr(struct cfs_rq *cfs_rq) max(delta_exec, stats->exec_max)); } =20 - curr->sum_exec_runtime +=3D delta_exec; - schedstat_add(cfs_rq->exec_clock, delta_exec); + return delta_exec; +} + +/* + * Used by other classes to account runtime. + */ +s64 update_curr_common(struct rq *rq) +{ + struct task_struct *curr =3D rq->curr; + s64 delta_exec; =20 + delta_exec =3D update_curr_se(rq, &curr->se); + if (unlikely(delta_exec <=3D 0)) + return delta_exec; + + account_group_exec_runtime(curr, delta_exec); + cgroup_account_cputime(curr, delta_exec); + + return delta_exec; +} + +/* + * Update the current task's runtime statistics. 
+ */ +static void update_curr(struct cfs_rq *cfs_rq) +{ + struct sched_entity *curr =3D cfs_rq->curr; + s64 delta_exec; + + if (unlikely(!curr)) + return; + + delta_exec =3D update_curr_se(rq_of(cfs_rq), curr); + if (unlikely(delta_exec <=3D 0)) + return; + + schedstat_add(cfs_rq->exec_clock, delta_exec); curr->vruntime +=3D calc_delta_fair(delta_exec, curr); update_min_vruntime(cfs_rq); =20 diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 0a11f44adee5..18eb6ce60c5c 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -1046,24 +1046,17 @@ static void update_curr_rt(struct rq *rq) { struct task_struct *curr =3D rq->curr; struct sched_rt_entity *rt_se =3D &curr->rt; - u64 delta_exec; - u64 now; + s64 delta_exec; =20 if (curr->sched_class !=3D &rt_sched_class) return; =20 - now =3D rq_clock_task(rq); - delta_exec =3D now - curr->se.exec_start; - if (unlikely((s64)delta_exec <=3D 0)) + delta_exec =3D update_curr_common(rq); + if (unlikely(delta_exec < 0)) return; =20 - schedstat_set(curr->stats.exec_max, - max(curr->stats.exec_max, delta_exec)); - trace_sched_stat_runtime(curr, delta_exec, 0); =20 - update_current_exec_runtime(curr, now, delta_exec); - if (!rt_bandwidth_enabled()) return; =20 diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 3e8df6d31c1e..d18e3c3a3f40 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2166,6 +2166,8 @@ struct affinity_context { unsigned int flags; }; =20 +extern s64 update_curr_common(struct rq *rq); + struct sched_class { =20 #ifdef CONFIG_UCLAMP_TASK @@ -3238,16 +3240,6 @@ extern int sched_dynamic_mode(const char *str); extern void sched_dynamic_update(int mode); #endif =20 -static inline void update_current_exec_runtime(struct task_struct *curr, - u64 now, u64 delta_exec) -{ - curr->se.sum_exec_runtime +=3D delta_exec; - account_group_exec_runtime(curr, delta_exec); - - curr->se.exec_start =3D now; - cgroup_account_cputime(curr, delta_exec); -} - #ifdef CONFIG_SCHED_MM_CID static inline int __mm_cid_get(struct mm_struct *mm) { diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c index 85590599b4d6..7595494ceb6d 100644 --- a/kernel/sched/stop_task.c +++ b/kernel/sched/stop_task.c @@ -70,18 +70,7 @@ static void yield_task_stop(struct rq *rq) =20 static void put_prev_task_stop(struct rq *rq, struct task_struct *prev) { - struct task_struct *curr =3D rq->curr; - u64 now, delta_exec; - - now =3D rq_clock_task(rq); - delta_exec =3D now - curr->se.exec_start; - if (unlikely((s64)delta_exec < 0)) - delta_exec =3D 0; - - schedstat_set(curr->stats.exec_max, - max(curr->stats.exec_max, delta_exec)); - - update_current_exec_runtime(curr, now, delta_exec); + update_curr_common(rq); } =20 /* --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 412A7C7619A for ; Tue, 11 Apr 2023 04:26:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230208AbjDKE0P (ORCPT ); Tue, 11 Apr 2023 00:26:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230164AbjDKEZz (ORCPT ); Tue, 11 Apr 2023 00:25:55 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 1851B30E0 for ; Mon, 10 Apr 2023 21:25:34 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id pt5-20020a17090b3d0500b0023d3ffe542fso405685pjb.0 for ; Mon, 10 Apr 2023 21:25:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187133; x=1683779133; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=QGs69cFMj2ObD2ALOPqytWsX0S38G4o3J2+SZBj4hsY=; b=MKjkNKT/HRktibvKK5zHSxJWnEa9BzEKeztXPmlzrVhdwyFZn4zG66zS6GIs1o/GwH XFFiJ8AEaGFDQbkz93h0yJAH2t7YnXw56CV1aIcsKfhSaJST9lrtJM4Cler331pgbwJH 2y4esmSXiO2NcihAdP6/nIItsCWeq1OiCHkWtjmJ3X4SV7vmEp5MG5YV4iqZBCG93sr9 W/v+gcYPxWTk7e72m1+q4Kj5ad8EsqwvJmC+xZjQY5Rb2Yyaz/Ak6bYXVO7qEljBpBTK ZjsnhDhrUUlcyQInGOCSS1nvFNLtkeVZpXTthIw70wQqg+We/TKQwQSvNsC17+I1f5Tf h9wQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187133; x=1683779133; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=QGs69cFMj2ObD2ALOPqytWsX0S38G4o3J2+SZBj4hsY=; b=asOzRJh3HqPjFiCnE57lX7eiLDFGpW8Qb+YfOPHwIXip6Fzep2TgNb++6ocnlloxmM CjZ0L8msn6oZs3rez6p8SdtS7HAjf4sLVDgZMon8DGyPZUv+LP5iMBf+HoSfKCsH3Hxl IbyvH8uVAixVQw7aDb9t0LmxFDHMZHXg6NaJUNDNsmrXJEsS7GT9o764UMsxEDTNaV/e q4lARTAc1h7AdIBTFVZ460k6vhN6THYJugO8F+5y8RlHozXdpo+MKX0/ypNTzYPpcWWn XR3Yk/2VIKCh7x4v4hqGnlHmd7hwaRTJzPBMx0BDwi/O8mXD6ownzLWaSlNSbJomV3sG zNfw== X-Gm-Message-State: AAQBX9fnc6Tody1/CawNGlsyB4O7mDyV1aq1h3jtUx4KrwU5IDYWJI+v NrBtPTnqygRBHJerYa+J/G2NIikzsp2CsVL7WCWVYYqkI6/g5nQqtqCEbgxDkwlHGsK0LnCZr9K uVuZ+HHQeD6srLtUSc2e/cvQOxHJMUc8pLduRP8ZoKsX6+4LZ4NWA8dlx72YcuM4zVfKTzB8= X-Google-Smtp-Source: AKy350btluCAZTYX0dd8uuf76sLKo0r+yZOkBE8lmKkQ7zE9n2HVN0u068LNQqi4ADsDCsWLXG23Yc/77QRC X-Received: from jstultz-noogler2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a17:90a:f98f:b0:246:d7d1:69ff with SMTP id cq15-20020a17090af98f00b00246d7d169ffmr499447pjb.1.1681187132868; Mon, 10 Apr 2023 21:25:32 -0700 (PDT) Date: Tue, 11 Apr 2023 04:25:05 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-9-jstultz@google.com> Subject: [PATCH v3 08/14] sched: Replace rq->curr access w/ rq_curr(rq) From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In preparing for proxy-execution changes add a bit of indirection for reading and writing rq->curr. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . 
McKenney" Signed-off-by: John Stultz --- v3: * Build fixups Reported-by: kernel test robot https://lore.kernel.org/oe-kbuild-all/202303211827.IXnKJ5rO-lkp@intel.com/ * Fix missed rq->curr references in comments * Tweaked wrapper names --- kernel/sched/core.c | 56 ++++++++++++++++++++------------------- kernel/sched/core_sched.c | 2 +- kernel/sched/cputime.c | 4 +-- kernel/sched/deadline.c | 50 +++++++++++++++++----------------- kernel/sched/debug.c | 2 +- kernel/sched/fair.c | 25 ++++++++--------- kernel/sched/idle.c | 4 +-- kernel/sched/membarrier.c | 22 +++++++-------- kernel/sched/pelt.h | 2 +- kernel/sched/rt.c | 44 +++++++++++++++--------------- kernel/sched/sched.h | 46 +++++++++++++++++++++++++++----- 11 files changed, 147 insertions(+), 110 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 0d18c3969f90..969256189da0 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -257,7 +257,7 @@ void sched_core_dequeue(struct rq *rq, struct task_stru= ct *p, int flags) * and re-examine whether the core is still in forced idle state. */ if (!(flags & DEQUEUE_SAVE) && rq->nr_running =3D=3D 1 && - rq->core->core_forceidle_count && rq->curr =3D=3D rq->idle) + rq->core->core_forceidle_count && rq_curr(rq) =3D=3D rq->idle) resched_curr(rq); } =20 @@ -703,7 +703,7 @@ static void update_rq_clock_task(struct rq *rq, s64 del= ta) =20 rq->prev_irq_time +=3D irq_delta; delta -=3D irq_delta; - psi_account_irqtime(rq->curr, irq_delta); + psi_account_irqtime(rq_curr(rq), irq_delta); #endif #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING if (static_key_false((¶virt_steal_rq_enabled))) { @@ -773,7 +773,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *time= r) =20 rq_lock(rq, &rf); update_rq_clock(rq); - rq->curr->sched_class->task_tick(rq, rq->curr, 1); + rq_curr(rq)->sched_class->task_tick(rq, rq_curr(rq), 1); rq_unlock(rq, &rf); =20 return HRTIMER_NORESTART; @@ -1020,7 +1020,7 @@ void wake_up_q(struct wake_q_head *head) */ void resched_curr(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); int cpu; =20 lockdep_assert_rq_held(rq); @@ -2178,16 +2178,18 @@ static inline void check_class_changed(struct rq *r= q, struct task_struct *p, =20 void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags) { - if (p->sched_class =3D=3D rq->curr->sched_class) - rq->curr->sched_class->check_preempt_curr(rq, p, flags); - else if (sched_class_above(p->sched_class, rq->curr->sched_class)) + struct task_struct *curr =3D rq_curr(rq); + + if (p->sched_class =3D=3D curr->sched_class) + curr->sched_class->check_preempt_curr(rq, p, flags); + else if (sched_class_above(p->sched_class, curr->sched_class)) resched_curr(rq); =20 /* * A queue event has occurred, and we're going to schedule. In * this case, we can save a useless back to back clock update. */ - if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr)) + if (task_on_rq_queued(curr) && test_tsk_need_resched(curr)) rq_clock_skip_update(rq); } =20 @@ -3862,11 +3864,11 @@ void wake_up_if_idle(int cpu) =20 rcu_read_lock(); =20 - if (!is_idle_task(rcu_dereference(rq->curr))) + if (!is_idle_task(rq_curr_rcu(rq))) goto out; =20 rq_lock_irqsave(rq, &rf); - if (is_idle_task(rq->curr)) + if (is_idle_task(rq_curr(rq))) resched_curr(rq); /* Else CPU is not idle, do nothing here: */ rq_unlock_irqrestore(rq, &rf); @@ -4391,7 +4393,7 @@ struct task_struct *cpu_curr_snapshot(int cpu) struct task_struct *t; =20 smp_mb(); /* Pairing determined by caller's synchronization design. 
*/ - t =3D rcu_dereference(cpu_curr(cpu)); + t =3D cpu_curr_rcu(cpu); smp_mb(); /* Pairing determined by caller's synchronization design. */ return t; } @@ -5200,7 +5202,7 @@ static struct rq *finish_task_switch(struct task_stru= ct *prev) * kernel thread and not issued an IPI. It is therefore possible to * schedule between user->kernel->user threads without passing though * switch_mm(). Membarrier requires a barrier after storing to - * rq->curr, before returning to userspace, so provide them here: + * rq_curr(rq), before returning to userspace, so provide them here: * * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly * provided by mmdrop(), @@ -5283,7 +5285,7 @@ context_switch(struct rq *rq, struct task_struct *pre= v, membarrier_switch_mm(rq, prev->active_mm, next->mm); /* * sys_membarrier() requires an smp_mb() between setting - * rq->curr / membarrier_switch_mm() and returning to userspace. + * rq_curr(rq) / membarrier_switch_mm() and returning to userspace. * * The below provides this either through switch_mm(), or in * case 'prev->active_mm =3D=3D next->mm' through @@ -5567,7 +5569,7 @@ void scheduler_tick(void) { int cpu =3D smp_processor_id(); struct rq *rq =3D cpu_rq(cpu); - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); struct rq_flags rf; unsigned long thermal_pressure; u64 resched_latency; @@ -5660,7 +5662,7 @@ static void sched_tick_remote(struct work_struct *wor= k) goto out_requeue; =20 rq_lock_irq(rq, &rf); - curr =3D rq->curr; + curr =3D rq_curr(rq); if (cpu_is_offline(cpu)) goto out_unlock; =20 @@ -6204,7 +6206,7 @@ pick_next_task(struct rq *rq, struct task_struct *pre= v, struct rq_flags *rf) /* Did we break L1TF mitigation requirements? */ WARN_ON_ONCE(!cookie_match(next, rq_i->core_pick)); =20 - if (rq_i->curr =3D=3D rq_i->core_pick) { + if (rq_curr(rq_i) =3D=3D rq_i->core_pick) { rq_i->core_pick =3D NULL; continue; } @@ -6235,7 +6237,7 @@ static bool try_steal_cookie(int this, int that) if (!cookie) goto unlock; =20 - if (dst->curr !=3D dst->idle) + if (rq_curr(dst) !=3D dst->idle) goto unlock; =20 p =3D sched_core_find(src, cookie); @@ -6243,7 +6245,7 @@ static bool try_steal_cookie(int this, int that) goto unlock; =20 do { - if (p =3D=3D src->core_pick || p =3D=3D src->curr) + if (p =3D=3D src->core_pick || p =3D=3D rq_curr(src)) goto next; =20 if (!is_cpu_allowed(p, this)) @@ -6514,7 +6516,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) =20 cpu =3D smp_processor_id(); rq =3D cpu_rq(cpu); - prev =3D rq->curr; + prev =3D rq_curr(rq); =20 schedule_debug(prev, !!sched_mode); =20 @@ -6537,7 +6539,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) * if (signal_pending_state()) if (p->state & @state) * * Also, the membarrier system call requires a full memory barrier - * after coming from user-space, before storing to rq->curr. + * after coming from user-space, before storing to rq_curr(). */ rq_lock(rq, &rf); smp_mb__after_spinlock(); @@ -6596,14 +6598,14 @@ static void __sched notrace __schedule(unsigned int= sched_mode) if (likely(prev !=3D next)) { rq->nr_switches++; /* - * RCU users of rcu_dereference(rq->curr) may not see + * RCU users of rq_curr_rcu(rq) may not see * changes to task_struct made by pick_next_task(). */ - RCU_INIT_POINTER(rq->curr, next); + rq_set_curr_rcu_init(rq, next); /* * The membarrier system call requires each architecture * to have a full memory barrier after updating - * rq->curr, before returning to user-space. 
+ * rq_curr(rq), before returning to user-space. * * Here are the schemes providing that barrier on the * various architectures: @@ -7040,7 +7042,7 @@ void rt_mutex_setprio(struct task_struct *p, struct t= ask_struct *pi_task) * real need to boost. */ if (unlikely(p =3D=3D rq->idle)) { - WARN_ON(p !=3D rq->curr); + WARN_ON(p !=3D rq_curr(rq)); WARN_ON(p->pi_blocked_on); goto out_unlock; } @@ -7256,7 +7258,7 @@ int idle_cpu(int cpu) { struct rq *rq =3D cpu_rq(cpu); =20 - if (rq->curr !=3D rq->idle) + if (rq_curr(rq) !=3D rq->idle) return 0; =20 if (rq->nr_running) @@ -9157,7 +9159,7 @@ void __init init_idle(struct task_struct *idle, int c= pu) rcu_read_unlock(); =20 rq->idle =3D idle; - rcu_assign_pointer(rq->curr, idle); + rq_set_curr(rq, idle); idle->on_rq =3D TASK_ON_RQ_QUEUED; #ifdef CONFIG_SMP idle->on_cpu =3D 1; @@ -9331,7 +9333,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, push_work= ); */ static void balance_push(struct rq *rq) { - struct task_struct *push_task =3D rq->curr; + struct task_struct *push_task =3D rq_curr(rq); =20 lockdep_assert_rq_held(rq); =20 diff --git a/kernel/sched/core_sched.c b/kernel/sched/core_sched.c index a57fd8f27498..ece2157a265d 100644 --- a/kernel/sched/core_sched.c +++ b/kernel/sched/core_sched.c @@ -273,7 +273,7 @@ void __sched_core_account_forceidle(struct rq *rq) =20 for_each_cpu(i, smt_mask) { rq_i =3D cpu_rq(i); - p =3D rq_i->core_pick ?: rq_i->curr; + p =3D rq_i->core_pick ?: rq_curr(rq_i); =20 if (p =3D=3D rq_i->idle) continue; diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index af7952f12e6c..83a653d47d22 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -994,7 +994,7 @@ u64 kcpustat_field(struct kernel_cpustat *kcpustat, struct task_struct *curr; =20 rcu_read_lock(); - curr =3D rcu_dereference(rq->curr); + curr =3D rq_curr_rcu(rq); if (WARN_ON_ONCE(!curr)) { rcu_read_unlock(); return cpustat[usage]; @@ -1081,7 +1081,7 @@ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, i= nt cpu) struct task_struct *curr; =20 rcu_read_lock(); - curr =3D rcu_dereference(rq->curr); + curr =3D rq_curr_rcu(rq); if (WARN_ON_ONCE(!curr)) { rcu_read_unlock(); *dst =3D *src; diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 5a7c4edd5b13..a8296d38b066 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrti= mer *timer) #endif =20 enqueue_task_dl(rq, p, ENQUEUE_REPLENISH); - if (dl_task(rq->curr)) + if (dl_task(rq_curr(rq))) check_preempt_curr_dl(rq, p, 0); else resched_curr(rq); @@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, str= uct sched_dl_entity *dl_se) */ static void update_curr_dl(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); struct sched_dl_entity *dl_se =3D &curr->dl; s64 delta_exec, scaled_delta_exec; int cpu =3D cpu_of(rq); @@ -1792,7 +1792,7 @@ static void yield_task_dl(struct rq *rq) * it and the bandwidth timer will wake it up and will give it * new scheduling parameters (thanks to dl_yielded=3D1). 
*/ - rq->curr->dl.dl_yielded =3D 1; + rq_curr(rq)->dl.dl_yielded =3D 1; =20 update_rq_clock(rq); update_curr_dl(rq); @@ -1829,7 +1829,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int= flags) rq =3D cpu_rq(cpu); =20 rcu_read_lock(); - curr =3D READ_ONCE(rq->curr); /* unlocked access */ + curr =3D rq_curr_once(rq); =20 /* * If we are dealing with a -deadline task, we must @@ -1904,8 +1904,8 @@ static void check_preempt_equal_dl(struct rq *rq, str= uct task_struct *p) * Current can't be migrated, useless to reschedule, * let's hope p can move out. */ - if (rq->curr->nr_cpus_allowed =3D=3D 1 || - !cpudl_find(&rq->rd->cpudl, rq->curr, NULL)) + if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || + !cpudl_find(&rq->rd->cpudl, rq_curr(rq), NULL)) return; =20 /* @@ -1944,7 +1944,7 @@ static int balance_dl(struct rq *rq, struct task_stru= ct *p, struct rq_flags *rf) static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, int flags) { - if (dl_entity_preempt(&p->dl, &rq->curr->dl)) { + if (dl_entity_preempt(&p->dl, &rq_curr(rq)->dl)) { resched_curr(rq); return; } @@ -1954,8 +1954,8 @@ static void check_preempt_curr_dl(struct rq *rq, stru= ct task_struct *p, * In the unlikely case current and p have the same deadline * let us try to decide what's the best thing to do... */ - if ((p->dl.deadline =3D=3D rq->curr->dl.deadline) && - !test_tsk_need_resched(rq->curr)) + if ((p->dl.deadline =3D=3D rq_curr(rq)->dl.deadline) && + !test_tsk_need_resched(rq_curr(rq))) check_preempt_equal_dl(rq, p); #endif /* CONFIG_SMP */ } @@ -1989,7 +1989,7 @@ static void set_next_task_dl(struct rq *rq, struct ta= sk_struct *p, bool first) if (hrtick_enabled_dl(rq)) start_hrtick_dl(rq, p); =20 - if (rq->curr->sched_class !=3D &dl_sched_class) + if (rq_curr(rq)->sched_class !=3D &dl_sched_class) update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0); =20 deadline_queue_push_tasks(rq); @@ -2301,13 +2301,13 @@ static int push_dl_task(struct rq *rq) =20 retry: /* - * If next_task preempts rq->curr, and rq->curr + * If next_task preempts rq_curr(rq), and rq_curr(rq) * can move away, it makes sense to just reschedule * without going further in pushing next_task. */ - if (dl_task(rq->curr) && - dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) && - rq->curr->nr_cpus_allowed > 1) { + if (dl_task(rq_curr(rq)) && + dl_time_before(next_task->dl.deadline, rq_curr(rq)->dl.deadline) && + rq_curr(rq)->nr_cpus_allowed > 1) { resched_curr(rq); return 0; } @@ -2315,7 +2315,7 @@ static int push_dl_task(struct rq *rq) if (is_migration_disabled(next_task)) return 0; =20 - if (WARN_ON(next_task =3D=3D rq->curr)) + if (WARN_ON(next_task =3D=3D rq_curr(rq))) return 0; =20 /* We might release rq lock */ @@ -2423,7 +2423,7 @@ static void pull_dl_task(struct rq *this_rq) */ if (p && dl_time_before(p->dl.deadline, dmin) && dl_task_is_earliest_deadline(p, this_rq)) { - WARN_ON(p =3D=3D src_rq->curr); + WARN_ON(p =3D=3D rq_curr(src_rq)); WARN_ON(!task_on_rq_queued(p)); =20 /* @@ -2431,7 +2431,7 @@ static void pull_dl_task(struct rq *this_rq) * deadline than the current task of its runqueue. 
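The select_task_rq_dl() hunk above swaps the open-coded READ_ONCE(rq->curr) "unlocked access" for rq_curr_once(). As a rough, compilable illustration of why a single tear-free load is all that is needed there (user-space stand-ins only; the struct definitions and the cpu_looks_busy() helper are invented for this sketch, not kernel code): the value is only a placement hint and may already be stale by the time it is used, and the eventual enqueue/preemption checks run under the rq lock anyway.

#include <stdio.h>

/* Stub types standing in for the kernel's task_struct and rq. */
struct task_struct { int prio; };
struct rq { struct task_struct *curr_exec; };

/*
 * Models rq_curr_once(): one tear-free load (READ_ONCE() in the kernel),
 * with no locking and no promise that the task stays current.
 */
static inline struct task_struct *rq_curr_once(struct rq *rq)
{
	return *(struct task_struct * volatile *)&rq->curr_exec;
}

/*
 * Wakeup-placement style use: the answer is only a hint ("does this CPU
 * look busy with higher-priority work?"), so a possibly stale read is
 * acceptable.
 */
static int cpu_looks_busy(struct rq *rq, int wakee_prio)
{
	struct task_struct *curr = rq_curr_once(rq);

	return curr && curr->prio <= wakee_prio;
}

int main(void)
{
	struct task_struct t = { .prio = 10 };
	struct rq rq = { .curr_exec = &t };

	printf("busy: %d\n", cpu_looks_busy(&rq, 20));
	return 0;
}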
*/ if (dl_time_before(p->dl.deadline, - src_rq->curr->dl.deadline)) + rq_curr(src_rq)->dl.deadline)) goto skip; =20 if (is_migration_disabled(p)) { @@ -2468,11 +2468,11 @@ static void pull_dl_task(struct rq *this_rq) static void task_woken_dl(struct rq *rq, struct task_struct *p) { if (!task_on_cpu(rq, p) && - !test_tsk_need_resched(rq->curr) && + !test_tsk_need_resched(rq_curr(rq)) && p->nr_cpus_allowed > 1 && - dl_task(rq->curr) && - (rq->curr->nr_cpus_allowed < 2 || - !dl_entity_preempt(&p->dl, &rq->curr->dl))) { + dl_task(rq_curr(rq)) && + (rq_curr(rq)->nr_cpus_allowed < 2 || + !dl_entity_preempt(&p->dl, &rq_curr(rq)->dl))) { push_dl_tasks(rq); } } @@ -2635,12 +2635,12 @@ static void switched_to_dl(struct rq *rq, struct ta= sk_struct *p) return; } =20 - if (rq->curr !=3D p) { + if (rq_curr(rq) !=3D p) { #ifdef CONFIG_SMP if (p->nr_cpus_allowed > 1 && rq->dl.overloaded) deadline_queue_push_tasks(rq); #endif - if (dl_task(rq->curr)) + if (dl_task(rq_curr(rq))) check_preempt_curr_dl(rq, p, 0); else resched_curr(rq); @@ -2684,8 +2684,8 @@ static void prio_changed_dl(struct rq *rq, struct tas= k_struct *p, * * Otherwise, if p was given an earlier deadline, reschedule. */ - if (!dl_task(rq->curr) || - dl_time_before(p->dl.deadline, rq->curr->dl.deadline)) + if (!dl_task(rq_curr(rq)) || + dl_time_before(p->dl.deadline, rq_curr(rq)->dl.deadline)) resched_curr(rq); } #else diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 1637b65ba07a..55f57156502d 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -743,7 +743,7 @@ do { \ P(nr_switches); P(nr_uninterruptible); PN(next_balance); - SEQ_printf(m, " .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq->curr= ))); + SEQ_printf(m, " .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq_curr(= rq)))); PN(clock); PN(clock_task); #undef P diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index bea9a31c76ff..9295e85ab83b 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_e= ntity *curr) */ s64 update_curr_common(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); s64 delta_exec; =20 delta_exec =3D update_curr_se(rq, &curr->se); @@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq) =20 static void update_curr_fair(struct rq *rq) { - update_curr(cfs_rq_of(&rq->curr->se)); + update_curr(cfs_rq_of(&rq_curr(rq)->se)); } =20 static inline void @@ -1958,7 +1958,7 @@ static bool task_numa_compare(struct task_numa_env *e= nv, return false; =20 rcu_read_lock(); - cur =3D rcu_dereference(dst_rq->curr); + cur =3D rcu_dereference(rq_curr(dst_rq)); if (cur && ((cur->flags & PF_EXITING) || is_idle_task(cur))) cur =3D NULL; =20 @@ -2747,7 +2747,7 @@ static void task_numa_group(struct task_struct *p, in= t cpupid, int flags, } =20 rcu_read_lock(); - tsk =3D READ_ONCE(cpu_rq(cpu)->curr); + tsk =3D READ_ONCE(cpu_curr(cpu)); =20 if (!cpupid_match_pid(tsk, cpupid)) goto no_join; @@ -3969,7 +3969,7 @@ static inline void migrate_se_pelt_lag(struct sched_e= ntity *se) rq =3D rq_of(cfs_rq); =20 rcu_read_lock(); - is_idle =3D is_idle_task(rcu_dereference(rq->curr)); + is_idle =3D is_idle_task(rq_curr_rcu(rq)); rcu_read_unlock(); =20 /* @@ -5534,7 +5534,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) assert_list_leaf_cfs_rq(rq); =20 /* Determine whether we need to wake up potentially idle CPU: */ - if (rq->curr =3D=3D rq->idle && rq->cfs.nr_running) + if (rq_curr(rq) =3D=3D rq->idle && rq->cfs.nr_running) 
resched_curr(rq); } =20 @@ -6184,7 +6184,7 @@ static void hrtick_start_fair(struct rq *rq, struct t= ask_struct *p) */ static void hrtick_update(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); =20 if (!hrtick_enabled_fair(rq) || curr->sched_class !=3D &fair_sched_class) return; @@ -7821,7 +7821,7 @@ static void set_skip_buddy(struct sched_entity *se) */ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int= wake_flags) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); struct sched_entity *se =3D &curr->se, *pse =3D &p->se; struct cfs_rq *cfs_rq =3D task_cfs_rq(curr); int scale =3D cfs_rq->nr_running >=3D sched_nr_latency; @@ -8119,7 +8119,7 @@ static void put_prev_task_fair(struct rq *rq, struct = task_struct *prev) */ static void yield_task_fair(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); struct cfs_rq *cfs_rq =3D task_cfs_rq(curr); struct sched_entity *se =3D &curr->se; =20 @@ -8854,7 +8854,7 @@ static bool __update_blocked_others(struct rq *rq, bo= ol *done) * update_load_avg() can call cpufreq_update_util(). Make sure that RT, * DL and IRQ signals have been updated before updating CFS. */ - curr_class =3D rq->curr->sched_class; + curr_class =3D rq_curr(rq)->sched_class; =20 thermal_pressure =3D arch_scale_thermal_pressure(cpu_of(rq)); =20 @@ -9673,8 +9673,9 @@ static unsigned int task_running_on_cpu(int cpu, stru= ct task_struct *p) static int idle_cpu_without(int cpu, struct task_struct *p) { struct rq *rq =3D cpu_rq(cpu); + struct task_struct *curr =3D rq_curr(rq); =20 - if (rq->curr !=3D rq->idle && rq->curr !=3D p) + if (curr !=3D rq->idle && curr !=3D p) return 0; =20 /* @@ -10872,7 +10873,7 @@ static int load_balance(int this_cpu, struct rq *th= is_rq, * if the curr task on busiest CPU can't be * moved to this_cpu: */ - if (!cpumask_test_cpu(this_cpu, busiest->curr->cpus_ptr)) { + if (!cpumask_test_cpu(this_cpu, rq_curr(busiest)->cpus_ptr)) { raw_spin_rq_unlock_irqrestore(busiest, flags); goto out_one_pinned; } diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c index e9ef66be2870..8b8b6214d7b7 100644 --- a/kernel/sched/idle.c +++ b/kernel/sched/idle.c @@ -246,8 +246,8 @@ static void do_idle(void) /* * If the arch has a polling bit, we maintain an invariant: * - * Our polling bit is clear if we're not scheduled (i.e. if rq->curr !=3D - * rq->idle). This means that, if rq->idle has the polling bit set, + * Our polling bit is clear if we're not scheduled (i.e. if rq_curr(rq) + * !=3D rq->idle). This means that, if rq->idle has the polling bit set, * then setting need_resched is guaranteed to cause the CPU to * reschedule. */ diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c index 2ad881d07752..761044fb3422 100644 --- a/kernel/sched/membarrier.c +++ b/kernel/sched/membarrier.c @@ -86,7 +86,7 @@ * membarrier(): * a: smp_mb() * d: switch to kthread (include= s mb) - * b: read rq->curr->mm =3D=3D NULL + * b: read rq_curr(rq)->mm =3D=3D NULL * e: switch to user (includes m= b) * c: smp_mb() * @@ -108,7 +108,7 @@ * exit_mm(): * d: smp_mb() * e: current->mm =3D NULL - * b: read rq->curr->mm =3D=3D NULL + * b: read rq_curr(rq)->mm =3D=3D NULL * c: smp_mb() * * Using scenario (B), we can show that (c) needs to be paired with (d). 
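The membarrier scenario comments above describe how the smp_mb() calls in the membarrier() paths pair with the barriers around the rq->curr store (now rq_set_curr()) in the scheduler. As a rough user-space model of the per-CPU scan those scenarios reason about (stub types, no real RCU or barriers, and an invented cpus_running_mm() helper; only the shape matches the kernel code converted in the following hunks):

#include <stdio.h>

/* Stubs for the kernel types and primitives referenced in the scenarios. */
struct mm_struct { int id; };
struct task_struct { struct mm_struct *mm; };
struct rq { struct task_struct *curr_exec; };

#define NR_CPUS 4
static struct rq runqueues[NR_CPUS];

static void smp_mb(void)          { /* kernel: full memory barrier */ }
static void rcu_read_lock(void)   { /* kernel: RCU read-side critical section */ }
static void rcu_read_unlock(void) { }

/* Models cpu_curr_rcu(); the kernel helper wraps rcu_dereference(). */
static struct task_struct *cpu_curr_rcu(int cpu)
{
	return runqueues[cpu].curr_exec;
}

/*
 * Shape of the expedited membarrier scan: the smp_mb() calls pair with
 * the barriers around the scheduler's rq_set_curr() store, as the
 * scenario comments above describe.
 */
static int cpus_running_mm(struct mm_struct *mm)
{
	int cpu, count = 0;

	smp_mb();	/* system call entry is not a barrier */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		struct task_struct *p;

		rcu_read_lock();
		p = cpu_curr_rcu(cpu);
		if (p && p->mm == mm)
			count++;	/* kernel: mark the CPU, then send an IPI */
		rcu_read_unlock();
	}
	smp_mb();	/* exit from system call is not a barrier */
	return count;
}

int main(void)
{
	struct mm_struct mm = { .id = 1 };
	struct task_struct t = { .mm = &mm };

	runqueues[0].curr_exec = &t;
	printf("%d CPU(s) running this mm\n", cpus_running_mm(&mm));
	return 0;
}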
@@ -122,7 +122,7 @@ * kthread_unuse_mm() * d: smp_mb() * e: current->mm =3D NULL - * b: read rq->curr->mm =3D=3D NULL + * b: read rq_curr(rq)->mm =3D=3D NULL * kthread_use_mm() * f: current->mm =3D mm * g: smp_mb() @@ -251,7 +251,7 @@ static int membarrier_global_expedited(void) return 0; =20 /* - * Matches memory barriers around rq->curr modification in + * Matches memory barriers around rq_set_curr() in * scheduler. */ smp_mb(); /* system call entry is not a mb. */ @@ -283,7 +283,7 @@ static int membarrier_global_expedited(void) * Skip the CPU if it runs a kernel thread which is not using * a task mm. */ - p =3D rcu_dereference(cpu_rq(cpu)->curr); + p =3D cpu_curr_rcu(cpu); if (!p->mm) continue; =20 @@ -301,7 +301,7 @@ static int membarrier_global_expedited(void) /* * Memory barrier on the caller thread _after_ we finished * waiting for the last IPI. Matches memory barriers around - * rq->curr modification in scheduler. + * rq_set_curr() in scheduler. */ smp_mb(); /* exit from system call is not a mb */ return 0; @@ -339,7 +339,7 @@ static int membarrier_private_expedited(int flags, int = cpu_id) return 0; =20 /* - * Matches memory barriers around rq->curr modification in + * Matches memory barriers around rq_set_curr() in * scheduler. */ smp_mb(); /* system call entry is not a mb. */ @@ -355,7 +355,7 @@ static int membarrier_private_expedited(int flags, int = cpu_id) if (cpu_id >=3D nr_cpu_ids || !cpu_online(cpu_id)) goto out; rcu_read_lock(); - p =3D rcu_dereference(cpu_rq(cpu_id)->curr); + p =3D cpu_curr_rcu(cpu_id); if (!p || p->mm !=3D mm) { rcu_read_unlock(); goto out; @@ -368,7 +368,7 @@ static int membarrier_private_expedited(int flags, int = cpu_id) for_each_online_cpu(cpu) { struct task_struct *p; =20 - p =3D rcu_dereference(cpu_rq(cpu)->curr); + p =3D cpu_curr_rcu(cpu); if (p && p->mm =3D=3D mm) __cpumask_set_cpu(cpu, tmpmask); } @@ -416,7 +416,7 @@ static int membarrier_private_expedited(int flags, int = cpu_id) /* * Memory barrier on the caller thread _after_ we finished * waiting for the last IPI. Matches memory barriers around - * rq->curr modification in scheduler. + * rq_set_curr() in scheduler. */ smp_mb(); /* exit from system call is not a mb */ =20 @@ -466,7 +466,7 @@ static int sync_runqueues_membarrier_state(struct mm_st= ruct *mm) struct rq *rq =3D cpu_rq(cpu); struct task_struct *p; =20 - p =3D rcu_dereference(rq->curr); + p =3D rq_curr_rcu(rq); if (p && p->mm =3D=3D mm) __cpumask_set_cpu(cpu, tmpmask); } diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h index 3a0e0dc28721..bf3276f8df78 100644 --- a/kernel/sched/pelt.h +++ b/kernel/sched/pelt.h @@ -94,7 +94,7 @@ static inline void _update_idle_rq_clock_pelt(struct rq *= rq) */ static inline void update_rq_clock_pelt(struct rq *rq, s64 delta) { - if (unlikely(is_idle_task(rq->curr))) { + if (unlikely(is_idle_task(rq_curr(rq)))) { _update_idle_rq_clock_pelt(rq); return; } diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 18eb6ce60c5c..ecd53be8a6e5 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *r= t_se, unsigned int flags) =20 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq) { - struct task_struct *curr =3D rq_of_rt_rq(rt_rq)->curr; + struct task_struct *curr =3D rq_curr(rq_of_rt_rq(rt_rq)); struct rq *rq =3D rq_of_rt_rq(rt_rq); struct sched_rt_entity *rt_se; =20 @@ -958,7 +958,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth= *rt_b, int overrun) * and this unthrottle will get accounted as * 'runtime'. 
*/ - if (rt_rq->rt_nr_running && rq->curr =3D=3D rq->idle) + if (rt_rq->rt_nr_running && rq_curr(rq) =3D=3D rq->idle) rq_clock_cancel_skipupdate(rq); } if (rt_rq->rt_time || rt_rq->rt_nr_running) @@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt= _rq) */ static void update_curr_rt(struct rq *rq) { - struct task_struct *curr =3D rq->curr; + struct task_struct *curr =3D rq_curr(rq); struct sched_rt_entity *rt_se =3D &curr->rt; s64 delta_exec; =20 @@ -1582,7 +1582,7 @@ static void requeue_task_rt(struct rq *rq, struct tas= k_struct *p, int head) =20 static void yield_task_rt(struct rq *rq) { - requeue_task_rt(rq, rq->curr, 0); + requeue_task_rt(rq, rq_curr(rq), 0); } =20 #ifdef CONFIG_SMP @@ -1602,7 +1602,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int= flags) rq =3D cpu_rq(cpu); =20 rcu_read_lock(); - curr =3D READ_ONCE(rq->curr); /* unlocked access */ + curr =3D rq_curr_once(rq); =20 /* * If the current task on @p's runqueue is an RT task, then @@ -1666,8 +1666,8 @@ static void check_preempt_equal_prio(struct rq *rq, s= truct task_struct *p) * Current can't be migrated, useless to reschedule, * let's hope p can move out. */ - if (rq->curr->nr_cpus_allowed =3D=3D 1 || - !cpupri_find(&rq->rd->cpupri, rq->curr, NULL)) + if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || + !cpupri_find(&rq->rd->cpupri, rq_curr(rq), NULL)) return; =20 /* @@ -1710,7 +1710,7 @@ static int balance_rt(struct rq *rq, struct task_stru= ct *p, struct rq_flags *rf) */ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, in= t flags) { - if (p->prio < rq->curr->prio) { + if (p->prio < rq_curr(rq)->prio) { resched_curr(rq); return; } @@ -1728,7 +1728,7 @@ static void check_preempt_curr_rt(struct rq *rq, stru= ct task_struct *p, int flag * to move current somewhere else, making room for our non-migratable * task. */ - if (p->prio =3D=3D rq->curr->prio && !test_tsk_need_resched(rq->curr)) + if (p->prio =3D=3D rq_curr(rq)->prio && !test_tsk_need_resched(rq_curr(rq= ))) check_preempt_equal_prio(rq, p); #endif } @@ -1753,7 +1753,7 @@ static inline void set_next_task_rt(struct rq *rq, st= ruct task_struct *p, bool f * utilization. We only care of the case where we start to schedule a * rt task */ - if (rq->curr->sched_class !=3D &rt_sched_class) + if (rq_curr(rq)->sched_class !=3D &rt_sched_class) update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0); =20 rt_queue_push_tasks(rq); @@ -2062,7 +2062,7 @@ static int push_rt_task(struct rq *rq, bool pull) * higher priority than current. If that's the case * just reschedule current. */ - if (unlikely(next_task->prio < rq->curr->prio)) { + if (unlikely(next_task->prio < rq_curr(rq)->prio)) { resched_curr(rq); return 0; } @@ -2083,10 +2083,10 @@ static int push_rt_task(struct rq *rq, bool pull) * Note that the stoppers are masqueraded as SCHED_FIFO * (cf. sched_set_stop_task()), so we can't rely on rt_task(). */ - if (rq->curr->sched_class !=3D &rt_sched_class) + if (rq_curr(rq)->sched_class !=3D &rt_sched_class) return 0; =20 - cpu =3D find_lowest_rq(rq->curr); + cpu =3D find_lowest_rq(rq_curr(rq)); if (cpu =3D=3D -1 || cpu =3D=3D rq->cpu) return 0; =20 @@ -2107,7 +2107,7 @@ static int push_rt_task(struct rq *rq, bool pull) return 0; } =20 - if (WARN_ON(next_task =3D=3D rq->curr)) + if (WARN_ON(next_task =3D=3D rq_curr(rq))) return 0; =20 /* We might release rq lock */ @@ -2404,7 +2404,7 @@ static void pull_rt_task(struct rq *this_rq) * the to-be-scheduled task? 
*/ if (p && (p->prio < this_rq->rt.highest_prio.curr)) { - WARN_ON(p =3D=3D src_rq->curr); + WARN_ON(p =3D=3D rq_curr(src_rq)); WARN_ON(!task_on_rq_queued(p)); =20 /* @@ -2415,7 +2415,7 @@ static void pull_rt_task(struct rq *this_rq) * p if it is lower in priority than the * current task on the run queue */ - if (p->prio < src_rq->curr->prio) + if (p->prio < rq_curr(src_rq)->prio) goto skip; =20 if (is_migration_disabled(p)) { @@ -2455,11 +2455,11 @@ static void pull_rt_task(struct rq *this_rq) static void task_woken_rt(struct rq *rq, struct task_struct *p) { bool need_to_push =3D !task_on_cpu(rq, p) && - !test_tsk_need_resched(rq->curr) && + !test_tsk_need_resched(rq_curr(rq)) && p->nr_cpus_allowed > 1 && - (dl_task(rq->curr) || rt_task(rq->curr)) && - (rq->curr->nr_cpus_allowed < 2 || - rq->curr->prio <=3D p->prio); + (dl_task(rq_curr(rq)) || rt_task(rq_curr(rq))) && + (rq_curr(rq)->nr_cpus_allowed < 2 || + rq_curr(rq)->prio <=3D p->prio); =20 if (need_to_push) push_rt_tasks(rq); @@ -2543,7 +2543,7 @@ static void switched_to_rt(struct rq *rq, struct task= _struct *p) if (p->nr_cpus_allowed > 1 && rq->rt.overloaded) rt_queue_push_tasks(rq); #endif /* CONFIG_SMP */ - if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq))) + if (p->prio < rq_curr(rq)->prio && cpu_online(cpu_of(rq))) resched_curr(rq); } } @@ -2584,7 +2584,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p,= int oldprio) * greater than the current running task * then reschedule. */ - if (p->prio < rq->curr->prio) + if (p->prio < rq_curr(rq)->prio) resched_curr(rq); } } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index d18e3c3a3f40..9e6fb54c66be 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1008,7 +1008,7 @@ struct rq { */ unsigned int nr_uninterruptible; =20 - struct task_struct __rcu *curr; + struct task_struct __rcu *curr_exec; struct task_struct *idle; struct task_struct *stop; unsigned long next_balance; @@ -1201,12 +1201,46 @@ static inline bool is_migration_disabled(struct tas= k_struct *p) =20 DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); =20 +static inline struct task_struct *rq_curr(struct rq *rq) +{ + return rq->curr_exec; +} + +static inline struct task_struct *rq_curr_rcu(struct rq *rq) +{ + return rcu_dereference(rq->curr_exec); +} + +static inline struct task_struct *rq_curr_once(struct rq *rq) +{ + return READ_ONCE(rq->curr_exec); +} + +static inline void rq_set_curr(struct rq *rq, struct task_struct *task) +{ + rcu_assign_pointer(rq->curr_exec, task); +} + +/* + * XXX jstultz: seems like rcu_assign_pointer above would also + * work for this, but trying to match usage. 
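The sched.h hunk above is the heart of this patch: rq->curr is renamed to rq->curr_exec so that any access missed by the conversion fails to compile instead of silently bypassing the new wrappers, readers go through rq_curr()/rq_curr_rcu()/rq_curr_once(), and writers use rq_set_curr(). A minimal standalone model of those helpers (plain pointers; the __rcu annotation, rcu_dereference(), READ_ONCE() and rcu_assign_pointer() of the real definitions are deliberately omitted):

#include <assert.h>
#include <stdio.h>

struct task_struct { int pid; };

/*
 * The field is deliberately not called "curr": any leftover rq->curr
 * user now breaks the build instead of quietly skipping the wrappers.
 */
struct rq { struct task_struct *curr_exec; };

/* Plain read: the caller already serializes (e.g. holds the rq lock). */
static inline struct task_struct *rq_curr(struct rq *rq)
{
	return rq->curr_exec;
}

/* RCU read: the kernel helper wraps rcu_dereference(). */
static inline struct task_struct *rq_curr_rcu(struct rq *rq)
{
	return rq->curr_exec;
}

/* Lockless hint read: the kernel helper wraps READ_ONCE(). */
static inline struct task_struct *rq_curr_once(struct rq *rq)
{
	return rq->curr_exec;
}

/* Publish a new execution context; kernel: rcu_assign_pointer(). */
static inline void rq_set_curr(struct rq *rq, struct task_struct *task)
{
	rq->curr_exec = task;
}

int main(void)
{
	struct task_struct idle = { .pid = 0 }, next = { .pid = 42 };
	struct rq rq = { .curr_exec = &idle };

	rq_set_curr(&rq, &next);
	assert(rq_curr(&rq) == &next);
	printf("curr pid: %d (rcu: %d, once: %d)\n",
	       rq_curr(&rq)->pid, rq_curr_rcu(&rq)->pid, rq_curr_once(&rq)->pid);
	return 0;
}

With the RCU machinery stripped out, the three read flavours collapse to the same load here; in the kernel they differ only in how the load is annotated and what protection the caller must hold.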
+ */ +static inline void rq_set_curr_rcu_init(struct rq *rq, struct task_struct = *task) +{ + RCU_INIT_POINTER(rq->curr_exec, task); +} + #define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) #define this_rq() this_cpu_ptr(&runqueues) #define task_rq(p) cpu_rq(task_cpu(p)) -#define cpu_curr(cpu) (cpu_rq(cpu)->curr) +#define cpu_curr(cpu) (rq_curr(cpu_rq(cpu))) #define raw_rq() raw_cpu_ptr(&runqueues) =20 +static inline struct task_struct *cpu_curr_rcu(int cpu) +{ + return rq_curr_rcu(cpu_rq(cpu)); +} + struct sched_group; #ifdef CONFIG_SCHED_CORE static inline struct cpumask *sched_group_span(struct sched_group *sg); @@ -2070,7 +2104,7 @@ static inline u64 global_rt_runtime(void) =20 static inline int task_current(struct rq *rq, struct task_struct *p) { - return rq->curr =3D=3D p; + return rq_curr(rq) =3D=3D p; } =20 static inline int task_on_cpu(struct rq *rq, struct task_struct *p) @@ -2230,7 +2264,7 @@ struct sched_class { =20 static inline void put_prev_task(struct rq *rq, struct task_struct *prev) { - WARN_ON_ONCE(rq->curr !=3D prev); + WARN_ON_ONCE(rq_curr(rq) !=3D prev); prev->sched_class->put_prev_task(rq, prev); } =20 @@ -2311,7 +2345,7 @@ extern void set_cpus_allowed_common(struct task_struc= t *p, struct affinity_conte =20 static inline struct task_struct *get_push_task(struct rq *rq) { - struct task_struct *p =3D rq->curr; + struct task_struct *p =3D rq_curr(rq); =20 lockdep_assert_rq_held(rq); =20 @@ -3193,7 +3227,7 @@ static inline bool sched_energy_enabled(void) { retur= n false; } * The scheduler provides memory barriers required by membarrier between: * - prior user-space memory accesses and store to rq->membarrier_state, * - store to rq->membarrier_state and following user-space memory accesse= s. - * In the same way it provides those guarantees around store to rq->curr. + * In the same way it provides those guarantees around store to rq_curr(rq= ). 
*/ static inline void membarrier_switch_mm(struct rq *rq, struct mm_struct *prev_mm, --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41662C77B70 for ; Tue, 11 Apr 2023 04:26:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230152AbjDKE0T (ORCPT ); Tue, 11 Apr 2023 00:26:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42344 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229836AbjDKE0C (ORCPT ); Tue, 11 Apr 2023 00:26:02 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D787A3595 for ; Mon, 10 Apr 2023 21:25:35 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id 98e67ed59e1d1-2467aa31e77so455860a91.1 for ; Mon, 10 Apr 2023 21:25:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187135; x=1683779135; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=kLVCvGDgyolH8XsJyVtUp0s6XASmsJpCWxd7zmCGgc0=; b=dEk/4j5HijfXP5QDCl6VOZfe699/8vSmK6ziRbuJnB8C2H03QUcCGousEi+Fkb1K6R wCquTObCE3IO2NNck8om6jX8nkxy5G+Z9oSXzJdwdSrBPAE6yGJ2Nf0Ttuns5UNQGY/M LRGd0/CPyEFCThogppiZWbGddnprxF0Mu1DB7tFRI7Y8sIsLv7GGWCjFsm4ecVeHxbn9 0NIlXEq/T4PSvrF/+3TeZ7Eyi9+xhOQeEOC0FgwKHhB5uyAzvBFBYlC48x36q5fkTH5o ljcGZ15yQNLvZjo0lIx3GpD/UZEjS9LmtKdnnNrcFNHSxT3H7dla7OTZ4T2ftdJu+PXh 4l4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187135; x=1683779135; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=kLVCvGDgyolH8XsJyVtUp0s6XASmsJpCWxd7zmCGgc0=; b=MCqVmYpX3zO7ffGEW7Zx8DlsX9NqVy5zsgXDL4zL4fFdo3I1PUFKZMLqmPzKpLiZMn mpi7mLTZPSsdXgdJop+qQhfIGn4CLWLh7NFjVnIwzOiWxX1KkzXpeazZdCvugnT+/Dda lfUCD5jZhPodUWG4SI3h6uBTVsRBxZxz3gd+wjbTvf89lcgCAxYaY4zbaqmTpJUJQi6D +40NikUahPnaX2feVuT4AvRzjJ4Ng/Wjc1Vu6+TXU4wIy+xoIG03O+pUl8XqR3UWP31V IDFFIFuqr1riJD9XTPFufMxcu8k4kDF4x6AJwI9kKJilv60UQmM8d8ZnN6p1WfgjnQ9a XlRQ== X-Gm-Message-State: AAQBX9eRyyQA8qbqUY4UoHb06WBJXkuum5z2HgbUZor8mGnEmPoTws1p KdCmwJQI/gOLQh5mhdV2MhmN9QJdGFYvr5eHqdbXvhmfR+SAeFR28ai1+9sy26j4XhBHhkkoDNe 6jMfv4xpcZV2uzNZf7+klfXF2NXOsKVlqjrqq2AmrOHJUPM4n1rL5yGh//cz3ne8sTDsM2GE= X-Google-Smtp-Source: AKy350abeXnDn1A1Yw8twfv2OjbJ5Dns4k1e6QFQaVRCIA0Z86uE6IrNLQrU8tO9bYDBfzbsDnadIRz0imZ7 X-Received: from jstultz-noogler2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a05:6a00:190c:b0:62e:1972:fab5 with SMTP id y12-20020a056a00190c00b0062e1972fab5mr4521538pfi.4.1681187134570; Mon, 10 Apr 2023 21:25:34 -0700 (PDT) Date: Tue, 11 Apr 2023 04:25:06 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-10-jstultz@google.com> Subject: [PATCH v3 09/14] sched: Split scheduler execution context From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben 
Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra Lets define the scheduling context as all the scheduler state in task_struct and the execution context as all state required to run the task. Currently both are intertwined in task_struct. We want to logically split these such that we can run the execution context of one task with the scheduling context of another. To this purpose introduce rq_selected() to point to the task_struct used for scheduler state and preserve rq_curr() to denote the execution context. XXX connoro: some further work may be needed in RT/DL load balancing paths to properly handle split context; see comments in code for specifics Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" [added lot of comments/questions - identifiable by XXX] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Juri Lelli Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20181009092434.26221-5-juri.lelli@redhat.com [add additional comments and update more sched_class code to use rq::proxy] Signed-off-by: Connor O'Brien [jstultz: Rebased and resolved minor collisions, reworked to use accessors, tweaked update_curr_common to use rq_proxy fixing rt scheduling issues] Signed-off-by: John Stultz --- v2: * Reworked to use accessors * Fixed update_curr_common to use proxy instead of curr v3: * Tweaked wrapper names * Swapped proxy for selected for clarity --- kernel/sched/core.c | 41 +++++++++++++++++++-------- kernel/sched/deadline.c | 36 +++++++++++++----------- kernel/sched/fair.c | 22 +++++++++------ kernel/sched/rt.c | 61 ++++++++++++++++++++++++++-------------- kernel/sched/sched.h | 62 +++++++++++++++++++++++++++++++++++++++-- 5 files changed, 162 insertions(+), 60 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 969256189da0..a9cf8397c601 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -773,7 +773,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *time= r) =20 rq_lock(rq, &rf); update_rq_clock(rq); - rq_curr(rq)->sched_class->task_tick(rq, rq_curr(rq), 1); + rq_selected(rq)->sched_class->task_tick(rq, rq_selected(rq), 1); rq_unlock(rq, &rf); =20 return HRTIMER_NORESTART; @@ -2178,7 +2178,7 @@ static inline void check_class_changed(struct rq *rq,= struct task_struct *p, =20 void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); =20 if (p->sched_class =3D=3D curr->sched_class) curr->sched_class->check_preempt_curr(rq, p, flags); @@ -2189,7 +2189,7 @@ void check_preempt_curr(struct rq *rq, struct task_st= ruct *p, int flags) * A queue event has occurred, and we're going to schedule. In * this case, we can save a useless back to back clock update. 
*/ - if (task_on_rq_queued(curr) && test_tsk_need_resched(curr)) + if (task_on_rq_queued(curr) && test_tsk_need_resched(rq_curr(rq))) rq_clock_skip_update(rq); } =20 @@ -2579,7 +2579,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct a= ffinity_context *ctx) lockdep_assert_held(&p->pi_lock); =20 queued =3D task_on_rq_queued(p); - running =3D task_current(rq, p); + running =3D task_current_selected(rq, p); =20 if (queued) { /* @@ -5501,7 +5501,7 @@ unsigned long long task_sched_runtime(struct task_str= uct *p) * project cycles that may never be accounted to this * thread, breaking clock_gettime(). */ - if (task_current(rq, p) && task_on_rq_queued(p)) { + if (task_current_selected(rq, p) && task_on_rq_queued(p)) { prefetch_curr_exec_start(p); update_rq_clock(rq); p->sched_class->update_curr(rq); @@ -5569,7 +5569,8 @@ void scheduler_tick(void) { int cpu =3D smp_processor_id(); struct rq *rq =3D cpu_rq(cpu); - struct task_struct *curr =3D rq_curr(rq); + /* accounting goes to the selected task */ + struct task_struct *curr =3D rq_selected(rq); struct rq_flags rf; unsigned long thermal_pressure; u64 resched_latency; @@ -5666,6 +5667,13 @@ static void sched_tick_remote(struct work_struct *wo= rk) if (cpu_is_offline(cpu)) goto out_unlock; =20 + /* + * XXX don't we need to account to rq_selected()?? + * Maybe, since this is a remote tick for full dynticks mode, we are + * always sure that there is no proxy (only a single task is running). + */ + SCHED_WARN_ON(rq_curr(rq) !=3D rq_selected(rq)); + update_rq_clock(rq); =20 if (!is_idle_task(curr)) { @@ -6589,6 +6597,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) } =20 next =3D pick_next_task(rq, prev, &rf); + rq_set_selected(rq, next); clear_tsk_need_resched(prev); clear_preempt_need_resched(); #ifdef CONFIG_SCHED_DEBUG @@ -7055,7 +7064,10 @@ void rt_mutex_setprio(struct task_struct *p, struct = task_struct *pi_task) =20 prev_class =3D p->sched_class; queued =3D task_on_rq_queued(p); - running =3D task_current(rq, p); + /* + * XXX how does (proxy exec) mutexes and RT_mutexes work together?! + */ + running =3D task_current_selected(rq, p); if (queued) dequeue_task(rq, p, queue_flag); if (running) @@ -7143,7 +7155,10 @@ void set_user_nice(struct task_struct *p, long nice) goto out_unlock; } queued =3D task_on_rq_queued(p); - running =3D task_current(rq, p); + /* + * XXX see concerns about do_set_cpus_allowed, rt_mutex_prio & Co. + */ + running =3D task_current_selected(rq, p); if (queued) dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK); if (running) @@ -7707,7 +7722,10 @@ static int __sched_setscheduler(struct task_struct *= p, } =20 queued =3D task_on_rq_queued(p); - running =3D task_current(rq, p); + /* + * XXX and again, how is this safe w.r.t. proxy exec? 
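The core.c hunks above make the ordering explicit: __schedule() now publishes the pick through rq_set_selected() unconditionally, while the execution context is only rewritten (via rq_set_curr_rcu_init(), added in the previous patch) when prev != next and a real context switch follows. A small standalone model of that tail, using stub types and an invented schedule_model() helper in place of the real __schedule():

#include <stdio.h>

/* Stub types; the real structures carry far more state. */
struct task_struct { int pid; };
struct rq {
	struct task_struct *curr_exec;	/* execution context  */
	struct task_struct *curr_sched;	/* scheduling context */
	unsigned long nr_switches;
};

static void rq_set_curr(struct rq *rq, struct task_struct *t)     { rq->curr_exec = t; }
static void rq_set_selected(struct rq *rq, struct task_struct *t) { rq->curr_sched = t; }

/* Invented helper condensing the tail of __schedule() after this patch. */
static void schedule_model(struct rq *rq, struct task_struct *prev,
			   struct task_struct *next)
{
	rq_set_selected(rq, next);	/* scheduling context always follows the pick */
	if (prev != next) {
		rq->nr_switches++;
		rq_set_curr(rq, next);	/* execution context changes only on a real switch */
		/* context_switch(rq, prev, next, ...) would run here */
	}
}

int main(void)
{
	struct task_struct a = { .pid = 1 }, b = { .pid = 2 };
	struct rq rq = { .curr_exec = &a, .curr_sched = &a };

	schedule_model(&rq, &a, &b);
	printf("curr=%d selected=%d switches=%lu\n",
	       rq.curr_exec->pid, rq.curr_sched->pid, rq.nr_switches);
	return 0;
}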
+ */ + running =3D task_current_selected(rq, p); if (queued) dequeue_task(rq, p, queue_flags); if (running) @@ -9159,6 +9177,7 @@ void __init init_idle(struct task_struct *idle, int c= pu) rcu_read_unlock(); =20 rq->idle =3D idle; + rq_set_selected(rq, idle); rq_set_curr(rq, idle); idle->on_rq =3D TASK_ON_RQ_QUEUED; #ifdef CONFIG_SMP @@ -9261,7 +9280,7 @@ void sched_setnuma(struct task_struct *p, int nid) =20 rq =3D task_rq_lock(p, &rf); queued =3D task_on_rq_queued(p); - running =3D task_current(rq, p); + running =3D task_current_selected(rq, p); =20 if (queued) dequeue_task(rq, p, DEQUEUE_SAVE); @@ -10373,7 +10392,7 @@ void sched_move_task(struct task_struct *tsk) rq =3D task_rq_lock(tsk, &rf); update_rq_clock(rq); =20 - running =3D task_current(rq, tsk); + running =3D task_current_selected(rq, tsk); queued =3D task_on_rq_queued(tsk); =20 if (queued) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index a8296d38b066..63a0564cb1f8 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrti= mer *timer) #endif =20 enqueue_task_dl(rq, p, ENQUEUE_REPLENISH); - if (dl_task(rq_curr(rq))) + if (dl_task(rq_selected(rq))) check_preempt_curr_dl(rq, p, 0); else resched_curr(rq); @@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, str= uct sched_dl_entity *dl_se) */ static void update_curr_dl(struct rq *rq) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); struct sched_dl_entity *dl_se =3D &curr->dl; s64 delta_exec, scaled_delta_exec; int cpu =3D cpu_of(rq); @@ -1819,7 +1819,7 @@ static int find_later_rq(struct task_struct *task); static int select_task_rq_dl(struct task_struct *p, int cpu, int flags) { - struct task_struct *curr; + struct task_struct *curr, *proxy; bool select_rq; struct rq *rq; =20 @@ -1830,6 +1830,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int= flags) =20 rcu_read_lock(); curr =3D rq_curr_once(rq); + proxy =3D rq_selected_once(rq); =20 /* * If we are dealing with a -deadline task, we must @@ -1840,9 +1841,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int= flags) * other hand, if it has a shorter deadline, we * try to make it stay here, it might be important. */ - select_rq =3D unlikely(dl_task(curr)) && + select_rq =3D unlikely(dl_task(proxy)) && (curr->nr_cpus_allowed < 2 || - !dl_entity_preempt(&p->dl, &curr->dl)) && + !dl_entity_preempt(&p->dl, &proxy->dl)) && p->nr_cpus_allowed > 1; =20 /* @@ -1905,7 +1906,7 @@ static void check_preempt_equal_dl(struct rq *rq, str= uct task_struct *p) * let's hope p can move out. */ if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || - !cpudl_find(&rq->rd->cpudl, rq_curr(rq), NULL)) + !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL)) return; =20 /* @@ -1944,7 +1945,7 @@ static int balance_dl(struct rq *rq, struct task_stru= ct *p, struct rq_flags *rf) static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, int flags) { - if (dl_entity_preempt(&p->dl, &rq_curr(rq)->dl)) { + if (dl_entity_preempt(&p->dl, &rq_selected(rq)->dl)) { resched_curr(rq); return; } @@ -1954,7 +1955,7 @@ static void check_preempt_curr_dl(struct rq *rq, stru= ct task_struct *p, * In the unlikely case current and p have the same deadline * let us try to decide what's the best thing to do... 
*/ - if ((p->dl.deadline =3D=3D rq_curr(rq)->dl.deadline) && + if ((p->dl.deadline =3D=3D rq_selected(rq)->dl.deadline) && !test_tsk_need_resched(rq_curr(rq))) check_preempt_equal_dl(rq, p); #endif /* CONFIG_SMP */ @@ -1989,7 +1990,7 @@ static void set_next_task_dl(struct rq *rq, struct ta= sk_struct *p, bool first) if (hrtick_enabled_dl(rq)) start_hrtick_dl(rq, p); =20 - if (rq_curr(rq)->sched_class !=3D &dl_sched_class) + if (rq_selected(rq)->sched_class !=3D &dl_sched_class) update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0); =20 deadline_queue_push_tasks(rq); @@ -2305,8 +2306,8 @@ static int push_dl_task(struct rq *rq) * can move away, it makes sense to just reschedule * without going further in pushing next_task. */ - if (dl_task(rq_curr(rq)) && - dl_time_before(next_task->dl.deadline, rq_curr(rq)->dl.deadline) && + if (dl_task(rq_selected(rq)) && + dl_time_before(next_task->dl.deadline, rq_selected(rq)->dl.deadline) = && rq_curr(rq)->nr_cpus_allowed > 1) { resched_curr(rq); return 0; @@ -2322,6 +2323,7 @@ static int push_dl_task(struct rq *rq) get_task_struct(next_task); =20 /* Will lock the rq it'll find */ + /* XXX connoro: update find_lock_later_rq() for split context? */ later_rq =3D find_lock_later_rq(next_task, rq); if (!later_rq) { struct task_struct *task; @@ -2431,7 +2433,7 @@ static void pull_dl_task(struct rq *this_rq) * deadline than the current task of its runqueue. */ if (dl_time_before(p->dl.deadline, - rq_curr(src_rq)->dl.deadline)) + rq_selected(src_rq)->dl.deadline)) goto skip; =20 if (is_migration_disabled(p)) { @@ -2470,9 +2472,9 @@ static void task_woken_dl(struct rq *rq, struct task_= struct *p) if (!task_on_cpu(rq, p) && !test_tsk_need_resched(rq_curr(rq)) && p->nr_cpus_allowed > 1 && - dl_task(rq_curr(rq)) && + dl_task(rq_selected(rq)) && (rq_curr(rq)->nr_cpus_allowed < 2 || - !dl_entity_preempt(&p->dl, &rq_curr(rq)->dl))) { + !dl_entity_preempt(&p->dl, &rq_selected(rq)->dl))) { push_dl_tasks(rq); } } @@ -2635,12 +2637,12 @@ static void switched_to_dl(struct rq *rq, struct ta= sk_struct *p) return; } =20 - if (rq_curr(rq) !=3D p) { + if (rq_selected(rq) !=3D p) { #ifdef CONFIG_SMP if (p->nr_cpus_allowed > 1 && rq->dl.overloaded) deadline_queue_push_tasks(rq); #endif - if (dl_task(rq_curr(rq))) + if (dl_task(rq_selected(rq))) check_preempt_curr_dl(rq, p, 0); else resched_curr(rq); @@ -2669,7 +2671,7 @@ static void prio_changed_dl(struct rq *rq, struct tas= k_struct *p, if (!rq->dl.overloaded) deadline_queue_pull_task(rq); =20 - if (task_current(rq, p)) { + if (task_current_selected(rq, p)) { /* * If we now have a earlier deadline task than p, * then reschedule, provided p is still on this diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 9295e85ab83b..3f7df45f7402 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_e= ntity *curr) */ s64 update_curr_common(struct rq *rq) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); s64 delta_exec; =20 delta_exec =3D update_curr_se(rq, &curr->se); @@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq) =20 static void update_curr_fair(struct rq *rq) { - update_curr(cfs_rq_of(&rq_curr(rq)->se)); + update_curr(cfs_rq_of(&rq_selected(rq)->se)); } =20 static inline void @@ -6169,7 +6169,7 @@ static void hrtick_start_fair(struct rq *rq, struct t= ask_struct *p) s64 delta =3D slice - ran; =20 if (delta < 0) { - if (task_current(rq, p)) + if (task_current_selected(rq, p)) resched_curr(rq); 
return; } @@ -6184,7 +6184,7 @@ static void hrtick_start_fair(struct rq *rq, struct t= ask_struct *p) */ static void hrtick_update(struct rq *rq) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); =20 if (!hrtick_enabled_fair(rq) || curr->sched_class !=3D &fair_sched_class) return; @@ -7821,7 +7821,7 @@ static void set_skip_buddy(struct sched_entity *se) */ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int= wake_flags) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); struct sched_entity *se =3D &curr->se, *pse =3D &p->se; struct cfs_rq *cfs_rq =3D task_cfs_rq(curr); int scale =3D cfs_rq->nr_running >=3D sched_nr_latency; @@ -7855,7 +7855,7 @@ static void check_preempt_wakeup(struct rq *rq, struc= t task_struct *p, int wake_ * prevents us from potentially nominating it as a false LAST_BUDDY * below. */ - if (test_tsk_need_resched(curr)) + if (test_tsk_need_resched(rq_curr(rq))) return; =20 /* Idle tasks are by definition preempted by non-idle tasks. */ @@ -8854,7 +8854,7 @@ static bool __update_blocked_others(struct rq *rq, bo= ol *done) * update_load_avg() can call cpufreq_update_util(). Make sure that RT, * DL and IRQ signals have been updated before updating CFS. */ - curr_class =3D rq_curr(rq)->sched_class; + curr_class =3D rq_selected(rq)->sched_class; =20 thermal_pressure =3D arch_scale_thermal_pressure(cpu_of(rq)); =20 @@ -12017,6 +12017,10 @@ static void task_tick_fair(struct rq *rq, struct t= ask_struct *curr, int queued) entity_tick(cfs_rq, se, queued); } =20 + /* + * XXX need to use execution context (rq->curr) for task_tick_numa and + * update_misfit_status? + */ if (static_branch_unlikely(&sched_numa_balancing)) task_tick_numa(rq, curr); =20 @@ -12080,7 +12084,7 @@ prio_changed_fair(struct rq *rq, struct task_struct= *p, int oldprio) * our priority decreased, or if we are not currently running on * this runqueue and our priority is higher than the current's */ - if (task_current(rq, p)) { + if (task_current_selected(rq, p)) { if (p->prio > oldprio) resched_curr(rq); } else @@ -12225,7 +12229,7 @@ static void switched_to_fair(struct rq *rq, struct = task_struct *p) * kick off the schedule if running, otherwise just see * if we can still preempt the current task. 
*/ - if (task_current(rq, p)) + if (task_current_selected(rq, p)) resched_curr(rq); else check_preempt_curr(rq, p, 0); diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index ecd53be8a6e5..44139d56466e 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *r= t_se, unsigned int flags) =20 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq) { - struct task_struct *curr =3D rq_curr(rq_of_rt_rq(rt_rq)); + struct task_struct *curr =3D rq_selected(rq_of_rt_rq(rt_rq)); struct rq *rq =3D rq_of_rt_rq(rt_rq); struct sched_rt_entity *rt_se; =20 @@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt= _rq) */ static void update_curr_rt(struct rq *rq) { - struct task_struct *curr =3D rq_curr(rq); + struct task_struct *curr =3D rq_selected(rq); struct sched_rt_entity *rt_se =3D &curr->rt; s64 delta_exec; =20 @@ -1591,7 +1591,7 @@ static int find_lowest_rq(struct task_struct *task); static int select_task_rq_rt(struct task_struct *p, int cpu, int flags) { - struct task_struct *curr; + struct task_struct *curr, *proxy; struct rq *rq; bool test; =20 @@ -1602,7 +1602,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int= flags) rq =3D cpu_rq(cpu); =20 rcu_read_lock(); - curr =3D rq_curr_once(rq); + curr =3D rq_curr_once(rq); /* XXX jstultz: using rcu_dereference intead o= f READ_ONCE */ + proxy =3D rq_selected_once(rq); =20 /* * If the current task on @p's runqueue is an RT task, then @@ -1631,8 +1632,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int= flags) * systems like big.LITTLE. */ test =3D curr && - unlikely(rt_task(curr)) && - (curr->nr_cpus_allowed < 2 || curr->prio <=3D p->prio); + unlikely(rt_task(proxy)) && + (curr->nr_cpus_allowed < 2 || proxy->prio <=3D p->prio); =20 if (test || !rt_task_fits_capacity(p, cpu)) { int target =3D find_lowest_rq(p); @@ -1662,12 +1663,12 @@ select_task_rq_rt(struct task_struct *p, int cpu, i= nt flags) =20 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p) { - /* - * Current can't be migrated, useless to reschedule, - * let's hope p can move out. + /* XXX connoro: need to revise cpupri_find() to reflect the split + * context since it should look at rq_selected() for priority but + * rq_curr() for affinity. */ if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || - !cpupri_find(&rq->rd->cpupri, rq_curr(rq), NULL)) + !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL)) return; =20 /* @@ -1710,7 +1711,9 @@ static int balance_rt(struct rq *rq, struct task_stru= ct *p, struct rq_flags *rf) */ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, in= t flags) { - if (p->prio < rq_curr(rq)->prio) { + struct task_struct *curr =3D rq_selected(rq); + + if (p->prio < curr->prio) { resched_curr(rq); return; } @@ -1728,7 +1731,7 @@ static void check_preempt_curr_rt(struct rq *rq, stru= ct task_struct *p, int flag * to move current somewhere else, making room for our non-migratable * task. */ - if (p->prio =3D=3D rq_curr(rq)->prio && !test_tsk_need_resched(rq_curr(rq= ))) + if (p->prio =3D=3D curr->prio && !test_tsk_need_resched(rq_curr(rq))) check_preempt_equal_prio(rq, p); #endif } @@ -1753,7 +1756,7 @@ static inline void set_next_task_rt(struct rq *rq, st= ruct task_struct *p, bool f * utilization. 
We only care of the case where we start to schedule a * rt task */ - if (rq_curr(rq)->sched_class !=3D &rt_sched_class) + if (rq_selected(rq)->sched_class !=3D &rt_sched_class) update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0); =20 rt_queue_push_tasks(rq); @@ -2029,7 +2032,7 @@ static struct task_struct *pick_next_pushable_task(st= ruct rq *rq) struct task_struct, pushable_tasks); =20 BUG_ON(rq->cpu !=3D task_cpu(p)); - BUG_ON(task_current(rq, p)); + BUG_ON(task_current(rq, p) || task_current_selected(rq, p)); BUG_ON(p->nr_cpus_allowed <=3D 1); =20 BUG_ON(!task_on_rq_queued(p)); @@ -2062,7 +2065,7 @@ static int push_rt_task(struct rq *rq, bool pull) * higher priority than current. If that's the case * just reschedule current. */ - if (unlikely(next_task->prio < rq_curr(rq)->prio)) { + if (unlikely(next_task->prio < rq_selected(rq)->prio)) { resched_curr(rq); return 0; } @@ -2083,6 +2086,16 @@ static int push_rt_task(struct rq *rq, bool pull) * Note that the stoppers are masqueraded as SCHED_FIFO * (cf. sched_set_stop_task()), so we can't rely on rt_task(). */ + /* + * XXX connoro: seems like what we actually want here might be: + * 1) Enforce that rq_selected() must be RT + * 2) Revise find_lowest_rq() to handle split context, searching + * for an rq that can accommodate rq_selected()'s prio and + * rq->curr's affinity + * 3) Send the whole chain to the new rq in push_cpu_stop()? + * If #3 is needed, might be best to make a separate patch with + * all the "chain-level load balancing" changes. + */ if (rq_curr(rq)->sched_class !=3D &rt_sched_class) return 0; =20 @@ -2114,6 +2127,12 @@ static int push_rt_task(struct rq *rq, bool pull) get_task_struct(next_task); =20 /* find_lock_lowest_rq locks the rq if found */ + /* + * XXX connoro: find_lock_lowest_rq() likely also needs split context + * support. This also needs to include something like an exec_ctx=3DNULL + * case for when we push a blocked task whose lock owner is not on + * this rq. + */ lowest_rq =3D find_lock_lowest_rq(next_task, rq); if (!lowest_rq) { struct task_struct *task; @@ -2415,7 +2434,7 @@ static void pull_rt_task(struct rq *this_rq) * p if it is lower in priority than the * current task on the run queue */ - if (p->prio < rq_curr(src_rq)->prio) + if (p->prio < rq_selected(src_rq)->prio) goto skip; =20 if (is_migration_disabled(p)) { @@ -2457,9 +2476,9 @@ static void task_woken_rt(struct rq *rq, struct task_= struct *p) bool need_to_push =3D !task_on_cpu(rq, p) && !test_tsk_need_resched(rq_curr(rq)) && p->nr_cpus_allowed > 1 && - (dl_task(rq_curr(rq)) || rt_task(rq_curr(rq))) && + (dl_task(rq_selected(rq)) || rt_task(rq_selected(rq))) && (rq_curr(rq)->nr_cpus_allowed < 2 || - rq_curr(rq)->prio <=3D p->prio); + rq_selected(rq)->prio <=3D p->prio); =20 if (need_to_push) push_rt_tasks(rq); @@ -2543,7 +2562,7 @@ static void switched_to_rt(struct rq *rq, struct task= _struct *p) if (p->nr_cpus_allowed > 1 && rq->rt.overloaded) rt_queue_push_tasks(rq); #endif /* CONFIG_SMP */ - if (p->prio < rq_curr(rq)->prio && cpu_online(cpu_of(rq))) + if (p->prio < rq_selected(rq)->prio && cpu_online(cpu_of(rq))) resched_curr(rq); } } @@ -2558,7 +2577,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p,= int oldprio) if (!task_on_rq_queued(p)) return; =20 - if (task_current(rq, p)) { + if (task_current_selected(rq, p)) { #ifdef CONFIG_SMP /* * If our priority decreases while running, we @@ -2584,7 +2603,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p,= int oldprio) * greater than the current running task * then reschedule. 
*/ - if (p->prio < rq_curr(rq)->prio) + if (p->prio < rq_selected(rq)->prio) resched_curr(rq); } } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 9e6fb54c66be..70cb55ad025d 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1008,7 +1008,10 @@ struct rq { */ unsigned int nr_uninterruptible; =20 - struct task_struct __rcu *curr_exec; + struct task_struct __rcu *curr_exec; /* Execution context */ +#ifdef CONFIG_PROXY_EXEC + struct task_struct __rcu *curr_sched; /* Scheduling context (policy) */ +#endif struct task_struct *idle; struct task_struct *stop; unsigned long next_balance; @@ -1230,6 +1233,37 @@ static inline void rq_set_curr_rcu_init(struct rq *r= q, struct task_struct *task) RCU_INIT_POINTER(rq->curr_exec, task); } =20 +#ifdef CONFIG_PROXY_EXEC +static inline struct task_struct *rq_selected(struct rq *rq) +{ + return rq->curr_sched; +} + +static inline struct task_struct *rq_selected_rcu(struct rq *rq) +{ + return rcu_dereference(rq->curr_sched); +} + +static inline struct task_struct *rq_selected_once(struct rq *rq) +{ + return READ_ONCE(rq->curr_sched); +} + +static inline void rq_set_selected(struct rq *rq, struct task_struct *t) +{ + rcu_assign_pointer(rq->curr_sched, t); +} + +#else +#define rq_selected(x) (rq_curr(x)) +#define rq_selected_rcu(x) (rq_curr_rcu(x)) +#define rq_selected_once(x) (rq_curr_once(x)) +static inline void rq_set_selected(struct rq *rq, struct task_struct *t) +{ + /* Do nothing */ +} +#endif + #define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) #define this_rq() this_cpu_ptr(&runqueues) #define task_rq(p) cpu_rq(task_cpu(p)) @@ -2102,11 +2136,25 @@ static inline u64 global_rt_runtime(void) return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC; } =20 +/* + * Is p the current execution context? + */ static inline int task_current(struct rq *rq, struct task_struct *p) { return rq_curr(rq) =3D=3D p; } =20 +/* + * Is p the current scheduling context? + * + * Note that it might be the current execution context at the same time if + * rq_curr() =3D=3D rq_selected() =3D=3D p. + */ +static inline int task_current_selected(struct rq *rq, struct task_struct = *p) +{ + return rq_selected(rq) =3D=3D p; +} + static inline int task_on_cpu(struct rq *rq, struct task_struct *p) { #ifdef CONFIG_SMP @@ -2264,7 +2312,7 @@ struct sched_class { =20 static inline void put_prev_task(struct rq *rq, struct task_struct *prev) { - WARN_ON_ONCE(rq_curr(rq) !=3D prev); + WARN_ON_ONCE(rq_selected(rq) !=3D prev); prev->sched_class->put_prev_task(rq, prev); } =20 @@ -2345,6 +2393,16 @@ extern void set_cpus_allowed_common(struct task_stru= ct *p, struct affinity_conte =20 static inline struct task_struct *get_push_task(struct rq *rq) { + /* + * XXX connoro: should this be rq_selected? + * When rq_curr() !=3D rq_selected(), pushing rq_curr() alone means it + * stops inheriting. Perhaps returning rq_selected() and pushing the + * entire chain would be correct? OTOH if we are guaranteed that + * rq_selected() is the highest prio task on the rq when + * get_push_task() is called, then proxy() will migrate the rest of the + * chain during the __schedule() call immediately after rq_curr() is + * pushed. 
+ */ struct task_struct *p =3D rq_curr(rq); =20 lockdep_assert_rq_held(rq); --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Date: Tue, 11 Apr 2023 04:25:07 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-11-jstultz@google.com> Subject: [PATCH v3 10/14] sched: Unnest ttwu_runnable in prep for proxy-execution From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin
Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Slightly rework ttwu_runnable to minimize the nesting to help make the proxy-execution changes easier to read. Should be no logical change here. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: John Stultz --- kernel/sched/core.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a9cf8397c601..82a62480d8d7 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3776,18 +3776,20 @@ static int ttwu_runnable(struct task_struct *p, int= wake_flags) int ret =3D 0; =20 rq =3D __task_rq_lock(p, &rf); - if (task_on_rq_queued(p)) { - if (!task_on_cpu(rq, p)) { - /* - * When on_rq && !on_cpu the task is preempted, see if - * it should preempt the task that is current now. - */ - update_rq_clock(rq); - check_preempt_curr(rq, p, wake_flags); - } - ttwu_do_wakeup(p); - ret =3D 1; + if (!task_on_rq_queued(p)) + goto out_unlock; + + if (!task_on_cpu(rq, p)) { + /* + * When on_rq && !on_cpu the task is preempted, see if + * it should preempt the task that is current now. + */ + update_rq_clock(rq); + check_preempt_curr(rq, p, wake_flags); } + ttwu_do_wakeup(p); + ret =3D 1; +out_unlock: __task_rq_unlock(rq, &rf); =20 return ret; --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63896C76196 for ; Tue, 11 Apr 2023 04:26:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230205AbjDKE0q (ORCPT ); Tue, 11 Apr 2023 00:26:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41090 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230193AbjDKE0N (ORCPT ); Tue, 11 Apr 2023 00:26:13 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 359B22691 for ; Mon, 10 Apr 2023 21:25:38 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-54c23fab905so115897967b3.14 for ; Mon, 10 Apr 2023 21:25:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; t=1681187138; x=1683779138; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=eoYXpQ2LZXTwZUi+ZtMPLwKF3H0uK5AJbZIZwQzpjoA=; b=hkDg2HC82UsnhNvMFudGv3isGMGc1UAZKoxoddfB8xfrBsdcwYvXGPN5KJtYv9Li1r wbAA4HlfJqa36og0HMTDiLRCl6QQW0HIo4TUb5AQAMNsWJge7kv99n8KD6X0CZfMP0lp QJdMLqBARGxZk2sueqh54Tb6fgOZ6wZr+Z2RQutKyXU4+magzXP2yU0O4QLm2LUy+LCu zQE391SE9ailwxfJ5vY7GwGUAYaxtItZEv1RCZti7PYIvVX2vK2ApyhMMklJt7bZg34R aJLMbW1jXCUT9ZLa9YTbqrDxYSBsmZ9WxE17DO8JYXDH9GbmTvTPsG9WHM4nhrTVAk4m VqBg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20210112; t=1681187138; x=1683779138; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=eoYXpQ2LZXTwZUi+ZtMPLwKF3H0uK5AJbZIZwQzpjoA=; b=j8G3tN733KzsI+udOTNza7FZYfVtBLKv96l1/+f6TAJb1xhdF2ZGQ4hTOPJDL4adYW lKkGJD9ZMZPAbCS2Is8KL0zGtkFEWSDkGZQQDnagzW86rZEaiK/hHfn//4Qs+Euh3o98 3bhaXEl+cz/IxWfBvS1ePfv9jiZJbE9LY+o6MODXDiBEpmZB+WtqFdrOLPmH/Mszw2TJ +VRn7MkRjDwd+j/29fA6Tzf3+WYcXSFWaWRoYhsSAlnV3uHiMwwDsRPbRn8EUjOnNI3c /fivYoElYtAJByn636iv27eFQBdQoVgy+zoaBRZ1T5IRpbshUJNx2z0iOkxMXgKgkrU7 URFw== X-Gm-Message-State: AAQBX9fxvZ42M6hYJwlfy8xzrDs7UGjWwuzT7mEfm0ubc8ouUbPQiHnI kLeM2JIYR45njwWKGutPu1DzhJMIO/s6J4YiBJeUiv0rScGRPTMtAE33B6aeZrszb9T81kVuCTa uQPMS/zY0vmKdJ0VKfH4H8ZLOvDtoE+VDsl3YrjHTOHst8QuXm5A6d3WC5u7UoJHqpedBuWA= X-Google-Smtp-Source: AKy350Zvg6EXftIhZzCDANoKTtzdNT9PnBznFFddpLSAgRw8tkvUkGvT6VluMG8Na6x+OOb1QX7/LgC/eFwC X-Received: from jstultz-noogler2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:600]) (user=jstultz job=sendgmr) by 2002:a81:bc08:0:b0:54e:e490:d190 with SMTP id a8-20020a81bc08000000b0054ee490d190mr925385ywi.4.1681187138097; Mon, 10 Apr 2023 21:25:38 -0700 (PDT) Date: Tue, 11 Apr 2023 04:25:08 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-12-jstultz@google.com> Subject: [PATCH v3 11/14] sched: Add proxy execution From: John Stultz To: LKML Cc: Peter Zijlstra , Joel Fernandes , Qais Yousef , Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , Valentin Schneider , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra A task currently holding a mutex (executing a critical section) might find benefit in using scheduling contexts of other tasks blocked on the same mutex if they happen to have higher priority of the current owner (e.g., to prevent priority inversions). Proxy execution lets a task do exactly that: if a mutex owner has waiters, it can use waiters' scheduling context to potentially continue running if preempted. The basic mechanism is implemented by this patch, the core of which resides in the proxy() function. Potential proxies (i.e., tasks blocked on a mutex) are not dequeued, so, if one of them is actually selected by schedule() as the next task to be put to run on a CPU, proxy() is used to walk the blocked_on relation and find which task (mutex owner) might be able to use the proxy's scheduling context. Here come the tricky bits. In fact, owner task might be in all sort of states when a proxy is found (blocked, executing on a different CPU, etc.). Details on how to handle different situations are to be found in proxy() code comments. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . 
McKenney" Signed-off-by: Peter Zijlstra (Intel) [rebased, added comments and changelog] Signed-off-by: Juri Lelli [Fixed rebase conflicts] [squashed sched: Ensure blocked_on is always guarded by blocked_lock] Signed-off-by: Valentin Schneider [fix rebase conflicts, various fixes & tweaks commented inline] [squashed sched: Use rq->curr vs rq->proxy checks] Signed-off-by: Connor O'Brien [jstultz: Rebased, split up, and folded in changes from Juri Lelli and Connor O'Brian, added additional locking on get_task_blocked_on(next) logic, pretty major rework to better conditionalize logic on CONFIG_PROXY_EXEC and split up the very large proxy() function - hopefully without changes to logic / behavior] Signed-off-by: John Stultz --- v2: * Numerous changes folded in * Split out some of the logic into separate patches * Break up the proxy() function so its a bit easier to read and is better conditionalized on CONFIG_PROXY_EXEC v3: * Improve comments * Added fix to call __balance_callbacks before we call pick_next_task() again, as a callback may have been set causing rq_pin_lock to generate warnings. * Added fix to call __balance_callbacks before we drop the rq lock in proxy_migrate_task, to avoid rq_pin_lock from generating warnings if a callback was set TODO: Finish conditionalization edge cases --- include/linux/sched.h | 2 + init/Kconfig | 7 + kernel/Kconfig.locks | 2 +- kernel/fork.c | 2 + kernel/locking/mutex.c | 58 +++- kernel/sched/core.c | 666 +++++++++++++++++++++++++++++++++++++++- kernel/sched/deadline.c | 2 +- kernel/sched/fair.c | 13 +- kernel/sched/rt.c | 3 +- kernel/sched/sched.h | 21 +- 10 files changed, 760 insertions(+), 16 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 6d22542d3648..b88303ceacaf 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1139,7 +1139,9 @@ struct task_struct { struct rt_mutex_waiter *pi_blocked_on; #endif =20 + struct task_struct *blocked_proxy; /* task that is boosting us */ struct mutex *blocked_on; /* lock we're blocked on */ + struct list_head blocked_entry; /* tasks blocked on us */ raw_spinlock_t blocked_lock; =20 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP diff --git a/init/Kconfig b/init/Kconfig index 1fb5f313d18f..38cdd2ccc538 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -935,6 +935,13 @@ config NUMA_BALANCING_DEFAULT_ENABLED If set, automatic NUMA balancing will be enabled if running on a NUMA machine. =20 +config PROXY_EXEC + bool "Proxy Execution" + default n + help + This option enables proxy execution, a mechanism for mutex owning + tasks to inherit the scheduling context of higher priority waiters. 
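As an aside on what that help text means in practice: the owner of a contended mutex keeps running (it stays the execution context), but it is scheduled as if it were its most eligible waiter, whose scheduling context it inherits. Below is a minimal user-space sketch of that idea only, using hypothetical toy_task/toy_mutex types and the RT convention that a lower prio value means higher priority; none of this is the kernel API added by the patch.

/*
 * Illustrative user-space model only -- toy_task/toy_mutex are made-up
 * types for this sketch, not the kernel's structures.
 */
#include <stdio.h>

struct toy_task {
	const char *name;
	int prio;	/* lower value == higher priority, as for RT tasks */
};

struct toy_mutex {
	struct toy_task *owner;
	struct toy_task *waiters[4];
	int nr_waiters;
};

/*
 * The scheduling context the owner effectively runs with is the most
 * eligible one among itself and the tasks waiting on the mutex it holds.
 */
static struct toy_task *effective_sched_ctx(struct toy_mutex *m)
{
	struct toy_task *best = m->owner;

	for (int i = 0; i < m->nr_waiters; i++)
		if (m->waiters[i]->prio < best->prio)
			best = m->waiters[i];
	return best;
}

int main(void)
{
	struct toy_task owner  = { "owner",  90 };
	struct toy_task waiter = { "waiter", 10 };
	struct toy_mutex m     = { &owner, { &waiter }, 1 };

	/* prints: execution ctx: owner, scheduling ctx: waiter */
	printf("execution ctx: %s, scheduling ctx: %s\n",
	       owner.name, effective_sched_ctx(&m)->name);
	return 0;
}

The series implements the same split with rq_curr() for the execution context and rq_selected() for the scheduling context, as seen in the earlier hunks.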
+ menuconfig CGROUPS bool "Control Group support" select KERNFS diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks index 4198f0273ecd..791c98f1d329 100644 --- a/kernel/Kconfig.locks +++ b/kernel/Kconfig.locks @@ -226,7 +226,7 @@ config ARCH_SUPPORTS_ATOMIC_RMW =20 config MUTEX_SPIN_ON_OWNER def_bool y - depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW + depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PROXY_EXEC =20 config RWSEM_SPIN_ON_OWNER def_bool y diff --git a/kernel/fork.c b/kernel/fork.c index a0ff6d73affc..1cde7733d387 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2222,7 +2222,9 @@ static __latent_entropy struct task_struct *copy_proc= ess( lockdep_init_task(p); #endif =20 + p->blocked_proxy =3D NULL; /* nobody is boosting us yet */ p->blocked_on =3D NULL; /* not blocked yet */ + INIT_LIST_HEAD(&p->blocked_entry); =20 #ifdef CONFIG_BCACHE p->sequential_io =3D 0; diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index cbc34d5f4486..d778dbfb9981 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -940,11 +940,22 @@ static noinline void __sched __mutex_unlock_slowpath(= struct mutex *lock, unsigne { struct task_struct *next =3D NULL; DEFINE_WAKE_Q(wake_q); - unsigned long owner; + /* + * XXX [juril] Proxy Exec forces always an HANDOFF (so that owner is + * never empty when there are waiters waiting?). Should we make this + * conditional on having proxy exec configured in? + */ + unsigned long owner =3D MUTEX_FLAG_HANDOFF; unsigned long flags; =20 mutex_release(&lock->dep_map, ip); =20 + /* + * XXX must always handoff the mutex to avoid !owner in proxy(). + * scheduler delay is minimal since we hand off to the task that + * is to be scheduled next. + */ +#ifndef CONFIG_PROXY_EXEC /* * Release the lock before (potentially) taking the spinlock such that * other contenders can get on with things ASAP. @@ -967,10 +978,48 @@ static noinline void __sched __mutex_unlock_slowpath(= struct mutex *lock, unsigne return; } } +#endif =20 raw_spin_lock_irqsave(&lock->wait_lock, flags); debug_mutex_unlock(lock); - if (!list_empty(&lock->wait_list)) { + +#ifdef CONFIG_PROXY_EXEC + raw_spin_lock(¤t->blocked_lock); + /* + * If we have a task boosting us, and that task was boosting us through + * this lock, hand the lock to that task, as that is the highest + * waiter, as selected by the scheduling function. + */ + next =3D current->blocked_proxy; + if (next) { + struct mutex *next_lock; + + /* + * jstultz: get_task_blocked_on(next) seemed to be missing locking + * so I've added it here (which required nesting the locks). + */ + raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING); + next_lock =3D get_task_blocked_on(next); + raw_spin_unlock(&next->blocked_lock); + if (next_lock !=3D lock) { + next =3D NULL; + } else { + wake_q_add(&wake_q, next); + current->blocked_proxy =3D NULL; + } + } + + /* + * XXX if there was no higher prio proxy, ->blocked_task will not have + * been set. Therefore lower prio contending tasks are serviced in + * FIFO order. + */ +#endif + + /* + * Failing that, pick any on the wait list. + */ + if (!next && !list_empty(&lock->wait_list)) { /* get the first entry from the wait-list: */ struct mutex_waiter *waiter =3D list_first_entry(&lock->wait_list, @@ -985,7 +1034,10 @@ static noinline void __sched __mutex_unlock_slowpath(= struct mutex *lock, unsigne if (owner & MUTEX_FLAG_HANDOFF) __mutex_handoff(lock, next); =20 - preempt_disable(); + preempt_disable(); /* XXX connoro: why disable preemption here? 
*/ +#ifdef CONFIG_PROXY_EXEC + raw_spin_unlock(¤t->blocked_lock); +#endif raw_spin_unlock_irqrestore(&lock->wait_lock, flags); =20 wake_up_q(&wake_q); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 82a62480d8d7..1d92f1a304b8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -505,6 +505,8 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p= , int flags) { } * * task_cpu(p): is changed by set_task_cpu(), the rules are: * + * XXX connoro: does it matter that ttwu_do_activate now calls __set_task_= cpu + * on blocked tasks? * - Don't call set_task_cpu() on a blocked task: * * We don't care what CPU we're not running on, this simplifies hotplug, @@ -2777,8 +2779,15 @@ static int affine_move_task(struct rq *rq, struct ta= sk_struct *p, struct rq_flag struct set_affinity_pending my_pending =3D { }, *pending =3D NULL; bool stop_pending, complete =3D false; =20 - /* Can the task run on the task's current CPU? If so, we're done */ - if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) { + /* + * Can the task run on the task's current CPU? If so, we're done + * + * We are also done if the task is currently acting as proxy (and + * potentially has been migrated outside its current or previous + * affinity mask) + */ + if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) || + (task_current_selected(rq, p) && !task_current(rq, p))) { struct task_struct *push_task =3D NULL; =20 if ((flags & SCA_MIGRATE_ENABLE) && @@ -3690,6 +3699,72 @@ static inline void ttwu_do_wakeup(struct task_struct= *p) trace_sched_wakeup(p); } =20 +#ifdef CONFIG_PROXY_EXEC +static void activate_task_and_blocked_ent(struct rq *rq, struct task_struc= t *p, int en_flags) +{ + /* + * XXX connoro: By calling activate_task with blocked_lock held, we order= against + * the proxy() blocked_task case such that no more blocked tasks will + * be enqueued on p once we release p->blocked_lock. + */ + raw_spin_lock(&p->blocked_lock); + /* + * XXX connoro: do we need to check p->on_rq here like we do for pp below? + * or does holding p->pi_lock ensure nobody else activates p first? + */ + activate_task(rq, p, en_flags); + raw_spin_unlock(&p->blocked_lock); + + /* + * A whole bunch of 'proxy' tasks back this blocked task, wake + * them all up to give this task its 'fair' share. + */ + while (!list_empty(&p->blocked_entry)) { + struct task_struct *pp =3D + list_first_entry(&p->blocked_entry, + struct task_struct, + blocked_entry); + /* + * XXX connoro: proxy blocked_task case might be enqueuing more blocked = tasks + * on pp. If those continue past when we delete pp from the list, we'll = get an + * active with a non-empty blocked_entry list, which is no good. Locking + * pp->blocked_lock ensures either the blocked_task path gets the lock f= irst and + * enqueues everything before we ever get the lock, or we get the lock f= irst, the + * other path sees pp->on_rq !=3D 0 and enqueues nothing. + */ + raw_spin_lock(&pp->blocked_lock); + BUG_ON(pp->blocked_entry.prev !=3D &p->blocked_entry); + + list_del_init(&pp->blocked_entry); + if (READ_ONCE(pp->on_rq)) { + /* + * XXX connoro: We raced with a non mutex handoff activation of pp. That + * activation will also take care of activating all of the tasks after = pp in + * the blocked_entry list, so we're done here. + */ + raw_spin_unlock(&pp->blocked_lock); + break; + } + /* XXX can't call set_task_cpu() because we are not holding + * neither pp->pi_lock nor task's rq lock. This should however + * be fine as this task can't be woken up as it is blocked on + * this mutex atm. 
+ * A problem however might be that __set_task_cpu() calls + * set_task_rq() which deals with groups and such... + */ + __set_task_cpu(pp, cpu_of(rq)); + activate_task(rq, pp, en_flags); + resched_curr(rq); + raw_spin_unlock(&pp->blocked_lock); + } +} +#else +static inline void activate_task_and_blocked_ent(struct rq *rq, struct tas= k_struct *p, int en_flags) +{ + activate_task(rq, p, en_flags); +} +#endif + static void ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags, struct rq_flags *rf) @@ -3711,7 +3786,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p= , int wake_flags, atomic_dec(&task_rq(p)->nr_iowait); } =20 - activate_task(rq, p, en_flags); + activate_task_and_blocked_ent(rq, p, en_flags); + check_preempt_curr(rq, p, wake_flags); =20 ttwu_do_wakeup(p); @@ -3744,6 +3820,95 @@ ttwu_do_activate(struct rq *rq, struct task_struct *= p, int wake_flags, #endif } =20 +#ifdef CONFIG_PROXY_EXEC +/* XXX jstultz: This needs a better name! */ +bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p) +{ + /* + * XXX connoro: wrap this case with #ifdef CONFIG_PROXY_EXEC? + */ + if (task_current(rq, p)) { + bool ret =3D true; + /* + * XXX connoro: p is currently running. 3 cases are possible: + * 1) p is blocked on a lock it owns, but we got the rq lock before + * it could schedule out. Kill blocked_on relation and call + * ttwu_do_wakeup + * 2) p is blocked on a lock it does not own. Leave blocked_on + * unchanged, don't call ttwu_do_wakeup, and return 0. + * 3) p is unblocked, but unless we hold onto blocked_lock while + * calling ttwu_do_wakeup, we could race with it becoming + * blocked and overwrite the correct p->__state with TASK_RUNNING. + */ + raw_spin_lock(&p->blocked_lock); + if (task_is_blocked(p) && mutex_owner(p->blocked_on) =3D=3D p) + set_task_blocked_on(p, NULL); + if (!task_is_blocked(p)) + ret =3D false; + raw_spin_unlock(&p->blocked_lock); + return ret; + } + + /* + * Since we don't dequeue for blocked-on relations, we'll always + * trigger the on_rq_queued() clause for them. + */ + if (task_is_blocked(p)) { + raw_spin_lock(&p->blocked_lock); + + if (mutex_owner(p->blocked_on) !=3D p) { + /* + * XXX connoro: p already woke, ran and blocked on + * another mutex. Since a successful wakeup already + * happened, we're done. + */ + raw_spin_unlock(&p->blocked_lock); + return true; + } + + set_task_blocked_on(p, NULL); + if (!cpumask_test_cpu(cpu_of(rq), p->cpus_ptr)) { + /* + * proxy stuff moved us outside of the affinity mask + * 'sleep' now and fail the direct wakeup so that the + * normal wakeup path will fix things. + */ + deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK); + if (task_current_selected(rq, p)) { + /* + * XXX connoro: If p is the proxy, then remove lingering + * references to it from rq and sched_class structs after + * dequeueing. + * can we get here while rq is inside __schedule? + * do any assumptions break if so? + */ + put_prev_task(rq, p); + rq_set_selected(rq, rq->idle); + } + resched_curr(rq); + raw_spin_unlock(&p->blocked_lock); + return true; + } + /* connoro: perhaps deq/enq here to get our task into the pushable task + * list again now that it's unblocked? Does that break if we're the prox= y or + * does holding the rq lock make that OK? + */ + /* + * Must resched after killing a blocked_on relation. The currently + * executing context might not be the most elegible anymore. 
+ */ + resched_curr(rq); + raw_spin_unlock(&p->blocked_lock); + } + return false; +} +#else +static inline bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struc= t *p) +{ + return false; +} +#endif + /* * Consider @p being inside a wait loop: * @@ -3776,9 +3941,15 @@ static int ttwu_runnable(struct task_struct *p, int = wake_flags) int ret =3D 0; =20 rq =3D __task_rq_lock(p, &rf); - if (!task_on_rq_queued(p)) + if (!task_on_rq_queued(p)) { + BUG_ON(task_is_running(p)); goto out_unlock; + } =20 + /* + * ttwu_do_wakeup()-> + * check_preempt_curr() may use rq clock + */ if (!task_on_cpu(rq, p)) { /* * When on_rq && !on_cpu the task is preempted, see if @@ -3787,8 +3958,14 @@ static int ttwu_runnable(struct task_struct *p, int = wake_flags) update_rq_clock(rq); check_preempt_curr(rq, p, wake_flags); } + + /* XXX jstultz: This needs a better name! */ + if (ttwu_proxy_skip_wakeup(rq, p)) + goto out_unlock; + ttwu_do_wakeup(p); ret =3D 1; + out_unlock: __task_rq_unlock(rq, &rf); =20 @@ -4196,6 +4373,23 @@ try_to_wake_up(struct task_struct *p, unsigned int s= tate, int wake_flags) if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags)) goto unlock; =20 + if (task_is_blocked(p)) { + /* + * XXX connoro: we are in one of 2 cases: + * 1) p is blocked on a mutex it doesn't own but is still + * enqueued on a rq. We definitely don't want to keep going + * (and potentially activate it elsewhere without ever + * dequeueing) but maybe this is more properly handled by + * having ttwu_runnable() return 1 in this case? + * 2) p was removed from its rq and added to a blocked_entry + * list by proxy(). It should not be woken until the task at + * the head of the list gets a mutex handoff wakeup. + * Should try_to_wake_up() return 1 in either of these cases? + */ + success =3D 0; + goto unlock; + } + #ifdef CONFIG_SMP /* * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be @@ -5584,6 +5778,18 @@ void scheduler_tick(void) =20 rq_lock(rq, &rf); =20 +#ifdef CONFIG_PROXY_EXEC + /* + * XXX connoro: is this check needed? Why? + */ + if (task_cpu(curr) !=3D cpu) { + BUG_ON(!test_preempt_need_resched() && + !tif_need_resched()); + rq_unlock(rq, &rf); + return; + } +#endif + update_rq_clock(rq); thermal_pressure =3D arch_scale_thermal_pressure(cpu_of(rq)); update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure); @@ -6476,6 +6682,404 @@ pick_next_task(struct rq *rq, struct task_struct *p= rev, struct rq_flags *rf) # define SM_MASK_PREEMPT SM_PREEMPT #endif =20 +#ifdef CONFIG_PROXY_EXEC + +static struct task_struct * +proxy_migrate_task(struct rq *rq, struct task_struct *next, + struct rq_flags *rf, struct task_struct *p, + int that_cpu, bool curr_in_chain) +{ + struct rq *that_rq; + LIST_HEAD(migrate_list); + + /* + * The blocked-on relation must not cross CPUs, if this happens + * migrate @p to the @owner's CPU. + * + * This is because we must respect the CPU affinity of execution + * contexts (@owner) but we can ignore affinity for scheduling + * contexts (@p). So we have to move scheduling contexts towards + * potential execution contexts. + * + * XXX [juril] what if @p is not the highest prio task once migrated + * to @owner's CPU? + * + * XXX [juril] also, after @p is migrated it is not migrated back once + * @owner releases the lock? Isn't this a potential problem w.r.t. + * @owner affinity settings? + * [juril] OK. It is migrated back into its affinity mask in + * ttwu_remote(), or by using wake_cpu via select_task_rq, guess we + * might want to add a comment about that here. 
:-) + * + * TODO: could optimize by finding the CPU of the final owner + * and migrating things there. Given: + * + * CPU0 CPU1 CPU2 + * + * a ----> b ----> c + * + * the current scheme would result in migrating 'a' to CPU1, + * then CPU1 would migrate b and a to CPU2. Only then would + * CPU2 run c. + */ + that_rq =3D cpu_rq(that_cpu); + + /* + * @owner can disappear, simply migrate to @that_cpu and leave that CPU + * to sort things out. + */ + + /* + * Since we're going to drop @rq, we have to put(@next) first, + * otherwise we have a reference that no longer belongs to us. Use + * @fake_task to fill the void and make the next pick_next_task() + * invocation happy. + * + * XXX double, triple think about this. + * XXX put doesn't work with ON_RQ_MIGRATE + * + * CPU0 CPU1 + * + * B mutex_lock(X) + * + * A mutex_lock(X) <- B + * A __schedule() + * A pick->A + * A proxy->B + * A migrate A to CPU1 + * B mutex_unlock(X) -> A + * B __schedule() + * B pick->A + * B switch_to (A) + * A ... does stuff + * A ... is still running here + * + * * BOOM * + */ + put_prev_task(rq, next); + if (curr_in_chain) { + rq_set_selected(rq, rq->idle); + set_tsk_need_resched(rq->idle); + /* + * XXX [juril] don't we still need to migrate @next to + * @owner's CPU? + */ + return rq->idle; + } + rq_set_selected(rq, rq->idle); + + for (; p; p =3D p->blocked_proxy) { + int wake_cpu =3D p->wake_cpu; + + WARN_ON(p =3D=3D rq_curr(rq)); + + deactivate_task(rq, p, 0); + set_task_cpu(p, that_cpu); + /* + * We can abuse blocked_entry to migrate the thing, because @p is + * still on the rq. + */ + list_add(&p->blocked_entry, &migrate_list); + + /* + * Preserve p->wake_cpu, such that we can tell where it + * used to run later. + */ + p->wake_cpu =3D wake_cpu; + } + + /* + * XXX jstultz: Try to ensure we handle balance callbacks + * before releasing the rq lock - needs review + */ + if (rq->balance_callback) + __balance_callbacks(rq); + + rq_unpin_lock(rq, rf); + raw_spin_rq_unlock(rq); + raw_spin_rq_lock(that_rq); + + while (!list_empty(&migrate_list)) { + p =3D list_first_entry(&migrate_list, struct task_struct, blocked_entry); + list_del_init(&p->blocked_entry); + + enqueue_task(that_rq, p, 0); + check_preempt_curr(that_rq, p, 0); + p->on_rq =3D TASK_ON_RQ_QUEUED; + /* + * check_preempt_curr has already called + * resched_curr(that_rq) in case it is + * needed. + */ + } + + raw_spin_rq_unlock(that_rq); + raw_spin_rq_lock(rq); + rq_repin_lock(rq, rf); + + return NULL; /* Retry task selection on _this_ CPU. */ +} + +static inline struct task_struct * +proxy_resched_idle(struct rq *rq, struct task_struct *next) +{ + put_prev_task(rq, next); + rq_set_selected(rq, rq->idle); + set_tsk_need_resched(rq->idle); + return rq->idle; +} + +static void proxy_enqueue_on_owner(struct rq *rq, struct task_struct *p, + struct task_struct *owner, + struct task_struct *next) +{ + /* + * Walk back up the blocked_proxy relation and enqueue them all on @owner + * + * ttwu_activate() will pick them up and place them on whatever rq + * @owner will run next. + * XXX connoro: originally we would jump back into the main proxy() loop + * owner->on_rq !=3D0 path, but if we then end up taking the owned_task p= ath + * then we can overwrite p->on_rq after ttwu_do_activate sets it to 1 whi= ch breaks + * the assumptions made in ttwu_do_activate. + * + * Perhaps revisit whether retry is now possible given the changes to the + * owned_task path since I wrote the prior comment... + */ + if (!owner->on_rq) { + /* jstultz: Err, do we need to hold a lock on p? 
(we gave it up for owne= r) */ + for (; p; p =3D p->blocked_proxy) { + if (p =3D=3D owner) + continue; + BUG_ON(!p->on_rq); + deactivate_task(rq, p, DEQUEUE_SLEEP); + if (task_current_selected(rq, p)) { + put_prev_task(rq, next); + rq_set_selected(rq, rq->idle); + } + /* + * XXX connoro: need to verify this is necessary. The rationale is that + * ttwu_do_activate must not have a chance to activate p elsewhere befo= re + * it's fully extricated from its old rq. + */ + smp_mb(); + list_add(&p->blocked_entry, &owner->blocked_entry); + } + } +} + +/* + * Find who @next (currently blocked on a mutex) can proxy for. + * + * Follow the blocked-on relation: + * + * ,-> task + * | | blocked-on + * | v + * blocked_proxy | mutex + * | | owner + * | v + * `-- task + * + * and set the blocked_proxy relation, this latter is used by the mutex + * code to find which (blocked) task to hand-off to. + * + * Lock order: + * + * p->pi_lock + * rq->lock + * mutex->wait_lock + * p->blocked_lock + * + * Returns the task that is going to be used as execution context (the one + * that is actually going to be put to run on cpu_of(rq)). + */ +static struct task_struct * +proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf) +{ + struct task_struct *p =3D next; + struct task_struct *owner =3D NULL; + bool curr_in_chain =3D false; + int this_cpu, that_cpu; + struct mutex *mutex; + + this_cpu =3D cpu_of(rq); + + /* + * Follow blocked_on chain. + * + * TODO: deadlock detection + */ + for (p =3D next; p->blocked_on; p =3D owner) { + mutex =3D p->blocked_on; + /* Something changed in the chain, pick_again */ + if (!mutex) + return NULL; + + /* + * By taking mutex->wait_lock we hold off concurrent mutex_unlock() + * and ensure @owner sticks around. + */ + raw_spin_lock(&mutex->wait_lock); + raw_spin_lock(&p->blocked_lock); + + /* Check again that p is blocked with blocked_lock held */ + if (!task_is_blocked(p) || mutex !=3D p->blocked_on) { + /* + * Something changed in the blocked_on chain and + * we don't know if only at this level. So, let's + * just bail out completely and let __schedule + * figure things out (pick_again loop). + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + return NULL; + } + + if (task_current(rq, p)) + curr_in_chain =3D true; + + owner =3D mutex_owner(mutex); + if (task_cpu(owner) !=3D this_cpu) { + that_cpu =3D task_cpu(owner); + /* + * @owner can disappear, simply migrate to @that_cpu and leave that CPU + * to sort things out. + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + + return proxy_migrate_task(rq, next, rf, p, that_cpu, curr_in_chain); + } + + if (task_on_rq_migrating(owner)) { + /* + * XXX connoro: one of the chain of mutex owners is currently + * migrating to this CPU, but has not yet been enqueued because + * we are holding the rq lock. As a simple solution, just schedule + * rq->idle to give the migration a chance to complete. Much like + * the migrate_task case we should end up back in proxy(), this + * time hopefully with all relevant tasks already enqueued. + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + return proxy_resched_idle(rq, next); + } + + if (!owner->on_rq) { + /* + * XXX connoro: rq->curr must not be added to the blocked_entry list + * or else ttwu_do_activate could enqueue it elsewhere before it + * switches out here. The approach to avoiding this is the same as in + * the migrate_task case. 
+ */ + if (curr_in_chain) { + /* + * This is identical to the owned_task handling, probably should + * fold them together... + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + return proxy_resched_idle(rq, next); + } + + /* + * If !@owner->on_rq, holding @rq->lock will not pin the task, + * so we cannot drop @mutex->wait_lock until we're sure its a blocked + * task on this rq. + * + * We use @owner->blocked_lock to serialize against ttwu_activate(). + * Either we see its new owner->on_rq or it will see our list_add(). + */ + if (owner !=3D p) { + raw_spin_unlock(&p->blocked_lock); + raw_spin_lock(&owner->blocked_lock); + } + + proxy_enqueue_on_owner(rq, p, owner, next); + + if (task_current_selected(rq, next)) { + put_prev_task(rq, next); + rq_set_selected(rq, rq->idle); + } + raw_spin_unlock(&owner->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + + return NULL; /* retry task selection */ + } + + if (owner =3D=3D p) { + /* + * Its possible we interleave with mutex_unlock like: + * + * lock(&rq->lock); + * proxy() + * mutex_unlock() + * lock(&wait_lock); + * next(owner) =3D current->blocked_proxy; + * unlock(&wait_lock); + * + * wake_up_q(); + * ... + * ttwu_runnable() + * __task_rq_lock() + * lock(&wait_lock); + * owner =3D=3D p + * + * Which leaves us to finish the ttwu_runnable() and make it go. + * + * XXX is this happening in case of an HANDOFF to p? + * In any case, reading of the owner in __mutex_unlock_slowpath is + * done atomically outside wait_lock (only adding waiters to wake_q is + * done inside the critical section). + * Does this means we can get to proxy _w/o an owner_ if that was + * cleared before grabbing wait_lock? Do we account for this case? + * OK we actually do (see PROXY_EXEC ifdeffery in unlock function). + */ + + /* + * XXX connoro: prior versions would clear p->blocked_on here, but I th= ink + * that can race with the handoff wakeup path. If a wakeup reaches the + * call to ttwu_runnable after this point and finds that p is enqueued + * and marked as unblocked, it will happily do a ttwu_do_wakeup() call + * with zero regard for whether the task's affinity actually allows + * running it on this CPU. + */ + + /* + * XXX connoro: previous versions would immediately run owner here if + * it's allowed to run on this CPU, but this creates potential races + * with the wakeup logic. Instead we can just take the blocked_task path + * when owner is already !on_rq, or else schedule rq->idle so that + * ttwu_runnable can get the rq lock and mark owner as running. + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + return proxy_resched_idle(rq, next); + } + + /* + * OK, now we're absolutely sure @owner is not blocked _and_ + * on this rq, therefore holding @rq->lock is sufficient to + * guarantee its existence, as per ttwu_remote(). + */ + raw_spin_unlock(&p->blocked_lock); + raw_spin_unlock(&mutex->wait_lock); + + owner->blocked_proxy =3D p; + } + + WARN_ON_ONCE(!owner->on_rq); + return owner; +} +#else /* PROXY_EXEC */ +static struct task_struct * +proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf) +{ + return next; +} +#endif /* PROXY_EXEC */ + /* * __schedule() is the main scheduler function. 
* @@ -6523,6 +7127,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) struct rq_flags rf; struct rq *rq; int cpu; + bool preserve_need_resched =3D false; =20 cpu =3D smp_processor_id(); rq =3D cpu_rq(cpu); @@ -6568,7 +7173,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) { if (signal_pending_state(prev_state, prev)) { WRITE_ONCE(prev->__state, TASK_RUNNING); - } else { + } else if (!task_is_blocked(prev)) { prev->sched_contributes_to_load =3D (prev_state & TASK_UNINTERRUPTIBLE) && !(prev_state & TASK_NOLOAD) && @@ -6594,13 +7199,56 @@ static void __sched notrace __schedule(unsigned int= sched_mode) atomic_inc(&rq->nr_iowait); delayacct_blkio_start(); } + } else { + /* + * XXX + * Let's make this task, which is blocked on + * a mutex, (push/pull)able (RT/DL). + * Unfortunately we can only deal with that by + * means of a dequeue/enqueue cycle. :-/ + */ + dequeue_task(rq, prev, 0); + enqueue_task(rq, prev, 0); } switch_count =3D &prev->nvcsw; } =20 - next =3D pick_next_task(rq, prev, &rf); +pick_again: + /* + * If picked task is actually blocked it means that it can act as a + * proxy for the task that is holding the mutex picked task is blocked + * on. Get a reference to the blocked (going to be proxy) task here. + * Note that if next isn't actually blocked we will have rq->proxy =3D=3D + * rq->curr =3D=3D next in the end, which is intended and means that proxy + * execution is currently "not in use". + */ + next =3D pick_next_task(rq, rq_selected(rq), &rf); rq_set_selected(rq, next); - clear_tsk_need_resched(prev); + next->blocked_proxy =3D NULL; + if (unlikely(task_is_blocked(next))) { + next =3D proxy(rq, next, &rf); + if (!next) { + /* In pick_next_task() we a balance callback + * may have been queued, so call it here + * to clear the callbacks to avoid warnings + * in rq_pin_lock + */ + __balance_callbacks(rq); + goto pick_again; + } + /* + * XXX connoro: when proxy() returns rq->idle it sets the + * TIF_NEED_RESCHED flag, but in the case where + * rq->idle =3D=3D rq->prev, the flag would be cleared immediately, + * defeating the desired behavior. So, check explicitly for + * this case. + */ + if (next =3D=3D rq->idle && prev =3D=3D rq->idle) + preserve_need_resched =3D true; + } + + if (!preserve_need_resched) + clear_tsk_need_resched(prev); clear_preempt_need_resched(); #ifdef CONFIG_SCHED_DEBUG rq->last_seen_need_resched_ns =3D 0; @@ -6687,6 +7335,10 @@ static inline void sched_submit_work(struct task_str= uct *tsk) */ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT); =20 + /* XXX still necessary? tsk_is_pi_blocked check here was deleted... */ + if (task_is_blocked(tsk)) + return; + /* * If we are going to sleep and we have plugged IO queued, * make sure to submit it to avoid deadlocks. 
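To restate the control flow just added to __schedule() before moving on to the per-class changes: pick_next_task() now returns a scheduling context (stored via rq_set_selected()), and when that task is blocked on a mutex, proxy() walks the blocked_on chain to find the owner that will actually run, returning NULL (retry the pick) or scheduling rq->idle when the chain cannot be resolved on this CPU. The following user-space sketch shows only the happy-path chain walk, with hypothetical toy_task/toy_mutex types standing in for the real structures and all the hard cases (cross-CPU owners, migrating or not-on-rq owners, unlock races) omitted.

/*
 * User-space sketch of the core selection idea, not kernel code.
 * find_exec_ctx() loosely corresponds to the simple case of proxy().
 */
#include <stdio.h>

struct toy_mutex;

struct toy_task {
	const char *name;
	struct toy_mutex *blocked_on;	/* lock we're blocked on, if any */
};

struct toy_mutex {
	struct toy_task *owner;
};

/*
 * 'next' is what pick_next_task() chose (the scheduling context). If it
 * is blocked, follow blocked_on -> owner until we reach a task that can
 * actually run; that task becomes the execution context.
 */
static struct toy_task *find_exec_ctx(struct toy_task *next)
{
	struct toy_task *p = next;

	while (p->blocked_on)
		p = p->blocked_on->owner;
	return p;
}

int main(void)
{
	struct toy_task a = { "A", NULL };	/* owns m1, runnable */
	struct toy_task b = { "B", NULL };	/* owns m2, blocked on m1 */
	struct toy_task c = { "C", NULL };	/* blocked on m2, picked by the scheduler */
	struct toy_mutex m1 = { &a }, m2 = { &b };

	b.blocked_on = &m1;
	c.blocked_on = &m2;

	/* prints: selected: C, runs: A */
	printf("selected: %s, runs: %s\n", c.name, find_exec_ctx(&c)->name);
	return 0;
}

The cases omitted here are exactly why the real proxy() needs the proxy_migrate_task(), proxy_enqueue_on_owner() and proxy_resched_idle() branches shown earlier.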
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 63a0564cb1f8..c47a75cd057f 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1740,7 +1740,7 @@ static void enqueue_task_dl(struct rq *rq, struct tas= k_struct *p, int flags) =20 enqueue_dl_entity(&p->dl, flags); =20 - if (!task_current(rq, p) && p->nr_cpus_allowed > 1) + if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && !task_is_blocked(p)) enqueue_pushable_dl_task(rq, p); } =20 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 3f7df45f7402..748a912c2122 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -7962,7 +7962,9 @@ pick_next_task_fair(struct rq *rq, struct task_struct= *prev, struct rq_flags *rf goto idle; =20 #ifdef CONFIG_FAIR_GROUP_SCHED - if (!prev || prev->sched_class !=3D &fair_sched_class) + if (!prev || + prev->sched_class !=3D &fair_sched_class || + rq_curr(rq) !=3D rq_selected(rq)) goto simple; =20 /* @@ -8480,6 +8482,9 @@ int can_migrate_task(struct task_struct *p, struct lb= _env *env) =20 lockdep_assert_rq_held(env->src_rq); =20 + if (task_is_blocked(p)) + return 0; + /* * We do not migrate tasks that are: * 1) throttled_lb_pair, or @@ -8530,7 +8535,11 @@ int can_migrate_task(struct task_struct *p, struct l= b_env *env) /* Record that we found at least one task that could run on dst_cpu */ env->flags &=3D ~LBF_ALL_PINNED; =20 - if (task_on_cpu(env->src_rq, p)) { + /* + * XXX mutex unlock path may have marked proxy as unblocked allowing us to + * reach this point, but we still shouldn't migrate it. + */ + if (task_on_cpu(env->src_rq, p) || task_current_selected(env->src_rq, p))= { schedstat_inc(p->stats.nr_failed_migrations_running); return 0; } diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 44139d56466e..5ce48eb8f5b6 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -1537,7 +1537,8 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p,= int flags) =20 enqueue_rt_entity(rt_se, flags); =20 - if (!task_current(rq, p) && p->nr_cpus_allowed > 1) + if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && + !task_is_blocked(p)) enqueue_pushable_task(rq, p); } =20 diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 70cb55ad025d..8330d22b286f 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2155,6 +2155,19 @@ static inline int task_current_selected(struct rq *r= q, struct task_struct *p) return rq_selected(rq) =3D=3D p; } =20 +#ifdef CONFIG_PROXY_EXEC +static inline bool task_is_blocked(struct task_struct *p) +{ + return !!p->blocked_on; +} +#else /* !PROXY_EXEC */ +static inline bool task_is_blocked(struct task_struct *p) +{ + return false; +} + +#endif /* PROXY_EXEC */ + static inline int task_on_cpu(struct rq *rq, struct task_struct *p) { #ifdef CONFIG_SMP @@ -2312,12 +2325,18 @@ struct sched_class { =20 static inline void put_prev_task(struct rq *rq, struct task_struct *prev) { - WARN_ON_ONCE(rq_selected(rq) !=3D prev); + WARN_ON_ONCE(rq_curr(rq) !=3D prev && prev !=3D rq_selected(rq)); + + /* XXX connoro: is this check necessary? 
*/ + if (prev =3D=3D rq_selected(rq) && task_cpu(prev) !=3D cpu_of(rq)) + return; + prev->sched_class->put_prev_task(rq, prev); } =20 static inline void set_next_task(struct rq *rq, struct task_struct *next) { + WARN_ON_ONCE(!task_current_selected(rq, next)); next->sched_class->set_next_task(rq, next, false); } =20 --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Date: Tue, 11 Apr 2023 04:25:09 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-13-jstultz@google.com> Subject: [PATCH v3
12/14] sched/rt: Fix proxy/current (push,pull)ability From: John Stultz To: LKML Cc: Valentin Schneider , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Valentin Schneider Proxy execution forms atomic pairs of tasks: a proxy (scheduling context) and an owner (execution context). The proxy, along with the rest of the blocked chain, follows the owner wrt CPU placement. They can be the same task, in which case push/pull doesn't need any modification. When they are different, however, FIFO1 & FIFO42: ,-> RT42 | | blocked-on | v blocked_proxy | mutex | | owner | v `-- RT1 RT1 RT42 CPU0 CPU1 ^ ^ | | overloaded !overloaded rq prio =3D 42 rq prio =3D 0 RT1 is eligible to be pushed to CPU1, but should that happen it will "carry" RT42 along. Clearly here neither RT1 nor RT42 must be seen as push/pullable. Furthermore, tasks becoming blocked on a mutex don't need an explicit dequeue/enqueue cycle to be made (push/pull)able: they have to be running to block on a mutex, thus they will eventually hit put_prev_task(). XXX: pinned tasks becoming unblocked should be removed from the push/pull lists, but those don't get to see __schedule() straight away. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: Valentin Schneider Signed-off-by: Connor O'Brien Signed-off-by: John Stultz --- v3: * Tweaked comments & commit message TODO: Rework the wording of the commit message to match the rq_selected renaming. (XXX Maybe "Delegator" for the task being proxied for?) --- kernel/sched/core.c | 37 +++++++++++++++++++++++++++---------- kernel/sched/rt.c | 22 +++++++++++++++++----- 2 files changed, 44 insertions(+), 15 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 1d92f1a304b8..033856bae002 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7072,12 +7072,29 @@ proxy(struct rq *rq, struct task_struct *next, stru= ct rq_flags *rf) WARN_ON_ONCE(!owner->on_rq); return owner; } + +static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next) +{ + /* + * pick_next_task() calls set_next_task() on the selected task + * at some point, which ensures it is not push/pullable. + * However, the selected task *and* the ,mutex owner form an + * atomic pair wrt push/pull. + * + * Make sure owner is not pushable. Unfortunately we can only + * deal with that by means of a dequeue/enqueue cycle. 
:-/ + */ + dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE); + enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE); +} #else /* PROXY_EXEC */ static struct task_struct * proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf) { return next; } + +static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)= { } #endif /* PROXY_EXEC */ =20 /* @@ -7126,6 +7143,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) unsigned long prev_state; struct rq_flags rf; struct rq *rq; + bool proxied; int cpu; bool preserve_need_resched =3D false; =20 @@ -7199,20 +7217,11 @@ static void __sched notrace __schedule(unsigned int= sched_mode) atomic_inc(&rq->nr_iowait); delayacct_blkio_start(); } - } else { - /* - * XXX - * Let's make this task, which is blocked on - * a mutex, (push/pull)able (RT/DL). - * Unfortunately we can only deal with that by - * means of a dequeue/enqueue cycle. :-/ - */ - dequeue_task(rq, prev, 0); - enqueue_task(rq, prev, 0); } switch_count =3D &prev->nvcsw; } =20 + proxied =3D !!prev->blocked_proxy; pick_again: /* * If picked task is actually blocked it means that it can act as a @@ -7261,6 +7270,10 @@ static void __sched notrace __schedule(unsigned int = sched_mode) * changes to task_struct made by pick_next_task(). */ rq_set_curr_rcu_init(rq, next); + + if (unlikely(!task_current_proxy(rq, next))) + proxy_tag_curr(rq, next); + /* * The membarrier system call requires each architecture * to have a full memory barrier after updating @@ -7285,6 +7298,10 @@ static void __sched notrace __schedule(unsigned int = sched_mode) /* Also unlocks the rq: */ rq =3D context_switch(rq, prev, next, &rf); } else { + /* In case next was already curr but just got blocked_proxy */ + if (unlikely(!proxied && next->blocked_proxy)) + proxy_tag_curr(rq, next); + rq->clock_update_flags &=3D ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP); =20 rq_unpin_lock(rq, &rf); diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 5ce48eb8f5b6..af92e4147703 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -1537,9 +1537,21 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p= , int flags) =20 enqueue_rt_entity(rt_se, flags); =20 - if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && - !task_is_blocked(p)) - enqueue_pushable_task(rq, p); + /* + * Current can't be pushed away. Proxy is tied to current, so don't + * push it either. + */ + if (task_current(rq, p) || task_current_proxy(rq, p)) + return; + + /* + * Pinned tasks can't be pushed. + * Affinity of blocked tasks doesn't matter. + */ + if (!task_is_blocked(p) && p->nr_cpus_allowed =3D=3D 1) + return; + + enqueue_pushable_task(rq, p); } =20 static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flag= s) @@ -1832,9 +1844,9 @@ static void put_prev_task_rt(struct rq *rq, struct ta= sk_struct *p) =20 /* * The previous task needs to be made eligible for pushing - * if it is still active + * if it is still active. Affinity of blocked task doesn't matter. 
*/ - if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1) + if (on_rt_rq(&p->rt) && (p->nr_cpus_allowed > 1 || task_is_blocked(p))) enqueue_pushable_task(rq, p); } =20 --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Date: Tue, 11 Apr 2023 04:25:10 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-14-jstultz@google.com> Subject: [PATCH v3 13/14] sched: Attempt to fix rt/dl load balancing via chain level balance From: John Stultz To: LKML Cc: "Connor O'Brien" , Joel Fernandes , Qais Yousef , Ingo
Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Connor O'Brien

RT/DL balancing is supposed to guarantee that with N cpus available & CPU affinity permitting, the top N RT/DL tasks will get spread across the CPUs and all get to run. Proxy exec greatly complicates this as blocked tasks remain on the rq but cannot be usefully migrated away from their lock owning tasks. This has two major consequences:

1. In order to get the desired properties we need to migrate a blocked task, its would-be proxy, and everything in between, all together - i.e., we need to push/pull "blocked chains" rather than individual tasks.

2. Tasks that are part of rq->curr's "blocked tree" therefore should not be pushed or pulled. Options for enforcing this seem to include:

a) Create some sort of complex data structure for tracking pushability, updating it whenever the blocked tree for rq->curr changes (e.g. on mutex handoffs, migrations, etc.) as well as on context switches.

b) Give up on O(1) pushability checks, and search through the pushable list every push/pull until we find a pushable "chain".

c) Extend option "b" with some sort of caching to avoid repeated work.

For the sake of simplicity & separating the "chain level balancing" concerns from complicated optimizations, this patch focuses on trying to implement option "b" correctly. This can then hopefully provide a baseline for "correct load balancing behavior" that optimizations can try to implement more efficiently.

Note: The inability to atomically check "is task enqueued on a specific rq" creates 2 possible races when following a blocked chain:

- If we check task_rq() first on a task that is dequeued from its rq, it can be woken and enqueued on another rq before the call to task_on_rq_queued().

- If we call task_on_rq_queued() first on a task that is on another rq, it can be dequeued (since we don't hold its rq's lock) and then be set to the current rq before we check task_rq().

Maybe there's a more elegant solution that would work, but for now, just sandwich the task_rq() check between two task_on_rq_queued() checks, all separated by smp_rmb() calls. Since we hold rq's lock, task can't be enqueued or dequeued from rq, so neither race should be possible. Extensive comments on various pitfalls, races, etc. are included inline.

TODO: Probably no good reason not to move the new helper implementations from sched.h into core.c

Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E .
McKenney" Signed-off-by: Connor O'Brien [jstultz: rebased & sorted minor conflicts, folded down numerous fixes from Connor, fixed number of checkpatch issues] Signed-off-by: John Stultz --- v3: * Fix crash by checking find_exec_ctx return for NULL before using it --- kernel/sched/core.c | 10 +- kernel/sched/cpudeadline.c | 12 +-- kernel/sched/cpudeadline.h | 3 +- kernel/sched/cpupri.c | 29 ++++-- kernel/sched/cpupri.h | 6 +- kernel/sched/deadline.c | 147 +++++++++++++++++--------- kernel/sched/fair.c | 5 + kernel/sched/rt.c | 204 ++++++++++++++++++++++++++----------- kernel/sched/sched.h | 149 ++++++++++++++++++++++++++- 9 files changed, 434 insertions(+), 131 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 033856bae002..653348263d42 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2495,6 +2495,10 @@ static int migration_cpu_stop(void *data) =20 int push_cpu_stop(void *arg) { + /* XXX connoro: how do we handle this case when the rq->curr we push away + * is part of a proxy chain!? + * we actually push the old rq->proxy and its blocker chain. + */ struct rq *lowest_rq =3D NULL, *rq =3D this_rq(); struct task_struct *p =3D arg; =20 @@ -2519,9 +2523,7 @@ int push_cpu_stop(void *arg) =20 // XXX validate p is still the highest prio task if (task_rq(p) =3D=3D rq) { - deactivate_task(rq, p, 0); - set_task_cpu(p, lowest_rq->cpu); - activate_task(lowest_rq, p, 0); + push_task_chain(rq, lowest_rq, p); resched_curr(lowest_rq); } =20 @@ -7271,7 +7273,7 @@ static void __sched notrace __schedule(unsigned int s= ched_mode) */ rq_set_curr_rcu_init(rq, next); =20 - if (unlikely(!task_current_proxy(rq, next))) + if (unlikely(!task_current_selected(rq, next))) proxy_tag_curr(rq, next); =20 /* diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c index 57c92d751bcd..efd6d716a3f2 100644 --- a/kernel/sched/cpudeadline.c +++ b/kernel/sched/cpudeadline.c @@ -113,13 +113,13 @@ static inline int cpudl_maximum(struct cpudl *cp) * * Returns: int - CPUs were found */ -int cpudl_find(struct cpudl *cp, struct task_struct *p, +int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx, struct tas= k_struct *exec_ctx, struct cpumask *later_mask) { - const struct sched_dl_entity *dl_se =3D &p->dl; + const struct sched_dl_entity *dl_se =3D &sched_ctx->dl; =20 if (later_mask && - cpumask_and(later_mask, cp->free_cpus, &p->cpus_mask)) { + cpumask_and(later_mask, cp->free_cpus, &exec_ctx->cpus_mask)) { unsigned long cap, max_cap =3D 0; int cpu, max_cpu =3D -1; =20 @@ -128,13 +128,13 @@ int cpudl_find(struct cpudl *cp, struct task_struct *= p, =20 /* Ensure the capacity of the CPUs fits the task. 
*/ for_each_cpu(cpu, later_mask) { - if (!dl_task_fits_capacity(p, cpu)) { + if (!dl_task_fits_capacity(sched_ctx, cpu)) { cpumask_clear_cpu(cpu, later_mask); =20 cap =3D capacity_orig_of(cpu); =20 if (cap > max_cap || - (cpu =3D=3D task_cpu(p) && cap =3D=3D max_cap)) { + (cpu =3D=3D task_cpu(exec_ctx) && cap =3D=3D max_cap)) { max_cap =3D cap; max_cpu =3D cpu; } @@ -150,7 +150,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p, =20 WARN_ON(best_cpu !=3D -1 && !cpu_present(best_cpu)); =20 - if (cpumask_test_cpu(best_cpu, &p->cpus_mask) && + if (cpumask_test_cpu(best_cpu, &exec_ctx->cpus_mask) && dl_time_before(dl_se->deadline, cp->elements[0].dl)) { if (later_mask) cpumask_set_cpu(best_cpu, later_mask); diff --git a/kernel/sched/cpudeadline.h b/kernel/sched/cpudeadline.h index 0adeda93b5fb..6bb27f70e9d2 100644 --- a/kernel/sched/cpudeadline.h +++ b/kernel/sched/cpudeadline.h @@ -16,7 +16,8 @@ struct cpudl { }; =20 #ifdef CONFIG_SMP -int cpudl_find(struct cpudl *cp, struct task_struct *p, struct cpumask *l= ater_mask); +int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx, + struct task_struct *exec_ctx, struct cpumask *later_mask); void cpudl_set(struct cpudl *cp, int cpu, u64 dl); void cpudl_clear(struct cpudl *cp, int cpu); int cpudl_init(struct cpudl *cp); diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c index a286e726eb4b..285242b76597 100644 --- a/kernel/sched/cpupri.c +++ b/kernel/sched/cpupri.c @@ -64,6 +64,7 @@ static int convert_prio(int prio) return cpupri; } =20 +/* XXX connoro: the p passed in here should be exec ctx */ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p, struct cpumask *lowest_mask, int idx) { @@ -96,11 +97,15 @@ static inline int __cpupri_find(struct cpupri *cp, stru= ct task_struct *p, if (skip) return 0; =20 - if (cpumask_any_and(&p->cpus_mask, vec->mask) >=3D nr_cpu_ids) + if ((p && cpumask_any_and(&p->cpus_mask, vec->mask) >=3D nr_cpu_ids) || + (!p && cpumask_any(vec->mask) >=3D nr_cpu_ids)) return 0; =20 if (lowest_mask) { - cpumask_and(lowest_mask, &p->cpus_mask, vec->mask); + if (p) + cpumask_and(lowest_mask, &p->cpus_mask, vec->mask); + else + cpumask_copy(lowest_mask, vec->mask); =20 /* * We have to ensure that we have at least one bit @@ -117,10 +122,11 @@ static inline int __cpupri_find(struct cpupri *cp, st= ruct task_struct *p, return 1; } =20 -int cpupri_find(struct cpupri *cp, struct task_struct *p, +int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx, + struct task_struct *exec_ctx, struct cpumask *lowest_mask) { - return cpupri_find_fitness(cp, p, lowest_mask, NULL); + return cpupri_find_fitness(cp, sched_ctx, exec_ctx, lowest_mask, NULL); } =20 /** @@ -140,18 +146,19 @@ int cpupri_find(struct cpupri *cp, struct task_struct= *p, * * Return: (int)bool - CPUs were found */ -int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p, - struct cpumask *lowest_mask, - bool (*fitness_fn)(struct task_struct *p, int cpu)) +int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx, + struct task_struct *exec_ctx, + struct cpumask *lowest_mask, + bool (*fitness_fn)(struct task_struct *p, int cpu)) { - int task_pri =3D convert_prio(p->prio); + int task_pri =3D convert_prio(sched_ctx->prio); int idx, cpu; =20 WARN_ON_ONCE(task_pri >=3D CPUPRI_NR_PRIORITIES); =20 for (idx =3D 0; idx < task_pri; idx++) { =20 - if (!__cpupri_find(cp, p, lowest_mask, idx)) + if (!__cpupri_find(cp, exec_ctx, lowest_mask, idx)) continue; =20 if (!lowest_mask || !fitness_fn) @@ -159,7 +166,7 @@ 
int cpupri_find_fitness(struct cpupri *cp, struct task_= struct *p, =20 /* Ensure the capacity of the CPUs fit the task */ for_each_cpu(cpu, lowest_mask) { - if (!fitness_fn(p, cpu)) + if (!fitness_fn(sched_ctx, cpu)) cpumask_clear_cpu(cpu, lowest_mask); } =20 @@ -191,7 +198,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_= struct *p, * really care. */ if (fitness_fn) - return cpupri_find(cp, p, lowest_mask); + return cpupri_find(cp, sched_ctx, exec_ctx, lowest_mask); =20 return 0; } diff --git a/kernel/sched/cpupri.h b/kernel/sched/cpupri.h index d6cba0020064..bde7243cec2e 100644 --- a/kernel/sched/cpupri.h +++ b/kernel/sched/cpupri.h @@ -18,9 +18,11 @@ struct cpupri { }; =20 #ifdef CONFIG_SMP -int cpupri_find(struct cpupri *cp, struct task_struct *p, +int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx, + struct task_struct *exec_ctx, struct cpumask *lowest_mask); -int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p, +int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx, + struct task_struct *exec_ctx, struct cpumask *lowest_mask, bool (*fitness_fn)(struct task_struct *p, int cpu)); void cpupri_set(struct cpupri *cp, int cpu, int pri); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index c47a75cd057f..539d04310597 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1814,7 +1814,7 @@ static inline bool dl_task_is_earliest_deadline(struc= t task_struct *p, rq->dl.earliest_dl.curr)); } =20 -static int find_later_rq(struct task_struct *task); +static int find_later_rq(struct task_struct *sched_ctx, struct task_struct= *exec_ctx); =20 static int select_task_rq_dl(struct task_struct *p, int cpu, int flags) @@ -1854,7 +1854,11 @@ select_task_rq_dl(struct task_struct *p, int cpu, in= t flags) select_rq |=3D !dl_task_fits_capacity(p, cpu); =20 if (select_rq) { - int target =3D find_later_rq(p); + /* + * XXX connoro: verify this but in wakeup path we should + * always have unblocked p, so exec_ctx =3D=3D sched_ctx =3D=3D p. + */ + int target =3D find_later_rq(p, p); =20 if (target !=3D -1 && dl_task_is_earliest_deadline(p, cpu_rq(target))) @@ -1901,12 +1905,18 @@ static void migrate_task_rq_dl(struct task_struct *= p, int new_cpu __maybe_unused =20 static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p) { + struct task_struct *exec_ctx; + /* * Current can't be migrated, useless to reschedule, * let's hope p can move out. */ if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || - !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL)) + !cpudl_find(&rq->rd->cpudl, rq_selected(rq), rq_curr(rq), NULL)) + return; + + exec_ctx =3D find_exec_ctx(rq, p); + if (task_current(rq, exec_ctx)) return; =20 /* @@ -1914,7 +1924,7 @@ static void check_preempt_equal_dl(struct rq *rq, str= uct task_struct *p) * see if it is pushed or pulled somewhere else. 
*/ if (p->nr_cpus_allowed !=3D 1 && - cpudl_find(&rq->rd->cpudl, p, NULL)) + cpudl_find(&rq->rd->cpudl, p, exec_ctx, NULL)) return; =20 resched_curr(rq); @@ -2084,14 +2094,6 @@ static void task_fork_dl(struct task_struct *p) /* Only try algorithms three times */ #define DL_MAX_TRIES 3 =20 -static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu) -{ - if (!task_on_cpu(rq, p) && - cpumask_test_cpu(cpu, &p->cpus_mask)) - return 1; - return 0; -} - /* * Return the earliest pushable rq's task, which is suitable to be executed * on the CPU, NULL otherwise: @@ -2110,7 +2112,7 @@ static struct task_struct *pick_earliest_pushable_dl_= task(struct rq *rq, int cpu if (next_node) { p =3D __node_2_pdl(next_node); =20 - if (pick_dl_task(rq, p, cpu)) + if (pushable_chain(rq, p, cpu) =3D=3D 1) return p; =20 next_node =3D rb_next(next_node); @@ -2122,25 +2124,25 @@ static struct task_struct *pick_earliest_pushable_d= l_task(struct rq *rq, int cpu =20 static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl); =20 -static int find_later_rq(struct task_struct *task) +static int find_later_rq(struct task_struct *sched_ctx, struct task_struct= *exec_ctx) { struct sched_domain *sd; struct cpumask *later_mask =3D this_cpu_cpumask_var_ptr(local_cpu_mask_dl= ); int this_cpu =3D smp_processor_id(); - int cpu =3D task_cpu(task); + int cpu =3D task_cpu(sched_ctx); =20 /* Make sure the mask is initialized first */ if (unlikely(!later_mask)) return -1; =20 - if (task->nr_cpus_allowed =3D=3D 1) + if (exec_ctx && exec_ctx->nr_cpus_allowed =3D=3D 1) return -1; =20 /* * We have to consider system topology and task affinity * first, then we can look for a suitable CPU. */ - if (!cpudl_find(&task_rq(task)->rd->cpudl, task, later_mask)) + if (!cpudl_find(&task_rq(exec_ctx)->rd->cpudl, sched_ctx, exec_ctx, later= _mask)) return -1; =20 /* @@ -2209,15 +2211,66 @@ static int find_later_rq(struct task_struct *task) return -1; } =20 +static struct task_struct *pick_next_pushable_dl_task(struct rq *rq) +{ + struct task_struct *p =3D NULL; + struct rb_node *next_node; + + if (!has_pushable_dl_tasks(rq)) + return NULL; + + next_node =3D rb_first_cached(&rq->dl.pushable_dl_tasks_root); + +next_node: + if (next_node) { + p =3D __node_2_pdl(next_node); + + /* + * cpu argument doesn't matter because we treat a -1 result + * (pushable but can't go to cpu0) the same as a 1 result + * (pushable to cpu0). All we care about here is general + * pushability. + */ + if (pushable_chain(rq, p, 0)) + return p; /* XXX connoro TODO this is definitely wrong in combo with th= e later checks...*/ + + next_node =3D rb_next(next_node); + goto next_node; + } + + if (!p) + return NULL; + + WARN_ON_ONCE(rq->cpu !=3D task_cpu(p)); + WARN_ON_ONCE(task_current(rq, p)); + WARN_ON_ONCE(p->nr_cpus_allowed <=3D 1); + + WARN_ON_ONCE(!task_on_rq_queued(p)); + WARN_ON_ONCE(!dl_task(p)); + + return p; +} + /* Locks the rq it finds */ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *= rq) { + struct task_struct *exec_ctx; struct rq *later_rq =3D NULL; + bool retry; int tries; int cpu; =20 for (tries =3D 0; tries < DL_MAX_TRIES; tries++) { - cpu =3D find_later_rq(task); + retry =3D false; + exec_ctx =3D find_exec_ctx(rq, task); + /* + * XXX jstultz: double check: if we get null from find_exec_ctx, + * is breaking the right thing? 
+ */ + if (!exec_ctx) + break; + + cpu =3D find_later_rq(task, exec_ctx); =20 if ((cpu =3D=3D -1) || (cpu =3D=3D rq->cpu)) break; @@ -2236,11 +2289,30 @@ static struct rq *find_lock_later_rq(struct task_st= ruct *task, struct rq *rq) =20 /* Retry if something changed. */ if (double_lock_balance(rq, later_rq)) { - if (unlikely(task_rq(task) !=3D rq || - !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) || - task_on_cpu(rq, task) || - !dl_task(task) || - !task_on_rq_queued(task))) { + bool fail =3D false; + + /* XXX connoro: this is a mess. Surely there's a better way to express = it...*/ + if (!dl_task(task)) { + fail =3D true; + } else if (rq !=3D this_rq()) { + struct task_struct *next_task =3D pick_next_pushable_dl_task(rq); + + if (next_task !=3D task) { + fail =3D true; + } else { + exec_ctx =3D find_exec_ctx(rq, next_task); + retry =3D (exec_ctx && + !cpumask_test_cpu(later_rq->cpu, + &exec_ctx->cpus_mask)); + } + } else { + int pushable =3D pushable_chain(rq, task, later_rq->cpu); + + fail =3D !pushable; + retry =3D pushable =3D=3D -1; + } + + if (unlikely(fail)) { double_unlock_balance(rq, later_rq); later_rq =3D NULL; break; @@ -2252,7 +2324,7 @@ static struct rq *find_lock_later_rq(struct task_stru= ct *task, struct rq *rq) * its earliest one has a later deadline than our * task, the rq is a good one. */ - if (dl_task_is_earliest_deadline(task, later_rq)) + if (!retry && dl_task_is_earliest_deadline(task, later_rq)) break; =20 /* Otherwise we try again. */ @@ -2263,25 +2335,6 @@ static struct rq *find_lock_later_rq(struct task_str= uct *task, struct rq *rq) return later_rq; } =20 -static struct task_struct *pick_next_pushable_dl_task(struct rq *rq) -{ - struct task_struct *p; - - if (!has_pushable_dl_tasks(rq)) - return NULL; - - p =3D __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root)); - - WARN_ON_ONCE(rq->cpu !=3D task_cpu(p)); - WARN_ON_ONCE(task_current(rq, p)); - WARN_ON_ONCE(p->nr_cpus_allowed <=3D 1); - - WARN_ON_ONCE(!task_on_rq_queued(p)); - WARN_ON_ONCE(!dl_task(p)); - - return p; -} - /* * See if the non running -deadline tasks on this rq * can be sent to some other CPU where they can preempt @@ -2351,9 +2404,7 @@ static int push_dl_task(struct rq *rq) goto retry; } =20 - deactivate_task(rq, next_task, 0); - set_task_cpu(next_task, later_rq->cpu); - activate_task(later_rq, next_task, 0); + push_task_chain(rq, later_rq, next_task); ret =3D 1; =20 resched_curr(later_rq); @@ -2439,9 +2490,7 @@ static void pull_dl_task(struct rq *this_rq) if (is_migration_disabled(p)) { push_task =3D get_push_task(src_rq); } else { - deactivate_task(src_rq, p, 0); - set_task_cpu(p, this_cpu); - activate_task(this_rq, p, 0); + push_task_chain(src_rq, this_rq, p); dmin =3D p->dl.deadline; resched =3D true; } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 748a912c2122..acb8155f40c3 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8482,6 +8482,11 @@ int can_migrate_task(struct task_struct *p, struct l= b_env *env) =20 lockdep_assert_rq_held(env->src_rq); =20 + /* + * XXX connoro: Is this correct, or should we be doing chain + * balancing for CFS tasks too? Balancing chains that aren't + * part of the running task's blocked "tree" seems reasonable? + */ if (task_is_blocked(p)) return 0; =20 diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index af92e4147703..a3af1eb47647 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -1541,7 +1541,7 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p,= int flags) * Current can't be pushed away. 
Proxy is tied to current, so don't * push it either. */ - if (task_current(rq, p) || task_current_proxy(rq, p)) + if (task_current(rq, p) || task_current_selected(rq, p)) return; =20 /* @@ -1599,7 +1599,7 @@ static void yield_task_rt(struct rq *rq) } =20 #ifdef CONFIG_SMP -static int find_lowest_rq(struct task_struct *task); +static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struc= t *exec_ctx); =20 static int select_task_rq_rt(struct task_struct *p, int cpu, int flags) @@ -1649,7 +1649,10 @@ select_task_rq_rt(struct task_struct *p, int cpu, in= t flags) (curr->nr_cpus_allowed < 2 || proxy->prio <=3D p->prio); =20 if (test || !rt_task_fits_capacity(p, cpu)) { - int target =3D find_lowest_rq(p); + /* XXX connoro: double check this, but if we're waking p then + * it is unblocked so exec_ctx =3D=3D sched_ctx =3D=3D p. + */ + int target =3D find_lowest_rq(p, p); =20 /* * Bail out if we were forcing a migration to find a better @@ -1676,12 +1679,22 @@ select_task_rq_rt(struct task_struct *p, int cpu, i= nt flags) =20 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p) { + struct task_struct *exec_ctx =3D p; + /* + * Current can't be migrated, useless to reschedule, + * let's hope p can move out. + */ /* XXX connoro: need to revise cpupri_find() to reflect the split * context since it should look at rq_selected() for priority but * rq_curr() for affinity. */ if (rq_curr(rq)->nr_cpus_allowed =3D=3D 1 || - !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL)) + !cpupri_find(&rq->rd->cpupri, rq_selected(rq), rq_curr(rq), NULL)) + return; + + /* No reason to preempt since rq->curr wouldn't change anyway */ + exec_ctx =3D find_exec_ctx(rq, p); + if (task_current(rq, exec_ctx)) return; =20 /* @@ -1689,7 +1702,7 @@ static void check_preempt_equal_prio(struct rq *rq, s= truct task_struct *p) * see if it is pushed or pulled somewhere else. 
*/ if (p->nr_cpus_allowed !=3D 1 && - cpupri_find(&rq->rd->cpupri, p, NULL)) + cpupri_find(&rq->rd->cpupri, p, exec_ctx, NULL)) return; =20 /* @@ -1855,15 +1868,6 @@ static void put_prev_task_rt(struct rq *rq, struct t= ask_struct *p) /* Only try algorithms three times */ #define RT_MAX_TRIES 3 =20 -static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu) -{ - if (!task_on_cpu(rq, p) && - cpumask_test_cpu(cpu, &p->cpus_mask)) - return 1; - - return 0; -} - /* * Return the highest pushable rq's task, which is suitable to be executed * on the CPU, NULL otherwise @@ -1877,7 +1881,7 @@ static struct task_struct *pick_highest_pushable_task= (struct rq *rq, int cpu) return NULL; =20 plist_for_each_entry(p, head, pushable_tasks) { - if (pick_rt_task(rq, p, cpu)) + if (pushable_chain(rq, p, cpu) =3D=3D 1) return p; } =20 @@ -1886,19 +1890,19 @@ static struct task_struct *pick_highest_pushable_ta= sk(struct rq *rq, int cpu) =20 static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask); =20 -static int find_lowest_rq(struct task_struct *task) +static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struc= t *exec_ctx) { struct sched_domain *sd; struct cpumask *lowest_mask =3D this_cpu_cpumask_var_ptr(local_cpu_mask); int this_cpu =3D smp_processor_id(); - int cpu =3D task_cpu(task); + int cpu =3D task_cpu(sched_ctx); int ret; =20 /* Make sure the mask is initialized first */ if (unlikely(!lowest_mask)) return -1; =20 - if (task->nr_cpus_allowed =3D=3D 1) + if (exec_ctx && exec_ctx->nr_cpus_allowed =3D=3D 1) return -1; /* No other targets possible */ =20 /* @@ -1907,13 +1911,13 @@ static int find_lowest_rq(struct task_struct *task) */ if (sched_asym_cpucap_active()) { =20 - ret =3D cpupri_find_fitness(&task_rq(task)->rd->cpupri, - task, lowest_mask, + ret =3D cpupri_find_fitness(&task_rq(sched_ctx)->rd->cpupri, + sched_ctx, exec_ctx, lowest_mask, rt_task_fits_capacity); } else { =20 - ret =3D cpupri_find(&task_rq(task)->rd->cpupri, - task, lowest_mask); + ret =3D cpupri_find(&task_rq(sched_ctx)->rd->cpupri, + sched_ctx, exec_ctx, lowest_mask); } =20 if (!ret) @@ -1977,15 +1981,48 @@ static int find_lowest_rq(struct task_struct *task) return -1; } =20 +static struct task_struct *pick_next_pushable_task(struct rq *rq) +{ + struct plist_head *head =3D &rq->rt.pushable_tasks; + struct task_struct *p, *push_task =3D NULL; + + if (!has_pushable_tasks(rq)) + return NULL; + + plist_for_each_entry(p, head, pushable_tasks) { + if (pushable_chain(rq, p, 0)) { + push_task =3D p; + break; + } + } + + if (!push_task) + return NULL; + + BUG_ON(rq->cpu !=3D task_cpu(push_task)); + BUG_ON(task_current(rq, push_task) || task_current_selected(rq, push_task= )); + /*XXX connoro: this check is pointless for blocked push_task. 
*/ + /* BUG_ON(push_task->nr_cpus_allowed <=3D 1); */ + + BUG_ON(!task_on_rq_queued(push_task)); + BUG_ON(!rt_task(push_task)); + + return p; +} + /* Will lock the rq it finds */ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq = *rq) { + struct task_struct *exec_ctx; struct rq *lowest_rq =3D NULL; + bool retry; int tries; int cpu; =20 for (tries =3D 0; tries < RT_MAX_TRIES; tries++) { - cpu =3D find_lowest_rq(task); + retry =3D false; + exec_ctx =3D find_exec_ctx(rq, task); + cpu =3D find_lowest_rq(task, exec_ctx); =20 if ((cpu =3D=3D -1) || (cpu =3D=3D rq->cpu)) break; @@ -2004,18 +2041,77 @@ static struct rq *find_lock_lowest_rq(struct task_s= truct *task, struct rq *rq) =20 /* if the prio of this runqueue changed, try again */ if (double_lock_balance(rq, lowest_rq)) { + bool fail =3D false; /* * We had to unlock the run queue. In * the mean time, task could have * migrated already or had its affinity changed. * Also make sure that it wasn't scheduled on its rq. + * + * XXX connoro: releasing the rq lock means we need to re-check pushabi= lity. + * Some scenarios: + * 1) If a migration from another CPU sent a task/chain to rq + * that made task newly unpushable by completing a chain + * from task to rq->curr, then we need to bail out and push something + * else. + * 2) If our chain led off this CPU or to a dequeued task, the last wai= ter + * on this CPU might have acquired the lock and woken (or even migra= ted + * & run, handed off the lock it held, etc...). This can invalidate = the + * result of find_lowest_rq() if our chain previously ended in a blo= cked + * task whose affinity we could ignore, but now ends in an unblocked + * task that can't run on lowest_rq. + * 3) race described at https://lore.kernel.org/all/1523536384-26781-2-= git-send-email-huawei.libin@huawei.com/ + * + * Notes on these: + * - Scenario #2 is properly handled by rerunning find_lowest_rq + * - Scenario #1 requires that we fail + * - Scenario #3 can AFAICT only occur when rq is not this_rq(). And the + * suggested fix is not universally correct now that push_cpu_stop() = can + * call this function. */ - if (unlikely(task_rq(task) !=3D rq || - !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) || - task_on_cpu(rq, task) || - !rt_task(task) || - !task_on_rq_queued(task))) { + if (!rt_task(task)) { + fail =3D true; + } else if (rq !=3D this_rq()) { + /* + * If we are dealing with a remote rq, then all bets are off + * because task might have run & then been dequeued since we + * released the lock, at which point our normal checks can race + * with migration, as described in + * https://lore.kernel.org/all/1523536384-26781-2-git-send-email-huawe= i.libin@huawei.com/ + * Need to repick to ensure we avoid a race. + * But re-picking would be unnecessary & incorrect in the + * push_cpu_stop() path. + */ + struct task_struct *next_task =3D pick_next_pushable_task(rq); + + if (next_task !=3D task) { + fail =3D true; + } else { + exec_ctx =3D find_exec_ctx(rq, next_task); + retry =3D (exec_ctx && + !cpumask_test_cpu(lowest_rq->cpu, + &exec_ctx->cpus_mask)); + } + } else { + /* + * Chain level balancing introduces new ways for our choice of + * task & rq to become invalid when we release the rq lock, e.g.: + * 1) Migration to rq from another CPU makes task newly unpushable + * by completing a "blocked chain" from task to rq->curr. + * Fail so a different task can be chosen for push. 
+ * 2) In cases where task's blocked chain led to a dequeued task + * or one on another rq, the last waiter in the chain on this + * rq might have acquired the lock and woken, meaning we must + * pick a different rq if its affinity prevents running on + * lowest_rq. + */ + int pushable =3D pushable_chain(rq, task, lowest_rq->cpu); + + fail =3D !pushable; + retry =3D pushable =3D=3D -1; + } =20 + if (unlikely(fail)) { double_unlock_balance(rq, lowest_rq); lowest_rq =3D NULL; break; @@ -2023,7 +2119,7 @@ static struct rq *find_lock_lowest_rq(struct task_str= uct *task, struct rq *rq) } =20 /* If this rq is still suitable use it. */ - if (lowest_rq->rt.highest_prio.curr > task->prio) + if (lowest_rq->rt.highest_prio.curr > task->prio && !retry) break; =20 /* try again */ @@ -2034,26 +2130,6 @@ static struct rq *find_lock_lowest_rq(struct task_st= ruct *task, struct rq *rq) return lowest_rq; } =20 -static struct task_struct *pick_next_pushable_task(struct rq *rq) -{ - struct task_struct *p; - - if (!has_pushable_tasks(rq)) - return NULL; - - p =3D plist_first_entry(&rq->rt.pushable_tasks, - struct task_struct, pushable_tasks); - - BUG_ON(rq->cpu !=3D task_cpu(p)); - BUG_ON(task_current(rq, p) || task_current_selected(rq, p)); - BUG_ON(p->nr_cpus_allowed <=3D 1); - - BUG_ON(!task_on_rq_queued(p)); - BUG_ON(!rt_task(p)); - - return p; -} - /* * If the current CPU has more than one RT task, see if the non * running task can migrate over to a CPU that is running a task @@ -2109,10 +2185,10 @@ static int push_rt_task(struct rq *rq, bool pull) * If #3 is needed, might be best to make a separate patch with * all the "chain-level load balancing" changes. */ - if (rq_curr(rq)->sched_class !=3D &rt_sched_class) + if (rq_selected(rq)->sched_class !=3D &rt_sched_class) return 0; =20 - cpu =3D find_lowest_rq(rq_curr(rq)); + cpu =3D find_lowest_rq(rq_selected(rq), rq_curr(rq)); if (cpu =3D=3D -1 || cpu =3D=3D rq->cpu) return 0; =20 @@ -2146,6 +2222,15 @@ static int push_rt_task(struct rq *rq, bool pull) * case for when we push a blocked task whose lock owner is not on * this rq. */ + /* XXX connoro: we might unlock the rq here. But it might be the case that + * the unpushable set can only *grow* and not shrink? Hmmm + * - load balancing should not pull anything from the active blocked tree + * - rq->curr can't have made progress or released mutexes + * - we can't have scheduled, right? Is preemption disabled here? + * - however, suppose proxy() pushed a task or chain here that linked our= chain + * into the active tree. + */ + /* XXX connoro: we need to pass in */ lowest_rq =3D find_lock_lowest_rq(next_task, rq); if (!lowest_rq) { struct task_struct *task; @@ -2180,9 +2265,7 @@ static int push_rt_task(struct rq *rq, bool pull) goto retry; } =20 - deactivate_task(rq, next_task, 0); - set_task_cpu(next_task, lowest_rq->cpu); - activate_task(lowest_rq, next_task, 0); + push_task_chain(rq, lowest_rq, next_task); resched_curr(lowest_rq); ret =3D 1; =20 @@ -2453,9 +2536,8 @@ static void pull_rt_task(struct rq *this_rq) if (is_migration_disabled(p)) { push_task =3D get_push_task(src_rq); } else { - deactivate_task(src_rq, p, 0); - set_task_cpu(p, this_cpu); - activate_task(this_rq, p, 0); + /* XXX connoro: need to do chain migration here. 
*/ + push_task_chain(src_rq, this_rq, p); resched =3D true; } /* @@ -2469,6 +2551,14 @@ static void pull_rt_task(struct rq *this_rq) double_unlock_balance(this_rq, src_rq); =20 if (push_task) { + /* + * can push_cpu_stop get away with following blocked_proxy + * even though it's not following it from rq->curr? + * I can't figure out if that's correct. + * Ha! actually the trick is that get_push_task should return + * the proxy! + * So push_cpu_stop just follows blocked_on relations. + */ raw_spin_rq_unlock(this_rq); stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop, push_task, &src_rq->push_work); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 8330d22b286f..a4f5d03dfd50 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2422,7 +2422,7 @@ static inline struct task_struct *get_push_task(struc= t rq *rq) * chain during the __schedule() call immediately after rq_curr() is * pushed. */ - struct task_struct *p =3D rq_curr(rq); + struct task_struct *p =3D rq_selected(rq); =20 lockdep_assert_rq_held(rq); =20 @@ -3409,4 +3409,151 @@ static inline void switch_mm_cid(struct task_struct= *prev, struct task_struct *n static inline void switch_mm_cid(struct task_struct *prev, struct task_str= uct *next) { } #endif =20 +#ifdef CONFIG_SMP + +static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *ta= sk) +{ + if (!task_on_rq_queued(task)) + return false; + smp_rmb(); + if (task_rq(task) !=3D rq) + return false; + smp_rmb(); + if (!task_on_rq_queued(task)) + return false; + return true; +} + +static inline void push_task_chain(struct rq *rq, struct rq *dst_rq, struc= t task_struct *task) +{ + struct task_struct *owner; + + lockdep_assert_rq_held(rq); + lockdep_assert_rq_held(dst_rq); + + BUG_ON(!task_queued_on_rq(rq, task)); + BUG_ON(task_current_selected(rq, task)); + + for (; task !=3D NULL; task =3D owner) { + /* + * XXX connoro: note that if task is currently in the process of migrati= ng to + * rq (but not yet enqueued since we hold the rq lock) then we stop only= after + * pushing all the preceding tasks. This isn't ideal (the pushed chain w= ill + * probably get sent back as soon as it's picked on dst_rq) but short of= holding + * all of the rq locks while balancing, I don't see how we can avoid thi= s, and + * some extra migrations are clearly better than trying to dequeue task = from rq + * before it's ever enqueued here. + * + * XXX connoro: catastrophic race when task is dequeued on rq to start a= nd then + * wakes on another rq in between the two checks. + * There's probably a better way than the below though... + */ + if (!task_queued_on_rq(rq, task) || task_current_selected(rq, task)) + break; + + if (task_is_blocked(task)) { + owner =3D mutex_owner(task->blocked_on); + } else { + owner =3D NULL; + } + deactivate_task(rq, task, 0); + set_task_cpu(task, dst_rq->cpu); + activate_task(dst_rq, task, 0); + if (task =3D=3D owner) + break; + } +} + +/* + * Returns the unblocked task at the end of the blocked chain starting wit= h p + * if that chain is composed entirely of tasks enqueued on rq, or NULL oth= erwise. + */ +static inline struct task_struct *find_exec_ctx(struct rq *rq, struct task= _struct *p) +{ + struct task_struct *exec_ctx, *owner; + struct mutex *mutex; + + lockdep_assert_rq_held(rq); + + /* + * XXX connoro: I *think* we have to return rq->curr if it occurs anywher= e in the chain + * to avoid races in certain scenarios where rq->curr has just blocked bu= t can't + * switch out until we release its rq lock. 
+ * Should the check be task_on_cpu() instead? Does it matter? I don't thi= nk this + * gets called while context switch is actually ongoing which IIUC is whe= re this would + * make a difference... + * correction: it can get called from finish_task_switch apparently. Unle= ss that's wrong; + * double check. + */ + for (exec_ctx =3D p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_c= tx); + exec_ctx =3D owner) { + mutex =3D exec_ctx->blocked_on; + owner =3D mutex_owner(mutex); + if (owner =3D=3D exec_ctx) + break; + + /* + * XXX connoro: can we race here if owner is migrating to rq? + * owner has to be dequeued from its old rq before set_task_cpu + * is called, and we hold this rq's lock so it can't be + * enqueued here yet...right? + * + * Also if owner is dequeued we can race with its wakeup on another + * CPU...at which point all hell will break loose potentially... + */ + if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) { + exec_ctx =3D NULL; + break; + } + } + return exec_ctx; +} + +/* + * Returns: + * 1 if chain is pushable and affinity does not prevent pushing to cpu + * 0 if chain is unpushable + * -1 if chain is pushable but affinity blocks running on cpu. + * XXX connoro: maybe there's a cleaner way to do this... + */ +static inline int pushable_chain(struct rq *rq, struct task_struct *p, int= cpu) +{ + struct task_struct *exec_ctx; + + lockdep_assert_rq_held(rq); + + /* + * XXX connoro: 2 issues combine here: + * 1) we apparently have some stuff on the pushable list after it's + * dequeued from the rq + * 2) This check can race with migration/wakeup if p was already dequeued + * when we got the rq lock... + */ + if (task_rq(p) !=3D rq || !task_on_rq_queued(p)) + return 0; + + exec_ctx =3D find_exec_ctx(rq, p); + /* + * Chain leads off the rq, we're free to push it anywhere. + * + * One wrinkle with relying on find_exec_ctx is that when the chain + * leads to a task currently migrating to rq, we see the chain as + * pushable & push everything prior to the migrating task. Even if + * we checked explicitly for this case, we could still race with a + * migration after the check. + * This shouldn't permanently produce a bad state though, as proxy() + * will send the chain back to rq and by that point the migration + * should be complete & a proper push can occur. + */ + if (!exec_ctx) + return 1; + + if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <=3D 1) + return 0; + + return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 
1 : -1; +} + +#endif + #endif /* _KERNEL_SCHED_SCHED_H */ --=20 2.40.0.577.gac1e443424-goog From nobody Wed Feb 11 18:10:13 2026 Date: Tue, 11 Apr 2023 04:25:11 +0000 In-Reply-To: <20230411042511.1606592-1-jstultz@google.com> Mime-Version: 1.0 References: <20230411042511.1606592-1-jstultz@google.com> X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog Message-ID: <20230411042511.1606592-15-jstultz@google.com> Subject: [PATCH v3 14/14] sched: Fix runtime accounting w/ proxy-execution From: John Stultz To: LKML Cc: John Stultz , Joel Fernandes , Qais Yousef , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Valentin Schneider , Steven Rostedt , Ben
Segall , Zimuzo Ezeozue , Mel Gorman , Daniel Bristot de Oliveira , Will Deacon , Waiman Long , Boqun Feng , "Paul E . McKenney" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The idea here is we want to charge the selected task's vruntime but charge the executed task's sum_exec_runtime. This way cputime accounting goes against the task actually running but vruntime accounting goes against the selected task so we get proper fairness. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Signed-off-by: John Stultz --- kernel/sched/fair.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index acb8155f40c3..3abb48b3515c 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -891,22 +891,36 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq) } #endif /* CONFIG_SMP */ =20 -static s64 update_curr_se(struct rq *rq, struct sched_entity *curr) +static s64 update_curr_se(struct rq *rq, struct sched_entity *se) { u64 now =3D rq_clock_task(rq); s64 delta_exec; =20 - delta_exec =3D now - curr->exec_start; + /* Calculate the delta from selected se */ + delta_exec =3D now - se->exec_start; if (unlikely(delta_exec <=3D 0)) return delta_exec; =20 - curr->exec_start =3D now; - curr->sum_exec_runtime +=3D delta_exec; + /* Update selected se's exec_start */ + se->exec_start =3D now; + if (entity_is_task(se)) { + struct task_struct *running =3D rq_curr(rq); + /* + * If se is a task, we account the time + * against the running task, as w/ proxy-exec + * they may not be the same. + */ + running->se.exec_start =3D now; + running->se.sum_exec_runtime +=3D delta_exec; + } else { + /* If not task, account the time against se */ + se->sum_exec_runtime +=3D delta_exec; + } =20 if (schedstat_enabled()) { struct sched_statistics *stats; =20 - stats =3D __schedstats_from_se(curr); + stats =3D __schedstats_from_se(se); __schedstat_set(stats->exec_max, max(delta_exec, stats->exec_max)); } --=20 2.40.0.577.gac1e443424-goog
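
As a rough standalone illustration of the accounting split described in the final patch above (vruntime charged to the selected scheduling context, sum_exec_runtime charged to the task that actually ran), the sketch below uses simplified stand-in types. The names toy_entity and account_delta are invented for this example; it is not kernel code and omits load weighting, and it only mirrors the idea in update_curr_se() under those assumptions.

#include <stdint.h>
#include <stdio.h>

struct toy_entity {
	uint64_t exec_start;       /* timestamp of last accounting update */
	uint64_t sum_exec_runtime; /* cputime actually consumed on a CPU */
	uint64_t vruntime;         /* runtime charged for fairness decisions */
};

/*
 * Charge one accounting period: fairness (vruntime) goes to the selected
 * context, cputime goes to the context that was physically running.
 * Weighting by load/nice is omitted for brevity.
 */
static void account_delta(struct toy_entity *selected,
			  struct toy_entity *running, uint64_t now)
{
	uint64_t delta;

	if (now <= selected->exec_start)
		return;
	delta = now - selected->exec_start;

	selected->exec_start = now;
	running->exec_start = now;

	running->sum_exec_runtime += delta;  /* who burned the CPU time */
	selected->vruntime += delta;         /* who pays for fairness */
}

int main(void)
{
	/* Proxy case: a blocked waiter was selected, the lock owner ran. */
	struct toy_entity waiter = { .exec_start = 100 };
	struct toy_entity owner  = { .exec_start = 100 };

	account_delta(&waiter, &owner, 150);

	printf("waiter: vruntime=%llu cputime=%llu\n",
	       (unsigned long long)waiter.vruntime,
	       (unsigned long long)waiter.sum_exec_runtime);
	printf("owner:  vruntime=%llu cputime=%llu\n",
	       (unsigned long long)owner.vruntime,
	       (unsigned long long)owner.sum_exec_runtime);
	return 0;
}

Running this prints 50 units of vruntime against the waiter and 50 units of cputime against the owner, which is the same split the patch applies when the selected entity and rq_curr(rq) differ under proxy execution.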