From: John Stultz <jstultz@google.com>
Date: Thu, 1 Jun 2023 05:58:06 +0000
Subject: [PATCH v4 03/13] locking/mutex: make mutex::wait_lock irq safe
Message-ID: <20230601055846.2349566-4-jstultz@google.com>
In-Reply-To: <20230601055846.2349566-1-jstultz@google.com>
References: <20230601055846.2349566-1-jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Juri Lelli, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
    Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
    Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
, "Paul E . McKenney" , kernel-team@android.com, "Connor O'Brien" , John Stultz Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Juri Lelli mutex::wait_lock might be nested under rq->lock. Make it irq safe then. Cc: Joel Fernandes Cc: Qais Yousef Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Valentin Schneider Cc: Steven Rostedt Cc: Ben Segall Cc: Zimuzo Ezeozue Cc: Youssef Esmat Cc: Mel Gorman Cc: Daniel Bristot de Oliveira Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Cc: "Paul E . McKenney" Cc: kernel-team@android.com Signed-off-by: Juri Lelli Signed-off-by: Peter Zijlstra (Intel) [rebase & fix {un,}lock_wait_lock helpers in ww_mutex.h] Signed-off-by: Connor O'Brien Signed-off-by: John Stultz --- v3: * Re-added this patch after it was dropped in v2 which caused lockdep warnings to trip. --- kernel/locking/mutex.c | 18 ++++++++++-------- kernel/locking/ww_mutex.h | 22 ++++++++++++---------- 2 files changed, 22 insertions(+), 18 deletions(-) diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 1582756914df..a528e7f42caa 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -572,6 +572,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas { struct mutex_waiter waiter; struct ww_mutex *ww; + unsigned long flags; int ret; =20 if (!use_ww_ctx) @@ -614,7 +615,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas return 0; } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); /* * After waiting to acquire the wait_lock, try again. */ @@ -675,7 +676,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas goto err; } =20 - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); schedule_preempt_disabled(); @@ -698,9 +699,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas trace_contention_begin(lock, LCB_F_MUTEX); } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); } - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); acquired: __set_current_state(TASK_RUNNING); =20 @@ -726,7 +727,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas if (ww_ctx) ww_mutex_lock_acquired(ww, ww_ctx); =20 - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); if (ww_ctx) ww_ctx_wake(ww_ctx); preempt_enable(); @@ -737,7 +738,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int st= ate, unsigned int subclas __mutex_remove_waiter(lock, &waiter); err_early_kill: trace_contention_end(lock, ret); - raw_spin_unlock(&lock->wait_lock); + raw_spin_unlock_irqrestore(&lock->wait_lock, flags); debug_mutex_free_waiter(&waiter); mutex_release(&lock->dep_map, ip); if (ww_ctx) @@ -909,6 +910,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne struct task_struct *next =3D NULL; DEFINE_WAKE_Q(wake_q); unsigned long owner; + unsigned long flags; =20 mutex_release(&lock->dep_map, ip); =20 @@ -935,7 +937,7 @@ static noinline void __sched __mutex_unlock_slowpath(st= ruct mutex *lock, unsigne } } =20 - raw_spin_lock(&lock->wait_lock); + raw_spin_lock_irqsave(&lock->wait_lock, flags); debug_mutex_unlock(lock); if 
 	if (!list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
@@ -953,7 +955,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		__mutex_handoff(lock, next);
 
 	preempt_disable();
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	wake_up_q(&wake_q);
 	preempt_enable();
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index e49ea5336473..984a4e0bff36 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -70,14 +70,14 @@ __ww_mutex_has_waiters(struct mutex *lock)
 	return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
 }
 
-static inline void lock_wait_lock(struct mutex *lock)
+static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, *flags);
 }
 
-static inline void unlock_wait_lock(struct mutex *lock)
+static inline void unlock_wait_lock(struct mutex *lock, unsigned long flags)
 {
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 }
 
 static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
@@ -144,14 +144,14 @@ __ww_mutex_has_waiters(struct rt_mutex *lock)
 	return rt_mutex_has_waiters(&lock->rtmutex);
 }
 
-static inline void lock_wait_lock(struct rt_mutex *lock)
+static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->rtmutex.wait_lock);
+	raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
 }
 
-static inline void unlock_wait_lock(struct rt_mutex *lock)
+static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long flags)
 {
-	raw_spin_unlock(&lock->rtmutex.wait_lock);
+	raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, flags);
 }
 
 static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
@@ -383,6 +383,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+	unsigned long flags;
+
 	ww_mutex_lock_acquired(lock, ctx);
 
 	/*
@@ -410,9 +412,9 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	 * Uh oh, we raced in fastpath, check if any of the waiters need to
 	 * die or wound us.
 	 */
-	lock_wait_lock(&lock->base);
+	lock_wait_lock(&lock->base, &flags);
 	__ww_mutex_check_waiters(&lock->base, ctx);
-	unlock_wait_lock(&lock->base);
+	unlock_wait_lock(&lock->base, flags);
 }
 
 static __always_inline int
-- 
2.41.0.rc0.172.g3f132b7071-goog
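
For reference, the core pattern this patch adopts, shown as a minimal
standalone sketch (not from the kernel tree; `demo_lock` and
`demo_critical_section()` are hypothetical names for illustration):

	#include <linux/spinlock.h>

	/* Hypothetical lock, for illustration only. */
	static DEFINE_RAW_SPINLOCK(demo_lock);

	static void demo_critical_section(void)
	{
		unsigned long flags;

		/*
		 * raw_spin_lock_irqsave() saves the caller's interrupt
		 * state in 'flags' and disables interrupts before taking
		 * the lock. That makes the lock safe to take in contexts
		 * that already run with interrupts off (e.g. nested under
		 * rq->lock) as well as from hard-irq context.
		 */
		raw_spin_lock_irqsave(&demo_lock, flags);

		/* ... critical section ... */

		/*
		 * Restore the saved state rather than unconditionally
		 * re-enabling interrupts: if the caller already had them
		 * disabled, they stay disabled after the unlock.
		 */
		raw_spin_unlock_irqrestore(&demo_lock, flags);
	}

This is also why the reworked lock_wait_lock() helpers take flags as a
pointer: raw_spin_lock_irqsave() is a macro that writes the saved state
into the variable it is handed, so the helper must dereference the
caller's storage (*flags), while the unlock side only reads the value
and can take it by value.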