From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: Ben Segall, Boqun Feng, Crystal Wood, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Ingo Molnar, John Stultz, Juri Lelli, Mel Gorman,
    Peter Zijlstra, Steven Rostedt, Thomas Gleixner, Valentin Schneider,
    Vincent Guittot, Waiman Long, Will Deacon, Sebastian Andrzej Siewior
Subject: [PATCH v2 4/4] locking/rtmutex: Add a lockdep assert to catch potential nested blocking
Date: Thu, 27 Apr 2023 13:19:37 +0200
Message-Id: <20230427111937.2745231-5-bigeasy@linutronix.de>
In-Reply-To: <20230427111937.2745231-1-bigeasy@linutronix.de>
References: <20230427111937.2745231-1-bigeasy@linutronix.de>

From: Thomas Gleixner

There used to be a BUG_ON(current->pi_blocked_on) in the lock acquisition
functions, but that vanished in one of the rtmutex overhauls.

Bring it back in the form of a lockdep assert to catch code paths which
take rtmutex based locks with current::pi_blocked_on != NULL.
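current->pi_blocked_on is set while the task is enqueued as a waiter on an
rtmutex based lock, so a non-NULL value at lock entry means the caller is
already blocked on one sleeping lock and is about to block on another,
which the PI machinery cannot express. For context, the sketch below is a
rough approximation of what the assert amounts to; the authoritative
definition lives in include/linux/lockdep.h and the exact expansion may
differ:

    /*
     * Rough sketch of lockdep_assert() with CONFIG_LOCKDEP=y (see
     * include/linux/lockdep.h for the real thing). It warns, gated on
     * debug_locks, instead of BUG()ing, and it compiles away entirely
     * when lockdep is disabled, so the lock fast paths are unaffected
     * in production configurations.
     */
    #define lockdep_assert(cond) \
            do { WARN_ON(debug_locks && !(cond)); } while (0)

Placing the assert in front of the fast path (rather than only in the
slowpaths) means a violation is reported even when the lock happens to be
uncontended, which is exactly where such bugs would otherwise go unnoticed.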
Reported-by: Crystal Wood
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/rtmutex.c     | 2 ++
 kernel/locking/rwbase_rt.c   | 2 ++
 kernel/locking/spinlock_rt.c | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index dd76c1b9b7d21..479a9487edcc2 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1727,6 +1727,8 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
 					   unsigned int state)
 {
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (likely(rt_mutex_try_acquire(lock)))
 		return 0;
 
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 945d474f5d27f..5be92ca5afabc 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -133,6 +133,8 @@ static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
 {
 	int ret;
 
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (rwbase_read_trylock(rwb))
 		return 0;
 
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 62c4a6866087a..9fe282cd145d9 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -37,6 +37,8 @@
 
 static __always_inline void rtlock_lock(struct rt_mutex_base *rtm)
 {
+	lockdep_assert(!current->pi_blocked_on);
+
 	if (unlikely(!rt_mutex_cmpxchg_acquire(rtm, NULL, current)))
 		rtlock_slowlock(rtm);
 }
-- 
2.40.1