From: Wander Lairson Costa
To: "Christian Brauner (Microsoft)", Mike Christie, "Michael S. Tsirkin",
	Peter Zijlstra, Wander Lairson Costa, Oleg Nesterov, Kefeng Wang,
	Andrew Morton, "Liam R. Howlett", Suren Baghdasaryan, Andrei Vagin,
	Nicholas Piggin, linux-kernel@vger.kernel.org (open list)
Cc: Sebastian Andrzej Siewior, Steven Rostedt, Luis Goncalves
Subject: [PATCH v10 2/2] sched: avoid false lockdep splat in put_task_struct()
Date: Wed, 14 Jun 2023 09:23:22 -0300
Message-Id: <20230614122323.37957-3-wander@redhat.com>
In-Reply-To: <20230614122323.37957-1-wander@redhat.com>
References: <20230614122323.37957-1-wander@redhat.com>

In put_task_struct(), a spin_lock is indirectly acquired in the stock
kernel. When running the kernel in the real-time (RT) configuration,
that operation is instead deferred to a preemptible context so that
preemption is guaranteed. However, if PROVE_RAW_LOCK_NESTING is enabled
and __put_task_struct() is called while holding a raw_spinlock, lockdep
incorrectly reports an "Invalid lock context" splat on the stock kernel.
This splat is a false positive because lockdep is unaware of the
different code path taken under RT.

To address this issue, override the inner wait type of the lock map to
prevent the false lockdep splat.
Signed-off-by: Wander Lairson Costa
Suggested-by: Oleg Nesterov
Suggested-by: Sebastian Andrzej Siewior
Suggested-by: Peter Zijlstra
Cc: Steven Rostedt
Cc: Luis Goncalves
---
 include/linux/sched/task.h | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d20de91e3b95..b53909027771 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -125,6 +125,19 @@ static inline void put_task_struct(struct task_struct *t)
 	if (!refcount_dec_and_test(&t->usage))
 		return;
 
+	/*
+	 * In !RT, it is always safe to call __put_task_struct().
+	 * Under RT, we can only call it in preemptible context.
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) {
+		static DEFINE_WAIT_OVERRIDE_MAP(put_task_map, LD_WAIT_SLEEP);
+
+		lock_map_acquire_try(&put_task_map);
+		__put_task_struct(t);
+		lock_map_release(&put_task_map);
+		return;
+	}
+
 	/*
 	 * under PREEMPT_RT, we can't call put_task_struct
 	 * in atomic context because it will indirectly
@@ -145,10 +158,7 @@ static inline void put_task_struct(struct task_struct *t)
 	 * when it fails to fork a process. Therefore, there is no
 	 * way it can conflict with put_task_struct().
 	 */
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
-		call_rcu(&t->rcu, __put_task_struct_rcu_cb);
-	else
-		__put_task_struct(t);
+	call_rcu(&t->rcu, __put_task_struct_rcu_cb);
 }
 
 static inline void put_task_struct_many(struct task_struct *t, int nr)
-- 
2.40.1