From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH] locking/qspinlock: Do spin-wait in slowpath if preemptible
Date: Tue, 20 Sep 2022 15:39:51 -0400
Message-Id: <20220920193951.1545892-1-longman@redhat.com>

There are some code paths in the kernel where arch_spin_lock() is
called directly because the lock isn't expected to be contended and
the critical section is short; tracing_saved_cmdlines_size_read() in
kernel/trace/trace.c is one example. In most of these cases, preemption
is not disabled either. That is a problem for the qspinlock slowpath,
which relies on preemption being disabled to guarantee safe use of the
per-CPU qnodes structure. To accommodate these special use cases, add a
preemption count check to the slowpath and fall back to a simple
spin-wait when preemption isn't disabled.
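As an illustration of the pattern described above, here is a minimal
sketch of such a direct arch_spin_lock() caller. The lock and function
names are hypothetical (this is not the actual trace.c code); only
arch_spin_lock(), arch_spin_unlock() and __ARCH_SPIN_LOCK_UNLOCKED are
real kernel APIs:

	/* Hypothetical example; lock and function names are made up. */
	static arch_spinlock_t example_lock = __ARCH_SPIN_LOCK_UNLOCKED;

	static void example_read_path(void)
	{
		/*
		 * The raw arch lock is taken directly, with preemption
		 * still enabled, on the assumption that the critical
		 * section is short and contention is rare. On qspinlock
		 * architectures a contended acquisition ends up in
		 * queued_spin_lock_slowpath() with a zero preempt count,
		 * hence the check added in the patch below.
		 */
		arch_spin_lock(&example_lock);
		/* ... short critical section ... */
		arch_spin_unlock(&example_lock);
	}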
Fixes: a33fda35e3a7 ("locking/qspinlock: Introduce a simple generic 4-byte queued spinlock")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/qspinlock.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 65a9a10caa6f..5e352cf8fa18 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -321,6 +321,23 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+#ifdef CONFIG_PREEMPT_COUNT
+	/*
+	 * As arch_spin_lock() can be called directly in some use cases
+	 * where the lock isn't expected to be contended, critical section
+	 * is short and preemption isn't disabled, we can't use qnodes in
+	 * this case as state may be screwed up in case preemption happens
+	 * or preemption warning may be printed (CONFIG_DEBUG_PREEMPT).
+	 * Just do a simple spin-wait in this case as the lock shouldn't be
+	 * contended for long.
+	 */
+	if (unlikely(!preempt_count())) {
+		while (!queued_spin_trylock(lock))
+			cpu_relax();
+		return;
+	}
+#endif
+
 	if (pv_enabled())
 		goto pv_queue;
 
-- 
2.31.1