From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel@vger.kernel.org, john.p.donnelly@oracle.com, Hillf Danton, Mukesh Ojha, Ting11 Wang 王婷, Waiman Long
Subject: [PATCH v5 3/6] locking/rwsem: Disable preemption at all down_write*() and up_write() code paths
Date: Thu, 3 Nov 2022 14:29:33 -0400
Message-Id: <20221103182936.217120-4-longman@redhat.com>
In-Reply-To: <20221103182936.217120-1-longman@redhat.com>
References: <20221103182936.217120-1-longman@redhat.com>

The previous patch disabled preemption at all the down_read() and up_read()
code paths. For symmetry, this patch extends commit 48dfb5d2560d
("locking/rwsem: Disable preemption while trying for rwsem lock") to disable
preemption at all the down_write() and up_write() code paths, including
downgrade_write().
Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 kernel/locking/rwsem.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index ebaff8a87e1d..2953fa4dd790 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -256,16 +256,13 @@ static inline bool rwsem_read_trylock(struct rw_semaphore *sem, long *cntp)
 static inline bool rwsem_write_trylock(struct rw_semaphore *sem)
 {
 	long tmp = RWSEM_UNLOCKED_VALUE;
-	bool ret = false;
 
-	preempt_disable();
 	if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp, RWSEM_WRITER_LOCKED)) {
 		rwsem_set_owner(sem);
-		ret = true;
+		return true;
 	}
 
-	preempt_enable();
-	return ret;
+	return false;
 }
 
 /*
@@ -716,7 +713,6 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 		return false;
 	}
 
-	preempt_disable();
 	/*
 	 * Disable preemption is equal to the RCU read-side critical section,
 	 * thus the task_struct structure won't go away.
@@ -728,7 +724,6 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 	if ((flags & RWSEM_NONSPINNABLE) ||
 	    (owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
 		ret = false;
-	preempt_enable();
 
 	lockevent_cond_inc(rwsem_opt_fail, !ret);
 	return ret;
@@ -828,8 +823,6 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 	int loop = 0;
 	u64 rspin_threshold = 0;
 
-	preempt_disable();
-
 	/* sem->wait_lock should not be held when doing optimistic spinning */
 	if (!osq_lock(&sem->osq))
 		goto done;
@@ -937,7 +930,6 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 	}
 	osq_unlock(&sem->osq);
 done:
-	preempt_enable();
 	lockevent_cond_inc(rwsem_opt_fail, !taken);
 	return taken;
 }
@@ -1178,15 +1170,12 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 		if (waiter.handoff_set) {
 			enum owner_state owner_state;
 
-			preempt_disable();
 			owner_state = rwsem_spin_on_owner(sem);
-			preempt_enable();
-
 			if (owner_state == OWNER_NULL)
 				goto trylock_again;
 		}
 
-		schedule();
+		schedule_preempt_disabled();
 		lockevent_inc(rwsem_sleep_writer);
 		set_current_state(state);
 trylock_again:
@@ -1310,10 +1299,14 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
  */
 static inline int __down_write_common(struct rw_semaphore *sem, int state)
 {
+	int ret = 0;
+
+	preempt_disable();
 	if (unlikely(!rwsem_write_trylock(sem))) {
 		if (IS_ERR(rwsem_down_write_slowpath(sem, state)))
-			return -EINTR;
+			ret = -EINTR;
 	}
+	preempt_enable();
 
-	return 0;
+	return ret;
 }
@@ -1330,8 +1323,14 @@ static inline int __down_write_killable(struct rw_semaphore *sem)
 
 static inline int __down_write_trylock(struct rw_semaphore *sem)
 {
+	int ret;
+
+	preempt_disable();
 	DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
-	return rwsem_write_trylock(sem);
+	ret = rwsem_write_trylock(sem);
+	preempt_enable();
+
+	return ret;
 }
 
 /*
@@ -1374,9 +1373,9 @@ static inline void __up_write(struct rw_semaphore *sem)
 	preempt_disable();
 	rwsem_clear_owner(sem);
 	tmp = atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED, &sem->count);
-	preempt_enable();
 	if (unlikely(tmp & RWSEM_FLAG_WAITERS))
 		rwsem_wake(sem);
+	preempt_enable();
 }
 
 /*
@@ -1394,11 +1393,13 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 	 * write side. As such, rely on RELEASE semantics.
 	 */
 	DEBUG_RWSEMS_WARN_ON(rwsem_owner(sem) != current, sem);
+	preempt_disable();
 	tmp = atomic_long_fetch_add_release(
 		-RWSEM_WRITER_LOCKED+RWSEM_READER_BIAS, &sem->count);
 	rwsem_set_reader_owned(sem);
 	if (tmp & RWSEM_FLAG_WAITERS)
 		rwsem_downgrade_wake(sem);
+	preempt_enable();
 }
 
 #else /* !CONFIG_PREEMPT_RT */
-- 
2.31.1