From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH 1/2] locking/rwsem: Enable early rwsem writer lock handoff
Date: Mon, 13 Feb 2023 14:48:31 -0500
Message-Id: <20230213194832.832256-2-longman@redhat.com>
In-Reply-To: <20230213194832.832256-1-longman@redhat.com>
References: <20230213194832.832256-1-longman@redhat.com>

The lock handoff provided in rwsem isn't a true handoff like the one in
the mutex. Instead, it is more like a quiescent state where optimistic
spinning and lock stealing are disabled to make it easier for the first
waiter to acquire the lock.

For readers, setting the HANDOFF bit will prevent writers from stealing
the lock. The actual handoff is done at rwsem_wake() time after taking
the wait_lock. There isn't much we need to improve here other than
setting the RWSEM_NONSPINNABLE bit in the owner field.

For writers, setting the HANDOFF bit does not guarantee that the waiter
can acquire the rwsem in a subsequent rwsem_try_write_lock() call.
A reader can come in and add a RWSEM_READER_BIAS temporarily, which can
spoil the takeover of the rwsem in rwsem_try_write_lock() and lead to
additional delay.

For the mutex, lock handoff is done at unlock time because the owner
value and the handoff bit are in the same lock word and can be updated
atomically. That is not the case for rwsem, which has a count value for
locking and a separate owner value for storing the lock owner. In
addition, the handoff processing differs depending on whether the first
waiter is a writer or a reader, and we can only determine the waiter
type after acquiring the wait_lock. Together with the fact that the
RWSEM_FLAG_HANDOFF bit is stable while holding the wait_lock, the most
convenient place to do the early handoff is at rwsem_wake(), where the
wait_lock has to be acquired anyway.

There isn't much additional cost to doing this check there, while it
increases the chance that a lock handoff will succeed when the writer
wakes up. Since a lot can happen between unlock time and the acquisition
of the wait_lock in rwsem_wake(), we have to reconfirm that the handoff
bit is still set and the lock is free before doing the handoff.

Running a 96-thread rwsem locking test on a 96-thread x86-64 system, the
locking throughput increases slightly from 588 kops/s to 592 kops/s with
this change. The kernel test robot also noticed a 19.3% improvement in
will-it-scale.per_thread_ops due to this commit [1].

[1] https://lore.kernel.org/lkml/202302122155.87699b56-oliver.sang@intel.com/

Signed-off-by: Waiman Long
---
 kernel/locking/rwsem.c | 74 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 63 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acb5a50309a1..3936a5fe1229 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -40,7 +40,7 @@
  *
  * When the rwsem is reader-owned and a spinning writer has timed out,
  * the nonspinnable bit will be set to disable optimistic spinning.
-
+ *
  * When a writer acquires a rwsem, it puts its task_struct pointer
  * into the owner field. It is cleared after an unlock.
  *
@@ -430,6 +430,10 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
                         * Mark writer at the front of the queue for wakeup.
                         * Until the task is actually later awoken later by
                         * the caller, other writers are able to steal it.
+                        *
+                        * *Unless* HANDOFF is set, in which case only the
+                        * first waiter is allowed to take it.
+                        *
                         * Readers, on the other hand, will block as they
                         * will notice the queued writer.
                         */
@@ -467,7 +471,12 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
                                 adjustment -= RWSEM_FLAG_HANDOFF;
                                 lockevent_inc(rwsem_rlock_handoff);
                         }
+                        /*
+                         * With HANDOFF set for reader, we must
+                         * terminate all spinning.
+                         */
                         waiter->handoff_set = true;
+                        rwsem_set_nonspinnable(sem);
                 }
 
                 atomic_long_add(-adjustment, &sem->count);
@@ -609,6 +618,12 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
 
         lockdep_assert_held(&sem->wait_lock);
 
+        if (!waiter->task) {
+                /* Write lock handed off */
+                smp_acquire__after_ctrl_dep();
+                return true;
+        }
+
         count = atomic_long_read(&sem->count);
         do {
                 bool has_handoff = !!(count & RWSEM_FLAG_HANDOFF);
@@ -754,6 +769,10 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 
         owner = rwsem_owner_flags(sem, &flags);
         state = rwsem_owner_state(owner, flags);
+
+        if (owner == current)
+                return OWNER_NONSPINNABLE;      /* Handoff granted */
+
         if (state != OWNER_WRITER)
                 return state;
 
@@ -844,7 +863,6 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
                  * Try to acquire the lock
                  */
                 taken = rwsem_try_write_lock_unqueued(sem);
-
                 if (taken)
                         break;
 
@@ -1168,21 +1186,23 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
                  * without sleeping.
                  */
                 if (waiter.handoff_set) {
-                        enum owner_state owner_state;
-
-                        owner_state = rwsem_spin_on_owner(sem);
-                        if (owner_state == OWNER_NULL)
-                                goto trylock_again;
+                        rwsem_spin_on_owner(sem);
+                        if (!READ_ONCE(waiter.task)) {
+                                /* Write lock handed off */
+                                smp_acquire__after_ctrl_dep();
+                                set_current_state(TASK_RUNNING);
+                                goto out;
+                        }
                 }
 
                 schedule_preempt_disabled();
                 lockevent_inc(rwsem_sleep_writer);
                 set_current_state(state);
-trylock_again:
                 raw_spin_lock_irq(&sem->wait_lock);
         }
         __set_current_state(TASK_RUNNING);
         raw_spin_unlock_irq(&sem->wait_lock);
+out:
         lockevent_inc(rwsem_wlock);
         trace_contention_end(sem, 0);
         return sem;
@@ -1190,6 +1210,11 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 out_nolock:
         __set_current_state(TASK_RUNNING);
         raw_spin_lock_irq(&sem->wait_lock);
+        if (!waiter.task) {
+                smp_acquire__after_ctrl_dep();
+                raw_spin_unlock_irq(&sem->wait_lock);
+                goto out;
+        }
         rwsem_del_wake_waiter(sem, &waiter, &wake_q);
         lockevent_inc(rwsem_wlock_fail);
         trace_contention_end(sem, -EINTR);
@@ -1202,14 +1227,41 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
  */
 static struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 {
-        unsigned long flags;
         DEFINE_WAKE_Q(wake_q);
+        unsigned long flags;
+        unsigned long count;
 
         raw_spin_lock_irqsave(&sem->wait_lock, flags);
 
-        if (!list_empty(&sem->wait_list))
-                rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
+        if (list_empty(&sem->wait_list))
+                goto unlock_out;
+
+        /*
+         * If the rwsem is free and handoff flag is set with wait_lock held,
+         * no other CPUs can take an active lock.
+         */
+        count = atomic_long_read(&sem->count);
+        if (!(count & RWSEM_LOCK_MASK) && (count & RWSEM_FLAG_HANDOFF)) {
+                /*
+                 * Since rwsem_mark_wake() will handle the handoff to readers
+                 * properly, we don't need to do anything extra for readers.
+                 * Early handoff processing will only be needed for writers.
+                 */
+                struct rwsem_waiter *waiter = rwsem_first_waiter(sem);
+                long adj = RWSEM_WRITER_LOCKED - RWSEM_FLAG_HANDOFF;
+
+                if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
+                        atomic_long_set(&sem->owner, (long)waiter->task);
+                        atomic_long_add(adj, &sem->count);
+                        wake_q_add(&wake_q, waiter->task);
+                        rwsem_del_waiter(sem, waiter);
+                        waiter->task = NULL;    /* Signal the handoff */
+                        goto unlock_out;
+                }
+        }
+        rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
 
+unlock_out:
         raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
         wake_up_q(&wake_q);
 
-- 
2.31.1

From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH 2/2] locking/rwsem: Wake up all readers for wait queue waker
Date: Mon, 13 Feb 2023 14:48:32 -0500
Message-Id: <20230213194832.832256-3-longman@redhat.com>
In-Reply-To: <20230213194832.832256-1-longman@redhat.com>
References: <20230213194832.832256-1-longman@redhat.com>

As noted in commit 54c1ee4d614d ("locking/rwsem: Conditionally wake
waiters in reader/writer slowpaths"), it is possible for a reader-owned
rwsem
to have many readers waiting in the wait queue but no writer.

Recently, it was found that one way to cause this condition is to have a
highly contended rwsem with many readers, such as mmap_sem. There can be
hundreds of readers waiting in the wait queue of a writer-owned mmap_sem.
The rwsem_wake() call from the up_write() call of the rwsem-owning
writer can hit the 256-reader wakeup limit and leave the rest of the
readers in the wait queue. The reason for the limit is to avoid
excessive delay in doing other useful work.

With commit 54c1ee4d614d ("locking/rwsem: Conditionally wake waiters in
reader/writer slowpaths"), a new incoming reader should wake up another
batch of up to 256 readers. However, these incoming readers or writers
will have to wait in the wait queue, and there is nothing else they can
do until it is their turn to be woken up.

This patch adds an additional in_waitq argument to rwsem_mark_wake() to
indicate that the waker is in the wait queue and can ignore the wakeup
limit.

Signed-off-by: Waiman Long
---
 kernel/locking/rwsem.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 3936a5fe1229..723a8824b967 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -410,7 +410,7 @@ rwsem_del_waiter(struct rw_semaphore *sem, struct rwsem_waiter *waiter)
  */
 static void rwsem_mark_wake(struct rw_semaphore *sem,
                             enum rwsem_wake_type wake_type,
-                            struct wake_q_head *wake_q)
+                            struct wake_q_head *wake_q, bool in_waitq)
 {
         struct rwsem_waiter *waiter, *tmp;
         long oldcount, woken = 0, adjustment = 0;
@@ -524,9 +524,10 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
                 list_move_tail(&waiter->list, &wlist);
 
                 /*
-                 * Limit # of readers that can be woken up per wakeup call.
+                 * Limit # of readers that can be woken up per wakeup call
+                 * unless the waker is waiting in the wait queue.
                 */
-                if (unlikely(woken >= MAX_READERS_WAKEUP))
+                if (unlikely(!in_waitq && (woken >= MAX_READERS_WAKEUP)))
                         break;
         }
 
@@ -597,7 +598,7 @@ rwsem_del_wake_waiter(struct rw_semaphore *sem, struct rwsem_waiter *waiter,
          * be eligible to acquire or spin on the lock.
          */
         if (rwsem_del_waiter(sem, waiter) && first)
-                rwsem_mark_wake(sem, RWSEM_WAKE_ANY, wake_q);
+                rwsem_mark_wake(sem, RWSEM_WAKE_ANY, wake_q, false);
         raw_spin_unlock_irq(&sem->wait_lock);
         if (!wake_q_empty(wake_q))
                 wake_up_q(wake_q);
@@ -1004,7 +1005,7 @@ static inline void rwsem_cond_wake_waiter(struct rw_semaphore *sem, long count,
                 wake_type = RWSEM_WAKE_ANY;
                 clear_nonspinnable(sem);
         }
-        rwsem_mark_wake(sem, wake_type, wake_q);
+        rwsem_mark_wake(sem, wake_type, wake_q, true);
 }
 
 /*
@@ -1042,7 +1043,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
                         raw_spin_lock_irq(&sem->wait_lock);
                         if (!list_empty(&sem->wait_list))
                                 rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED,
-                                                &wake_q);
+                                                &wake_q, false);
                         raw_spin_unlock_irq(&sem->wait_lock);
                         wake_up_q(&wake_q);
                 }
@@ -1259,7 +1260,7 @@ static struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
                         goto unlock_out;
                 }
         }
-        rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
+        rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q, false);
 
 unlock_out:
         raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
@@ -1281,7 +1282,7 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
         raw_spin_lock_irqsave(&sem->wait_lock, flags);
 
         if (!list_empty(&sem->wait_list))
-                rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED, &wake_q);
+                rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED, &wake_q, false);
 
         raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
         wake_up_q(&wake_q);
-- 
2.31.1
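
For readers following the series without kernel/locking/rwsem.c in front of
them, the sketch below is a minimal user-space model of the "early writer
handoff" idea from patch 1/2. It is illustrative only: the names
(model_rwsem, model_early_handoff, MODEL_LOCK_MASK, and so on) and the
simplified lock word are assumptions of the model, not the kernel's actual
definitions, and the real code must additionally handle readers, optimistic
spinning, and the wait_lock protocol shown in the patch.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified lock word: low bits = owned, one bit = handoff requested. */
#define MODEL_LOCK_MASK    0x3UL   /* reader- or writer-owned (model only) */
#define MODEL_WRITER_BIT   0x1UL
#define MODEL_HANDOFF_BIT  0x4UL

struct model_waiter {
        void *task;          /* set to NULL to signal "lock handed off" */
        bool  is_writer;
};

struct model_rwsem {
        atomic_ulong count;          /* lock word */
        struct model_waiter *first;  /* first waiter in the queue */
};

/*
 * Called by the waker with the wait-queue lock held (not modeled here).
 * If the lock is free, a handoff was requested, and the first waiter is
 * a writer, transfer ownership directly so that no other CPU can steal
 * the lock between now and the time the woken writer actually runs.
 */
static bool model_early_handoff(struct model_rwsem *sem)
{
        unsigned long c = atomic_load(&sem->count);
        struct model_waiter *w = sem->first;

        if (!w || (c & MODEL_LOCK_MASK) || !(c & MODEL_HANDOFF_BIT) ||
            !w->is_writer)
                return false;

        /*
         * Grant the write lock and clear the handoff bit in one adjustment
         * (unsigned wraparound makes the subtraction well defined).
         */
        atomic_fetch_add(&sem->count, MODEL_WRITER_BIT - MODEL_HANDOFF_BIT);
        w->task = NULL;      /* woken writer sees this and skips the trylock */
        return true;         /* caller now wakes the already-owning writer */
}

In the actual patch, this check lives in rwsem_wake() under wait_lock, and
the woken writer detects the handoff by observing waiter.task == NULL in
rwsem_try_write_lock() and in the write slowpath after rwsem_spin_on_owner().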