From nobody Fri Dec 26 21:39:28 2025
From: David Laight
To: linux-kernel@vger.kernel.org, peterz@infradead.org, longman@redhat.com
Cc: mingo@redhat.com, will@kernel.org, boqun.feng@gmail.com,
    Linus Torvalds, virtualization@lists.linux-foundation.org, Zeng Heng
Subject: [PATCH next v2 2/5] locking/osq_lock: Optimise the vcpu_is_preempted() check.
Date: Sun, 31 Dec 2023 21:52:51 +0000
Message-ID: <3a9d1782cd50436c99ced8c10175bae6@AcuMS.aculab.com>
References: <2b4e8a5816a742d2bd23fdbaa8498e80@AcuMS.aculab.com>
In-Reply-To: <2b4e8a5816a742d2bd23fdbaa8498e80@AcuMS.aculab.com>

The vcpu_is_preempted() test stops osq_lock() spinning if a virtual cpu
is no longer running.

Although patched out for bare-metal, the code still needs the cpu number.
Reading this from 'prev->cpu' is pretty much guaranteed to cause a cache
miss when osq_unlock() is waking up the next cpu.

Instead, save 'prev->cpu' in 'node->prev_cpu' and use that value.
Update it in the osq_lock() 'unqueue' path when 'node->prev' is changed.
This is simpler than checking for 'node->prev' changing and then caching
'prev->cpu'.
Signed-off-by: David Laight
Reviewed-by: Waiman Long
---
 kernel/locking/osq_lock.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index e0bc74d85a76..eb8a6dfdb79d 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -14,8 +14,9 @@
 
 struct optimistic_spin_node {
 	struct optimistic_spin_node *next, *prev;
-	int locked; /* 1 if lock acquired */
-	int cpu; /* encoded CPU # + 1 value */
+	int locked;   /* 1 if lock acquired */
+	int cpu;      /* encoded CPU # + 1 value */
+	int prev_cpu; /* encoded CPU # + 1 value */
 };
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node);
@@ -29,11 +30,6 @@ static inline int encode_cpu(int cpu_nr)
 	return cpu_nr + 1;
 }
 
-static inline int node_cpu(struct optimistic_spin_node *node)
-{
-	return node->cpu - 1;
-}
-
 static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
 {
 	int cpu_nr = encoded_cpu_val - 1;
@@ -110,9 +106,10 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	if (old == OSQ_UNLOCKED_VAL)
 		return true;
 
-	node->locked = 0;
+	node->prev_cpu = old;
 	prev = decode_cpu(old);
 	node->prev = prev;
+	node->locked = 0;
 
 	/*
 	 * osq_lock() unqueue
@@ -144,7 +141,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * polling, be careful.
 	 */
 	if (smp_cond_load_relaxed(&node->locked, VAL || need_resched() ||
-				  vcpu_is_preempted(node_cpu(node->prev))))
+				  vcpu_is_preempted(READ_ONCE(node->prev_cpu) - 1)))
 		return true;
 
 	/* unqueue */
@@ -201,6 +198,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * it will wait in Step-A.
 	 */
 
+	WRITE_ONCE(next->prev_cpu, prev->cpu);
 	WRITE_ONCE(next->prev, prev);
 	WRITE_ONCE(prev->next, next);
 
-- 
2.17.1