From: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
Date: Wed, 29 Oct 2025 15:12:47 +0900
Subject: [PATCH 3/5] rcu: Use call_rcu() in synchronize_rcu()
Message-Id: <20251029-force_rcu-v1-3-bf860a6277a6@rsg.ci.i.u-tokyo.ac.jp>
In-Reply-To: <20251029-force_rcu-v1-0-bf860a6277a6@rsg.ci.i.u-tokyo.ac.jp>
References: <20251029-force_rcu-v1-0-bf860a6277a6@rsg.ci.i.u-tokyo.ac.jp>
To: qemu-devel@nongnu.org, Dmitry Osipenko
Cc: Paolo Bonzini, "Michael S. Tsirkin", Alex Bennée, Akihiko Odaki
Tsirkin" , =?utf-8?q?Alex_Benn=C3=A9e?= , Akihiko Odaki X-Mailer: b4 0.15-dev-179e8 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=49.212.243.89; envelope-from=odaki@rsg.ci.i.u-tokyo.ac.jp; helo=www3579.sakura.ne.jp X-Spam_score_int: -16 X-Spam_score: -1.7 X-Spam_bar: - X-Spam_report: (-1.7 / 5.0 requ) BAYES_00=-1.9, DKIM_INVALID=0.1, DKIM_SIGNED=0.1, RCVD_IN_VALIDITY_CERTIFIED_BLOCKED=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZM-MESSAGEID: 1761718497746154100 Previously, synchronize_rcu() was a single-threaded implementation that is protected with a mutex. It was used only in the RCU thread and tests, and real users instead use call_rcu(), which relies on the RCU thread. The usage of synchronize_rcu() in tests did not accurately represent real use cases because it caused locking with the mutex, which never happened in real use cases, and it did not exercise the logic in the RCU thread. Add a new implementation of synchronize_rcu() which uses call_rcu() to represent real use cases in tests. The old synchronize_rcu() is now renamed to enter_qs() and only used in the RCU thread, making the mutex unnecessary. Signed-off-by: Akihiko Odaki --- util/rcu.c | 51 +++++++++++++++++++++++++++------------------------ 1 file changed, 27 insertions(+), 24 deletions(-) diff --git a/util/rcu.c b/util/rcu.c index acac9446ea98..3c4af9d213c8 100644 --- a/util/rcu.c +++ b/util/rcu.c @@ -38,7 +38,7 @@ =20 /* * Global grace period counter. Bit 0 is always one in rcu_gp_ctr. - * Bits 1 and above are defined in synchronize_rcu. + * Bits 1 and above are defined in enter_qs(). */ #define RCU_GP_LOCKED (1UL << 0) #define RCU_GP_CTR (1UL << 1) @@ -52,7 +52,6 @@ QemuEvent rcu_gp_event; static int in_drain_call_rcu; static int rcu_call_count; static QemuMutex rcu_registry_lock; -static QemuMutex rcu_sync_lock; =20 /* * Check whether a quiescent state was crossed between the beginning of @@ -111,7 +110,7 @@ static void wait_for_readers(void) * * If this is the last iteration, this barrier also prevents * frees from seeping upwards, and orders the two wait phases - * on architectures with 32-bit longs; see synchronize_rcu(). + * on architectures with 32-bit longs; see enter_qs(). */ smp_mb_global(); =20 @@ -137,9 +136,9 @@ static void wait_for_readers(void) * wait too much time. * * rcu_register_thread() may add nodes to ®istry; it will not - * wake up synchronize_rcu, but that is okay because at least anot= her + * wake up enter_qs(), but that is okay because at least another * thread must exit its RCU read-side critical section before - * synchronize_rcu is done. The next iteration of the loop will + * enter_qs() is done. The next iteration of the loop will * move the new thread's rcu_reader from ®istry to &qsreaders, * because rcu_gp_ongoing() will return false. 
             *
@@ -171,10 +170,8 @@ static void wait_for_readers(void)
     QLIST_SWAP(&registry, &qsreaders, node);
 }
 
-void synchronize_rcu(void)
+static void enter_qs(void)
 {
-    QEMU_LOCK_GUARD(&rcu_sync_lock);
-
     /* Write RCU-protected pointers before reading p_rcu_reader->ctr.
      * Pairs with smp_mb_placeholder() in rcu_read_lock().
      *
@@ -289,7 +286,7 @@ static void *call_rcu_thread(void *opaque)
 
     /*
      * Fetch rcu_call_count now, we only must process elements that were
-     * added before synchronize_rcu() starts.
+     * added before enter_qs() starts.
      */
     for (;;) {
         qemu_event_reset(&rcu_call_ready_event);
@@ -304,7 +301,7 @@
             qemu_event_wait(&rcu_call_ready_event);
         }
 
-        synchronize_rcu();
+        enter_qs();
         qatomic_sub(&rcu_call_count, n);
         bql_lock();
         while (n > 0) {
@@ -337,15 +334,24 @@ void call_rcu1(struct rcu_head *node, void (*func)(struct rcu_head *node))
 }
 
 
-struct rcu_drain {
+typedef struct Sync {
     struct rcu_head rcu;
-    QemuEvent drain_complete_event;
-};
+    QemuEvent complete_event;
+} Sync;
 
-static void drain_rcu_callback(struct rcu_head *node)
+static void sync_rcu_callback(Sync *sync)
 {
-    struct rcu_drain *event = (struct rcu_drain *)node;
-    qemu_event_set(&event->drain_complete_event);
+    qemu_event_set(&sync->complete_event);
+}
+
+void synchronize_rcu(void)
+{
+    Sync sync;
+
+    qemu_event_init(&sync.complete_event, false);
+    call_rcu(&sync, sync_rcu_callback, rcu);
+    qemu_event_wait(&sync.complete_event);
+    qemu_event_destroy(&sync.complete_event);
 }
 
 /*
@@ -359,11 +365,11 @@ static void drain_rcu_callback(struct rcu_head *node)
 
 void drain_call_rcu(void)
 {
-    struct rcu_drain rcu_drain;
+    Sync sync;
     bool locked = bql_locked();
 
-    memset(&rcu_drain, 0, sizeof(struct rcu_drain));
-    qemu_event_init(&rcu_drain.drain_complete_event, false);
+    memset(&sync, 0, sizeof(sync));
+    qemu_event_init(&sync.complete_event, false);
 
     if (locked) {
         bql_unlock();
@@ -383,8 +389,8 @@ void drain_call_rcu(void)
      */
 
     qatomic_inc(&in_drain_call_rcu);
-    call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
-    qemu_event_wait(&rcu_drain.drain_complete_event);
+    call_rcu(&sync, sync_rcu_callback, rcu);
+    qemu_event_wait(&sync.complete_event);
     qatomic_dec(&in_drain_call_rcu);
 
     if (locked) {
@@ -427,7 +433,6 @@ static void rcu_init_complete(void)
     QemuThread thread;
 
     qemu_mutex_init(&rcu_registry_lock);
-    qemu_mutex_init(&rcu_sync_lock);
     qemu_event_init(&rcu_gp_event, true);
 
     qemu_event_init(&rcu_call_ready_event, false);
@@ -460,7 +465,6 @@ static void rcu_init_lock(void)
         return;
     }
 
-    qemu_mutex_lock(&rcu_sync_lock);
     qemu_mutex_lock(&rcu_registry_lock);
 }
 
@@ -471,7 +475,6 @@
         return;
     }
 
     qemu_mutex_unlock(&rcu_registry_lock);
-    qemu_mutex_unlock(&rcu_sync_lock);
 }
 
 static void rcu_init_child(void)
-- 
2.51.0
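For readers less familiar with the QEMU RCU API, here is a minimal sketch
(not part of the patch; MyConfig, current_cfg and the update helpers are
hypothetical names) of how callers typically pair these primitives: readers
dereference the shared pointer inside rcu_read_lock()/rcu_read_unlock(),
while writers publish a new version and reclaim the old one either
asynchronously with call_rcu() or by blocking in synchronize_rcu(), which
after this patch is itself funnelled through call_rcu() and the RCU thread.

/* Illustrative sketch only, not part of the patch: a typical RCU update
 * pattern in QEMU.  MyConfig, current_cfg and the update helpers are
 * hypothetical names used for the example.
 */
#include "qemu/osdep.h"
#include "qemu/rcu.h"

typedef struct MyConfig {
    struct rcu_head rcu;    /* must be the first field for call_rcu() */
    int value;
} MyConfig;

static MyConfig *current_cfg;

/* Reader: dereference the shared pointer inside a read-side critical
 * section; the object cannot be freed while the section is active.
 */
static int read_value(void)
{
    MyConfig *cfg;
    int val;

    rcu_read_lock();
    cfg = qatomic_rcu_read(&current_cfg);
    val = cfg ? cfg->value : 0;
    rcu_read_unlock();
    return val;
}

static void free_config(MyConfig *cfg)
{
    g_free(cfg);
}

/* Writer, asynchronous: publish the new version and let the RCU thread
 * free the old one after a grace period.
 */
static void update_value_async(int value)
{
    MyConfig *new_cfg = g_new0(MyConfig, 1);
    MyConfig *old_cfg = qatomic_rcu_read(&current_cfg);

    new_cfg->value = value;
    qatomic_rcu_set(&current_cfg, new_cfg);
    if (old_cfg) {
        call_rcu(old_cfg, free_config, rcu);
    }
}

/* Writer, synchronous: block until a grace period has elapsed, then free
 * the old version directly.  With this patch, the wait itself goes
 * through call_rcu() and the RCU thread, so tests that call
 * synchronize_rcu() now exercise the same path as real users.
 */
static void update_value_sync(int value)
{
    MyConfig *new_cfg = g_new0(MyConfig, 1);
    MyConfig *old_cfg = qatomic_rcu_read(&current_cfg);

    new_cfg->value = value;
    qatomic_rcu_set(&current_cfg, new_cfg);
    synchronize_rcu();
    g_free(old_cfg);
}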