From: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
Date: Thu, 16 Oct 2025 15:34:41 +0900
Subject: [PATCH] rcu: Unify force quiescent state
Message-Id: <20251016-force-v1-1-919a82112498@rsg.ci.i.u-tokyo.ac.jp>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, David Hildenbrand, Eric Blake,
    Philippe Mathieu-Daudé, Dmitry Osipenko, Markus Armbruster,
    Marc-André Lureau, Peter Xu, "Michael S. Tsirkin", Akihiko Odaki

Borrow the concept of force quiescent state from Linux to keep readers
fast during normal operation and to avoid stalls.

Background
==========

The previous implementation had four steps to begin reclamation:

1. call_rcu_thread() would wait for the first callback.

2. call_rcu_thread() would periodically poll until a decent number of
   callbacks piled up or it timed out.

3. synchronize_rcu() would start a grace period (GP).

4. wait_for_readers() would wait for the GP to end. It would also
   trigger the force_rcu notifier to break busy loops in a read-side
   critical section if drain_call_rcu() had been called.

Problem
=======

The separation of the waiting logic across these steps led to
suboptimal behavior:

The GP was delayed until call_rcu_thread() stopped polling.

force_rcu was not consistently triggered when call_rcu_thread()
detected a high number of pending callbacks or a timeout. This
inconsistency sometimes led to stalls, as reported in a virtio-gpu
issue where memory unmapping was blocked [1].

wait_for_readers() imposed unnecessary overhead in non-urgent cases by
unconditionally executing qatomic_set(&index->waiting, true) and
qemu_event_reset(&rcu_gp_event), which are needed only for expedited
synchronization.

Solution
========

Move the polling from call_rcu_thread() into wait_for_readers() so that
it no longer delays the start of the GP. Additionally, reorganize
wait_for_readers() to distinguish two states:

Normal state: rely exclusively on periodic polling to detect the end of
the GP, keeping the read-side fast path intact.

Force quiescent state: whenever synchronization needs to be expedited,
always trigger force_rcu and execute both
qatomic_set(&index->waiting, true) and qemu_event_reset(&rcu_gp_event).
This avoids stalls while confining the read-side overhead to this
state.

This unified approach, inspired by Linux RCU, ensures consistent and
efficient grace period handling; testing confirms that it resolves the
virtio-gpu issue.

[1] https://lore.kernel.org/qemu-devel/20251014111234.3190346-9-alex.bennee@linaro.org/
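
[Editor's note: as background for the force_rcu mechanism mentioned
above, the sketch below shows how a reader thread can register a
per-thread notifier so that wait_for_readers() can break its busy loop
when the grace period must end promptly. This is a minimal
illustration, not part of the patch; it assumes the Notifier API from
include/qemu/notify.h and the rcu_add_force_rcu_notifier() /
rcu_remove_force_rcu_notifier() helpers from include/qemu/rcu.h. The
handler name and the stop flag are invented for the example.]

    /* Sketch only: a reader whose busy loop can be broken by force_rcu. */
    #include "qemu/osdep.h"
    #include "qemu/atomic.h"
    #include "qemu/notify.h"
    #include "qemu/rcu.h"

    static bool stop_requested;

    /* Runs in the writer's context when it forces the quiescent state. */
    static void break_busy_loop(Notifier *notifier, void *data)
    {
        qatomic_set(&stop_requested, true);
    }

    static Notifier force_rcu_notifier = { .notify = break_busy_loop };

    static void *reader_thread(void *opaque)
    {
        rcu_register_thread();
        rcu_add_force_rcu_notifier(&force_rcu_notifier);

        rcu_read_lock();
        while (!qatomic_read(&stop_requested)) {
            /* Busy work; without force_rcu this could stall the writer. */
        }
        rcu_read_unlock();

        rcu_remove_force_rcu_notifier(&force_rcu_notifier);
        rcu_unregister_thread();
        return NULL;
    }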
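[Editor's note: before the diff itself, a condensed view of the
reworked wait_for_readers() loop it introduces. This is a sketch
simplified from the patch below; locking, memory-ordering barriers,
and the registry walk are elided.]

    /* Simplified shape of the new wait_for_readers() loop. */
    for (;;) {
        if (!forced &&
            (qatomic_read(&rcu_call_count) >= RCU_CALL_MIN_SIZE ||
             sleeps >= 5 || qatomic_read(&in_drain_call_rcu))) {
            /*
             * Enter the force quiescent state: notify force_rcu and
             * set index->waiting for every registered reader.
             */
            forced = true;
        }

        /* ... move quiescent readers to qsreaders; break if none left ... */

        if (forced) {
            qemu_event_wait(&rcu_gp_event);   /* expedited: wake on event */
            qemu_event_reset(&rcu_gp_event);
        } else {
            g_usleep(10000);                  /* normal: periodic polling */
            sleeps++;
        }
    }
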
Signed-off-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
Tested-by: Dmitry Osipenko # drm native
---
 util/rcu.c | 79 ++++++++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 51 insertions(+), 28 deletions(-)

diff --git a/util/rcu.c b/util/rcu.c
index b703c86f15a3..acac9446ea98 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -43,10 +43,14 @@
 #define RCU_GP_LOCKED (1UL << 0)
 #define RCU_GP_CTR    (1UL << 1)
 
+
+#define RCU_CALL_MIN_SIZE 30
+
 unsigned long rcu_gp_ctr = RCU_GP_LOCKED;
 
 QemuEvent rcu_gp_event;
 static int in_drain_call_rcu;
+static int rcu_call_count;
 static QemuMutex rcu_registry_lock;
 static QemuMutex rcu_sync_lock;
 
@@ -76,15 +80,29 @@ static void wait_for_readers(void)
 {
     ThreadList qsreaders = QLIST_HEAD_INITIALIZER(qsreaders);
     struct rcu_reader_data *index, *tmp;
+    int sleeps = 0;
+    bool forced = false;
 
     for (;;) {
-        /* We want to be notified of changes made to rcu_gp_ongoing
-         * while we walk the list.
+        /*
+         * Force the grace period to end and wait for it if any of the
+         * following heuristic conditions are satisfied:
+         * - A decent number of callbacks piled up.
+         * - It timed out.
+         * - It is in a drain_call_rcu() call.
+         *
+         * Otherwise, periodically poll the grace period, hoping it ends
+         * promptly.
          */
-        qemu_event_reset(&rcu_gp_event);
+        if (!forced &&
+            (qatomic_read(&rcu_call_count) >= RCU_CALL_MIN_SIZE ||
+             sleeps >= 5 || qatomic_read(&in_drain_call_rcu))) {
+            forced = true;
 
-        QLIST_FOREACH(index, &registry, node) {
-            qatomic_set(&index->waiting, true);
+            QLIST_FOREACH(index, &registry, node) {
+                notifier_list_notify(&index->force_rcu, NULL);
+                qatomic_set(&index->waiting, true);
+            }
         }
 
         /* Here, order the stores to index->waiting before the loads of
@@ -106,8 +124,6 @@ static void wait_for_readers(void)
              * get some extra futex wakeups.
              */
             qatomic_set(&index->waiting, false);
-        } else if (qatomic_read(&in_drain_call_rcu)) {
-            notifier_list_notify(&index->force_rcu, NULL);
         }
     }
 
@@ -115,7 +131,8 @@ static void wait_for_readers(void)
             break;
         }
 
-        /* Wait for one thread to report a quiescent state and try again.
+        /*
+         * Sleep for a while and try again.
          * Release rcu_registry_lock, so rcu_(un)register_thread() doesn't
          * wait too much time.
          *
@@ -133,7 +150,20 @@ static void wait_for_readers(void)
          * rcu_registry_lock is released.
          */
         qemu_mutex_unlock(&rcu_registry_lock);
-        qemu_event_wait(&rcu_gp_event);
+
+        if (forced) {
+            qemu_event_wait(&rcu_gp_event);
+
+            /*
+             * We want to be notified of changes made to rcu_gp_ongoing
+             * while we walk the list.
+             */
+            qemu_event_reset(&rcu_gp_event);
+        } else {
+            g_usleep(10000);
+            sleeps++;
+        }
+
         qemu_mutex_lock(&rcu_registry_lock);
     }
 
@@ -173,15 +203,11 @@ void synchronize_rcu(void)
     }
 }
 
-
-#define RCU_CALL_MIN_SIZE 30
-
 /* Multi-producer, single-consumer queue based on urcu/static/wfqueue.h
  * from liburcu.  Note that head is only used by the consumer.
  */
 static struct rcu_head dummy;
 static struct rcu_head *head = &dummy, **tail = &dummy.next;
-static int rcu_call_count;
 static QemuEvent rcu_call_ready_event;
 
 static void enqueue(struct rcu_head *node)
@@ -259,30 +285,27 @@ static void *call_rcu_thread(void *opaque)
     rcu_register_thread();
 
     for (;;) {
-        int tries = 0;
-        int n = qatomic_read(&rcu_call_count);
+        int n;
 
-        /* Heuristically wait for a decent number of callbacks to pile up.
+        /*
          * Fetch rcu_call_count now, we only must process elements that were
          * added before synchronize_rcu() starts.
          */
-        while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
-            g_usleep(10000);
-            if (n == 0) {
-                qemu_event_reset(&rcu_call_ready_event);
-                n = qatomic_read(&rcu_call_count);
-                if (n == 0) {
+        for (;;) {
+            qemu_event_reset(&rcu_call_ready_event);
+            n = qatomic_read(&rcu_call_count);
+            if (n) {
+                break;
+            }
+
 #if defined(CONFIG_MALLOC_TRIM)
-                    malloc_trim(4 * 1024 * 1024);
+            malloc_trim(4 * 1024 * 1024);
 #endif
-                    qemu_event_wait(&rcu_call_ready_event);
-                }
-            }
-            n = qatomic_read(&rcu_call_count);
+            qemu_event_wait(&rcu_call_ready_event);
         }
 
-        qatomic_sub(&rcu_call_count, n);
         synchronize_rcu();
+        qatomic_sub(&rcu_call_count, n);
         bql_lock();
         while (n > 0) {
             node = try_dequeue();

---
base-commit: 0dc905ac306c68649e05cdaf8434123c8f917b41
change-id: 20251015-force-c4e03a9ba719

Best regards,
-- 
Akihiko Odaki