From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé, Stefan Hajnoczi, Vladimir Sementsov-Ogievskiy,
 Cleber Rosa, Xie Changlong, Paul Durrant, Ari Sundholm, Jason Wang,
 Eric Blake, John Snow, Eduardo Habkost, Wen Congyang, Alberto Garcia,
 Anthony Perard, "Michael S. Tsirkin", Stefano Stabellini,
 qemu-block@nongnu.org, Juan Quintela, Paolo Bonzini, Kevin Wolf, Coiby Xu,
 Fabiano Rosas, Hanna Reitz, Zhang Chen, Daniel P. Berrangé, Pavel Dovgalyuk,
 Peter Xu, Emanuele Giuseppe Esposito, Fam Zheng, Leonardo Bras,
 David Hildenbrand, Li Zhijian, xen-devel@lists.xenproject.org
Subject: [PATCH v2 11/14] docs: remove AioContext lock from IOThread docs
Date: Tue, 5 Dec 2023 13:20:08 -0500
Message-ID: <20231205182011.1976568-12-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

Encourage the use of locking primitives and stop mentioning the
AioContext lock since it is being removed.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
---
 docs/devel/multiple-iothreads.txt | 45 +++++++++++--------------------
 1 file changed, 15 insertions(+), 30 deletions(-)

diff --git a/docs/devel/multiple-iothreads.txt b/docs/devel/multiple-iothreads.txt
index a3e949f6b3..4865196bde 100644
--- a/docs/devel/multiple-iothreads.txt
+++ b/docs/devel/multiple-iothreads.txt
@@ -88,27 +88,18 @@ loop, depending on which AioContext instance the caller passes in.
 
 How to synchronize with an IOThread
 -----------------------------------
-AioContext is not thread-safe so some rules must be followed when using file
-descriptors, event notifiers, timers, or BHs across threads:
+Variables that can be accessed by multiple threads require some form of
+synchronization such as qemu_mutex_lock(), rcu_read_lock(), etc.
 
-1. AioContext functions can always be called safely. They handle their
-own locking internally.
-
-2. Other threads wishing to access the AioContext must use
-aio_context_acquire()/aio_context_release() for mutual exclusion. Once the
-context is acquired no other thread can access it or run event loop iterations
-in this AioContext.
-
-Legacy code sometimes nests aio_context_acquire()/aio_context_release() calls.
-Do not use nesting anymore, it is incompatible with the BDRV_POLL_WHILE() macro
-used in the block layer and can lead to hangs.
-
-There is currently no lock ordering rule if a thread needs to acquire multiple
-AioContexts simultaneously. Therefore, it is only safe for code holding the
-QEMU global mutex to acquire other AioContexts.
+AioContext functions like aio_set_fd_handler(), aio_set_event_notifier(),
+aio_bh_new(), and aio_timer_new() are thread-safe. They can be used to trigger
+activity in an IOThread.
 
 Side note: the best way to schedule a function call across threads is to call
-aio_bh_schedule_oneshot(). No acquire/release or locking is needed.
+aio_bh_schedule_oneshot().
+
+The main loop thread can wait synchronously for a condition using
+AIO_WAIT_WHILE().
 
 AioContext and the block layer
 ------------------------------
@@ -124,22 +115,16 @@ Block layer code must therefore expect to run in an IOThread and avoid using
 old APIs that implicitly use the main loop. See the "How to program for
 IOThreads" above for information on how to do that.
 
-If main loop code such as a QMP function wishes to access a BlockDriverState
-it must first call aio_context_acquire(bdrv_get_aio_context(bs)) to ensure
-that callbacks in the IOThread do not run in parallel.
-
 Code running in the monitor typically needs to ensure that past
 requests from the guest are completed. When a block device is running
 in an IOThread, the IOThread can also process requests from the guest
 (via ioeventfd). To achieve both objects, wrap the code between
 bdrv_drained_begin() and bdrv_drained_end(), thus creating a "drained
-section". The functions must be called between aio_context_acquire()
-and aio_context_release(). You can freely release and re-acquire the
-AioContext within a drained section.
+section".
 
-Long-running jobs (usually in the form of coroutines) are best scheduled in
-the BlockDriverState's AioContext to avoid the need to acquire/release around
-each bdrv_*() call. The functions bdrv_add/remove_aio_context_notifier,
-or alternatively blk_add/remove_aio_context_notifier if you use BlockBackends,
-can be used to get a notification whenever bdrv_try_change_aio_context() moves a
+Long-running jobs (usually in the form of coroutines) are often scheduled in
+the BlockDriverState's AioContext. The functions
+bdrv_add/remove_aio_context_notifier, or alternatively
+blk_add/remove_aio_context_notifier if you use BlockBackends, can be used to
+get a notification whenever bdrv_try_change_aio_context() moves a
 BlockDriverState to a different AioContext.
-- 
2.43.0
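
As an aside for readers of the updated text, here is a minimal, illustrative
sketch (not part of this patch) of scheduling work in an IOThread with
aio_bh_schedule_oneshot(), which is thread-safe and needs no explicit
locking. The MyWork structure, my_work_bh(), and schedule_in_iothread() are
made-up names for the example; aio_bh_schedule_oneshot() and
iothread_get_aio_context() are the real QEMU APIs the documentation refers to.

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "sysemu/iothread.h"

    typedef struct {
        int value;          /* hypothetical payload handed to the IOThread */
    } MyWork;

    /* Called once from the IOThread's event loop, not the caller's thread */
    static void my_work_bh(void *opaque)
    {
        MyWork *work = opaque;
        /* ... act on work->value from the IOThread here ... */
        g_free(work);
    }

    static void schedule_in_iothread(IOThread *iothread, int value)
    {
        MyWork *work = g_new0(MyWork, 1);
        work->value = value;

        /* Thread-safe: no explicit locking is needed around this call */
        aio_bh_schedule_oneshot(iothread_get_aio_context(iothread),
                                my_work_bh, work);
    }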
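
Similarly, a hedged sketch of the "drained section" pattern described in the
updated text, as monitor code might use it once the AioContext lock is gone.
The helper name and the elided operation are hypothetical; bdrv_drained_begin()
and bdrv_drained_end() are the real APIs.

    #include "qemu/osdep.h"
    #include "block/block.h"

    /* Hypothetical QMP-style helper running in the main loop thread */
    static void monitor_operate_on_bs(BlockDriverState *bs)
    {
        /* Quiesce the device: wait for in-flight requests, including those
         * submitted by the IOThread via ioeventfd, and stop new ones from
         * being processed until the drained section ends. */
        bdrv_drained_begin(bs);

        /* ... inspect or reconfigure bs safely here ... */

        bdrv_drained_end(bs);
    }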