From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Hanna Reitz, Paul Durrant, Paolo Bonzini, Alberto Garcia,
	Emanuele Giuseppe Esposito, John Snow, Kevin Wolf, Eric Blake,
	Wen Congyang, xen-devel@lists.xenproject.org, Coiby Xu,
	Stefan Hajnoczi, Eduardo Habkost, Xie Changlong, Ari Sundholm,
	Li Zhijian, Cleber Rosa, Juan Quintela, "Michael S. Tsirkin",
	Daniel P. Berrangé, Jason Wang, Vladimir Sementsov-Ogievskiy,
	Zhang Chen, Peter Xu, Anthony Perard, Stefano Stabellini,
	Leonardo Bras, Pavel Dovgalyuk, Fam Zheng, Fabiano Rosas
Subject: [PATCH 09/12] docs: remove AioContext lock from IOThread docs
Date: Wed, 29 Nov 2023 14:55:50 -0500
Message-ID: <20231129195553.942921-10-stefanha@redhat.com>
In-Reply-To: <20231129195553.942921-1-stefanha@redhat.com>
References: <20231129195553.942921-1-stefanha@redhat.com>

Encourage the use of locking primitives and stop mentioning the
AioContext lock since it is being removed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake
---
 docs/devel/multiple-iothreads.txt | 45 +++++++++++--------------------
 1 file changed, 15 insertions(+), 30 deletions(-)

diff --git a/docs/devel/multiple-iothreads.txt b/docs/devel/multiple-iothreads.txt
index a3e949f6b3..4865196bde 100644
--- a/docs/devel/multiple-iothreads.txt
+++ b/docs/devel/multiple-iothreads.txt
@@ -88,27 +88,18 @@ loop, depending on which AioContext instance the caller passes in.
 
 How to synchronize with an IOThread
 -----------------------------------
-AioContext is not thread-safe so some rules must be followed when using file
-descriptors, event notifiers, timers, or BHs across threads:
+Variables that can be accessed by multiple threads require some form of
+synchronization such as qemu_mutex_lock(), rcu_read_lock(), etc.
 
-1. AioContext functions can always be called safely. They handle their
-own locking internally.
-
-2. Other threads wishing to access the AioContext must use
-aio_context_acquire()/aio_context_release() for mutual exclusion. Once the
-context is acquired no other thread can access it or run event loop iterations
-in this AioContext.
-
-Legacy code sometimes nests aio_context_acquire()/aio_context_release() calls.
-Do not use nesting anymore, it is incompatible with the BDRV_POLL_WHILE() macro
-used in the block layer and can lead to hangs.
-
-There is currently no lock ordering rule if a thread needs to acquire multiple
-AioContexts simultaneously. Therefore, it is only safe for code holding the
-QEMU global mutex to acquire other AioContexts.
+AioContext functions like aio_set_fd_handler(), aio_set_event_notifier(),
+aio_bh_new(), and aio_timer_new() are thread-safe. They can be used to trigger
+activity in an IOThread.
 
 Side note: the best way to schedule a function call across threads is to call
-aio_bh_schedule_oneshot(). No acquire/release or locking is needed.
+aio_bh_schedule_oneshot().
+
+The main loop thread can wait synchronously for a condition using
+AIO_WAIT_WHILE().
 
 AioContext and the block layer
 ------------------------------
@@ -124,22 +115,16 @@ Block layer code must therefore expect to run in an IOThread and avoid using
 old APIs that implicitly use the main loop. See the "How to program for
 IOThreads" above for information on how to do that.
 
-If main loop code such as a QMP function wishes to access a BlockDriverState
-it must first call aio_context_acquire(bdrv_get_aio_context(bs)) to ensure
-that callbacks in the IOThread do not run in parallel.
-
 Code running in the monitor typically needs to ensure that past
 requests from the guest are completed. When a block device is running
 in an IOThread, the IOThread can also process requests from the guest
 (via ioeventfd). To achieve both objects, wrap the code between
 bdrv_drained_begin() and bdrv_drained_end(), thus creating a "drained
-section". The functions must be called between aio_context_acquire()
-and aio_context_release(). You can freely release and re-acquire the
-AioContext within a drained section.
+section".
 
-Long-running jobs (usually in the form of coroutines) are best scheduled in
-the BlockDriverState's AioContext to avoid the need to acquire/release around
-each bdrv_*() call. The functions bdrv_add/remove_aio_context_notifier,
-or alternatively blk_add/remove_aio_context_notifier if you use BlockBackends,
-can be used to get a notification whenever bdrv_try_change_aio_context() moves a
+Long-running jobs (usually in the form of coroutines) are often scheduled in
+the BlockDriverState's AioContext. The functions
+bdrv_add/remove_aio_context_notifier, or alternatively
+blk_add/remove_aio_context_notifier if you use BlockBackends, can be used to
+get a notification whenever bdrv_try_change_aio_context() moves a
 BlockDriverState to a different AioContext.
-- 
2.42.0
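
P.S. To illustrate the cross-thread scheduling pattern the updated text
recommends, here is a minimal sketch, assuming QEMU's aio API as described
in the doc. MyState, my_bh_cb, and kick_iothread are hypothetical names,
and error handling is omitted:

#include "qemu/osdep.h"
#include "block/aio.h"

typedef struct {
    int value; /* owned by the IOThread once the BH is scheduled */
} MyState;

/* Runs in the IOThread's AioContext, so it may safely touch state
 * owned by that thread without taking any lock. */
static void my_bh_cb(void *opaque)
{
    MyState *s = opaque;
    s->value++;
    g_free(s);
}

/* May be called from any thread: aio_bh_schedule_oneshot() is
 * thread-safe and needs no locking around it. */
static void kick_iothread(AioContext *ctx)
{
    MyState *s = g_new0(MyState, 1);
    aio_bh_schedule_oneshot(ctx, my_bh_cb, s);
}

Because the BH runs in the IOThread's own event loop, ownership of the
state is handed over at the scheduling call and no mutex is needed.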
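
Likewise, a sketch of the "drained section" pattern for monitor code, now
without any aio_context_acquire()/aio_context_release() around it. The
function name monitor_inspect_example is hypothetical:

#include "qemu/osdep.h"
#include "block/block.h"

/* Main loop / monitor context: quiesce guest requests before touching bs */
static void monitor_inspect_example(BlockDriverState *bs)
{
    bdrv_drained_begin(bs); /* waits for in-flight requests, including
                             * those submitted via ioeventfd in the
                             * IOThread */

    /* ... safely inspect or reconfigure bs while no requests run ... */

    bdrv_drained_end(bs);   /* resume request processing */
}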