From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Jean-Christophe Dubois, Fabiano Rosas, qemu-s390x@nongnu.org, Song Gao,
    Marcel Apfelbaum, Thomas Huth, Hyman Huang, Marcelo Tosatti,
    David Woodhouse, Andrey Smirnov, Peter Maydell, Kevin Wolf,
    Ilya Leoshkevich, Artyom Tarasenko, Mark Cave-Ayland, Max Filippov,
    Alistair Francis, Paul Durrant, Jagannathan Raman, Juan Quintela,
    Daniel P. Berrangé, qemu-arm@nongnu.org, Jason Wang, Gerd Hoffmann,
    Hanna Reitz, Marc-André Lureau, BALATON Zoltan, Daniel Henrique Barboza,
    Elena Ufimtseva, Aurelien Jarno, Hailiang Zhang, Roman Bolshakov,
    Huacai Chen, Fam Zheng, Eric Blake, Jiri Slaby, Alexander Graf,
    Liu Zhiwei, Weiwei Li, Eric Farman, Stafford Horne, David Hildenbrand,
    Markus Armbruster, Reinoud Zandijk, Palmer Dabbelt, Cameron Esfahani,
    xen-devel@lists.xenproject.org, Pavel Dovgalyuk, qemu-riscv@nongnu.org,
    Aleksandar Rikalo, John Snow, Sunil Muthuswamy, Michael Roth,
    David Gibson, "Michael S. Tsirkin", Richard Henderson, Bin Meng,
    Stefano Stabellini, kvm@vger.kernel.org, Stefan Hajnoczi,
    qemu-block@nongnu.org, Halil Pasic, Peter Xu, Anthony Perard,
    Harsh Prateek Bora, Alex Bennée, Eduardo Habkost, Paolo Bonzini,
    Vladimir Sementsov-Ogievskiy, Cédric Le Goater, qemu-ppc@nongnu.org,
    Philippe Mathieu-Daudé, Christian Borntraeger, Akihiko Odaki,
    Leonardo Bras, Nicholas Piggin, Jiaxun Yang
Subject: [PATCH 6/6] Rename "QEMU global mutex" to "BQL" in comments and docs
Date: Wed, 29 Nov 2023 16:26:25 -0500
Message-ID: <20231129212625.1051502-7-stefanha@redhat.com>
In-Reply-To: <20231129212625.1051502-1-stefanha@redhat.com>
References: <20231129212625.1051502-1-stefanha@redhat.com>

The term "QEMU global mutex" refers to the same lock as the more widely
used "Big QEMU Lock" ("BQL"). Update the code comments and documentation
to use "BQL" instead of "QEMU global mutex".
Signed-off-by: Stefan Hajnoczi
Acked-by: Markus Armbruster
Reviewed-by: Philippe Mathieu-Daudé
---
 docs/devel/multi-thread-tcg.rst   |  7 +++----
 docs/devel/qapi-code-gen.rst      |  2 +-
 docs/devel/replay.rst             |  2 +-
 docs/devel/multiple-iothreads.txt | 16 ++++++++--------
 include/block/blockjob.h          |  6 +++---
 include/io/task.h                 |  2 +-
 include/qemu/coroutine-core.h     |  2 +-
 include/qemu/coroutine.h          |  2 +-
 hw/block/dataplane/virtio-blk.c   |  8 ++++----
 hw/block/virtio-blk.c             |  2 +-
 hw/scsi/virtio-scsi-dataplane.c   |  6 +++---
 net/tap.c                         |  2 +-
 12 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/docs/devel/multi-thread-tcg.rst b/docs/devel/multi-thread-tcg.rst
index c9541a7b20..7302c3bf53 100644
--- a/docs/devel/multi-thread-tcg.rst
+++ b/docs/devel/multi-thread-tcg.rst
@@ -226,10 +226,9 @@ instruction. This could be a future optimisation.
 Emulated hardware state
 -----------------------
 
-Currently thanks to KVM work any access to IO memory is automatically
-protected by the global iothread mutex, also known as the BQL (Big
-QEMU Lock). Any IO region that doesn't use global mutex is expected to
-do its own locking.
+Currently thanks to KVM work any access to IO memory is automatically protected
+by the BQL (Big QEMU Lock). Any IO region that doesn't use the BQL is expected
+to do its own locking.
 
 However IO memory isn't the only way emulated hardware state can be
 modified. Some architectures have model specific registers that
diff --git a/docs/devel/qapi-code-gen.rst b/docs/devel/qapi-code-gen.rst
index 7f78183cd4..ea8228518c 100644
--- a/docs/devel/qapi-code-gen.rst
+++ b/docs/devel/qapi-code-gen.rst
@@ -594,7 +594,7 @@ blocking the guest and other background operations.
 Coroutine safety can be hard to prove, similar to thread safety.  Common
 pitfalls are:
 
-- The global mutex isn't held across ``qemu_coroutine_yield()``, so
+- The BQL isn't held across ``qemu_coroutine_yield()``, so
   operations that used to assume that they execute atomically may have
   to be more careful to protect against changes in the global state.
 
diff --git a/docs/devel/replay.rst b/docs/devel/replay.rst
index 0244be8b9c..effd856f0c 100644
--- a/docs/devel/replay.rst
+++ b/docs/devel/replay.rst
@@ -184,7 +184,7 @@ modes.
 Reading and writing requests are created by CPU thread of QEMU. Later these
 requests proceed to block layer which creates "bottom halves". Bottom
 halves consist of callback and its parameters. They are processed when
-main loop locks the global mutex. These locks are not synchronized with
+main loop locks the BQL. These locks are not synchronized with
 replaying process because main loop also processes the events that do not
 affect the virtual machine state (like user interaction with monitor).
 
diff --git a/docs/devel/multiple-iothreads.txt b/docs/devel/multiple-iothreads.txt
index a3e949f6b3..828e5527a3 100644
--- a/docs/devel/multiple-iothreads.txt
+++ b/docs/devel/multiple-iothreads.txt
@@ -5,7 +5,7 @@ the COPYING file in the top-level directory.
 
 
 This document explains the IOThread feature and how to write code that runs
-outside the QEMU global mutex.
+outside the BQL.
 
 The main loop and IOThreads
 ---------------------------
@@ -29,13 +29,13 @@ scalability bottleneck on hosts with many CPUs.  Work can be spread across
 several IOThreads instead of just one main loop.  When set up correctly this
 can improve I/O latency and reduce jitter seen by the guest.
 
-The main loop is also deeply associated with the QEMU global mutex, which is a
-scalability bottleneck in itself.  vCPU threads and the main loop use the QEMU
-global mutex to serialize execution of QEMU code.  This mutex is necessary
-because a lot of QEMU's code historically was not thread-safe.
+The main loop is also deeply associated with the BQL, which is a
+scalability bottleneck in itself.  vCPU threads and the main loop use the BQL
+to serialize execution of QEMU code.  This mutex is necessary because a lot of
+QEMU's code historically was not thread-safe.
 
 The fact that all I/O processing is done in a single main loop and that the
-QEMU global mutex is contended by all vCPU threads and the main loop explain
+BQL is contended by all vCPU threads and the main loop explain
 why it is desirable to place work into IOThreads.
 
 The experimental virtio-blk data-plane implementation has been benchmarked and
@@ -66,7 +66,7 @@ There are several old APIs that use the main loop AioContext:
 
 Since they implicitly work on the main loop they cannot be used in code that
 runs in an IOThread.  They might cause a crash or deadlock if called from an
-IOThread since the QEMU global mutex is not held.
+IOThread since the BQL is not held.
 
 Instead, use the AioContext functions directly (see include/block/aio.h):
 * aio_set_fd_handler() - monitor a file descriptor
@@ -105,7 +105,7 @@ used in the block layer and can lead to hangs.
 
 There is currently no lock ordering rule if a thread needs to acquire multiple
 AioContexts simultaneously.  Therefore, it is only safe for code holding the
-QEMU global mutex to acquire other AioContexts.
+BQL to acquire other AioContexts.
 
 Side note: the best way to schedule a function call across threads is to call
 aio_bh_schedule_oneshot().  No acquire/release or locking is needed.
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index e594c10d23..b2bc7c04d6 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -54,7 +54,7 @@ typedef struct BlockJob {
 
     /**
      * Speed that was set with @block_job_set_speed.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under BQL (GLOBAL_STATE_CODE).
      */
     int64_t speed;
 
@@ -66,7 +66,7 @@ typedef struct BlockJob {
 
     /**
      * Block other operations when block job is running.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under BQL (GLOBAL_STATE_CODE).
      */
     Error *blocker;
 
@@ -89,7 +89,7 @@ typedef struct BlockJob {
 
     /**
      * BlockDriverStates that are involved in this block job.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under BQL (GLOBAL_STATE_CODE).
      */
     GSList *nodes;
 } BlockJob;
diff --git a/include/io/task.h b/include/io/task.h
index dc7d32ebd0..0b5342ee84 100644
--- a/include/io/task.h
+++ b/include/io/task.h
@@ -149,7 +149,7 @@ typedef void (*QIOTaskWorker)(QIOTask *task,
  * lookups) to be easily run non-blocking. Reporting the
  * results in the main thread context means that the caller
  * typically does not need to be concerned about thread
- * safety wrt the QEMU global mutex.
+ * safety wrt the BQL.
  *
  * For example, the socket_listen() method will block the caller
  * while DNS lookups take place if given a name, instead of IP
diff --git a/include/qemu/coroutine-core.h b/include/qemu/coroutine-core.h
index 230bb56517..503bad6e0e 100644
--- a/include/qemu/coroutine-core.h
+++ b/include/qemu/coroutine-core.h
@@ -22,7 +22,7 @@
  * rather than callbacks, for operations that need to give up control while
  * waiting for events to complete.
  *
- * These functions are re-entrant and may be used outside the global mutex.
+ * These functions are re-entrant and may be used outside the BQL.
  *
  * Functions that execute in coroutine context cannot be called
  * directly from normal functions. Use @coroutine_fn to mark such
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index a65be6697f..e6aff45301 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -26,7 +26,7 @@
  * rather than callbacks, for operations that need to give up control while
  * waiting for events to complete.
  *
- * These functions are re-entrant and may be used outside the global mutex.
+ * These functions are re-entrant and may be used outside the BQL.
 *
 * Functions that execute in coroutine context cannot be called
 * directly from normal functions. Use @coroutine_fn to mark such
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index f83bb0f116..eafc573407 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -47,7 +47,7 @@ void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
     virtio_notify_irqfd(s->vdev, vq);
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
                                   VirtIOBlockDataPlane **dataplane,
                                   Error **errp)
@@ -100,7 +100,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     return true;
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 {
     VirtIOBlock *vblk;
@@ -117,7 +117,7 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 int virtio_blk_data_plane_start(VirtIODevice *vdev)
 {
     VirtIOBlock *vblk = VIRTIO_BLK(vdev);
@@ -261,7 +261,7 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
     }
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 {
     VirtIOBlock *vblk = VIRTIO_BLK(vdev);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index a1f8e15522..2a5f6544cc 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1500,7 +1500,7 @@ static void virtio_blk_resize(void *opaque)
     VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
 
     /*
-     * virtio_notify_config() needs to acquire the global mutex,
+     * virtio_notify_config() needs to acquire the BQL,
      * so it can't be called from an iothread. Instead, schedule
      * it to be run in the main context BH.
      */
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 1e684beebe..56ecc5b12e 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -20,7 +20,7 @@
 #include "scsi/constants.h"
 #include "hw/virtio/virtio-bus.h"
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp)
 {
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
@@ -93,7 +93,7 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
     }
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 {
     int i;
@@ -191,7 +191,7 @@ fail_guest_notifiers:
     return -ENOSYS;
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
 {
     BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
diff --git a/net/tap.c b/net/tap.c
index c23d0323c2..c698b70475 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -219,7 +219,7 @@ static void tap_send(void *opaque)
 
     /*
      * When the host keeps receiving more packets while tap_send() is
-     * running we can hog the QEMU global mutex. Limit the number of
+     * running we can hog the BQL. Limit the number of
      * packets that are processed per tap_send() callback to prevent
      * stalling the guest.
      */
-- 
2.42.0
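
The docs/devel/multiple-iothreads.txt and hw/block/virtio-blk.c hunks above
both point at the same pattern: code running outside the BQL defers
BQL-protected work to the main loop with aio_bh_schedule_oneshot(). A minimal
sketch of that pattern, not part of the patch itself: aio_bh_schedule_oneshot(),
qemu_get_aio_context() and virtio_notify_config() are existing QEMU APIs, while
the two static helpers are hypothetical names used only for illustration.

/*
 * Sketch: defer work that needs the BQL from an IOThread to the main loop.
 * Helper names are hypothetical; the APIs used are the ones the patch's
 * documentation hunks describe.
 */
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "block/aio.h"
#include "hw/virtio/virtio.h"

/* Bottom half: runs later in the main loop, where the BQL is held */
static void notify_config_bh(void *opaque)
{
    VirtIODevice *vdev = opaque;

    virtio_notify_config(vdev);    /* needs the BQL */
}

/* May be called from an IOThread, where the BQL is not held */
static void schedule_config_notify(VirtIODevice *vdev)
{
    /* No locking needed: the BH executes in the main loop AioContext */
    aio_bh_schedule_oneshot(qemu_get_aio_context(),
                            notify_config_bh, vdev);
}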