From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 01/14] virtio-scsi: replace AioContext lock with tmf_bh_lock
Date: Tue, 5 Dec 2023 13:19:58 -0500
Message-ID: <20231205182011.1976568-2-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>

Protect the Task Management Function BH state with a lock. The TMF BH
runs in the main loop thread. An IOThread might process a TMF at the
same time as the TMF BH is running. Therefore tmf_bh_list and tmf_bh
must be protected by a lock.

Run TMF request completion in the IOThread using aio_wait_bh_oneshot().
This avoids more locking to protect the virtqueue and SCSI layer state.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
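A note on the locking scheme, since it recurs later in the series:
tmf_bh_lock protects an intrusive list plus a lazily scheduled BH whose
pointer doubles as the "drain already pending" flag. Below is a minimal
standalone sketch of that pattern, with plain pthreads standing in for
QemuMutex/QEMUBH and hypothetical names throughout; it is a sketch of
the idea, not QEMU code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct req {
        int id;
        struct req *next;
    };

    static pthread_mutex_t tmf_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t tmf_kick = PTHREAD_COND_INITIALIZER;
    static struct req *tmf_list;  /* protected by tmf_lock, like tmf_bh_list */
    static bool tmf_scheduled;    /* protected by tmf_lock, like tmf_bh */

    /* Producer, like virtio_scsi_defer_tmf_to_bh(): queue the request and
     * schedule the drain only if one is not already pending. */
    static void defer(struct req *r)
    {
        pthread_mutex_lock(&tmf_lock);
        r->next = tmf_list;  /* LIFO for brevity; QEMU uses a tail queue */
        tmf_list = r;
        if (!tmf_scheduled) {
            tmf_scheduled = true;
            pthread_cond_signal(&tmf_kick);
        }
        pthread_mutex_unlock(&tmf_lock);
    }

    /* Consumer, like virtio_scsi_do_tmf_bh(): steal the whole list under
     * the lock, then complete requests with the lock dropped. */
    static void *drain(void *opaque)
    {
        int done = 0;

        while (done < 3) {
            pthread_mutex_lock(&tmf_lock);
            while (!tmf_scheduled) {
                pthread_cond_wait(&tmf_kick, &tmf_lock);
            }
            struct req *stolen = tmf_list;
            tmf_list = NULL;
            tmf_scheduled = false;
            pthread_mutex_unlock(&tmf_lock);

            while (stolen) {
                struct req *next = stolen->next;
                printf("completing TMF %d\n", stolen->id);
                free(stolen);
                stolen = next;
                done++;
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;

        pthread_create(&thread, NULL, drain, NULL);
        for (int i = 0; i < 3; i++) {
            struct req *r = malloc(sizeof(*r));
            r->id = i;
            defer(r);
        }
        pthread_join(thread, NULL);
        return 0;
    }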
 include/hw/virtio/virtio-scsi.h |  3 +-
 hw/scsi/virtio-scsi.c           | 62 ++++++++++++++++++++++-----------
 2 files changed, 43 insertions(+), 22 deletions(-)

diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 779568ab5d..da8cb928d9 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -85,8 +85,9 @@ struct VirtIOSCSI {
 
     /*
      * TMFs deferred to main loop BH. These fields are protected by
-     * virtio_scsi_acquire().
+     * tmf_bh_lock.
      */
+    QemuMutex tmf_bh_lock;
     QEMUBH *tmf_bh;
     QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 9c751bf296..4f8d35facc 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -123,6 +123,30 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
     virtio_scsi_free_req(req);
 }
 
+static void virtio_scsi_complete_req_bh(void *opaque)
+{
+    VirtIOSCSIReq *req = opaque;
+
+    virtio_scsi_complete_req(req);
+}
+
+/*
+ * Called from virtio_scsi_do_one_tmf_bh() in main loop thread. The main loop
+ * thread cannot touch the virtqueue since that could race with an IOThread.
+ */
+static void virtio_scsi_complete_req_from_main_loop(VirtIOSCSIReq *req)
+{
+    VirtIOSCSI *s = req->dev;
+
+    if (!s->ctx || s->ctx == qemu_get_aio_context()) {
+        /* No need to schedule a BH when there is no IOThread */
+        virtio_scsi_complete_req(req);
+    } else {
+        /* Run request completion in the IOThread */
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_complete_req_bh, req);
+    }
+}
+
 static void virtio_scsi_bad_req(VirtIOSCSIReq *req)
 {
     virtio_error(VIRTIO_DEVICE(req->dev), "wrong size for virtio-scsi headers");
@@ -338,10 +362,7 @@ static void virtio_scsi_do_one_tmf_bh(VirtIOSCSIReq *req)
 
 out:
     object_unref(OBJECT(d));
-
-    virtio_scsi_acquire(s);
-    virtio_scsi_complete_req(req);
-    virtio_scsi_release(s);
+    virtio_scsi_complete_req_from_main_loop(req);
 }
 
 /* Some TMFs must be processed from the main loop thread */
@@ -354,18 +375,16 @@ static void virtio_scsi_do_tmf_bh(void *opaque)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
+            QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
+            QTAILQ_INSERT_TAIL(&reqs, req, next);
+        }
 
-    QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
-        QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
-        QTAILQ_INSERT_TAIL(&reqs, req, next);
+        qemu_bh_delete(s->tmf_bh);
+        s->tmf_bh = NULL;
     }
 
-    qemu_bh_delete(s->tmf_bh);
-    s->tmf_bh = NULL;
-
-    virtio_scsi_release(s);
-
     QTAILQ_FOREACH_SAFE(req, &reqs, next, tmp) {
         QTAILQ_REMOVE(&reqs, req, next);
         virtio_scsi_do_one_tmf_bh(req);
@@ -379,8 +398,7 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
-
+    /* Called after ioeventfd has been stopped, so tmf_bh_lock is not needed */
     if (s->tmf_bh) {
         qemu_bh_delete(s->tmf_bh);
         s->tmf_bh = NULL;
@@ -393,19 +411,19 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
         req->resp.tmf.response = VIRTIO_SCSI_S_TARGET_FAILURE;
         virtio_scsi_complete_req(req);
     }
-
-    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_defer_tmf_to_bh(VirtIOSCSIReq *req)
 {
     VirtIOSCSI *s = req->dev;
 
-    QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
 
-    if (!s->tmf_bh) {
-        s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
-        qemu_bh_schedule(s->tmf_bh);
+        if (!s->tmf_bh) {
+            s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
+            qemu_bh_schedule(s->tmf_bh);
+        }
     }
 }
 
@@ -1235,6 +1253,7 @@ static void virtio_scsi_device_realize(DeviceState *dev, Error **errp)
     Error *err = NULL;
 
     QTAILQ_INIT(&s->tmf_bh_list);
+    qemu_mutex_init(&s->tmf_bh_lock);
 
     virtio_scsi_common_realize(dev,
                                virtio_scsi_handle_ctrl,
@@ -1277,6 +1296,7 @@ static void virtio_scsi_device_unrealize(DeviceState *dev)
 
     qbus_set_hotplug_handler(BUS(&s->bus), NULL);
     virtio_scsi_common_unrealize(dev);
+    qemu_mutex_destroy(&s->tmf_bh_lock);
 }
 
 static Property virtio_scsi_properties[] = {
-- 
2.43.0

From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 02/14] scsi: assert that callbacks run in the correct AioContext
Date: Tue, 5 Dec 2023 13:19:59 -0500
Message-ID: <20231205182011.1976568-3-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>

Since the removal of AioContext locking, the correctness of the code
relies on running requests from a single AioContext at any given time.

Add assertions that verify that callbacks are invoked in the correct
AioContext.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
---
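The shape of each assertion is the same: record which context owns the
request, then verify in every completion callback that we are still
running there. A self-contained sketch of the idea using plain pthreads
(hypothetical names, not the QEMU APIs used in the hunks below):

    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    struct request {
        pthread_t owner;  /* stands in for the BlockBackend's AioContext */
        int ret;
    };

    static void request_init(struct request *r)
    {
        r->owner = pthread_self();  /* remember the owning thread */
    }

    static void request_complete(struct request *r, int ret)
    {
        /* Like assert(blk_get_aio_context(...) ==
         *             qemu_get_current_aio_context()) in this patch. */
        assert(pthread_equal(r->owner, pthread_self()));
        r->ret = ret;
        printf("completed with %d\n", r->ret);
    }

    int main(void)
    {
        struct request r;

        request_init(&r);
        request_complete(&r, 0);  /* same thread: the assertion holds */
        return 0;
    }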
 hw/scsi/scsi-disk.c  | 14 ++++++++++++++
 system/dma-helpers.c |  3 +++
 2 files changed, 17 insertions(+)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 2c1bbb3530..a5048e0aaf 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -273,6 +273,10 @@ static void scsi_aio_complete(void *opaque, int ret)
     SCSIDiskReq *r = (SCSIDiskReq *)opaque;
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
 
+    /* The request must only run in the BlockBackend's AioContext */
+    assert(blk_get_aio_context(s->qdev.conf.blk) ==
+           qemu_get_current_aio_context());
+
     assert(r->req.aiocb != NULL);
     r->req.aiocb = NULL;
 
@@ -370,8 +374,13 @@ static void scsi_dma_complete(void *opaque, int ret)
 
 static void scsi_read_complete_noio(SCSIDiskReq *r, int ret)
 {
+    SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
     uint32_t n;
 
+    /* The request must only run in the BlockBackend's AioContext */
+    assert(blk_get_aio_context(s->qdev.conf.blk) ==
+           qemu_get_current_aio_context());
+
     assert(r->req.aiocb == NULL);
     if (scsi_disk_req_check_error(r, ret, false)) {
         goto done;
@@ -496,8 +505,13 @@ static void scsi_read_data(SCSIRequest *req)
 
 static void scsi_write_complete_noio(SCSIDiskReq *r, int ret)
 {
+    SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
     uint32_t n;
 
+    /* The request must only run in the BlockBackend's AioContext */
+    assert(blk_get_aio_context(s->qdev.conf.blk) ==
+           qemu_get_current_aio_context());
+
     assert (r->req.aiocb == NULL);
     if (scsi_disk_req_check_error(r, ret, false)) {
         goto done;
diff --git a/system/dma-helpers.c b/system/dma-helpers.c
index 528117f256..9b221cf94e 100644
--- a/system/dma-helpers.c
+++ b/system/dma-helpers.c
@@ -119,6 +119,9 @@ static void dma_blk_cb(void *opaque, int ret)
 
     trace_dma_blk_cb(dbs, ret);
 
+    /* DMAAIOCB is not thread-safe and must be accessed only from dbs->ctx */
+    assert(ctx == qemu_get_current_aio_context());
+
     dbs->acb = NULL;
     dbs->offset += dbs->iov.size;
 
-- 
2.43.0

From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 03/14] tests: remove aio_context_acquire() tests
Date: Tue, 5 Dec 2023 13:20:00 -0500
Message-ID: <20231205182011.1976568-4-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>

The aio_context_acquire() API is being removed. Drop the test case that
calls the API.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 tests/unit/test-aio.c | 67 +------------------------------------------
 1 file changed, 1 insertion(+), 66 deletions(-)

diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 337b6e4ea7..e77d86be87 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -100,76 +100,12 @@ static void event_ready_cb(EventNotifier *e)
 
 /* Tests using aio_*. */
 
-typedef struct {
-    QemuMutex start_lock;
-    EventNotifier notifier;
-    bool thread_acquired;
-} AcquireTestData;
-
-static void *test_acquire_thread(void *opaque)
-{
-    AcquireTestData *data = opaque;
-
-    /* Wait for other thread to let us start */
-    qemu_mutex_lock(&data->start_lock);
-    qemu_mutex_unlock(&data->start_lock);
-
-    /* event_notifier_set might be called either before or after
-     * the main thread's call to poll(). The test case's outcome
-     * should be the same in either case.
-     */
-    event_notifier_set(&data->notifier);
-    aio_context_acquire(ctx);
-    aio_context_release(ctx);
-
-    data->thread_acquired = true; /* success, we got here */
-
-    return NULL;
-}
-
 static void set_event_notifier(AioContext *nctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
     aio_set_event_notifier(nctx, notifier, handler, NULL, NULL);
 }
 
-static void dummy_notifier_read(EventNotifier *n)
-{
-    event_notifier_test_and_clear(n);
-}
-
-static void test_acquire(void)
-{
-    QemuThread thread;
-    AcquireTestData data;
-
-    /* Dummy event notifier ensures aio_poll() will block */
-    event_notifier_init(&data.notifier, false);
-    set_event_notifier(ctx, &data.notifier, dummy_notifier_read);
-    g_assert(!aio_poll(ctx, false)); /* consume aio_notify() */
-
-    qemu_mutex_init(&data.start_lock);
-    qemu_mutex_lock(&data.start_lock);
-    data.thread_acquired = false;
-
-    qemu_thread_create(&thread, "test_acquire_thread",
-                       test_acquire_thread,
-                       &data, QEMU_THREAD_JOINABLE);
-
-    /* Block in aio_poll(), let other thread kick us and acquire context */
-    aio_context_acquire(ctx);
-    qemu_mutex_unlock(&data.start_lock); /* let the thread run */
-    g_assert(aio_poll(ctx, true));
-    g_assert(!data.thread_acquired);
-    aio_context_release(ctx);
-
-    qemu_thread_join(&thread);
-    set_event_notifier(ctx, &data.notifier, NULL);
-    event_notifier_cleanup(&data.notifier);
-
-    g_assert(data.thread_acquired);
-}
-
 static void test_bh_schedule(void)
 {
     BHTestData data = { .n = 0 };
@@ -879,7 +815,7 @@ static void test_worker_thread_co_enter(void)
     qemu_thread_get_self(&this_thread);
     co = qemu_coroutine_create(co_check_current_thread, &this_thread);
 
-    qemu_thread_create(&worker_thread, "test_acquire_thread",
+    qemu_thread_create(&worker_thread, "test_aio_co_enter",
                        test_aio_co_enter,
                        co, QEMU_THREAD_JOINABLE);
 
@@ -899,7 +835,6 @@ int main(int argc, char **argv)
     while (g_main_context_iteration(NULL, false));
 
     g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/aio/acquire", test_acquire);
    g_test_add_func("/aio/bh/schedule", test_bh_schedule);
    g_test_add_func("/aio/bh/schedule10", test_bh_schedule10);
    g_test_add_func("/aio/bh/cancel", test_bh_cancel);
-- 
2.43.0

From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 04/14] aio: make aio_context_acquire()/aio_context_release() a no-op
Date: Tue, 5 Dec 2023 13:20:01 -0500
Message-ID: <20231205182011.1976568-5-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>

aio_context_acquire()/aio_context_release() has been replaced by
fine-grained locking to protect state shared by multiple threads. The
AioContext lock still plays the role of balancing locking in
AIO_WAIT_WHILE() and many functions in QEMU either require that the
AioContext lock is held or not held for this reason. In other words,
the AioContext lock is purely there for consistency with itself and
serves no real purpose anymore.

Stop actually acquiring/releasing the lock in
aio_context_acquire()/aio_context_release() so that subsequent patches
can remove callers across the codebase incrementally.

I have performed "make check" and qemu-iotests stress tests across
x86-64, ppc64le, and aarch64 to confirm that there are no failures as a
result of eliminating the lock.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Acked-by: Kevin Wolf
---
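The strategy here is the generic "stub first, delete callers later"
migration: keep the old entry points with unchanged signatures so every
acquire/release pair stays balanced and everything still compiles, then
remove the calls incrementally. A toy illustration of that approach
(hypothetical names, not QEMU code):

    #include <stdio.h>

    typedef struct { int dummy; } Context;

    /* Step 1: the old API becomes a no-op but keeps its signature. */
    static void context_acquire(Context *ctx) { (void)ctx; /* TODO remove */ }
    static void context_release(Context *ctx) { (void)ctx; /* TODO remove */ }

    /* Step 2: existing callers keep working unchanged and can be
     * cleaned up one by one in follow-up patches. */
    static void do_work(Context *ctx)
    {
        context_acquire(ctx);
        printf("working\n");
        context_release(ctx);
    }

    int main(void)
    {
        Context ctx = {0};

        do_work(&ctx);
        return 0;
    }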
Tsirkin" , Stefano Stabellini , qemu-block@nongnu.org, Juan Quintela , Paolo Bonzini , Kevin Wolf , Coiby Xu , Fabiano Rosas , Hanna Reitz , Zhang Chen , =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= , Pavel Dovgalyuk , Peter Xu , Emanuele Giuseppe Esposito , Fam Zheng , Leonardo Bras , David Hildenbrand , Li Zhijian , xen-devel@lists.xenproject.org Subject: [PATCH v2 04/14] aio: make aio_context_acquire()/aio_context_release() a no-op Date: Tue, 5 Dec 2023 13:20:01 -0500 Message-ID: <20231205182011.1976568-5-stefanha@redhat.com> In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com> References: <20231205182011.1976568-1-stefanha@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1 X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1701800452345100003 Content-Type: text/plain; charset="utf-8" aio_context_acquire()/aio_context_release() has been replaced by fine-grained locking to protect state shared by multiple threads. The AioContext lock still plays the role of balancing locking in AIO_WAIT_WHILE() and many functions in QEMU either require that the AioContext lock is held or not held for this reason. In other words, the AioContext lock is purely there for consistency with itself and serves no real purpose anymore. Stop actually acquiring/releasing the lock in aio_context_acquire()/aio_context_release() so that subsequent patches can remove callers across the codebase incrementally. I have performed "make check" and qemu-iotests stress tests across x86-64, ppc64le, and aarch64 to confirm that there are no failures as a result of eliminating the lock. Signed-off-by: Stefan Hajnoczi Reviewed-by: Eric Blake Acked-by: Kevin Wolf --- util/async.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/util/async.c b/util/async.c index 8f90ddc304..04ee83d220 100644 --- a/util/async.c +++ b/util/async.c @@ -725,12 +725,12 @@ void aio_context_unref(AioContext *ctx) =20 void aio_context_acquire(AioContext *ctx) { - qemu_rec_mutex_lock(&ctx->lock); + /* TODO remove this function */ } =20 void aio_context_release(AioContext *ctx) { - qemu_rec_mutex_unlock(&ctx->lock); + /* TODO remove this function */ } =20 QEMU_DEFINE_STATIC_CO_TLS(AioContext *, my_aiocontext) --=20 2.43.0 From nobody Tue May 14 22:12:00 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1701800477; cv=none; d=zohomail.com; s=zohoarc; b=fRzZijV1sPuu/+sOE1UAcpZR0Asmkp+9c4hLZBkrPA4Aa2LmKd283dUgI6iOVXQ2SF2/xht2xAUZNFPEgX8Hvvgv5b+IGeMyYk22URrOk3Ln0BPTZzD3F+oQ1TuxgKGtA029ZYoQryDeOt2oNiuOhNaq3vZS3uA/Am3oIGqvMCo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1701800477; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=KSOsE0toGbQZwizSCs5XmoIfQBJx0yD5NLfMU2oeOow=; 

From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 05/14] graph-lock: remove AioContext locking
Date: Tue, 5 Dec 2023 13:20:02 -0500
Message-ID: <20231205182011.1976568-6-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>

Stop acquiring/releasing the AioContext lock in
bdrv_graph_wrlock()/bdrv_graph_wrunlock() since the lock no longer has
any effect.

The distinction between bdrv_graph_wrunlock() and
bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed
into one function.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
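For reviewers: the caller-visible change is purely the signature, so a
typical graph update after this patch reads like the following (a usage
sketch assembled from the hunks below, not a complete function; error
handling omitted):

    bdrv_drained_begin(bs);            /* quiesce I/O first */
    bdrv_graph_wrlock();               /* BlockDriverState argument is gone */
    bdrv_replace_node(old_bs, new_bs, &error_abort);
    bdrv_graph_wrunlock();             /* single variant, no _ctx flavor */
    bdrv_drained_end(bs);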
 include/block/graph-lock.h         | 21 ++-----------
 block.c                            | 50 +++++++++++++++---------------
 block/backup.c                     |  4 +--
 block/blklogwrites.c               |  8 ++---
 block/blkverify.c                  |  4 +--
 block/block-backend.c              | 11 +++----
 block/commit.c                     | 16 +++++-----
 block/graph-lock.c                 | 44 ++------------------------
 block/mirror.c                     | 22 ++++++-------
 block/qcow2.c                      |  4 +--
 block/quorum.c                     |  8 ++---
 block/replication.c                | 14 ++++-----
 block/snapshot.c                   |  4 +--
 block/stream.c                     | 12 +++----
 block/vmdk.c                       | 20 ++++++------
 blockdev.c                         |  8 ++---
 blockjob.c                         | 12 +++----
 tests/unit/test-bdrv-drain.c       | 40 ++++++++++++------------
 tests/unit/test-bdrv-graph-mod.c   | 20 ++++++------
 scripts/block-coroutine-wrapper.py |  4 +--
 20 files changed, 133 insertions(+), 193 deletions(-)

diff --git a/include/block/graph-lock.h b/include/block/graph-lock.h
index 22b5db1ed9..d7545e82d0 100644
--- a/include/block/graph-lock.h
+++ b/include/block/graph-lock.h
@@ -110,34 +110,17 @@ void unregister_aiocontext(AioContext *ctx);
  *
  * The wrlock can only be taken from the main loop, with BQL held, as only the
  * main loop is allowed to modify the graph.
- *
- * If @bs is non-NULL, its AioContext is temporarily released.
- *
- * This function polls. Callers must not hold the lock of any AioContext other
- * than the current one and the one of @bs.
  */
 void no_coroutine_fn TSA_ACQUIRE(graph_lock) TSA_NO_TSA
-bdrv_graph_wrlock(BlockDriverState *bs);
+bdrv_graph_wrlock(void);
 
 /*
  * bdrv_graph_wrunlock:
  * Write finished, reset global has_writer to 0 and restart
 * all readers that are waiting.
- *
- * If @bs is non-NULL, its AioContext is temporarily released.
 */
 void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
-bdrv_graph_wrunlock(BlockDriverState *bs);
-
-/*
- * bdrv_graph_wrunlock_ctx:
- * Write finished, reset global has_writer to 0 and restart
- * all readers that are waiting.
- *
- * If @ctx is non-NULL, its lock is temporarily released.
- */
-void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
-bdrv_graph_wrunlock_ctx(AioContext *ctx);
+bdrv_graph_wrunlock(void);
 
 /*
  * bdrv_graph_co_rdlock:
diff --git a/block.c b/block.c
index bfb0861ec6..25e1ebc606 100644
--- a/block.c
+++ b/block.c
@@ -1708,12 +1708,12 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
 open_failed:
     bs->drv = NULL;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     if (bs->file != NULL) {
         bdrv_unref_child(bs, bs->file);
         assert(!bs->file);
     }
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     g_free(bs->opaque);
     bs->opaque = NULL;
@@ -3575,9 +3575,9 @@ int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
 
     bdrv_ref(drain_bs);
     bdrv_drained_begin(drain_bs);
-    bdrv_graph_wrlock(backing_hd);
+    bdrv_graph_wrlock();
     ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
-    bdrv_graph_wrunlock(backing_hd);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(drain_bs);
     bdrv_unref(drain_bs);
 
@@ -3790,13 +3790,13 @@ BdrvChild *bdrv_open_child(const char *filename,
         return NULL;
     }
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     ctx = bdrv_get_aio_context(bs);
     aio_context_acquire(ctx);
     child = bdrv_attach_child(parent, bs, bdref_key, child_class, child_role,
                               errp);
     aio_context_release(ctx);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     return child;
 }
@@ -4650,9 +4650,9 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
         aio_context_release(ctx);
     }
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     tran_commit(tran);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     QTAILQ_FOREACH_REVERSE(bs_entry, bs_queue, entry) {
         BlockDriverState *bs = bs_entry->state.bs;
@@ -4669,9 +4669,9 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
     goto cleanup;
 
 abort:
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     tran_abort(tran);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     QTAILQ_FOREACH_SAFE(bs_entry, bs_queue, entry, next) {
         if (bs_entry->prepared) {
@@ -4852,12 +4852,12 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
     }
 
     bdrv_graph_rdunlock_main_loop();
-    bdrv_graph_wrlock(new_child_bs);
+    bdrv_graph_wrlock();
 
     ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
                                           tran, errp);
 
-    bdrv_graph_wrunlock_ctx(ctx);
+    bdrv_graph_wrunlock();
 
     if (old_ctx != ctx) {
         aio_context_release(ctx);
@@ -5209,14 +5209,14 @@ static void bdrv_close(BlockDriverState *bs)
         bs->drv = NULL;
     }
 
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
     QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
         bdrv_unref_child(bs, child);
     }
 
     assert(!bs->backing);
     assert(!bs->file);
-    bdrv_graph_wrunlock(bs);
+    bdrv_graph_wrunlock();
 
     g_free(bs->opaque);
     bs->opaque = NULL;
@@ -5509,9 +5509,9 @@ int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
     bdrv_graph_rdunlock_main_loop();
 
     bdrv_drained_begin(child_bs);
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
     ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
-    bdrv_graph_wrunlock(bs);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(child_bs);
 
     return ret;
@@ -5561,7 +5561,7 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
     aio_context_acquire(old_context);
     new_context = NULL;
 
-    bdrv_graph_wrlock(bs_top);
+    bdrv_graph_wrlock();
 
     child = bdrv_attach_child_noperm(bs_new, bs_top, "backing",
                                      &child_of_bds, bdrv_backing_role(bs_new),
@@ -5593,7 +5593,7 @@ out:
     tran_finalize(tran, ret);
 
     bdrv_refresh_limits(bs_top, NULL, NULL);
-    bdrv_graph_wrunlock(bs_top);
+    bdrv_graph_wrunlock();
 
     bdrv_drained_end(bs_top);
     bdrv_drained_end(bs_new);
@@ -5620,7 +5620,7 @@ int bdrv_replace_child_bs(BdrvChild *child, BlockDriverState *new_bs,
     bdrv_ref(old_bs);
     bdrv_drained_begin(old_bs);
     bdrv_drained_begin(new_bs);
-    bdrv_graph_wrlock(new_bs);
+    bdrv_graph_wrlock();
 
     bdrv_replace_child_tran(child, new_bs, tran);
 
@@ -5631,7 +5631,7 @@ int bdrv_replace_child_bs(BdrvChild *child, BlockDriverState *new_bs,
 
     tran_finalize(tran, ret);
 
-    bdrv_graph_wrunlock(new_bs);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(old_bs);
     bdrv_drained_end(new_bs);
     bdrv_unref(old_bs);
@@ -5718,9 +5718,9 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
     bdrv_ref(bs);
     bdrv_drained_begin(bs);
     bdrv_drained_begin(new_node_bs);
-    bdrv_graph_wrlock(new_node_bs);
+    bdrv_graph_wrlock();
     ret = bdrv_replace_node(bs, new_node_bs, errp);
-    bdrv_graph_wrunlock(new_node_bs);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(new_node_bs);
     bdrv_drained_end(bs);
     bdrv_unref(bs);
@@ -5975,7 +5975,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
 
     bdrv_ref(top);
     bdrv_drained_begin(base);
-    bdrv_graph_wrlock(base);
+    bdrv_graph_wrlock();
 
     if (!top->drv || !base->drv) {
         goto exit_wrlock;
@@ -6015,7 +6015,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
      * That's a FIXME.
      */
     bdrv_replace_node_common(top, base, false, false, &local_err);
-    bdrv_graph_wrunlock(base);
+    bdrv_graph_wrunlock();
 
     if (local_err) {
         error_report_err(local_err);
@@ -6052,7 +6052,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
     goto exit;
 
 exit_wrlock:
-    bdrv_graph_wrunlock(base);
+    bdrv_graph_wrunlock();
 exit:
     bdrv_drained_end(base);
     bdrv_unref(top);
diff --git a/block/backup.c b/block/backup.c
index 8aae5836d7..ec29d6b810 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -496,10 +496,10 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     block_copy_set_speed(bcs, speed);
 
     /* Required permissions are taken by copy-before-write filter target */
-    bdrv_graph_wrlock(target);
+    bdrv_graph_wrlock();
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
                        &error_abort);
-    bdrv_graph_wrunlock(target);
+    bdrv_graph_wrunlock();
 
     return &job->common;
 
diff --git a/block/blklogwrites.c b/block/blklogwrites.c
index 3678f6cf42..7207b2e757 100644
--- a/block/blklogwrites.c
+++ b/block/blklogwrites.c
@@ -251,9 +251,9 @@ static int blk_log_writes_open(BlockDriverState *bs, QDict *options, int flags,
     ret = 0;
 fail_log:
     if (ret < 0) {
-        bdrv_graph_wrlock(NULL);
+        bdrv_graph_wrlock();
         bdrv_unref_child(bs, s->log_file);
-        bdrv_graph_wrunlock(NULL);
+        bdrv_graph_wrunlock();
         s->log_file = NULL;
     }
 fail:
@@ -265,10 +265,10 @@ static void blk_log_writes_close(BlockDriverState *bs)
 {
     BDRVBlkLogWritesState *s = bs->opaque;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     bdrv_unref_child(bs, s->log_file);
     s->log_file = NULL;
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 }
 
 static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/blkverify.c b/block/blkverify.c
index 9b17c46644..ec45d8335e 100644
--- a/block/blkverify.c
+++ b/block/blkverify.c
@@ -151,10 +151,10 @@ static void blkverify_close(BlockDriverState *bs)
 {
     BDRVBlkverifyState *s = bs->opaque;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     bdrv_unref_child(bs, s->test_file);
     s->test_file = NULL;
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 }
 
 static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/block-backend.c b/block/block-backend.c
index ec21148806..abac4e0235 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -889,7 +889,6 @@ void blk_remove_bs(BlockBackend *blk)
 {
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     BdrvChild *root;
-    AioContext *ctx;
 
     GLOBAL_STATE_CODE();
 
@@ -919,10 +918,9 @@ void blk_remove_bs(BlockBackend *blk)
     root = blk->root;
     blk->root = NULL;
 
-    ctx = bdrv_get_aio_context(root->bs);
-    bdrv_graph_wrlock(root->bs);
+    bdrv_graph_wrlock();
     bdrv_root_unref_child(root);
-    bdrv_graph_wrunlock_ctx(ctx);
+    bdrv_graph_wrunlock();
 }
 
 /*
@@ -933,16 +931,15 @@ void blk_remove_bs(BlockBackend *blk)
 int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
 {
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
-    AioContext *ctx = bdrv_get_aio_context(bs);
 
     GLOBAL_STATE_CODE();
     bdrv_ref(bs);
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
     blk->root = bdrv_root_attach_child(bs, "root", &child_root,
                                        BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                                        blk->perm, blk->shared_perm,
                                        blk, errp);
-    bdrv_graph_wrunlock_ctx(ctx);
+    bdrv_graph_wrunlock();
     if (blk->root == NULL) {
         return -EPERM;
     }
diff --git a/block/commit.c b/block/commit.c
index 69cc75be0c..1dd7a65ffb 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -100,9 +100,9 @@ static void commit_abort(Job *job)
     bdrv_graph_rdunlock_main_loop();
 
     bdrv_drained_begin(commit_top_backing_bs);
-    bdrv_graph_wrlock(commit_top_backing_bs);
+    bdrv_graph_wrlock();
     bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
-    bdrv_graph_wrunlock(commit_top_backing_bs);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(commit_top_backing_bs);
 
     bdrv_unref(s->commit_top_bs);
@@ -339,7 +339,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
      * this is the responsibility of the interface (i.e. whoever calls
      * commit_start()).
      */
-    bdrv_graph_wrlock(top);
+    bdrv_graph_wrlock();
     s->base_overlay = bdrv_find_overlay(top, base);
     assert(s->base_overlay);
 
@@ -370,19 +370,19 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock(top);
+            bdrv_graph_wrunlock();
             goto fail;
         }
     }
 
     if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
-        bdrv_graph_wrunlock(top);
+        bdrv_graph_wrunlock();
         goto fail;
     }
     s->chain_frozen = true;
 
     ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
-    bdrv_graph_wrunlock(top);
+    bdrv_graph_wrunlock();
 
     if (ret < 0) {
         goto fail;
@@ -434,9 +434,9 @@ fail:
      * otherwise this would fail because of lack of permissions.
      */
     if (commit_top_bs) {
         bdrv_drained_begin(top);
-        bdrv_graph_wrlock(top);
+        bdrv_graph_wrlock();
         bdrv_replace_node(commit_top_bs, top, &error_abort);
-        bdrv_graph_wrunlock(top);
+        bdrv_graph_wrunlock();
         bdrv_drained_end(top);
     }
 }
diff --git a/block/graph-lock.c b/block/graph-lock.c
index 079e878d9b..c81162b147 100644
--- a/block/graph-lock.c
+++ b/block/graph-lock.c
@@ -106,27 +106,12 @@ static uint32_t reader_count(void)
     return rd;
 }
 
-void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
+void no_coroutine_fn bdrv_graph_wrlock(void)
 {
-    AioContext *ctx = NULL;
-
     GLOBAL_STATE_CODE();
     assert(!qatomic_read(&has_writer));
     assert(!qemu_in_coroutine());
 
-    /*
-     * Release only non-mainloop AioContext. The mainloop often relies on the
-     * BQL and doesn't lock the main AioContext before doing things.
-     */
-    if (bs) {
-        ctx = bdrv_get_aio_context(bs);
-        if (ctx != qemu_get_aio_context()) {
-            aio_context_release(ctx);
-        } else {
-            ctx = NULL;
-        }
-    }
-
     /* Make sure that constantly arriving new I/O doesn't cause starvation */
     bdrv_drain_all_begin_nopoll();
 
@@ -155,27 +140,13 @@ void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
     } while (reader_count() >= 1);
 
     bdrv_drain_all_end();
-
-    if (ctx) {
-        aio_context_acquire(bdrv_get_aio_context(bs));
-    }
 }
 
-void no_coroutine_fn bdrv_graph_wrunlock_ctx(AioContext *ctx)
+void no_coroutine_fn bdrv_graph_wrunlock(void)
 {
     GLOBAL_STATE_CODE();
     assert(qatomic_read(&has_writer));
 
-    /*
-     * Release only non-mainloop AioContext. The mainloop often relies on the
-     * BQL and doesn't lock the main AioContext before doing things.
-     */
-    if (ctx && ctx != qemu_get_aio_context()) {
-        aio_context_release(ctx);
-    } else {
-        ctx = NULL;
-    }
-
     WITH_QEMU_LOCK_GUARD(&aio_context_list_lock) {
         /*
          * No need for memory barriers, this works in pair with
@@ -197,17 +168,6 @@ void no_coroutine_fn bdrv_graph_wrunlock_ctx(AioContext *ctx)
      * progress.
      */
     aio_bh_poll(qemu_get_aio_context());
-
-    if (ctx) {
-        aio_context_acquire(ctx);
-    }
-}
-
-void no_coroutine_fn bdrv_graph_wrunlock(BlockDriverState *bs)
-{
-    AioContext *ctx = bs ? bdrv_get_aio_context(bs) : NULL;
-
-    bdrv_graph_wrunlock_ctx(ctx);
 }
 
 void coroutine_fn bdrv_graph_co_rdlock(void)
diff --git a/block/mirror.c b/block/mirror.c
index cd9d3ad4a8..51f9e2f17c 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -764,7 +764,7 @@ static int mirror_exit_common(Job *job)
      * check for an op blocker on @to_replace, and we have our own
      * there.
      */
-    bdrv_graph_wrlock(target_bs);
+    bdrv_graph_wrlock();
     if (bdrv_recurse_can_replace(src, to_replace)) {
         bdrv_replace_node(to_replace, target_bs, &local_err);
     } else {
@@ -773,7 +773,7 @@ static int mirror_exit_common(Job *job)
                          "would not lead to an abrupt change of visible data",
                          to_replace->node_name, target_bs->node_name);
     }
-    bdrv_graph_wrunlock(target_bs);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(to_replace);
     if (local_err) {
         error_report_err(local_err);
@@ -796,9 +796,9 @@ static int mirror_exit_common(Job *job)
      * valid.
      */
     block_job_remove_all_bdrv(bjob);
-    bdrv_graph_wrlock(mirror_top_bs);
+    bdrv_graph_wrlock();
     bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
-    bdrv_graph_wrunlock(mirror_top_bs);
+    bdrv_graph_wrunlock();
 
     bdrv_drained_end(target_bs);
     bdrv_unref(target_bs);
@@ -1914,13 +1914,13 @@ static BlockJob *mirror_start_job(
      */
     bdrv_disable_dirty_bitmap(s->dirty_bitmap);
 
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
                              BLK_PERM_CONSISTENT_READ,
                              errp);
     if (ret < 0) {
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         goto fail;
     }
 
@@ -1965,17 +1965,17 @@ static BlockJob *mirror_start_job(
             ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                      iter_shared_perms, errp);
             if (ret < 0) {
-                bdrv_graph_wrunlock(bs);
+                bdrv_graph_wrunlock();
                 goto fail;
             }
         }
 
         if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
-            bdrv_graph_wrunlock(bs);
+            bdrv_graph_wrunlock();
            goto fail;
        }
     }
-    bdrv_graph_wrunlock(bs);
+    bdrv_graph_wrunlock();
 
     QTAILQ_INIT(&s->ops_in_flight);
 
@@ -2001,12 +2001,12 @@ fail:
 
         bs_opaque->stop = true;
         bdrv_drained_begin(bs);
-        bdrv_graph_wrlock(bs);
+        bdrv_graph_wrlock();
         assert(mirror_top_bs->backing->bs == bs);
         bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                                  &error_abort);
         bdrv_replace_node(mirror_top_bs, bs, &error_abort);
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         bdrv_drained_end(bs);
 
         bdrv_unref(mirror_top_bs);
diff --git a/block/qcow2.c b/block/qcow2.c
index 13e032bd5e..9bee66fff5 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -2807,9 +2807,9 @@ qcow2_do_close(BlockDriverState *bs, bool close_data_file)
     if (close_data_file && has_data_file(bs)) {
         GLOBAL_STATE_CODE();
         bdrv_graph_rdunlock_main_loop();
-        bdrv_graph_wrlock(NULL);
+        bdrv_graph_wrlock();
         bdrv_unref_child(bs, s->data_file);
-        bdrv_graph_wrunlock(NULL);
+        bdrv_graph_wrunlock();
         s->data_file = NULL;
         bdrv_graph_rdlock_main_loop();
     }
diff --git a/block/quorum.c b/block/quorum.c
index 505b8b3e18..db8fe891c4 100644
--- a/block/quorum.c
+++ b/block/quorum.c
@@ -1037,14 +1037,14 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
 
 close_exit:
     /* cleanup on error */
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     for (i = 0; i < s->num_children; i++) {
         if (!opened[i]) {
             continue;
         }
         bdrv_unref_child(bs, s->children[i]);
     }
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
     g_free(s->children);
     g_free(opened);
 exit:
@@ -1057,11 +1057,11 @@ static void quorum_close(BlockDriverState *bs)
     BDRVQuorumState *s = bs->opaque;
     int i;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     for (i = 0; i < s->num_children; i++) {
         bdrv_unref_child(bs, s->children[i]);
     }
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     g_free(s->children);
 }
diff --git a/block/replication.c b/block/replication.c
index 5ded5f1ca9..424b537ff7 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -560,7 +560,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
         return;
     }
 
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
 
     bdrv_ref(hidden_disk->bs);
     s->hidden_disk = bdrv_attach_child(bs, hidden_disk->bs, "hidden disk",
@@ -568,7 +568,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
                                        &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         aio_context_release(aio_context);
         return;
     }
@@ -579,7 +579,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
                                           BDRV_CHILD_DATA, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         aio_context_release(aio_context);
         return;
     }
@@ -592,7 +592,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
     if (!top_bs || !bdrv_is_root_node(top_bs) ||
         !check_top_bs(top_bs, bs)) {
         error_setg(errp, "No top_bs or it is invalid");
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         reopen_backing_file(bs, false, NULL);
         aio_context_release(aio_context);
         return;
@@ -600,7 +600,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
     bdrv_op_block_all(top_bs, s->blocker);
     bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker);
 
-    bdrv_graph_wrunlock(bs);
+    bdrv_graph_wrunlock();
 
     s->backup_job = backup_job_create(
                                 NULL, s->secondary_disk->bs, s->hidden_disk->bs,
@@ -691,12 +691,12 @@ static void replication_done(void *opaque, int ret)
     if (ret == 0) {
         s->stage = BLOCK_REPLICATION_DONE;
 
-        bdrv_graph_wrlock(NULL);
+        bdrv_graph_wrlock();
         bdrv_unref_child(bs, s->secondary_disk);
         s->secondary_disk = NULL;
         bdrv_unref_child(bs, s->hidden_disk);
         s->hidden_disk = NULL;
-        bdrv_graph_wrunlock(NULL);
+        bdrv_graph_wrunlock();
 
         s->error = 0;
     } else {
diff --git a/block/snapshot.c b/block/snapshot.c
index ec8cf4810b..e486d3e205 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -290,9 +290,9 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
         }
 
         /* .bdrv_open() will re-attach it */
-        bdrv_graph_wrlock(NULL);
+        bdrv_graph_wrlock();
         bdrv_unref_child(bs, fallback);
-        bdrv_graph_wrunlock(NULL);
+        bdrv_graph_wrunlock();
 
         ret = bdrv_snapshot_goto(fallback_bs, snapshot_id, errp);
         open_ret = drv->bdrv_open(bs, options, bs->open_flags, &local_err);
diff --git a/block/stream.c b/block/stream.c
index 01fe7c0f16..048c2d282f 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -99,9 +99,9 @@ static int stream_prepare(Job *job)
         }
     }
 
-    bdrv_graph_wrlock(s->target_bs);
+    bdrv_graph_wrlock();
     bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
-    bdrv_graph_wrunlock(s->target_bs);
+    bdrv_graph_wrunlock();
 
     /*
     * This call will do I/O, so the graph can change again from here on.
@@ -366,10 +366,10 @@ void stream_start(const char *job_id, BlockDriverState *bs,
     * already have our own plans. Also don't allow resize as the image size is
     * queried only at the job start and then cached.
     */
-    bdrv_graph_wrlock(bs);
+    bdrv_graph_wrlock();
     if (block_job_add_bdrv(&s->common, "active node", bs, 0,
                            basic_flags | BLK_PERM_WRITE, errp)) {
-        bdrv_graph_wrunlock(bs);
+        bdrv_graph_wrunlock();
         goto fail;
     }
 
@@ -389,11 +389,11 @@ void stream_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  basic_flags, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock(bs);
+            bdrv_graph_wrunlock();
             goto fail;
         }
     }
-    bdrv_graph_wrunlock(bs);
+    bdrv_graph_wrunlock();
 
     s->base_overlay = base_overlay;
     s->above_base = above_base;
diff --git a/block/vmdk.c b/block/vmdk.c
index d6971c7067..bf78e12383 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -272,7 +272,7 @@ static void vmdk_free_extents(BlockDriverState *bs)
     BDRVVmdkState *s = bs->opaque;
     VmdkExtent *e;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     for (i = 0; i < s->num_extents; i++) {
         e = &s->extents[i];
         g_free(e->l1_table);
@@ -283,7 +283,7 @@ static void vmdk_free_extents(BlockDriverState *bs)
             bdrv_unref_child(bs, e->file);
         }
     }
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     g_free(s->extents);
 }
@@ -1247,9 +1247,9 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
                                         0, 0, 0, 0, 0, &extent, errp);
             if (ret < 0) {
                 bdrv_graph_rdunlock_main_loop();
-                bdrv_graph_wrlock(NULL);
+                bdrv_graph_wrlock();
                 bdrv_unref_child(bs, extent_file);
-                bdrv_graph_wrunlock(NULL);
+                bdrv_graph_wrunlock();
                 bdrv_graph_rdlock_main_loop();
                 goto out;
             }
@@ -1266,9 +1266,9 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
             g_free(buf);
             if (ret) {
                 bdrv_graph_rdunlock_main_loop();
-                bdrv_graph_wrlock(NULL);
+                bdrv_graph_wrlock();
                 bdrv_unref_child(bs, extent_file);
-                bdrv_graph_wrunlock(NULL);
+                bdrv_graph_wrunlock();
                 bdrv_graph_rdlock_main_loop();
                 goto out;
             }
@@ -1277,9 +1277,9 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
             ret = vmdk_open_se_sparse(bs, extent_file, bs->open_flags, errp);
             if (ret) {
                 bdrv_graph_rdunlock_main_loop();
-                bdrv_graph_wrlock(NULL);
+                bdrv_graph_wrlock();
                 bdrv_unref_child(bs, extent_file);
-                bdrv_graph_wrunlock(NULL);
+                bdrv_graph_wrunlock();
                 bdrv_graph_rdlock_main_loop();
                 goto out;
             }
@@ -1287,9 +1287,9 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
         } else {
             error_setg(errp, "Unsupported extent type '%s'", type);
             bdrv_graph_rdunlock_main_loop();
-            bdrv_graph_wrlock(NULL);
+            bdrv_graph_wrlock();
             bdrv_unref_child(bs, extent_file);
-            bdrv_graph_wrunlock(NULL);
+            bdrv_graph_wrunlock();
             bdrv_graph_rdlock_main_loop();
             ret = -ENOTSUP;
             goto out;
diff --git a/blockdev.c b/blockdev.c
index 4c1177e8db..db9cc96510 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1611,9 +1611,9 @@ static void external_snapshot_abort(void *opaque)
         }
 
         bdrv_drained_begin(state->new_bs);
-        bdrv_graph_wrlock(state->old_bs);
+        bdrv_graph_wrlock();
         bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
-        bdrv_graph_wrunlock(state->old_bs);
+        bdrv_graph_wrunlock();
         bdrv_drained_end(state->new_bs);
 
         bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
@@ -3656,7 +3656,7 @@ void qmp_x_blockdev_change(const char *parent, const char *child,
     BlockDriverState *parent_bs, *new_bs = NULL;
     BdrvChild *p_child;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
 
     parent_bs = bdrv_lookup_bs(parent, parent, errp);
     if (!parent_bs) {
@@ -3692,7 +3692,7 @@ void qmp_x_blockdev_change(const char *parent, const char *child,
     }
 
 out:
-
bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); } =20 BlockJobInfoList *qmp_query_block_jobs(Error **errp) diff --git a/blockjob.c b/blockjob.c index b7a29052b9..7310412313 100644 --- a/blockjob.c +++ b/blockjob.c @@ -199,7 +199,7 @@ void block_job_remove_all_bdrv(BlockJob *job) * to process an already freed BdrvChild. */ aio_context_release(job->job.aio_context); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); aio_context_acquire(job->job.aio_context); while (job->nodes) { GSList *l =3D job->nodes; @@ -212,7 +212,7 @@ void block_job_remove_all_bdrv(BlockJob *job) =20 g_slist_free_1(l); } - bdrv_graph_wrunlock_ctx(job->job.aio_context); + bdrv_graph_wrunlock(); } =20 bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs) @@ -514,7 +514,7 @@ void *block_job_create(const char *job_id, const BlockJ= obDriver *driver, int ret; GLOBAL_STATE_CODE(); =20 - bdrv_graph_wrlock(bs); + bdrv_graph_wrlock(); =20 if (job_id =3D=3D NULL && !(flags & JOB_INTERNAL)) { job_id =3D bdrv_get_device_name(bs); @@ -523,7 +523,7 @@ void *block_job_create(const char *job_id, const BlockJ= obDriver *driver, job =3D job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_cont= ext(bs), flags, cb, opaque, errp); if (job =3D=3D NULL) { - bdrv_graph_wrunlock(bs); + bdrv_graph_wrunlock(); return NULL; } =20 @@ -563,11 +563,11 @@ void *block_job_create(const char *job_id, const Bloc= kJobDriver *driver, goto fail; } =20 - bdrv_graph_wrunlock(bs); + bdrv_graph_wrunlock(); return job; =20 fail: - bdrv_graph_wrunlock(bs); + bdrv_graph_wrunlock(); job_early_fail(&job->job); return NULL; } diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c index 704d1a3f36..d9754dfebc 100644 --- a/tests/unit/test-bdrv-drain.c +++ b/tests/unit/test-bdrv-drain.c @@ -807,9 +807,9 @@ static void test_blockjob_common_drain_node(enum drain_= type drain_type, tjob->bs =3D src; job =3D &tjob->common; =20 - bdrv_graph_wrlock(target); + bdrv_graph_wrlock(); block_job_add_bdrv(job, "target", target, 0, BLK_PERM_ALL, &error_abor= t); - bdrv_graph_wrunlock(target); + bdrv_graph_wrunlock(); =20 switch (result) { case TEST_JOB_SUCCESS: @@ -991,11 +991,11 @@ static void bdrv_test_top_close(BlockDriverState *bs) { BdrvChild *c, *next_c; =20 - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) { bdrv_unref_child(bs, c); } - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); } =20 static int coroutine_fn GRAPH_RDLOCK @@ -1085,10 +1085,10 @@ static void do_test_delete_by_drain(bool detach_ins= tead_of_delete, =20 null_bs =3D bdrv_open("null-co://", NULL, NULL, BDRV_O_RDWR | BDRV_O_P= ROTOCOL, &error_abort); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds, BDRV_CHILD_DATA, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); =20 /* This child will be the one to pass to requests through to, and * it will stall until a drain occurs */ @@ -1096,21 +1096,21 @@ static void do_test_delete_by_drain(bool detach_ins= tead_of_delete, &error_abort); child_bs->total_sectors =3D 65536 >> BDRV_SECTOR_BITS; /* Takes our reference to child_bs */ - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); tts->wait_child =3D bdrv_attach_child(bs, child_bs, "wait-child", &child_of_bds, BDRV_CHILD_DATA | BDRV_CHILD_PRIMA= RY, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); =20 /* This child is just there to be deleted * (for detach_instead_of_delete =3D=3D true) */ null_bs =3D bdrv_open("null-co://", NULL, NULL, 
BDRV_O_RDWR | BDRV_O_P= ROTOCOL, &error_abort); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds, BDRV_CHILD= _DATA, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); =20 blk =3D blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL); blk_insert_bs(blk, bs, &error_abort); @@ -1193,14 +1193,14 @@ static void no_coroutine_fn detach_indirect_bh(void= *opaque) =20 bdrv_dec_in_flight(data->child_b->bs); =20 - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_unref_child(data->parent_b, data->child_b); =20 bdrv_ref(data->c); data->child_c =3D bdrv_attach_child(data->parent_b, data->c, "PB-C", &child_of_bds, BDRV_CHILD_DATA, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); } =20 static void coroutine_mixed_fn detach_by_parent_aio_cb(void *opaque, int r= et) @@ -1298,7 +1298,7 @@ static void TSA_NO_TSA test_detach_indirect(bool by_p= arent_cb) /* Set child relationships */ bdrv_ref(b); bdrv_ref(a); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); child_b =3D bdrv_attach_child(parent_b, b, "PB-B", &child_of_bds, BDRV_CHILD_DATA, &error_abort); child_a =3D bdrv_attach_child(parent_b, a, "PB-A", &child_of_bds, @@ -1308,7 +1308,7 @@ static void TSA_NO_TSA test_detach_indirect(bool by_p= arent_cb) bdrv_attach_child(parent_a, a, "PA-A", by_parent_cb ? &child_of_bds : &detach_by_driver_cb_= class, BDRV_CHILD_DATA, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); =20 g_assert_cmpint(parent_a->refcnt, =3D=3D, 1); g_assert_cmpint(parent_b->refcnt, =3D=3D, 1); @@ -1727,7 +1727,7 @@ static void test_drop_intermediate_poll(void) * Establish the chain last, so the chain links are the first * elements in the BDS.parents lists */ - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); for (i =3D 0; i < 3; i++) { if (i) { /* Takes the reference to chain[i - 1] */ @@ -1735,7 +1735,7 @@ static void test_drop_intermediate_poll(void) &chain_child_class, BDRV_CHILD_COW, &error_a= bort); } } - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); =20 job =3D block_job_create("job", &test_simple_job_driver, NULL, job_nod= e, 0, BLK_PERM_ALL, 0, 0, NULL, NULL, &error_abort= ); @@ -1982,10 +1982,10 @@ static void do_test_replace_child_mid_drain(int old= _drain_count, new_child_bs->total_sectors =3D 1; =20 bdrv_ref(old_child_bs); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_attach_child(parent_bs, old_child_bs, "child", &child_of_bds, BDRV_CHILD_COW, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); parent_s->setup_completed =3D true; =20 for (i =3D 0; i < old_drain_count; i++) { @@ -2016,9 +2016,9 @@ static void do_test_replace_child_mid_drain(int old_d= rain_count, g_assert(parent_bs->quiesce_counter =3D=3D old_drain_count); bdrv_drained_begin(old_child_bs); bdrv_drained_begin(new_child_bs); - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_replace_node(old_child_bs, new_child_bs, &error_abort); - bdrv_graph_wrunlock(NULL); + bdrv_graph_wrunlock(); bdrv_drained_end(new_child_bs); bdrv_drained_end(old_child_bs); g_assert(parent_bs->quiesce_counter =3D=3D new_drain_count); diff --git a/tests/unit/test-bdrv-graph-mod.c b/tests/unit/test-bdrv-graph-= mod.c index 074adcbb93..8ee6ef38d8 100644 --- a/tests/unit/test-bdrv-graph-mod.c +++ b/tests/unit/test-bdrv-graph-mod.c @@ -137,10 +137,10 @@ static void test_update_perm_tree(void) =20 blk_insert_bs(root, bs, &error_abort); =20 - bdrv_graph_wrlock(NULL); + bdrv_graph_wrlock(); bdrv_attach_child(filter, bs, "child", 
                      &child_of_bds, BDRV_CHILD_DATA, &error_abort);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     aio_context_acquire(qemu_get_aio_context());
     ret = bdrv_append(filter, bs, NULL);
@@ -206,11 +206,11 @@ static void test_should_update_child(void)
 
     bdrv_set_backing_hd(target, bs, &error_abort);
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     g_assert(target->backing->bs == bs);
     bdrv_attach_child(filter, target, "target", &child_of_bds,
                       BDRV_CHILD_DATA, &error_abort);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
     aio_context_acquire(qemu_get_aio_context());
     bdrv_append(filter, bs, &error_abort);
     aio_context_release(qemu_get_aio_context());
@@ -248,7 +248,7 @@ static void test_parallel_exclusive_write(void)
     bdrv_ref(base);
     bdrv_ref(fl1);
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     bdrv_attach_child(top, fl1, "backing", &child_of_bds,
                       BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                       &error_abort);
@@ -260,7 +260,7 @@
                       &error_abort);
 
     bdrv_replace_node(fl1, fl2, &error_abort);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     bdrv_drained_end(fl2);
     bdrv_drained_end(fl1);
@@ -367,7 +367,7 @@ static void test_parallel_perm_update(void)
      */
     bdrv_ref(base);
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     bdrv_attach_child(top, ws, "file", &child_of_bds, BDRV_CHILD_DATA,
                       &error_abort);
     c_fl1 = bdrv_attach_child(ws, fl1, "first", &child_of_bds,
@@ -380,7 +380,7 @@
     bdrv_attach_child(fl2, base, "backing", &child_of_bds,
                       BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                       &error_abort);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     /* Select fl1 as first child to be active */
     s->selected = c_fl1;
@@ -434,11 +434,11 @@ static void test_append_greedy_filter(void)
     BlockDriverState *base = no_perm_node("base");
     BlockDriverState *fl = exclusive_writer_node("fl1");
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock();
     bdrv_attach_child(top, base, "backing", &child_of_bds,
                       BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                       &error_abort);
-    bdrv_graph_wrunlock(NULL);
+    bdrv_graph_wrunlock();
 
     aio_context_acquire(qemu_get_aio_context());
     bdrv_append(fl, base, &error_abort);
diff --git a/scripts/block-coroutine-wrapper.py b/scripts/block-coroutine-wrapper.py
index a38e5833fb..38364fa557 100644
--- a/scripts/block-coroutine-wrapper.py
+++ b/scripts/block-coroutine-wrapper.py
@@ -261,8 +261,8 @@ def gen_no_co_wrapper(func: FuncDecl) -> str:
         graph_lock='    bdrv_graph_rdlock_main_loop();'
         graph_unlock='    bdrv_graph_rdunlock_main_loop();'
     elif func.graph_wrlock:
-        graph_lock='    bdrv_graph_wrlock(NULL);'
-        graph_unlock='    bdrv_graph_wrunlock(NULL);'
+        graph_lock='    bdrv_graph_wrlock();'
+        graph_unlock='    bdrv_graph_wrunlock();'
 
     return f"""\
 /*
-- 
2.43.0
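
A minimal sketch (not taken from the patch) of what a typical graph-modifying
caller looks like after the change above. It uses only functions that appear
in the hunks of this patch; the function name is hypothetical:

    /*
     * Caller pattern after this patch: bdrv_graph_wrlock() and
     * bdrv_graph_wrunlock() no longer take a BlockDriverState, so call
     * sites stop passing a node or NULL. I/O is quiesced with the usual
     * drained section around the graph change.
     */
    static void replace_node_example(BlockDriverState *old_bs,
                                     BlockDriverState *new_bs)
    {
        bdrv_drained_begin(old_bs);      /* quiesce I/O on the node */

        bdrv_graph_wrlock();             /* was: bdrv_graph_wrlock(bs) or (NULL) */
        bdrv_replace_node(old_bs, new_bs, &error_abort);
        bdrv_graph_wrunlock();           /* was: bdrv_graph_wrunlock(bs) */

        bdrv_drained_end(old_bs);
    }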
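The next message in the series removes AioContext locking altogether. As an
illustration of the before/after shape (ours, modeled on the bdrv_flush_all()
hunk inside the patch below), main-loop code that iterates over all nodes
simply drops its per-node acquire/release pair:

    /* Before: each node's AioContext had to be acquired around the call */
    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
        AioContext *aio_context = bdrv_get_aio_context(bs);

        aio_context_acquire(aio_context);
        ret = bdrv_flush(bs);
        aio_context_release(aio_context);
    }

    /* After: the BQL, plus the graph lock for graph changes, suffices */
    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
        int ret = bdrv_flush(bs);
    }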
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé, Stefan Hajnoczi, Vladimir Sementsov-Ogievskiy,
    Cleber Rosa, Xie Changlong, Paul Durrant, Ari Sundholm, Jason Wang,
    Eric Blake, John Snow, Eduardo Habkost, Wen Congyang, Alberto Garcia,
    Anthony Perard, "Michael S. Tsirkin", Stefano Stabellini,
    qemu-block@nongnu.org, Juan Quintela, Paolo Bonzini, Kevin Wolf,
    Coiby Xu, Fabiano Rosas, Hanna Reitz, Zhang Chen, Daniel P. Berrangé,
    Pavel Dovgalyuk, Peter Xu, Emanuele Giuseppe Esposito, Fam Zheng,
    Leonardo Bras, David Hildenbrand, Li Zhijian,
    xen-devel@lists.xenproject.org
Subject: [PATCH v2 06/14] block: remove AioContext locking
Date: Tue, 5 Dec 2023 13:20:03 -0500
Message-ID: <20231205182011.1976568-7-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

This is the big patch that removes aio_context_acquire()/aio_context_release()
from the block layer and affected block layer users.

There isn't a clean way to split this patch and the reviewers are likely
the same group of people, so I decided to do it in one patch.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
Reviewed-by: Paul Durrant
---
 include/block/block-global-state.h |   9 +-
 include/block/block-io.h           |   3 +-
 include/block/snapshot.h           |   2 -
 block.c                            | 234 +---------------------
 block/block-backend.c              |  14 --
 block/copy-before-write.c          |  22 +--
 block/export/export.c              |  22 +--
 block/io.c                         |  45 +----
 block/mirror.c                     |  19 --
 block/monitor/bitmap-qmp-cmds.c    |  20 +-
 block/monitor/block-hmp-cmds.c     |  29 ---
 block/qapi-sysemu.c                |  27 +--
 block/qapi.c                       |  18 +-
 block/raw-format.c                 |   5 -
 block/replication.c                |  58 +-----
 block/snapshot.c                   |  22 +--
 block/write-threshold.c            |   6 -
 blockdev.c                         | 307 +++++------------------
 blockjob.c                         |  18 --
 hw/block/dataplane/virtio-blk.c    |  10 -
 hw/block/dataplane/xen-block.c     |  17 +-
 hw/block/virtio-blk.c              |  45 +----
 hw/core/qdev-properties-system.c   |   9 -
 job.c                              |  16 --
 migration/block.c                  |  33 +---
 migration/migration-hmp-cmds.c     |   3 -
 migration/savevm.c                 |  22 ---
 net/colo-compare.c                 |   2 -
 qemu-img.c                         |   4 -
 qemu-io.c                          |  10 +-
 qemu-nbd.c                         |   2 -
 replay/replay-debugging.c          |   4 -
 tests/unit/test-bdrv-drain.c       |  51 +----
 tests/unit/test-bdrv-graph-mod.c   |   6 -
 tests/unit/test-block-iothread.c   |  31 ---
 tests/unit/test-blockjob.c         | 137 -------------
 tests/unit/test-replication.c      |  11 --
 util/async.c                       |   4 -
 util/vhost-user-server.c           |   3 -
 scripts/block-coroutine-wrapper.py |   3 -
 tests/tsan/suppressions.tsan       |   1 -
 41 files changed, 102 insertions(+), 1202 deletions(-)

diff --git a/include/block/block-global-state.h b/include/block/block-global-state.h
index 6b21fbc73f..0327f1c605 100644
--- a/include/block/block-global-state.h
+++ b/include/block/block-global-state.h
@@ -31,11 +31,10 @@
 /*
  * Global state (GS) API. These functions run under the BQL.
* - * If a function modifies the graph, it also uses drain and/or - * aio_context_acquire/release to be sure it has unique access. - * aio_context locking is needed together with BQL because of - * the thread-safe I/O API that concurrently runs and accesses - * the graph without the BQL. + * If a function modifies the graph, it also uses the graph lock to be sur= e it + * has unique access. The graph lock is needed together with BQL because o= f the + * thread-safe I/O API that concurrently runs and accesses the graph witho= ut + * the BQL. * * It is important to note that not all of these functions are * necessarily limited to running under the BQL, but they would diff --git a/include/block/block-io.h b/include/block/block-io.h index f8729ccc55..8eb39a858b 100644 --- a/include/block/block-io.h +++ b/include/block/block-io.h @@ -31,8 +31,7 @@ =20 /* * I/O API functions. These functions are thread-safe, and therefore - * can run in any thread as long as the thread has called - * aio_context_acquire/release(). + * can run in any thread. * * These functions can only call functions from I/O and Common categories, * but can be invoked by GS, "I/O or GS" and I/O APIs. diff --git a/include/block/snapshot.h b/include/block/snapshot.h index d49c5599d9..304cc6ea61 100644 --- a/include/block/snapshot.h +++ b/include/block/snapshot.h @@ -86,8 +86,6 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState= *bs, =20 /* * Group operations. All block drivers are involved. - * These functions will properly handle dataplane (take aio_context_acquire - * when appropriate for appropriate block drivers */ =20 bool bdrv_all_can_snapshot(bool has_devices, strList *devices, diff --git a/block.c b/block.c index 25e1ebc606..91ace5d2d5 100644 --- a/block.c +++ b/block.c @@ -1625,7 +1625,6 @@ static int no_coroutine_fn GRAPH_UNLOCKED bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_= name, QDict *options, int open_flags, Error **errp) { - AioContext *ctx; Error *local_err =3D NULL; int i, ret; GLOBAL_STATE_CODE(); @@ -1673,21 +1672,15 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver = *drv, const char *node_name, bs->supported_read_flags |=3D BDRV_REQ_REGISTERED_BUF; bs->supported_write_flags |=3D BDRV_REQ_REGISTERED_BUF; =20 - /* Get the context after .bdrv_open, it can change the context */ - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); - ret =3D bdrv_refresh_total_sectors(bs, bs->total_sectors); if (ret < 0) { error_setg_errno(errp, -ret, "Could not refresh total sector count= "); - aio_context_release(ctx); return ret; } =20 bdrv_graph_rdlock_main_loop(); bdrv_refresh_limits(bs, NULL, &local_err); bdrv_graph_rdunlock_main_loop(); - aio_context_release(ctx); =20 if (local_err) { error_propagate(errp, local_err); @@ -3062,7 +3055,7 @@ bdrv_attach_child_common(BlockDriverState *child_bs, Transaction *tran, Error **errp) { BdrvChild *new_child; - AioContext *parent_ctx, *new_child_ctx; + AioContext *parent_ctx; AioContext *child_ctx =3D bdrv_get_aio_context(child_bs); =20 assert(child_class->get_parent_desc); @@ -3114,12 +3107,6 @@ bdrv_attach_child_common(BlockDriverState *child_bs, } } =20 - new_child_ctx =3D bdrv_get_aio_context(child_bs); - if (new_child_ctx !=3D child_ctx) { - aio_context_release(child_ctx); - aio_context_acquire(new_child_ctx); - } - bdrv_ref(child_bs); /* * Let every new BdrvChild start with a drained parent. 
Inserting the = child @@ -3149,11 +3136,6 @@ bdrv_attach_child_common(BlockDriverState *child_bs, }; tran_add(tran, &bdrv_attach_child_common_drv, s); =20 - if (new_child_ctx !=3D child_ctx) { - aio_context_release(new_child_ctx); - aio_context_acquire(child_ctx); - } - return new_child; } =20 @@ -3605,7 +3587,6 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDic= t *parent_options, int ret =3D 0; bool implicit_backing =3D false; BlockDriverState *backing_hd; - AioContext *backing_hd_ctx; QDict *options; QDict *tmp_parent_options =3D NULL; Error *local_err =3D NULL; @@ -3691,11 +3672,8 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDi= ct *parent_options, =20 /* Hook up the backing file link; drop our reference, bs owns the * backing_hd reference now */ - backing_hd_ctx =3D bdrv_get_aio_context(backing_hd); - aio_context_acquire(backing_hd_ctx); ret =3D bdrv_set_backing_hd(bs, backing_hd, errp); bdrv_unref(backing_hd); - aio_context_release(backing_hd_ctx); =20 if (ret < 0) { goto free_exit; @@ -3780,7 +3758,6 @@ BdrvChild *bdrv_open_child(const char *filename, { BlockDriverState *bs; BdrvChild *child; - AioContext *ctx; =20 GLOBAL_STATE_CODE(); =20 @@ -3791,11 +3768,8 @@ BdrvChild *bdrv_open_child(const char *filename, } =20 bdrv_graph_wrlock(); - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); child =3D bdrv_attach_child(parent, bs, bdref_key, child_class, child_= role, errp); - aio_context_release(ctx); bdrv_graph_wrunlock(); =20 return child; @@ -3881,7 +3855,6 @@ static BlockDriverState *bdrv_append_temp_snapshot(Bl= ockDriverState *bs, int64_t total_size; QemuOpts *opts =3D NULL; BlockDriverState *bs_snapshot =3D NULL; - AioContext *ctx =3D bdrv_get_aio_context(bs); int ret; =20 GLOBAL_STATE_CODE(); @@ -3890,9 +3863,7 @@ static BlockDriverState *bdrv_append_temp_snapshot(Bl= ockDriverState *bs, instead of opening 'filename' directly */ =20 /* Get the required size from the image */ - aio_context_acquire(ctx); total_size =3D bdrv_getlength(bs); - aio_context_release(ctx); =20 if (total_size < 0) { error_setg_errno(errp, -total_size, "Could not get image size"); @@ -3927,10 +3898,7 @@ static BlockDriverState *bdrv_append_temp_snapshot(B= lockDriverState *bs, goto out; } =20 - aio_context_acquire(ctx); ret =3D bdrv_append(bs_snapshot, bs, errp); - aio_context_release(ctx); - if (ret < 0) { bs_snapshot =3D NULL; goto out; @@ -3974,7 +3942,6 @@ bdrv_open_inherit(const char *filename, const char *r= eference, QDict *options, Error *local_err =3D NULL; QDict *snapshot_options =3D NULL; int snapshot_flags =3D 0; - AioContext *ctx =3D qemu_get_aio_context(); =20 assert(!child_class || !flags); assert(!child_class =3D=3D !parent); @@ -4115,12 +4082,10 @@ bdrv_open_inherit(const char *filename, const char = *reference, QDict *options, /* Not requesting BLK_PERM_CONSISTENT_READ because we're only * looking at the header to guess the image format. This works= even * in cases where a guest would not see a consistent state. 
*/ - ctx =3D bdrv_get_aio_context(file_bs); - aio_context_acquire(ctx); + AioContext *ctx =3D bdrv_get_aio_context(file_bs); file =3D blk_new(ctx, 0, BLK_PERM_ALL); blk_insert_bs(file, file_bs, &local_err); bdrv_unref(file_bs); - aio_context_release(ctx); =20 if (local_err) { goto fail; @@ -4167,13 +4132,8 @@ bdrv_open_inherit(const char *filename, const char *= reference, QDict *options, goto fail; } =20 - /* The AioContext could have changed during bdrv_open_common() */ - ctx =3D bdrv_get_aio_context(bs); - if (file) { - aio_context_acquire(ctx); blk_unref(file); - aio_context_release(ctx); file =3D NULL; } =20 @@ -4231,16 +4191,13 @@ bdrv_open_inherit(const char *filename, const char = *reference, QDict *options, * (snapshot_bs); thus, we have to drop the strong reference to bs * (which we obtained by calling bdrv_new()). bs will not be delet= ed, * though, because the overlay still has a reference to it. */ - aio_context_acquire(ctx); bdrv_unref(bs); - aio_context_release(ctx); bs =3D snapshot_bs; } =20 return bs; =20 fail: - aio_context_acquire(ctx); blk_unref(file); qobject_unref(snapshot_options); qobject_unref(bs->explicit_options); @@ -4249,14 +4206,11 @@ fail: bs->options =3D NULL; bs->explicit_options =3D NULL; bdrv_unref(bs); - aio_context_release(ctx); error_propagate(errp, local_err); return NULL; =20 close_and_fail: - aio_context_acquire(ctx); bdrv_unref(bs); - aio_context_release(ctx); qobject_unref(snapshot_options); qobject_unref(options); error_propagate(errp, local_err); @@ -4540,12 +4494,7 @@ void bdrv_reopen_queue_free(BlockReopenQueue *bs_que= ue) if (bs_queue) { BlockReopenQueueEntry *bs_entry, *next; QTAILQ_FOREACH_SAFE(bs_entry, bs_queue, entry, next) { - AioContext *ctx =3D bdrv_get_aio_context(bs_entry->state.bs); - - aio_context_acquire(ctx); bdrv_drained_end(bs_entry->state.bs); - aio_context_release(ctx); - qobject_unref(bs_entry->state.explicit_options); qobject_unref(bs_entry->state.options); g_free(bs_entry); @@ -4577,7 +4526,6 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, = Error **errp) { int ret =3D -1; BlockReopenQueueEntry *bs_entry, *next; - AioContext *ctx; Transaction *tran =3D tran_new(); g_autoptr(GSList) refresh_list =3D NULL; =20 @@ -4586,10 +4534,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue,= Error **errp) GLOBAL_STATE_CODE(); =20 QTAILQ_FOREACH(bs_entry, bs_queue, entry) { - ctx =3D bdrv_get_aio_context(bs_entry->state.bs); - aio_context_acquire(ctx); ret =3D bdrv_flush(bs_entry->state.bs); - aio_context_release(ctx); if (ret < 0) { error_setg_errno(errp, -ret, "Error flushing drive"); goto abort; @@ -4598,10 +4543,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue,= Error **errp) =20 QTAILQ_FOREACH(bs_entry, bs_queue, entry) { assert(bs_entry->state.bs->quiesce_counter > 0); - ctx =3D bdrv_get_aio_context(bs_entry->state.bs); - aio_context_acquire(ctx); ret =3D bdrv_reopen_prepare(&bs_entry->state, bs_queue, tran, errp= ); - aio_context_release(ctx); if (ret < 0) { goto abort; } @@ -4644,10 +4586,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue,= Error **errp) * to first element. 
*/ QTAILQ_FOREACH_REVERSE(bs_entry, bs_queue, entry) { - ctx =3D bdrv_get_aio_context(bs_entry->state.bs); - aio_context_acquire(ctx); bdrv_reopen_commit(&bs_entry->state); - aio_context_release(ctx); } =20 bdrv_graph_wrlock(); @@ -4658,10 +4597,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue,= Error **errp) BlockDriverState *bs =3D bs_entry->state.bs; =20 if (bs->drv->bdrv_reopen_commit_post) { - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); bs->drv->bdrv_reopen_commit_post(&bs_entry->state); - aio_context_release(ctx); } } =20 @@ -4675,10 +4611,7 @@ abort: =20 QTAILQ_FOREACH_SAFE(bs_entry, bs_queue, entry, next) { if (bs_entry->prepared) { - ctx =3D bdrv_get_aio_context(bs_entry->state.bs); - aio_context_acquire(ctx); bdrv_reopen_abort(&bs_entry->state); - aio_context_release(ctx); } } =20 @@ -4691,24 +4624,13 @@ cleanup: int bdrv_reopen(BlockDriverState *bs, QDict *opts, bool keep_old_opts, Error **errp) { - AioContext *ctx =3D bdrv_get_aio_context(bs); BlockReopenQueue *queue; - int ret; =20 GLOBAL_STATE_CODE(); =20 queue =3D bdrv_reopen_queue(NULL, bs, opts, keep_old_opts); =20 - if (ctx !=3D qemu_get_aio_context()) { - aio_context_release(ctx); - } - ret =3D bdrv_reopen_multiple(queue, errp); - - if (ctx !=3D qemu_get_aio_context()) { - aio_context_acquire(ctx); - } - - return ret; + return bdrv_reopen_multiple(queue, errp); } =20 int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only, @@ -4760,7 +4682,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *re= open_state, const char *child_name =3D is_backing ? "backing" : "file"; QObject *value; const char *str; - AioContext *ctx, *old_ctx; bool has_child; int ret; =20 @@ -4844,13 +4765,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *r= eopen_state, bdrv_drained_begin(old_child_bs); } =20 - old_ctx =3D bdrv_get_aio_context(bs); - ctx =3D bdrv_get_aio_context(new_child_bs); - if (old_ctx !=3D ctx) { - aio_context_release(old_ctx); - aio_context_acquire(ctx); - } - bdrv_graph_rdunlock_main_loop(); bdrv_graph_wrlock(); =20 @@ -4859,11 +4773,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *r= eopen_state, =20 bdrv_graph_wrunlock(); =20 - if (old_ctx !=3D ctx) { - aio_context_release(ctx); - aio_context_acquire(old_ctx); - } - if (old_child_bs) { bdrv_drained_end(old_child_bs); bdrv_unref(old_child_bs); @@ -5537,7 +5446,6 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriver= State *bs_top, int ret; BdrvChild *child; Transaction *tran =3D tran_new(); - AioContext *old_context, *new_context =3D NULL; =20 GLOBAL_STATE_CODE(); =20 @@ -5545,21 +5453,8 @@ int bdrv_append(BlockDriverState *bs_new, BlockDrive= rState *bs_top, assert(!bs_new->backing); bdrv_graph_rdunlock_main_loop(); =20 - old_context =3D bdrv_get_aio_context(bs_top); bdrv_drained_begin(bs_top); - - /* - * bdrv_drained_begin() requires that only the AioContext of the drain= ed - * node is locked, and at this point it can still differ from the AioC= ontext - * of bs_top. - */ - new_context =3D bdrv_get_aio_context(bs_new); - aio_context_release(old_context); - aio_context_acquire(new_context); bdrv_drained_begin(bs_new); - aio_context_release(new_context); - aio_context_acquire(old_context); - new_context =3D NULL; =20 bdrv_graph_wrlock(); =20 @@ -5571,18 +5466,6 @@ int bdrv_append(BlockDriverState *bs_new, BlockDrive= rState *bs_top, goto out; } =20 - /* - * bdrv_attach_child_noperm could change the AioContext of bs_top and - * bs_new, but at least they are in the same AioContext now. 
This is t= he - * AioContext that we need to lock for the rest of the function. - */ - new_context =3D bdrv_get_aio_context(bs_top); - - if (old_context !=3D new_context) { - aio_context_release(old_context); - aio_context_acquire(new_context); - } - ret =3D bdrv_replace_node_noperm(bs_top, bs_new, true, tran, errp); if (ret < 0) { goto out; @@ -5598,11 +5481,6 @@ out: bdrv_drained_end(bs_top); bdrv_drained_end(bs_new); =20 - if (new_context && old_context !=3D new_context) { - aio_context_release(new_context); - aio_context_acquire(old_context); - } - return ret; } =20 @@ -5697,12 +5575,8 @@ BlockDriverState *bdrv_insert_node(BlockDriverState = *bs, QDict *options, =20 GLOBAL_STATE_CODE(); =20 - aio_context_release(ctx); - aio_context_acquire(qemu_get_aio_context()); new_node_bs =3D bdrv_new_open_driver_opts(drv, node_name, options, fla= gs, errp); - aio_context_release(qemu_get_aio_context()); - aio_context_acquire(ctx); assert(bdrv_get_aio_context(bs) =3D=3D ctx); =20 options =3D NULL; /* bdrv_new_open_driver() eats options */ @@ -7037,12 +6911,9 @@ void bdrv_activate_all(Error **errp) GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 for (bs =3D bdrv_first(&it); bs; bs =3D bdrv_next(&it)) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); int ret; =20 - aio_context_acquire(aio_context); ret =3D bdrv_activate(bs, errp); - aio_context_release(aio_context); if (ret < 0) { bdrv_next_cleanup(&it); return; @@ -7137,20 +7008,10 @@ int bdrv_inactivate_all(void) BlockDriverState *bs =3D NULL; BdrvNextIterator it; int ret =3D 0; - GSList *aio_ctxs =3D NULL, *ctx; =20 GLOBAL_STATE_CODE(); GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 - for (bs =3D bdrv_first(&it); bs; bs =3D bdrv_next(&it)) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); - - if (!g_slist_find(aio_ctxs, aio_context)) { - aio_ctxs =3D g_slist_prepend(aio_ctxs, aio_context); - aio_context_acquire(aio_context); - } - } - for (bs =3D bdrv_first(&it); bs; bs =3D bdrv_next(&it)) { /* Nodes with BDS parents are covered by recursion from the last * parent that gets inactivated. 
Don't inactivate them a second @@ -7161,17 +7022,10 @@ int bdrv_inactivate_all(void) ret =3D bdrv_inactivate_recurse(bs); if (ret < 0) { bdrv_next_cleanup(&it); - goto out; + break; } } =20 -out: - for (ctx =3D aio_ctxs; ctx !=3D NULL; ctx =3D ctx->next) { - AioContext *aio_context =3D ctx->data; - aio_context_release(aio_context); - } - g_slist_free(aio_ctxs); - return ret; } =20 @@ -7257,11 +7111,8 @@ void bdrv_unref(BlockDriverState *bs) static void bdrv_schedule_unref_bh(void *opaque) { BlockDriverState *bs =3D opaque; - AioContext *ctx =3D bdrv_get_aio_context(bs); =20 - aio_context_acquire(ctx); bdrv_unref(bs); - aio_context_release(ctx); } =20 /* @@ -7398,8 +7249,6 @@ void bdrv_img_create(const char *filename, const char= *fmt, return; } =20 - aio_context_acquire(qemu_get_aio_context()); - /* Create parameter list */ create_opts =3D qemu_opts_append(create_opts, drv->create_opts); create_opts =3D qemu_opts_append(create_opts, proto_drv->create_opts); @@ -7549,7 +7398,6 @@ out: qemu_opts_del(opts); qemu_opts_free(create_opts); error_propagate(errp, local_err); - aio_context_release(qemu_get_aio_context()); } =20 AioContext *bdrv_get_aio_context(BlockDriverState *bs) @@ -7585,29 +7433,12 @@ void coroutine_fn bdrv_co_leave(BlockDriverState *b= s, AioContext *old_ctx) =20 void coroutine_fn bdrv_co_lock(BlockDriverState *bs) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - - /* In the main thread, bs->aio_context won't change concurrently */ - assert(qemu_get_current_aio_context() =3D=3D qemu_get_aio_context()); - - /* - * We're in coroutine context, so we already hold the lock of the main - * loop AioContext. Don't lock it twice to avoid deadlocks. - */ - assert(qemu_in_coroutine()); - if (ctx !=3D qemu_get_aio_context()) { - aio_context_acquire(ctx); - } + /* TODO removed in next patch */ } =20 void coroutine_fn bdrv_co_unlock(BlockDriverState *bs) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - - assert(qemu_in_coroutine()); - if (ctx !=3D qemu_get_aio_context()) { - aio_context_release(ctx); - } + /* TODO removed in next patch */ } =20 static void bdrv_do_remove_aio_context_notifier(BdrvAioNotifier *ban) @@ -7728,21 +7559,8 @@ static void bdrv_set_aio_context_commit(void *opaque) BdrvStateSetAioContext *state =3D (BdrvStateSetAioContext *) opaque; BlockDriverState *bs =3D (BlockDriverState *) state->bs; AioContext *new_context =3D state->new_ctx; - AioContext *old_context =3D bdrv_get_aio_context(bs); =20 - /* - * Take the old AioContex when detaching it from bs. - * At this point, new_context lock is already acquired, and we are now - * also taking old_context. This is safe as long as bdrv_detach_aio_co= ntext - * does not call AIO_POLL_WHILE(). - */ - if (old_context !=3D qemu_get_aio_context()) { - aio_context_acquire(old_context); - } bdrv_detach_aio_context(bs); - if (old_context !=3D qemu_get_aio_context()) { - aio_context_release(old_context); - } bdrv_attach_aio_context(bs, new_context); } =20 @@ -7827,7 +7645,6 @@ int bdrv_try_change_aio_context(BlockDriverState *bs,= AioContext *ctx, Transaction *tran; GHashTable *visited; int ret; - AioContext *old_context =3D bdrv_get_aio_context(bs); GLOBAL_STATE_CODE(); =20 /* @@ -7857,34 +7674,7 @@ int bdrv_try_change_aio_context(BlockDriverState *bs= , AioContext *ctx, return -EPERM; } =20 - /* - * Release old AioContext, it won't be needed anymore, as all - * bdrv_drained_begin() have been called already. 
- */ - if (qemu_get_aio_context() !=3D old_context) { - aio_context_release(old_context); - } - - /* - * Acquire new AioContext since bdrv_drained_end() is going to be call= ed - * after we switched all nodes in the new AioContext, and the function - * assumes that the lock of the bs is always taken. - */ - if (qemu_get_aio_context() !=3D ctx) { - aio_context_acquire(ctx); - } - tran_commit(tran); - - if (qemu_get_aio_context() !=3D ctx) { - aio_context_release(ctx); - } - - /* Re-acquire the old AioContext, since the caller takes and releases = it. */ - if (qemu_get_aio_context() !=3D old_context) { - aio_context_acquire(old_context); - } - return 0; } =20 @@ -8006,7 +7796,6 @@ BlockDriverState *check_to_replace_node(BlockDriverSt= ate *parent_bs, const char *node_name, Error **err= p) { BlockDriverState *to_replace_bs =3D bdrv_find_node(node_name); - AioContext *aio_context; =20 GLOBAL_STATE_CODE(); =20 @@ -8015,12 +7804,8 @@ BlockDriverState *check_to_replace_node(BlockDriverS= tate *parent_bs, return NULL; } =20 - aio_context =3D bdrv_get_aio_context(to_replace_bs); - aio_context_acquire(aio_context); - if (bdrv_op_is_blocked(to_replace_bs, BLOCK_OP_TYPE_REPLACE, errp)) { - to_replace_bs =3D NULL; - goto out; + return NULL; } =20 /* We don't want arbitrary node of the BDS chain to be replaced only t= he top @@ -8033,12 +7818,9 @@ BlockDriverState *check_to_replace_node(BlockDriverS= tate *parent_bs, "because it cannot be guaranteed that doing so would no= t " "lead to an abrupt change of visible data", node_name, parent_bs->node_name); - to_replace_bs =3D NULL; - goto out; + return NULL; } =20 -out: - aio_context_release(aio_context); return to_replace_bs; } =20 diff --git a/block/block-backend.c b/block/block-backend.c index abac4e0235..f412bed274 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -429,7 +429,6 @@ BlockBackend *blk_new_open(const char *filename, const = char *reference, { BlockBackend *blk; BlockDriverState *bs; - AioContext *ctx; uint64_t perm =3D 0; uint64_t shared =3D BLK_PERM_ALL; =20 @@ -459,23 +458,18 @@ BlockBackend *blk_new_open(const char *filename, cons= t char *reference, shared =3D BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED; } =20 - aio_context_acquire(qemu_get_aio_context()); bs =3D bdrv_open(filename, reference, options, flags, errp); - aio_context_release(qemu_get_aio_context()); if (!bs) { return NULL; } =20 /* bdrv_open() could have moved bs to a different AioContext */ - ctx =3D bdrv_get_aio_context(bs); blk =3D blk_new(bdrv_get_aio_context(bs), perm, shared); blk->perm =3D perm; blk->shared_perm =3D shared; =20 - aio_context_acquire(ctx); blk_insert_bs(blk, bs, errp); bdrv_unref(bs); - aio_context_release(ctx); =20 if (!blk->root) { blk_unref(blk); @@ -577,13 +571,9 @@ void blk_remove_all_bs(void) GLOBAL_STATE_CODE(); =20 while ((blk =3D blk_all_next(blk)) !=3D NULL) { - AioContext *ctx =3D blk_get_aio_context(blk); - - aio_context_acquire(ctx); if (blk->root) { blk_remove_bs(blk); } - aio_context_release(ctx); } } =20 @@ -2736,20 +2726,16 @@ int blk_commit_all(void) GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 while ((blk =3D blk_all_next(blk)) !=3D NULL) { - AioContext *aio_context =3D blk_get_aio_context(blk); BlockDriverState *unfiltered_bs =3D bdrv_skip_filters(blk_bs(blk)); =20 - aio_context_acquire(aio_context); if (blk_is_inserted(blk) && bdrv_cow_child(unfiltered_bs)) { int ret; =20 ret =3D bdrv_commit(unfiltered_bs); if (ret < 0) { - aio_context_release(aio_context); return ret; } } - aio_context_release(aio_context); } 
return 0; } diff --git a/block/copy-before-write.c b/block/copy-before-write.c index 13972879b1..0842a1a6df 100644 --- a/block/copy-before-write.c +++ b/block/copy-before-write.c @@ -412,7 +412,6 @@ static int cbw_open(BlockDriverState *bs, QDict *option= s, int flags, int64_t cluster_size; g_autoptr(BlockdevOptions) full_opts =3D NULL; BlockdevOptionsCbw *opts; - AioContext *ctx; int ret; =20 full_opts =3D cbw_parse_options(options, errp); @@ -435,15 +434,11 @@ static int cbw_open(BlockDriverState *bs, QDict *opti= ons, int flags, =20 GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); - if (opts->bitmap) { bitmap =3D block_dirty_bitmap_lookup(opts->bitmap->node, opts->bitmap->name, NULL, errp); if (!bitmap) { - ret =3D -EINVAL; - goto out; + return -EINVAL; } } s->on_cbw_error =3D opts->has_on_cbw_error ? opts->on_cbw_error : @@ -461,24 +456,21 @@ static int cbw_open(BlockDriverState *bs, QDict *opti= ons, int flags, s->bcs =3D block_copy_state_new(bs->file, s->target, bitmap, errp); if (!s->bcs) { error_prepend(errp, "Cannot create block-copy-state: "); - ret =3D -EINVAL; - goto out; + return -EINVAL; } =20 cluster_size =3D block_copy_cluster_size(s->bcs); =20 s->done_bitmap =3D bdrv_create_dirty_bitmap(bs, cluster_size, NULL, er= rp); if (!s->done_bitmap) { - ret =3D -EINVAL; - goto out; + return -EINVAL; } bdrv_disable_dirty_bitmap(s->done_bitmap); =20 /* s->access_bitmap starts equal to bcs bitmap */ s->access_bitmap =3D bdrv_create_dirty_bitmap(bs, cluster_size, NULL, = errp); if (!s->access_bitmap) { - ret =3D -EINVAL; - goto out; + return -EINVAL; } bdrv_disable_dirty_bitmap(s->access_bitmap); bdrv_dirty_bitmap_merge_internal(s->access_bitmap, @@ -487,11 +479,7 @@ static int cbw_open(BlockDriverState *bs, QDict *optio= ns, int flags, =20 qemu_co_mutex_init(&s->lock); QLIST_INIT(&s->frozen_read_reqs); - - ret =3D 0; -out: - aio_context_release(ctx); - return ret; + return 0; } =20 static void cbw_close(BlockDriverState *bs) diff --git a/block/export/export.c b/block/export/export.c index a8f274e526..6d51ae8ed7 100644 --- a/block/export/export.c +++ b/block/export/export.c @@ -114,7 +114,6 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Er= ror **errp) } =20 ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); =20 if (export->iothread) { IOThread *iothread; @@ -133,8 +132,6 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Er= ror **errp) set_context_errp =3D fixed_iothread ? 
errp : NULL; ret =3D bdrv_try_change_aio_context(bs, new_ctx, NULL, set_context= _errp); if (ret =3D=3D 0) { - aio_context_release(ctx); - aio_context_acquire(new_ctx); ctx =3D new_ctx; } else if (fixed_iothread) { goto fail; @@ -191,8 +188,6 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Er= ror **errp) assert(exp->blk !=3D NULL); =20 QLIST_INSERT_HEAD(&block_exports, exp, next); - - aio_context_release(ctx); return exp; =20 fail: @@ -200,7 +195,6 @@ fail: blk_set_dev_ops(blk, NULL, NULL); blk_unref(blk); } - aio_context_release(ctx); if (exp) { g_free(exp->id); g_free(exp); @@ -218,9 +212,6 @@ void blk_exp_ref(BlockExport *exp) static void blk_exp_delete_bh(void *opaque) { BlockExport *exp =3D opaque; - AioContext *aio_context =3D exp->ctx; - - aio_context_acquire(aio_context); =20 assert(exp->refcount =3D=3D 0); QLIST_REMOVE(exp, next); @@ -230,8 +221,6 @@ static void blk_exp_delete_bh(void *opaque) qapi_event_send_block_export_deleted(exp->id); g_free(exp->id); g_free(exp); - - aio_context_release(aio_context); } =20 void blk_exp_unref(BlockExport *exp) @@ -249,22 +238,16 @@ void blk_exp_unref(BlockExport *exp) * connections and other internally held references start to shut down. Wh= en * the function returns, there may still be active references while the ex= port * is in the process of shutting down. - * - * Acquires exp->ctx internally. Callers must *not* hold the lock. */ void blk_exp_request_shutdown(BlockExport *exp) { - AioContext *aio_context =3D exp->ctx; - - aio_context_acquire(aio_context); - /* * If the user doesn't own the export any more, it is already shutting * down. We must not call .request_shutdown and decrease the refcount a * second time. */ if (!exp->user_owned) { - goto out; + return; } =20 exp->drv->request_shutdown(exp); @@ -272,9 +255,6 @@ void blk_exp_request_shutdown(BlockExport *exp) assert(exp->user_owned); exp->user_owned =3D false; blk_exp_unref(exp); - -out: - aio_context_release(aio_context); } =20 /* diff --git a/block/io.c b/block/io.c index 7e62fabbf5..8fa7670571 100644 --- a/block/io.c +++ b/block/io.c @@ -294,8 +294,6 @@ static void bdrv_co_drain_bh_cb(void *opaque) BlockDriverState *bs =3D data->bs; =20 if (bs) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); bdrv_dec_in_flight(bs); if (data->begin) { bdrv_do_drained_begin(bs, data->parent, data->poll); @@ -303,7 +301,6 @@ static void bdrv_co_drain_bh_cb(void *opaque) assert(!data->poll); bdrv_do_drained_end(bs, data->parent); } - aio_context_release(ctx); } else { assert(data->begin); bdrv_drain_all_begin(); @@ -320,8 +317,6 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDr= iverState *bs, { BdrvCoDrainData data; Coroutine *self =3D qemu_coroutine_self(); - AioContext *ctx =3D bdrv_get_aio_context(bs); - AioContext *co_ctx =3D qemu_coroutine_get_aio_context(self); =20 /* Calling bdrv_drain() from a BH ensures the current coroutine yields= and * other coroutines run if they were queued by aio_co_enter(). */ @@ -340,17 +335,6 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockD= riverState *bs, bdrv_inc_in_flight(bs); } =20 - /* - * Temporarily drop the lock across yield or we would get deadlocks. - * bdrv_co_drain_bh_cb() reaquires the lock as needed. - * - * When we yield below, the lock for the current context will be - * released, so if this is actually the lock that protects bs, don't d= rop - * it a second time. 
- */ - if (ctx !=3D co_ctx) { - aio_context_release(ctx); - } replay_bh_schedule_oneshot_event(qemu_get_aio_context(), bdrv_co_drain_bh_cb, &data); =20 @@ -358,11 +342,6 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockD= riverState *bs, /* If we are resumed from some other event (such as an aio completion = or a * timer callback), it is a bug in the caller that should be fixed. */ assert(data.done); - - /* Reacquire the AioContext of bs if we dropped it */ - if (ctx !=3D co_ctx) { - aio_context_acquire(ctx); - } } =20 static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent, @@ -478,13 +457,12 @@ static bool bdrv_drain_all_poll(void) GLOBAL_STATE_CODE(); GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 - /* bdrv_drain_poll() can't make changes to the graph and we are holdin= g the - * main AioContext lock, so iterating bdrv_next_all_states() is safe. = */ + /* + * bdrv_drain_poll() can't make changes to the graph and we hold the B= QL, + * so iterating bdrv_next_all_states() is safe. + */ while ((bs =3D bdrv_next_all_states(bs))) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); result |=3D bdrv_drain_poll(bs, NULL, true); - aio_context_release(aio_context); } =20 return result; @@ -525,11 +503,7 @@ void bdrv_drain_all_begin_nopoll(void) /* Quiesce all nodes, without polling in-flight requests yet. The graph * cannot change during this loop. */ while ((bs =3D bdrv_next_all_states(bs))) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); - - aio_context_acquire(aio_context); bdrv_do_drained_begin(bs, NULL, false); - aio_context_release(aio_context); } } =20 @@ -588,11 +562,7 @@ void bdrv_drain_all_end(void) } =20 while ((bs =3D bdrv_next_all_states(bs))) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); - - aio_context_acquire(aio_context); bdrv_do_drained_end(bs, NULL); - aio_context_release(aio_context); } =20 assert(qemu_get_current_aio_context() =3D=3D qemu_get_aio_context()); @@ -2368,15 +2338,10 @@ int bdrv_flush_all(void) } =20 for (bs =3D bdrv_first(&it); bs; bs =3D bdrv_next(&it)) { - AioContext *aio_context =3D bdrv_get_aio_context(bs); - int ret; - - aio_context_acquire(aio_context); - ret =3D bdrv_flush(bs); + int ret =3D bdrv_flush(bs); if (ret < 0 && !result) { result =3D ret; } - aio_context_release(aio_context); } =20 return result; diff --git a/block/mirror.c b/block/mirror.c index 51f9e2f17c..5145eb53e1 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -662,7 +662,6 @@ static int mirror_exit_common(Job *job) MirrorBlockJob *s =3D container_of(job, MirrorBlockJob, common.job); BlockJob *bjob =3D &s->common; MirrorBDSOpaque *bs_opaque; - AioContext *replace_aio_context =3D NULL; BlockDriverState *src; BlockDriverState *target_bs; BlockDriverState *mirror_top_bs; @@ -677,7 +676,6 @@ static int mirror_exit_common(Job *job) } s->prepared =3D true; =20 - aio_context_acquire(qemu_get_aio_context()); bdrv_graph_rdlock_main_loop(); =20 mirror_top_bs =3D s->mirror_top_bs; @@ -742,11 +740,6 @@ static int mirror_exit_common(Job *job) } bdrv_graph_rdunlock_main_loop(); =20 - if (s->to_replace) { - replace_aio_context =3D bdrv_get_aio_context(s->to_replace); - aio_context_acquire(replace_aio_context); - } - if (s->should_complete && !abort) { BlockDriverState *to_replace =3D s->to_replace ?: src; bool ro =3D bdrv_is_read_only(to_replace); @@ -785,9 +778,6 @@ static int mirror_exit_common(Job *job) error_free(s->replace_blocker); bdrv_unref(s->to_replace); } - if (replace_aio_context) { - 
aio_context_release(replace_aio_context); - } g_free(s->replaces); =20 /* @@ -811,8 +801,6 @@ static int mirror_exit_common(Job *job) bdrv_unref(mirror_top_bs); bdrv_unref(src); =20 - aio_context_release(qemu_get_aio_context()); - return ret; } =20 @@ -1191,24 +1179,17 @@ static void mirror_complete(Job *job, Error **errp) =20 /* block all operations on to_replace bs */ if (s->replaces) { - AioContext *replace_aio_context; - s->to_replace =3D bdrv_find_node(s->replaces); if (!s->to_replace) { error_setg(errp, "Node name '%s' not found", s->replaces); return; } =20 - replace_aio_context =3D bdrv_get_aio_context(s->to_replace); - aio_context_acquire(replace_aio_context); - /* TODO Translate this into child freeze system. */ error_setg(&s->replace_blocker, "block device is in use by block-job-complete"); bdrv_op_block_all(s->to_replace, s->replace_blocker); bdrv_ref(s->to_replace); - - aio_context_release(replace_aio_context); } =20 s->should_complete =3D true; diff --git a/block/monitor/bitmap-qmp-cmds.c b/block/monitor/bitmap-qmp-cmd= s.c index 70d01a3776..a738e7bbf7 100644 --- a/block/monitor/bitmap-qmp-cmds.c +++ b/block/monitor/bitmap-qmp-cmds.c @@ -95,7 +95,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const c= har *name, { BlockDriverState *bs; BdrvDirtyBitmap *bitmap; - AioContext *aio_context; =20 if (!name || name[0] =3D=3D '\0') { error_setg(errp, "Bitmap name cannot be empty"); @@ -107,14 +106,11 @@ void qmp_block_dirty_bitmap_add(const char *node, con= st char *name, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (has_granularity) { if (granularity < 512 || !is_power_of_2(granularity)) { error_setg(errp, "Granularity must be power of 2 " "and at least 512"); - goto out; + return; } } else { /* Default to cluster size, if available: */ @@ -132,12 +128,12 @@ void qmp_block_dirty_bitmap_add(const char *node, con= st char *name, if (persistent && !bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp)) { - goto out; + return; } =20 bitmap =3D bdrv_create_dirty_bitmap(bs, granularity, name, errp); if (bitmap =3D=3D NULL) { - goto out; + return; } =20 if (disabled) { @@ -145,9 +141,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const= char *name, } =20 bdrv_dirty_bitmap_set_persistence(bitmap, persistent); - -out: - aio_context_release(aio_context); } =20 BdrvDirtyBitmap *block_dirty_bitmap_remove(const char *node, const char *n= ame, @@ -157,7 +150,6 @@ BdrvDirtyBitmap *block_dirty_bitmap_remove(const char *= node, const char *name, { BlockDriverState *bs; BdrvDirtyBitmap *bitmap; - AioContext *aio_context; =20 GLOBAL_STATE_CODE(); =20 @@ -166,19 +158,14 @@ BdrvDirtyBitmap *block_dirty_bitmap_remove(const char= *node, const char *name, return NULL; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (bdrv_dirty_bitmap_check(bitmap, BDRV_BITMAP_BUSY | BDRV_BITMAP_RO, errp)) { - aio_context_release(aio_context); return NULL; } =20 if (bdrv_dirty_bitmap_get_persistence(bitmap) && bdrv_remove_persistent_dirty_bitmap(bs, name, errp) < 0) { - aio_context_release(aio_context); return NULL; } =20 @@ -190,7 +177,6 @@ BdrvDirtyBitmap *block_dirty_bitmap_remove(const char *= node, const char *name, *bitmap_bs =3D bs; } =20 - aio_context_release(aio_context); return release ? 
NULL : bitmap; } =20 diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index c729cbf1eb..bdbb5cb141 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -141,7 +141,6 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict) const char *id =3D qdict_get_str(qdict, "id"); BlockBackend *blk; BlockDriverState *bs; - AioContext *aio_context; Error *local_err =3D NULL; =20 GLOBAL_STATE_CODE(); @@ -168,14 +167,10 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict) return; } =20 - aio_context =3D blk_get_aio_context(blk); - aio_context_acquire(aio_context); - bs =3D blk_bs(blk); if (bs) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_DRIVE_DEL, &local_err)) { error_report_err(local_err); - aio_context_release(aio_context); return; } =20 @@ -196,8 +191,6 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict) } else { blk_unref(blk); } - - aio_context_release(aio_context); } =20 void hmp_commit(Monitor *mon, const QDict *qdict) @@ -213,7 +206,6 @@ void hmp_commit(Monitor *mon, const QDict *qdict) ret =3D blk_commit_all(); } else { BlockDriverState *bs; - AioContext *aio_context; =20 blk =3D blk_by_name(device); if (!blk) { @@ -222,18 +214,13 @@ void hmp_commit(Monitor *mon, const QDict *qdict) } =20 bs =3D bdrv_skip_implicit_filters(blk_bs(blk)); - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 if (!blk_is_available(blk)) { error_report("Device '%s' has no medium", device); - aio_context_release(aio_context); return; } =20 ret =3D bdrv_commit(bs); - - aio_context_release(aio_context); } if (ret < 0) { error_report("'commit' error for '%s': %s", device, strerror(-ret)= ); @@ -560,7 +547,6 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict) BlockBackend *blk =3D NULL; BlockDriverState *bs =3D NULL; BlockBackend *local_blk =3D NULL; - AioContext *ctx =3D NULL; bool qdev =3D qdict_get_try_bool(qdict, "qdev", false); const char *device =3D qdict_get_str(qdict, "device"); const char *command =3D qdict_get_str(qdict, "command"); @@ -582,9 +568,6 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict) } } =20 - ctx =3D blk ? 
blk_get_aio_context(blk) : bdrv_get_aio_context(bs); - aio_context_acquire(ctx); - if (bs) { blk =3D local_blk =3D blk_new(bdrv_get_aio_context(bs), 0, BLK_PER= M_ALL); ret =3D blk_insert_bs(blk, bs, &err); @@ -622,11 +605,6 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict) =20 fail: blk_unref(local_blk); - - if (ctx) { - aio_context_release(ctx); - } - hmp_handle_error(mon, err); } =20 @@ -882,7 +860,6 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdic= t) int nb_sns, i; int total; int *global_snapshots; - AioContext *aio_context; =20 typedef struct SnapshotEntry { QEMUSnapshotInfo sn; @@ -909,11 +886,8 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdi= ct) error_report_err(err); return; } - aio_context =3D bdrv_get_aio_context(bs); =20 - aio_context_acquire(aio_context); nb_sns =3D bdrv_snapshot_list(bs, &sn_tab); - aio_context_release(aio_context); =20 if (nb_sns < 0) { monitor_printf(mon, "bdrv_snapshot_list: error %d\n", nb_sns); @@ -924,9 +898,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdic= t) int bs1_nb_sns =3D 0; ImageEntry *ie; SnapshotEntry *se; - AioContext *ctx =3D bdrv_get_aio_context(bs1); =20 - aio_context_acquire(ctx); if (bdrv_can_snapshot(bs1)) { sn =3D NULL; bs1_nb_sns =3D bdrv_snapshot_list(bs1, &sn); @@ -944,7 +916,6 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdic= t) } g_free(sn); } - aio_context_release(ctx); } =20 if (no_snapshot) { diff --git a/block/qapi-sysemu.c b/block/qapi-sysemu.c index 1618cd225a..e4282631d2 100644 --- a/block/qapi-sysemu.c +++ b/block/qapi-sysemu.c @@ -174,7 +174,6 @@ blockdev_remove_medium(const char *device, const char *= id, Error **errp) { BlockBackend *blk; BlockDriverState *bs; - AioContext *aio_context; bool has_attached_device; =20 GLOBAL_STATE_CODE(); @@ -204,13 +203,10 @@ blockdev_remove_medium(const char *device, const char= *id, Error **errp) return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - bdrv_graph_rdlock_main_loop(); if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_EJECT, errp)) { bdrv_graph_rdunlock_main_loop(); - goto out; + return; } bdrv_graph_rdunlock_main_loop(); =20 @@ -223,9 +219,6 @@ blockdev_remove_medium(const char *device, const char *= id, Error **errp) * value passed here (i.e. false). 
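/*
 * hmp_qemu_io() above now attaches its temporary BlockBackend without
 * first taking the node's AioContext lock: blk_new() plus
 * blk_insert_bs() only need the BQL.  Condensed sketch of that step
 * (the helper name attach_sketch is invented; error handling is
 * trimmed, and this only compiles inside the QEMU tree):
 */
static BlockBackend *attach_sketch(BlockDriverState *bs, Error **errp)
{
    BlockBackend *blk;

    GLOBAL_STATE_CODE();

    blk = blk_new(bdrv_get_aio_context(bs), 0, BLK_PERM_ALL);
    if (blk_insert_bs(blk, bs, errp) < 0) {
        blk_unref(blk);
        return NULL;
    }
    return blk;
}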
*/ blk_dev_change_media_cb(blk, false, &error_abort); } - -out: - aio_context_release(aio_context); } =20 void qmp_blockdev_remove_medium(const char *id, Error **errp) @@ -237,7 +230,6 @@ static void qmp_blockdev_insert_anon_medium(BlockBacken= d *blk, BlockDriverState *bs, Error **= errp) { Error *local_err =3D NULL; - AioContext *ctx; bool has_device; int ret; =20 @@ -259,11 +251,7 @@ static void qmp_blockdev_insert_anon_medium(BlockBacke= nd *blk, return; } =20 - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); ret =3D blk_insert_bs(blk, bs, errp); - aio_context_release(ctx); - if (ret < 0) { return; } @@ -374,9 +362,7 @@ void qmp_blockdev_change_medium(const char *device, qdict_put_str(options, "driver", format); } =20 - aio_context_acquire(qemu_get_aio_context()); medium_bs =3D bdrv_open(filename, NULL, options, bdrv_flags, errp); - aio_context_release(qemu_get_aio_context()); =20 if (!medium_bs) { goto fail; @@ -437,20 +423,16 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, = Error **errp) ThrottleConfig cfg; BlockDriverState *bs; BlockBackend *blk; - AioContext *aio_context; =20 blk =3D qmp_get_blk(arg->device, arg->id, errp); if (!blk) { return; } =20 - aio_context =3D blk_get_aio_context(blk); - aio_context_acquire(aio_context); - bs =3D blk_bs(blk); if (!bs) { error_setg(errp, "Device has no medium"); - goto out; + return; } =20 throttle_config_init(&cfg); @@ -505,7 +487,7 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, Er= ror **errp) } =20 if (!throttle_is_valid(&cfg, errp)) { - goto out; + return; } =20 if (throttle_enabled(&cfg)) { @@ -522,9 +504,6 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, Er= ror **errp) /* If all throttling settings are set to 0, disable I/O limits */ blk_io_limits_disable(blk); } - -out: - aio_context_release(aio_context); } =20 void qmp_block_latency_histogram_set( diff --git a/block/qapi.c b/block/qapi.c index 82a30b38fe..9e806fa230 100644 --- a/block/qapi.c +++ b/block/qapi.c @@ -234,13 +234,11 @@ bdrv_do_query_node_info(BlockDriverState *bs, BlockNo= deInfo *info, Error **errp) int ret; Error *err =3D NULL; =20 - aio_context_acquire(bdrv_get_aio_context(bs)); - size =3D bdrv_getlength(bs); if (size < 0) { error_setg_errno(errp, -size, "Can't get image size '%s'", bs->exact_filename); - goto out; + return; } =20 bdrv_refresh_filename(bs); @@ -265,7 +263,7 @@ bdrv_do_query_node_info(BlockDriverState *bs, BlockNode= Info *info, Error **errp) info->format_specific =3D bdrv_get_specific_info(bs, &err); if (err) { error_propagate(errp, err); - goto out; + return; } backing_filename =3D bs->backing_file; if (backing_filename[0] !=3D '\0') { @@ -300,11 +298,8 @@ bdrv_do_query_node_info(BlockDriverState *bs, BlockNod= eInfo *info, Error **errp) break; default: error_propagate(errp, err); - goto out; + return; } - -out: - aio_context_release(bdrv_get_aio_context(bs)); } =20 /** @@ -709,15 +704,10 @@ BlockStatsList *qmp_query_blockstats(bool has_query_n= odes, /* Just to be safe if query_nodes is not always initialized */ if (has_query_nodes && query_nodes) { for (bs =3D bdrv_next_node(NULL); bs; bs =3D bdrv_next_node(bs)) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - - aio_context_acquire(ctx); QAPI_LIST_APPEND(tail, bdrv_query_bds_stats(bs, false)); - aio_context_release(ctx); } } else { for (blk =3D blk_all_next(NULL); blk; blk =3D blk_all_next(blk)) { - AioContext *ctx =3D blk_get_aio_context(blk); BlockStats *s; char *qdev; =20 @@ -725,7 +715,6 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nod= es, 
continue; } =20 - aio_context_acquire(ctx); s =3D bdrv_query_bds_stats(blk_bs(blk), true); s->device =3D g_strdup(blk_name(blk)); =20 @@ -737,7 +726,6 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nod= es, } =20 bdrv_query_blk_stats(s->stats, blk); - aio_context_release(ctx); =20 QAPI_LIST_APPEND(tail, s); } diff --git a/block/raw-format.c b/block/raw-format.c index 1111dffd54..ac7e8495f6 100644 --- a/block/raw-format.c +++ b/block/raw-format.c @@ -470,7 +470,6 @@ static int raw_open(BlockDriverState *bs, QDict *option= s, int flags, Error **errp) { BDRVRawState *s =3D bs->opaque; - AioContext *ctx; bool has_size; uint64_t offset, size; BdrvChildRole file_role; @@ -522,11 +521,7 @@ static int raw_open(BlockDriverState *bs, QDict *optio= ns, int flags, bs->file->bs->filename); } =20 - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); ret =3D raw_apply_options(bs, s, offset, has_size, size, errp); - aio_context_release(ctx); - if (ret < 0) { return ret; } diff --git a/block/replication.c b/block/replication.c index 424b537ff7..ca6bd0a720 100644 --- a/block/replication.c +++ b/block/replication.c @@ -394,14 +394,7 @@ static void reopen_backing_file(BlockDriverState *bs, = bool writable, } =20 if (reopen_queue) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - if (ctx !=3D qemu_get_aio_context()) { - aio_context_release(ctx); - } bdrv_reopen_multiple(reopen_queue, errp); - if (ctx !=3D qemu_get_aio_context()) { - aio_context_acquire(ctx); - } } } =20 @@ -462,14 +455,11 @@ static void replication_start(ReplicationState *rs, R= eplicationMode mode, BlockDriverState *top_bs; BdrvChild *active_disk, *hidden_disk, *secondary_disk; int64_t active_length, hidden_length, disk_length; - AioContext *aio_context; Error *local_err =3D NULL; BackupPerf perf =3D { .use_copy_range =3D true, .max_workers =3D 1 }; =20 GLOBAL_STATE_CODE(); =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); s =3D bs->opaque; =20 if (s->stage =3D=3D BLOCK_REPLICATION_DONE || @@ -479,20 +469,17 @@ static void replication_start(ReplicationState *rs, R= eplicationMode mode, * Ignore the request because the secondary side of replication * doesn't have to do anything anymore. 
*/ - aio_context_release(aio_context); return; } =20 if (s->stage !=3D BLOCK_REPLICATION_NONE) { error_setg(errp, "Block replication is running or done"); - aio_context_release(aio_context); return; } =20 if (s->mode !=3D mode) { error_setg(errp, "The parameter mode's value is invalid, needs %d," " but got %d", s->mode, mode); - aio_context_release(aio_context); return; } =20 @@ -505,7 +492,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, if (!active_disk || !active_disk->bs || !active_disk->bs->backing)= { error_setg(errp, "Active disk doesn't have backing file"); bdrv_graph_rdunlock_main_loop(); - aio_context_release(aio_context); return; } =20 @@ -513,7 +499,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, if (!hidden_disk->bs || !hidden_disk->bs->backing) { error_setg(errp, "Hidden disk doesn't have backing file"); bdrv_graph_rdunlock_main_loop(); - aio_context_release(aio_context); return; } =20 @@ -521,7 +506,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, if (!secondary_disk->bs || !bdrv_has_blk(secondary_disk->bs)) { error_setg(errp, "The secondary disk doesn't have block backen= d"); bdrv_graph_rdunlock_main_loop(); - aio_context_release(aio_context); return; } bdrv_graph_rdunlock_main_loop(); @@ -534,7 +518,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, active_length !=3D hidden_length || hidden_length !=3D disk_le= ngth) { error_setg(errp, "Active disk, hidden disk, secondary disk's l= ength" " are not the same"); - aio_context_release(aio_context); return; } =20 @@ -546,7 +529,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, !hidden_disk->bs->drv->bdrv_make_empty) { error_setg(errp, "Active disk or hidden disk doesn't support make_em= pty"); - aio_context_release(aio_context); bdrv_graph_rdunlock_main_loop(); return; } @@ -556,7 +538,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, reopen_backing_file(bs, true, &local_err); if (local_err) { error_propagate(errp, local_err); - aio_context_release(aio_context); return; } =20 @@ -569,7 +550,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, if (local_err) { error_propagate(errp, local_err); bdrv_graph_wrunlock(); - aio_context_release(aio_context); return; } =20 @@ -580,7 +560,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, if (local_err) { error_propagate(errp, local_err); bdrv_graph_wrunlock(); - aio_context_release(aio_context); return; } =20 @@ -594,7 +573,6 @@ static void replication_start(ReplicationState *rs, Rep= licationMode mode, error_setg(errp, "No top_bs or it is invalid"); bdrv_graph_wrunlock(); reopen_backing_file(bs, false, NULL); - aio_context_release(aio_context); return; } bdrv_op_block_all(top_bs, s->blocker); @@ -612,13 +590,11 @@ static void replication_start(ReplicationState *rs, R= eplicationMode mode, if (local_err) { error_propagate(errp, local_err); backup_job_cleanup(bs); - aio_context_release(aio_context); return; } job_start(&s->backup_job->job); break; default: - aio_context_release(aio_context); abort(); } =20 @@ -629,18 +605,12 @@ static void replication_start(ReplicationState *rs, R= eplicationMode mode, } =20 s->error =3D 0; - aio_context_release(aio_context); } =20 static void replication_do_checkpoint(ReplicationState *rs, Error **errp) { BlockDriverState *bs =3D rs->opaque; - BDRVReplicationState *s; - AioContext *aio_context; - - 
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
-    s = bs->opaque;
+    BDRVReplicationState *s = bs->opaque;
 
     if (s->stage == BLOCK_REPLICATION_DONE ||
         s->stage == BLOCK_REPLICATION_FAILOVER) {
@@ -649,38 +619,28 @@ static void replication_do_checkpoint(ReplicationState *rs, Error **errp)
          * Ignore the request because the secondary side of replication
          * doesn't have to do anything anymore.
          */
-        aio_context_release(aio_context);
         return;
     }
 
     if (s->mode == REPLICATION_MODE_SECONDARY) {
         secondary_do_checkpoint(bs, errp);
     }
-    aio_context_release(aio_context);
 }
 
 static void replication_get_error(ReplicationState *rs, Error **errp)
 {
     BlockDriverState *bs = rs->opaque;
-    BDRVReplicationState *s;
-    AioContext *aio_context;
-
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
-    s = bs->opaque;
+    BDRVReplicationState *s = bs->opaque;
 
     if (s->stage == BLOCK_REPLICATION_NONE) {
         error_setg(errp, "Block replication is not running");
-        aio_context_release(aio_context);
         return;
     }
 
     if (s->error) {
         error_setg(errp, "I/O error occurred");
-        aio_context_release(aio_context);
         return;
     }
-    aio_context_release(aio_context);
 }
 
 static void replication_done(void *opaque, int ret)
@@ -708,12 +668,7 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
 {
     BlockDriverState *bs = rs->opaque;
-    BDRVReplicationState *s;
-    AioContext *aio_context;
-
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
-    s = bs->opaque;
+    BDRVReplicationState *s = bs->opaque;
 
     if (s->stage == BLOCK_REPLICATION_DONE ||
         s->stage == BLOCK_REPLICATION_FAILOVER) {
@@ -722,13 +677,11 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
          * Ignore the request because the secondary side of replication
          * doesn't have to do anything anymore.
          */
-        aio_context_release(aio_context);
         return;
     }
 
     if (s->stage != BLOCK_REPLICATION_RUNNING) {
         error_setg(errp, "Block replication is not running");
-        aio_context_release(aio_context);
         return;
     }
 
@@ -744,15 +697,12 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
      * disk, secondary disk in backup_job_completed().
      */
     if (s->backup_job) {
-        aio_context_release(aio_context);
         job_cancel_sync(&s->backup_job->job, true);
-        aio_context_acquire(aio_context);
     }
 
     if (!failover) {
         secondary_do_checkpoint(bs, errp);
         s->stage = BLOCK_REPLICATION_DONE;
-        aio_context_release(aio_context);
         return;
     }
 
@@ -765,10 +715,8 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
         bdrv_graph_rdunlock_main_loop();
         break;
     default:
-        aio_context_release(aio_context);
         abort();
     }
-    aio_context_release(aio_context);
 }
 
 static const char *const replication_strong_runtime_opts[] = {
diff --git a/block/snapshot.c b/block/snapshot.c
index e486d3e205..a28f2b039f 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -525,9 +525,7 @@ static bool GRAPH_RDLOCK bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
     return bdrv_has_blk(bs) || QLIST_EMPTY(&bs->parents);
 }
 
-/* Group operations. All block drivers are involved.
- * These functions will properly handle dataplane (take aio_context_acquire
- * when appropriate for appropriate block drivers) */
+/* Group operations. All block drivers are involved.
*/ =20 bool bdrv_all_can_snapshot(bool has_devices, strList *devices, Error **errp) @@ -545,14 +543,11 @@ bool bdrv_all_can_snapshot(bool has_devices, strList = *devices, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); bool ok =3D true; =20 - aio_context_acquire(ctx); if (devices || bdrv_all_snapshots_includes_bs(bs)) { ok =3D bdrv_can_snapshot(bs); } - aio_context_release(ctx); if (!ok) { error_setg(errp, "Device '%s' is writable but does not support= " "snapshots", bdrv_get_device_or_node_name(bs)); @@ -582,18 +577,15 @@ int bdrv_all_delete_snapshot(const char *name, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); QEMUSnapshotInfo sn1, *snapshot =3D &sn1; int ret =3D 0; =20 - aio_context_acquire(ctx); if ((devices || bdrv_all_snapshots_includes_bs(bs)) && bdrv_snapshot_find(bs, snapshot, name) >=3D 0) { ret =3D bdrv_snapshot_delete(bs, snapshot->id_str, snapshot->name, errp); } - aio_context_release(ctx); if (ret < 0) { error_prepend(errp, "Could not delete snapshot '%s' on '%s': ", name, bdrv_get_device_or_node_name(bs)); @@ -628,17 +620,14 @@ int bdrv_all_goto_snapshot(const char *name, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); bool all_snapshots_includes_bs; =20 - aio_context_acquire(ctx); bdrv_graph_rdlock_main_loop(); all_snapshots_includes_bs =3D bdrv_all_snapshots_includes_bs(bs); bdrv_graph_rdunlock_main_loop(); =20 ret =3D (devices || all_snapshots_includes_bs) ? bdrv_snapshot_goto(bs, name, errp) : 0; - aio_context_release(ctx); if (ret < 0) { bdrv_graph_rdlock_main_loop(); error_prepend(errp, "Could not load snapshot '%s' on '%s': ", @@ -670,15 +659,12 @@ int bdrv_all_has_snapshot(const char *name, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); QEMUSnapshotInfo sn; int ret =3D 0; =20 - aio_context_acquire(ctx); if (devices || bdrv_all_snapshots_includes_bs(bs)) { ret =3D bdrv_snapshot_find(bs, &sn, name); } - aio_context_release(ctx); if (ret < 0) { if (ret =3D=3D -ENOENT) { return 0; @@ -715,10 +701,8 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); int ret =3D 0; =20 - aio_context_acquire(ctx); if (bs =3D=3D vm_state_bs) { sn->vm_state_size =3D vm_state_size; ret =3D bdrv_snapshot_create(bs, sn); @@ -726,7 +710,6 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, sn->vm_state_size =3D 0; ret =3D bdrv_snapshot_create(bs, sn); } - aio_context_release(ctx); if (ret < 0) { error_setg(errp, "Could not create snapshot '%s' on '%s'", sn->name, bdrv_get_device_or_node_name(bs)); @@ -757,13 +740,10 @@ BlockDriverState *bdrv_all_find_vmstate_bs(const char= *vmstate_bs, iterbdrvs =3D bdrvs; while (iterbdrvs) { BlockDriverState *bs =3D iterbdrvs->data; - AioContext *ctx =3D bdrv_get_aio_context(bs); bool found =3D false; =20 - aio_context_acquire(ctx); found =3D (devices || bdrv_all_snapshots_includes_bs(bs)) && bdrv_can_snapshot(bs); - aio_context_release(ctx); =20 if (vmstate_bs) { if (g_str_equal(vmstate_bs, diff --git a/block/write-threshold.c b/block/write-threshold.c index 76d8885677..56fe88de81 100644 --- a/block/write-threshold.c +++ b/block/write-threshold.c @@ -33,7 +33,6 @@ void 
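/*
 * The block/snapshot.c hunks above strip the same boilerplate from every
 * group operation: the whole-tree walk already runs under the BQL, so
 * each node is handled directly.  Condensed sketch of the loop shape
 * (the function name and the construction of the bdrvs list are omitted
 * or invented here; this only compiles inside the QEMU tree):
 */
static bool all_can_snapshot_sketch(GList *bdrvs, Error **errp)
{
    GList *iterbdrvs = bdrvs;

    GLOBAL_STATE_CODE();

    while (iterbdrvs) {
        BlockDriverState *bs = iterbdrvs->data;

        /* no aio_context_acquire()/release() around the per-node check */
        if (!bdrv_can_snapshot(bs)) {
            error_setg(errp, "Device '%s' does not support snapshots",
                       bdrv_get_device_or_node_name(bs));
            return false;
        }
        iterbdrvs = iterbdrvs->next;
    }
    return true;
}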
qmp_block_set_write_threshold(const char *node_name, Error **errp) { BlockDriverState *bs; - AioContext *aio_context; =20 bs =3D bdrv_find_node(node_name); if (!bs) { @@ -41,12 +40,7 @@ void qmp_block_set_write_threshold(const char *node_name, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - bdrv_write_threshold_set(bs, threshold_bytes); - - aio_context_release(aio_context); } =20 void bdrv_write_threshold_check_write(BlockDriverState *bs, int64_t offset, diff --git a/blockdev.c b/blockdev.c index db9cc96510..8a1b28f830 100644 --- a/blockdev.c +++ b/blockdev.c @@ -662,7 +662,6 @@ err_no_opts: /* Takes the ownership of bs_opts */ BlockDriverState *bds_tree_init(QDict *bs_opts, Error **errp) { - BlockDriverState *bs; int bdrv_flags =3D 0; =20 GLOBAL_STATE_CODE(); @@ -677,11 +676,7 @@ BlockDriverState *bds_tree_init(QDict *bs_opts, Error = **errp) bdrv_flags |=3D BDRV_O_INACTIVE; } =20 - aio_context_acquire(qemu_get_aio_context()); - bs =3D bdrv_open(NULL, NULL, bs_opts, bdrv_flags, errp); - aio_context_release(qemu_get_aio_context()); - - return bs; + return bdrv_open(NULL, NULL, bs_opts, bdrv_flags, errp); } =20 void blockdev_close_all_bdrv_states(void) @@ -690,11 +685,7 @@ void blockdev_close_all_bdrv_states(void) =20 GLOBAL_STATE_CODE(); QTAILQ_FOREACH_SAFE(bs, &monitor_bdrv_states, monitor_list, next_bs) { - AioContext *ctx =3D bdrv_get_aio_context(bs); - - aio_context_acquire(ctx); bdrv_unref(bs); - aio_context_release(ctx); } } =20 @@ -1048,7 +1039,6 @@ fail: static BlockDriverState *qmp_get_root_bs(const char *name, Error **errp) { BlockDriverState *bs; - AioContext *aio_context; =20 GRAPH_RDLOCK_GUARD_MAINLOOP(); =20 @@ -1062,16 +1052,11 @@ static BlockDriverState *qmp_get_root_bs(const char= *name, Error **errp) return NULL; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (!bdrv_is_inserted(bs)) { error_setg(errp, "Device has no medium"); bs =3D NULL; } =20 - aio_context_release(aio_context); - return bs; } =20 @@ -1141,7 +1126,6 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_s= ync(const char *device, Error **errp) { BlockDriverState *bs; - AioContext *aio_context; QEMUSnapshotInfo sn; Error *local_err =3D NULL; SnapshotInfo *info =3D NULL; @@ -1154,39 +1138,35 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal= _sync(const char *device, if (!bs) { return NULL; } - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 if (!id && !name) { error_setg(errp, "Name or id must be provided"); - goto out_aio_context; + return NULL; } =20 if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_INTERNAL_SNAPSHOT_DELETE, err= p)) { - goto out_aio_context; + return NULL; } =20 ret =3D bdrv_snapshot_find_by_id_and_name(bs, id, name, &sn, &local_er= r); if (local_err) { error_propagate(errp, local_err); - goto out_aio_context; + return NULL; } if (!ret) { error_setg(errp, "Snapshot with id '%s' and name '%s' does not exist on " "device '%s'", STR_OR_NULL(id), STR_OR_NULL(name), device); - goto out_aio_context; + return NULL; } =20 bdrv_snapshot_delete(bs, id, name, &local_err); if (local_err) { error_propagate(errp, local_err); - goto out_aio_context; + return NULL; } =20 - aio_context_release(aio_context); - info =3D g_new0(SnapshotInfo, 1); info->id =3D g_strdup(sn.id_str); info->name =3D g_strdup(sn.name); @@ -1201,10 +1181,6 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_= sync(const char *device, } =20 return info; - -out_aio_context: - aio_context_release(aio_context); 
- return NULL; } =20 /* internal snapshot private data */ @@ -1232,7 +1208,6 @@ static void internal_snapshot_action(BlockdevSnapshot= Internal *internal, bool ret; int64_t rt; InternalSnapshotState *state =3D g_new0(InternalSnapshotState, 1); - AioContext *aio_context; int ret1; =20 GLOBAL_STATE_CODE(); @@ -1248,33 +1223,30 @@ static void internal_snapshot_action(BlockdevSnapsh= otInternal *internal, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - state->bs =3D bs; =20 /* Paired with .clean() */ bdrv_drained_begin(bs); =20 if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_INTERNAL_SNAPSHOT, errp)) { - goto out; + return; } =20 if (bdrv_is_read_only(bs)) { error_setg(errp, "Device '%s' is read only", device); - goto out; + return; } =20 if (!bdrv_can_snapshot(bs)) { error_setg(errp, "Block format '%s' used by device '%s' " "does not support internal snapshots", bs->drv->format_name, device); - goto out; + return; } =20 if (!strlen(name)) { error_setg(errp, "Name is empty"); - goto out; + return; } =20 /* check whether a snapshot with name exist */ @@ -1282,12 +1254,12 @@ static void internal_snapshot_action(BlockdevSnapsh= otInternal *internal, &local_err); if (local_err) { error_propagate(errp, local_err); - goto out; + return; } else if (ret) { error_setg(errp, "Snapshot with name '%s' already exists on device '%s'", name, device); - goto out; + return; } =20 /* 3. take the snapshot */ @@ -1308,14 +1280,11 @@ static void internal_snapshot_action(BlockdevSnapsh= otInternal *internal, error_setg_errno(errp, -ret1, "Failed to create snapshot '%s' on device '%s'", name, device); - goto out; + return; } =20 /* 4. succeed, mark a snapshot is created */ state->created =3D true; - -out: - aio_context_release(aio_context); } =20 static void internal_snapshot_abort(void *opaque) @@ -1323,7 +1292,6 @@ static void internal_snapshot_abort(void *opaque) InternalSnapshotState *state =3D opaque; BlockDriverState *bs =3D state->bs; QEMUSnapshotInfo *sn =3D &state->sn; - AioContext *aio_context; Error *local_error =3D NULL; =20 GLOBAL_STATE_CODE(); @@ -1333,9 +1301,6 @@ static void internal_snapshot_abort(void *opaque) return; } =20 - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - if (bdrv_snapshot_delete(bs, sn->id_str, sn->name, &local_error) < 0) { error_reportf_err(local_error, "Failed to delete snapshot with id '%s' and " @@ -1343,25 +1308,17 @@ static void internal_snapshot_abort(void *opaque) sn->id_str, sn->name, bdrv_get_device_name(bs)); } - - aio_context_release(aio_context); } =20 static void internal_snapshot_clean(void *opaque) { g_autofree InternalSnapshotState *state =3D opaque; - AioContext *aio_context; =20 if (!state->bs) { return; } =20 - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - bdrv_drained_end(state->bs); - - aio_context_release(aio_context); } =20 /* external snapshot private data */ @@ -1395,7 +1352,6 @@ static void external_snapshot_action(TransactionActio= n *action, /* File name of the new image (for 'blockdev-snapshot-sync') */ const char *new_image_file; ExternalSnapshotState *state =3D g_new0(ExternalSnapshotState, 1); - AioContext *aio_context; uint64_t perm, shared; =20 /* TODO We'll eventually have to take a writer lock in this function */ @@ -1435,26 +1391,23 @@ static void external_snapshot_action(TransactionAct= ion *action, return; } =20 - aio_context =3D bdrv_get_aio_context(state->old_bs); - aio_context_acquire(aio_context); - /* Paired with 
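/*
 * The transaction actions above keep their bdrv_drained_begin() in the
 * prepare step and the matching bdrv_drained_end() in .clean(), but both
 * halves now run with only the BQL.  Minimal sketch of that pairing
 * (SketchState and the function names are invented; the state struct is
 * reduced to the one field the pairing needs):
 */
typedef struct SketchState {
    BlockDriverState *bs;
} SketchState;

static void sketch_action_prepare(SketchState *state, BlockDriverState *bs)
{
    state->bs = bs;

    /* Paired with .clean() */
    bdrv_drained_begin(bs);

    /* ... validate and perform the action, returning early on error ... */
}

static void sketch_action_clean(void *opaque)
{
    g_autofree SketchState *state = opaque;

    if (state->bs) {
        bdrv_drained_end(state->bs);
    }
}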
.clean() */ bdrv_drained_begin(state->old_bs); =20 if (!bdrv_is_inserted(state->old_bs)) { error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device); - goto out; + return; } =20 if (bdrv_op_is_blocked(state->old_bs, BLOCK_OP_TYPE_EXTERNAL_SNAPSHOT, errp)) { - goto out; + return; } =20 if (!bdrv_is_read_only(state->old_bs)) { if (bdrv_flush(state->old_bs)) { error_setg(errp, QERR_IO_ERROR); - goto out; + return; } } =20 @@ -1466,13 +1419,13 @@ static void external_snapshot_action(TransactionAct= ion *action, =20 if (node_name && !snapshot_node_name) { error_setg(errp, "New overlay node-name missing"); - goto out; + return; } =20 if (snapshot_node_name && bdrv_lookup_bs(snapshot_node_name, snapshot_node_name, NULL)) { error_setg(errp, "New overlay node-name already in use"); - goto out; + return; } =20 flags =3D state->old_bs->open_flags; @@ -1485,20 +1438,18 @@ static void external_snapshot_action(TransactionAct= ion *action, int64_t size =3D bdrv_getlength(state->old_bs); if (size < 0) { error_setg_errno(errp, -size, "bdrv_getlength failed"); - goto out; + return; } bdrv_refresh_filename(state->old_bs); =20 - aio_context_release(aio_context); bdrv_img_create(new_image_file, format, state->old_bs->filename, state->old_bs->drv->format_name, NULL, size, flags, false, &local_err); - aio_context_acquire(aio_context); =20 if (local_err) { error_propagate(errp, local_err); - goto out; + return; } } =20 @@ -1508,20 +1459,15 @@ static void external_snapshot_action(TransactionAct= ion *action, } qdict_put_str(options, "driver", format); } - aio_context_release(aio_context); =20 - aio_context_acquire(qemu_get_aio_context()); state->new_bs =3D bdrv_open(new_image_file, snapshot_ref, options, fla= gs, errp); - aio_context_release(qemu_get_aio_context()); =20 /* We will manually add the backing_hd field to the bs later */ if (!state->new_bs) { return; } =20 - aio_context_acquire(aio_context); - /* * Allow attaching a backing file to an overlay that's already in use = only * if the parents don't assume that they are already seeing a valid im= age. 
@@ -1530,41 +1476,34 @@ static void external_snapshot_action(TransactionAct= ion *action, bdrv_get_cumulative_perm(state->new_bs, &perm, &shared); if (perm & BLK_PERM_CONSISTENT_READ) { error_setg(errp, "The overlay is already in use"); - goto out; + return; } =20 if (state->new_bs->drv->is_filter) { error_setg(errp, "Filters cannot be used as overlays"); - goto out; + return; } =20 if (bdrv_cow_child(state->new_bs)) { error_setg(errp, "The overlay already has a backing image"); - goto out; + return; } =20 if (!state->new_bs->drv->supports_backing) { error_setg(errp, "The overlay does not support backing images"); - goto out; + return; } =20 ret =3D bdrv_append(state->new_bs, state->old_bs, errp); if (ret < 0) { - goto out; + return; } state->overlay_appended =3D true; - -out: - aio_context_release(aio_context); } =20 static void external_snapshot_commit(void *opaque) { ExternalSnapshotState *state =3D opaque; - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->old_bs); - aio_context_acquire(aio_context); =20 /* We don't need (or want) to use the transactional * bdrv_reopen_multiple() across all the entries at once, because we @@ -1572,8 +1511,6 @@ static void external_snapshot_commit(void *opaque) if (!qatomic_read(&state->old_bs->copy_on_read)) { bdrv_reopen_set_read_only(state->old_bs, true, NULL); } - - aio_context_release(aio_context); } =20 static void external_snapshot_abort(void *opaque) @@ -1586,7 +1523,6 @@ static void external_snapshot_abort(void *opaque) int ret; =20 aio_context =3D bdrv_get_aio_context(state->old_bs); - aio_context_acquire(aio_context); =20 bdrv_ref(state->old_bs); /* we can't let bdrv_set_backind_hd= () close state->old_bs; we need it = */ @@ -1599,15 +1535,9 @@ static void external_snapshot_abort(void *opaque) */ tmp_context =3D bdrv_get_aio_context(state->old_bs); if (aio_context !=3D tmp_context) { - aio_context_release(aio_context); - aio_context_acquire(tmp_context); - ret =3D bdrv_try_change_aio_context(state->old_bs, aio_context, NULL, NULL); assert(ret =3D=3D 0); - - aio_context_release(tmp_context); - aio_context_acquire(aio_context); } =20 bdrv_drained_begin(state->new_bs); @@ -1617,8 +1547,6 @@ static void external_snapshot_abort(void *opaque) bdrv_drained_end(state->new_bs); =20 bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_b= s */ - - aio_context_release(aio_context); } } } @@ -1626,19 +1554,13 @@ static void external_snapshot_abort(void *opaque) static void external_snapshot_clean(void *opaque) { g_autofree ExternalSnapshotState *state =3D opaque; - AioContext *aio_context; =20 if (!state->old_bs) { return; } =20 - aio_context =3D bdrv_get_aio_context(state->old_bs); - aio_context_acquire(aio_context); - bdrv_drained_end(state->old_bs); bdrv_unref(state->new_bs); - - aio_context_release(aio_context); } =20 typedef struct DriveBackupState { @@ -1670,7 +1592,6 @@ static void drive_backup_action(DriveBackup *backup, BlockDriverState *target_bs; BlockDriverState *source =3D NULL; AioContext *aio_context; - AioContext *old_context; const char *format; QDict *options; Error *local_err =3D NULL; @@ -1698,7 +1619,6 @@ static void drive_backup_action(DriveBackup *backup, } =20 aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 state->bs =3D bs; /* Paired with .clean() */ @@ -1713,7 +1633,7 @@ static void drive_backup_action(DriveBackup *backup, bdrv_graph_rdlock_main_loop(); if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { bdrv_graph_rdunlock_main_loop(); - goto out; + 
return; } =20 flags =3D bs->open_flags | BDRV_O_RDWR; @@ -1744,7 +1664,7 @@ static void drive_backup_action(DriveBackup *backup, size =3D bdrv_getlength(bs); if (size < 0) { error_setg_errno(errp, -size, "bdrv_getlength failed"); - goto out; + return; } =20 if (backup->mode !=3D NEW_IMAGE_MODE_EXISTING) { @@ -1770,7 +1690,7 @@ static void drive_backup_action(DriveBackup *backup, =20 if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 options =3D qdict_new(); @@ -1779,30 +1699,18 @@ static void drive_backup_action(DriveBackup *backup, if (format) { qdict_put_str(options, "driver", format); } - aio_context_release(aio_context); =20 - aio_context_acquire(qemu_get_aio_context()); target_bs =3D bdrv_open(backup->target, NULL, options, flags, errp); - aio_context_release(qemu_get_aio_context()); - if (!target_bs) { return; } =20 - /* Honor bdrv_try_change_aio_context() context acquisition requirement= s. */ - old_context =3D bdrv_get_aio_context(target_bs); - aio_context_acquire(old_context); - ret =3D bdrv_try_change_aio_context(target_bs, aio_context, NULL, errp= ); if (ret < 0) { bdrv_unref(target_bs); - aio_context_release(old_context); return; } =20 - aio_context_release(old_context); - aio_context_acquire(aio_context); - if (set_backing_hd) { if (bdrv_set_backing_hd(target_bs, source, errp) < 0) { goto unref; @@ -1815,22 +1723,14 @@ static void drive_backup_action(DriveBackup *backup, =20 unref: bdrv_unref(target_bs); -out: - aio_context_release(aio_context); } =20 static void drive_backup_commit(void *opaque) { DriveBackupState *state =3D opaque; - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); =20 assert(state->job); job_start(&state->job->job); - - aio_context_release(aio_context); } =20 static void drive_backup_abort(void *opaque) @@ -1845,18 +1745,12 @@ static void drive_backup_abort(void *opaque) static void drive_backup_clean(void *opaque) { g_autofree DriveBackupState *state =3D opaque; - AioContext *aio_context; =20 if (!state->bs) { return; } =20 - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - bdrv_drained_end(state->bs); - - aio_context_release(aio_context); } =20 typedef struct BlockdevBackupState { @@ -1881,7 +1775,6 @@ static void blockdev_backup_action(BlockdevBackup *ba= ckup, BlockDriverState *bs; BlockDriverState *target_bs; AioContext *aio_context; - AioContext *old_context; int ret; =20 tran_add(tran, &blockdev_backup_drv, state); @@ -1898,17 +1791,12 @@ static void blockdev_backup_action(BlockdevBackup *= backup, =20 /* Honor bdrv_try_change_aio_context() context acquisition requirement= s. 
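/*
 * drive_backup_action() above also drops the "honor
 * bdrv_try_change_aio_context() context acquisition requirements" dance:
 * the function can now be called with just the BQL, no old-context
 * acquire/release around it.  Sketch of moving a freshly opened target
 * into the source's AioContext (retarget_sketch is an invented name):
 */
static int retarget_sketch(BlockDriverState *target_bs,
                           AioContext *aio_context, Error **errp)
{
    int ret;

    GLOBAL_STATE_CODE();

    ret = bdrv_try_change_aio_context(target_bs, aio_context, NULL, errp);
    if (ret < 0) {
        bdrv_unref(target_bs);
    }
    return ret;
}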
*/ aio_context =3D bdrv_get_aio_context(bs); - old_context =3D bdrv_get_aio_context(target_bs); - aio_context_acquire(old_context); =20 ret =3D bdrv_try_change_aio_context(target_bs, aio_context, NULL, errp= ); if (ret < 0) { - aio_context_release(old_context); return; } =20 - aio_context_release(old_context); - aio_context_acquire(aio_context); state->bs =3D bs; =20 /* Paired with .clean() */ @@ -1917,22 +1805,14 @@ static void blockdev_backup_action(BlockdevBackup *= backup, state->job =3D do_backup_common(qapi_BlockdevBackup_base(backup), bs, target_bs, aio_context, block_job_txn, errp); - - aio_context_release(aio_context); } =20 static void blockdev_backup_commit(void *opaque) { BlockdevBackupState *state =3D opaque; - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); =20 assert(state->job); job_start(&state->job->job); - - aio_context_release(aio_context); } =20 static void blockdev_backup_abort(void *opaque) @@ -1947,18 +1827,12 @@ static void blockdev_backup_abort(void *opaque) static void blockdev_backup_clean(void *opaque) { g_autofree BlockdevBackupState *state =3D opaque; - AioContext *aio_context; =20 if (!state->bs) { return; } =20 - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - bdrv_drained_end(state->bs); - - aio_context_release(aio_context); } =20 typedef struct BlockDirtyBitmapState { @@ -2453,7 +2327,6 @@ void qmp_block_stream(const char *job_id, const char = *device, } =20 aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 bdrv_graph_rdlock_main_loop(); if (base) { @@ -2520,7 +2393,7 @@ void qmp_block_stream(const char *job_id, const char = *device, if (!base_bs && backing_file) { error_setg(errp, "backing file specified, but streaming the " "entire chain"); - goto out; + return; } =20 if (has_auto_finalize && !auto_finalize) { @@ -2535,18 +2408,14 @@ void qmp_block_stream(const char *job_id, const cha= r *device, filter_node_name, &local_err); if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 trace_qmp_block_stream(bs); - -out: - aio_context_release(aio_context); return; =20 out_rdlock: bdrv_graph_rdunlock_main_loop(); - aio_context_release(aio_context); } =20 void qmp_block_commit(const char *job_id, const char *device, @@ -2605,10 +2474,9 @@ void qmp_block_commit(const char *job_id, const char= *device, } =20 aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_COMMIT_SOURCE, errp)) { - goto out; + return; } =20 /* default top_bs is the active layer */ @@ -2616,16 +2484,16 @@ void qmp_block_commit(const char *job_id, const cha= r *device, =20 if (top_node && top) { error_setg(errp, "'top-node' and 'top' are mutually exclusive"); - goto out; + return; } else if (top_node) { top_bs =3D bdrv_lookup_bs(NULL, top_node, errp); if (top_bs =3D=3D NULL) { - goto out; + return; } if (!bdrv_chain_contains(bs, top_bs)) { error_setg(errp, "'%s' is not in this backing file chain", top_node); - goto out; + return; } } else if (top) { /* This strcmp() is just a shortcut, there is no need to @@ -2639,35 +2507,35 @@ void qmp_block_commit(const char *job_id, const cha= r *device, =20 if (top_bs =3D=3D NULL) { error_setg(errp, "Top image file %s not found", top ? 
top : "NULL"= ); - goto out; + return; } =20 assert(bdrv_get_aio_context(top_bs) =3D=3D aio_context); =20 if (base_node && base) { error_setg(errp, "'base-node' and 'base' are mutually exclusive"); - goto out; + return; } else if (base_node) { base_bs =3D bdrv_lookup_bs(NULL, base_node, errp); if (base_bs =3D=3D NULL) { - goto out; + return; } if (!bdrv_chain_contains(top_bs, base_bs)) { error_setg(errp, "'%s' is not in this backing file chain", base_node); - goto out; + return; } } else if (base) { base_bs =3D bdrv_find_backing_image(top_bs, base); if (base_bs =3D=3D NULL) { error_setg(errp, "Can't find '%s' in the backing chain", base); - goto out; + return; } } else { base_bs =3D bdrv_find_base(top_bs); if (base_bs =3D=3D NULL) { error_setg(errp, "There is no backimg image"); - goto out; + return; } } =20 @@ -2677,14 +2545,14 @@ void qmp_block_commit(const char *job_id, const cha= r *device, iter =3D bdrv_filter_or_cow_bs(iter)) { if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_COMMIT_TARGET, errp)) { - goto out; + return; } } =20 /* Do not allow attempts to commit an image into itself */ if (top_bs =3D=3D base_bs) { error_setg(errp, "cannot commit an image into itself"); - goto out; + return; } =20 /* @@ -2707,7 +2575,7 @@ void qmp_block_commit(const char *job_id, const char = *device, error_setg(errp, "'backing-file' specified, but 'top' has = a " "writer on it"); } - goto out; + return; } if (!job_id) { /* @@ -2723,7 +2591,7 @@ void qmp_block_commit(const char *job_id, const char = *device, } else { BlockDriverState *overlay_bs =3D bdrv_find_overlay(bs, top_bs); if (bdrv_op_is_blocked(overlay_bs, BLOCK_OP_TYPE_COMMIT_TARGET, er= rp)) { - goto out; + return; } commit_start(job_id, bs, base_bs, top_bs, job_flags, speed, on_error, backing_file, @@ -2731,11 +2599,8 @@ void qmp_block_commit(const char *job_id, const char= *device, } if (local_err !=3D NULL) { error_propagate(errp, local_err); - goto out; + return; } - -out: - aio_context_release(aio_context); } =20 /* Common QMP interface for drive-backup and blockdev-backup */ @@ -2984,8 +2849,6 @@ static void blockdev_mirror_common(const char *job_id= , BlockDriverState *bs, =20 if (replaces) { BlockDriverState *to_replace_bs; - AioContext *aio_context; - AioContext *replace_aio_context; int64_t bs_size, replace_size; =20 bs_size =3D bdrv_getlength(bs); @@ -2999,19 +2862,7 @@ static void blockdev_mirror_common(const char *job_i= d, BlockDriverState *bs, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - replace_aio_context =3D bdrv_get_aio_context(to_replace_bs); - /* - * bdrv_getlength() is a co-wrapper and uses AIO_WAIT_WHILE. Be su= re not - * to acquire the same AioContext twice. 
- */ - if (replace_aio_context !=3D aio_context) { - aio_context_acquire(replace_aio_context); - } replace_size =3D bdrv_getlength(to_replace_bs); - if (replace_aio_context !=3D aio_context) { - aio_context_release(replace_aio_context); - } =20 if (replace_size < 0) { error_setg_errno(errp, -replace_size, @@ -3040,7 +2891,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) BlockDriverState *bs; BlockDriverState *target_backing_bs, *target_bs; AioContext *aio_context; - AioContext *old_context; BlockMirrorBackingMode backing_mode; Error *local_err =3D NULL; QDict *options =3D NULL; @@ -3063,7 +2913,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) } =20 aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 if (!arg->has_mode) { arg->mode =3D NEW_IMAGE_MODE_ABSOLUTE_PATHS; @@ -3087,14 +2936,14 @@ void qmp_drive_mirror(DriveMirror *arg, Error **err= p) size =3D bdrv_getlength(bs); if (size < 0) { error_setg_errno(errp, -size, "bdrv_getlength failed"); - goto out; + return; } =20 if (arg->replaces) { if (!arg->node_name) { error_setg(errp, "a node-name must be provided when replacing = a" " named node of the graph"); - goto out; + return; } } =20 @@ -3142,7 +2991,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) =20 if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 options =3D qdict_new(); @@ -3152,15 +3001,11 @@ void qmp_drive_mirror(DriveMirror *arg, Error **err= p) if (format) { qdict_put_str(options, "driver", format); } - aio_context_release(aio_context); =20 /* Mirroring takes care of copy-on-write using the source's backing * file. */ - aio_context_acquire(qemu_get_aio_context()); target_bs =3D bdrv_open(arg->target, NULL, options, flags, errp); - aio_context_release(qemu_get_aio_context()); - if (!target_bs) { return; } @@ -3172,20 +3017,12 @@ void qmp_drive_mirror(DriveMirror *arg, Error **err= p) bdrv_graph_rdunlock_main_loop(); =20 =20 - /* Honor bdrv_try_change_aio_context() context acquisition requirement= s. */ - old_context =3D bdrv_get_aio_context(target_bs); - aio_context_acquire(old_context); - ret =3D bdrv_try_change_aio_context(target_bs, aio_context, NULL, errp= ); if (ret < 0) { bdrv_unref(target_bs); - aio_context_release(old_context); return; } =20 - aio_context_release(old_context); - aio_context_acquire(aio_context); - blockdev_mirror_common(arg->job_id, bs, target_bs, arg->replaces, arg->sync, backing_mode, zero_target, @@ -3201,8 +3038,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) arg->has_auto_dismiss, arg->auto_dismiss, errp); bdrv_unref(target_bs); -out: - aio_context_release(aio_context); } =20 void qmp_blockdev_mirror(const char *job_id, @@ -3225,7 +3060,6 @@ void qmp_blockdev_mirror(const char *job_id, BlockDriverState *bs; BlockDriverState *target_bs; AioContext *aio_context; - AioContext *old_context; BlockMirrorBackingMode backing_mode =3D MIRROR_LEAVE_BACKING_CHAIN; bool zero_target; int ret; @@ -3242,18 +3076,11 @@ void qmp_blockdev_mirror(const char *job_id, =20 zero_target =3D (sync =3D=3D MIRROR_SYNC_MODE_FULL); =20 - /* Honor bdrv_try_change_aio_context() context acquisition requirement= s. 
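/*
 * blockdev_mirror_common() above deletes the comment-and-code pair that
 * guarded against acquiring the same AioContext twice around
 * bdrv_getlength() (a co_wrapper that polls with AIO_WAIT_WHILE).  With
 * no AioContext lock in the picture the two calls are direct; sketch
 * (invented helper name, callers still report errors via errp):
 */
static bool sizes_match_sketch(BlockDriverState *bs,
                               BlockDriverState *to_replace_bs)
{
    int64_t bs_size = bdrv_getlength(bs);
    int64_t replace_size = bdrv_getlength(to_replace_bs);

    if (bs_size < 0 || replace_size < 0) {
        return false;   /* bdrv_getlength() failed on one of the nodes */
    }
    return bs_size == replace_size;
}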
*/ - old_context =3D bdrv_get_aio_context(target_bs); aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(old_context); =20 ret =3D bdrv_try_change_aio_context(target_bs, aio_context, NULL, errp= ); - - aio_context_release(old_context); - aio_context_acquire(aio_context); - if (ret < 0) { - goto out; + return; } =20 blockdev_mirror_common(job_id, bs, target_bs, @@ -3268,8 +3095,6 @@ void qmp_blockdev_mirror(const char *job_id, has_auto_finalize, auto_finalize, has_auto_dismiss, auto_dismiss, errp); -out: - aio_context_release(aio_context); } =20 /* @@ -3432,7 +3257,6 @@ void qmp_change_backing_file(const char *device, Error **errp) { BlockDriverState *bs =3D NULL; - AioContext *aio_context; BlockDriverState *image_bs =3D NULL; Error *local_err =3D NULL; bool ro; @@ -3443,9 +3267,6 @@ void qmp_change_backing_file(const char *device, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - bdrv_graph_rdlock_main_loop(); =20 image_bs =3D bdrv_lookup_bs(NULL, image_node_name, &local_err); @@ -3484,7 +3305,7 @@ void qmp_change_backing_file(const char *device, =20 if (ro) { if (bdrv_reopen_set_read_only(image_bs, false, errp) !=3D 0) { - goto out; + return; } } =20 @@ -3502,14 +3323,10 @@ void qmp_change_backing_file(const char *device, if (ro) { bdrv_reopen_set_read_only(image_bs, true, errp); } - -out: - aio_context_release(aio_context); return; =20 out_rdlock: bdrv_graph_rdunlock_main_loop(); - aio_context_release(aio_context); } =20 void qmp_blockdev_add(BlockdevOptions *options, Error **errp) @@ -3549,7 +3366,6 @@ void qmp_blockdev_reopen(BlockdevOptionsList *reopen_= list, Error **errp) for (; reopen_list !=3D NULL; reopen_list =3D reopen_list->next) { BlockdevOptions *options =3D reopen_list->value; BlockDriverState *bs; - AioContext *ctx; QObject *obj; Visitor *v; QDict *qdict; @@ -3577,12 +3393,7 @@ void qmp_blockdev_reopen(BlockdevOptionsList *reopen= _list, Error **errp) =20 qdict_flatten(qdict); =20 - ctx =3D bdrv_get_aio_context(bs); - aio_context_acquire(ctx); - queue =3D bdrv_reopen_queue(queue, bs, qdict, false); - - aio_context_release(ctx); } =20 /* Perform the reopen operation */ @@ -3595,7 +3406,6 @@ fail: =20 void qmp_blockdev_del(const char *node_name, Error **errp) { - AioContext *aio_context; BlockDriverState *bs; =20 GLOBAL_STATE_CODE(); @@ -3610,30 +3420,25 @@ void qmp_blockdev_del(const char *node_name, Error = **errp) error_setg(errp, "Node %s is in use", node_name); return; } - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_DRIVE_DEL, errp)) { - goto out; + return; } =20 if (!QTAILQ_IN_USE(bs, monitor_list)) { error_setg(errp, "Node %s is not owned by the monitor", bs->node_name); - goto out; + return; } =20 if (bs->refcnt > 1) { error_setg(errp, "Block device %s is in use", bdrv_get_device_or_node_name(bs)); - goto out; + return; } =20 QTAILQ_REMOVE(&monitor_bdrv_states, bs, monitor_list); bdrv_unref(bs); - -out: - aio_context_release(aio_context); } =20 static BdrvChild * GRAPH_RDLOCK @@ -3723,7 +3528,6 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp) void qmp_x_blockdev_set_iothread(const char *node_name, StrOrNull *iothrea= d, bool has_force, bool force, Error **errp) { - AioContext *old_context; AioContext *new_context; BlockDriverState *bs; =20 @@ -3755,12 +3559,7 @@ void qmp_x_blockdev_set_iothread(const char *node_na= me, StrOrNull *iothread, new_context =3D qemu_get_aio_context(); } =20 - old_context =3D 
bdrv_get_aio_context(bs); - aio_context_acquire(old_context); - bdrv_try_change_aio_context(bs, new_context, NULL, errp); - - aio_context_release(old_context); } =20 QemuOptsList qemu_common_drive_opts =3D { diff --git a/blockjob.c b/blockjob.c index 7310412313..d5f29e14af 100644 --- a/blockjob.c +++ b/blockjob.c @@ -198,9 +198,7 @@ void block_job_remove_all_bdrv(BlockJob *job) * one to make sure that such a concurrent access does not attempt * to process an already freed BdrvChild. */ - aio_context_release(job->job.aio_context); bdrv_graph_wrlock(); - aio_context_acquire(job->job.aio_context); while (job->nodes) { GSList *l =3D job->nodes; BdrvChild *c =3D l->data; @@ -234,28 +232,12 @@ int block_job_add_bdrv(BlockJob *job, const char *nam= e, BlockDriverState *bs, uint64_t perm, uint64_t shared_perm, Error **errp) { BdrvChild *c; - AioContext *ctx =3D bdrv_get_aio_context(bs); - bool need_context_ops; GLOBAL_STATE_CODE(); =20 bdrv_ref(bs); =20 - need_context_ops =3D ctx !=3D job->job.aio_context; - - if (need_context_ops) { - if (job->job.aio_context !=3D qemu_get_aio_context()) { - aio_context_release(job->job.aio_context); - } - aio_context_acquire(ctx); - } c =3D bdrv_root_attach_child(bs, name, &child_job, 0, perm, shared_per= m, job, errp); - if (need_context_ops) { - aio_context_release(ctx); - if (job->job.aio_context !=3D qemu_get_aio_context()) { - aio_context_acquire(job->job.aio_context); - } - } if (c =3D=3D NULL) { return -EPERM; } diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-bl= k.c index f83bb0f116..7bbbd981ad 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -124,7 +124,6 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) VirtIOBlockDataPlane *s =3D vblk->dataplane; BusState *qbus =3D BUS(qdev_get_parent_bus(DEVICE(vblk))); VirtioBusClass *k =3D VIRTIO_BUS_GET_CLASS(qbus); - AioContext *old_context; unsigned i; unsigned nvqs =3D s->conf->num_queues; Error *local_err =3D NULL; @@ -178,10 +177,7 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) =20 trace_virtio_blk_data_plane_start(s); =20 - old_context =3D blk_get_aio_context(s->conf->conf.blk); - aio_context_acquire(old_context); r =3D blk_set_aio_context(s->conf->conf.blk, s->ctx, &local_err); - aio_context_release(old_context); if (r < 0) { error_report_err(local_err); goto fail_aio_context; @@ -208,13 +204,11 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) =20 /* Get this show started by hooking up our callbacks */ if (!blk_in_drain(s->conf->conf.blk)) { - aio_context_acquire(s->ctx); for (i =3D 0; i < nvqs; i++) { VirtQueue *vq =3D virtio_get_queue(s->vdev, i); =20 virtio_queue_aio_attach_host_notifier(vq, s->ctx); } - aio_context_release(s->ctx); } return 0; =20 @@ -314,8 +308,6 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) */ vblk->dataplane_started =3D false; =20 - aio_context_acquire(s->ctx); - /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete = */ blk_drain(s->conf->conf.blk); =20 @@ -325,8 +317,6 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) */ blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL); =20 - aio_context_release(s->ctx); - /* Clean up guest notifier (irq) */ k->set_guest_notifiers(qbus->parent, nvqs, false); =20 diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c index c4bb28c66f..98501e6885 100644 --- a/hw/block/dataplane/xen-block.c +++ b/hw/block/dataplane/xen-block.c @@ -260,8 +260,6 @@ static void xen_block_complete_aio(void *opaque, int 
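/*
 * The dataplane start/stop paths above now call blk_set_aio_context()
 * with only the BQL held, instead of acquiring the BlockBackend's old
 * context first.  Condensed sketch of handing a backend to an IOThread
 * and back (function names invented; iothread_ctx stands for the
 * dataplane's s->ctx):
 */
static int dataplane_start_sketch(BlockBackend *blk,
                                  AioContext *iothread_ctx, Error **errp)
{
    GLOBAL_STATE_CODE();
    return blk_set_aio_context(blk, iothread_ctx, errp);
}

static void dataplane_stop_sketch(BlockBackend *blk)
{
    /* wait for in-flight I/O before moving the context */
    blk_drain(blk);

    /* back to the main loop; &error_abort-style callers assume success */
    blk_set_aio_context(blk, qemu_get_aio_context(), NULL);
}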
re= t) XenBlockRequest *request =3D opaque; XenBlockDataPlane *dataplane =3D request->dataplane; =20 - aio_context_acquire(dataplane->ctx); - if (ret !=3D 0) { error_report("%s I/O error", request->req.operation =3D=3D BLKIF_OP_READ ? @@ -273,10 +271,10 @@ static void xen_block_complete_aio(void *opaque, int = ret) if (request->presync) { request->presync =3D 0; xen_block_do_aio(request); - goto done; + return; } if (request->aio_inflight > 0) { - goto done; + return; } =20 switch (request->req.operation) { @@ -318,9 +316,6 @@ static void xen_block_complete_aio(void *opaque, int re= t) if (dataplane->more_work) { qemu_bh_schedule(dataplane->bh); } - -done: - aio_context_release(dataplane->ctx); } =20 static bool xen_block_split_discard(XenBlockRequest *request, @@ -601,9 +596,7 @@ static void xen_block_dataplane_bh(void *opaque) { XenBlockDataPlane *dataplane =3D opaque; =20 - aio_context_acquire(dataplane->ctx); xen_block_handle_requests(dataplane); - aio_context_release(dataplane->ctx); } =20 static bool xen_block_dataplane_event(void *opaque) @@ -703,10 +696,8 @@ void xen_block_dataplane_stop(XenBlockDataPlane *datap= lane) xen_block_dataplane_detach(dataplane); } =20 - aio_context_acquire(dataplane->ctx); /* Xen doesn't have multiple users for nodes, so this can't fail */ blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abo= rt); - aio_context_release(dataplane->ctx); =20 /* * Now that the context has been moved onto the main thread, cancel @@ -752,7 +743,6 @@ void xen_block_dataplane_start(XenBlockDataPlane *datap= lane, { ERRP_GUARD(); XenDevice *xendev =3D dataplane->xendev; - AioContext *old_context; unsigned int ring_size; unsigned int i; =20 @@ -836,11 +826,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *data= plane, goto stop; } =20 - old_context =3D blk_get_aio_context(dataplane->blk); - aio_context_acquire(old_context); /* If other users keep the BlockBackend in the iothread, that's ok */ blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL); - aio_context_release(old_context); =20 if (!blk_in_drain(dataplane->blk)) { xen_block_dataplane_attach(dataplane); diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index a1f8e15522..5e49c0625f 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -102,7 +102,6 @@ static void virtio_blk_rw_complete(void *opaque, int re= t) VirtIOBlock *s =3D next->dev; VirtIODevice *vdev =3D VIRTIO_DEVICE(s); =20 - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (next) { VirtIOBlockReq *req =3D next; next =3D req->mr_next; @@ -135,7 +134,6 @@ static void virtio_blk_rw_complete(void *opaque, int re= t) block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); } - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 static void virtio_blk_flush_complete(void *opaque, int ret) @@ -143,19 +141,15 @@ static void virtio_blk_flush_complete(void *opaque, i= nt ret) VirtIOBlockReq *req =3D opaque; VirtIOBlock *s =3D req->dev; =20 - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0, true)) { - goto out; + return; } } =20 virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); - -out: - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) @@ -165,10 +159,9 @@ static void virtio_blk_discard_write_zeroes_complete(v= oid *opaque, int ret) 
bool is_write_zeroes =3D (virtio_ldl_p(VIRTIO_DEVICE(s), &req->out.typ= e) & ~VIRTIO_BLK_T_BARRIER) =3D=3D VIRTIO_BLK_T_WRI= TE_ZEROES; =20 - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes))= { - goto out; + return; } } =20 @@ -177,9 +170,6 @@ static void virtio_blk_discard_write_zeroes_complete(vo= id *opaque, int ret) block_acct_done(blk_get_stats(s->blk), &req->acct); } virtio_blk_free_request(req); - -out: - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 #ifdef __linux__ @@ -226,10 +216,8 @@ static void virtio_blk_ioctl_complete(void *opaque, in= t status) virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len); =20 out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(ioctl_req); } =20 @@ -669,7 +657,6 @@ static void virtio_blk_zone_report_complete(void *opaqu= e, int ret) { ZoneCmdData *data =3D opaque; VirtIOBlockReq *req =3D data->req; - VirtIOBlock *s =3D req->dev; VirtIODevice *vdev =3D VIRTIO_DEVICE(req->dev); struct iovec *in_iov =3D data->in_iov; unsigned in_num =3D data->in_num; @@ -760,10 +747,8 @@ static void virtio_blk_zone_report_complete(void *opaq= ue, int ret) } =20 out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, err_status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(data->zone_report_data.zones); g_free(data); } @@ -826,10 +811,8 @@ static void virtio_blk_zone_mgmt_complete(void *opaque= , int ret) err_status =3D VIRTIO_BLK_S_ZONE_INVALID_CMD; } =20 - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, err_status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op) @@ -879,7 +862,6 @@ static void virtio_blk_zone_append_complete(void *opaqu= e, int ret) { ZoneCmdData *data =3D opaque; VirtIOBlockReq *req =3D data->req; - VirtIOBlock *s =3D req->dev; VirtIODevice *vdev =3D VIRTIO_DEVICE(req->dev); int64_t append_sector, n; uint8_t err_status =3D VIRTIO_BLK_S_OK; @@ -902,10 +884,8 @@ static void virtio_blk_zone_append_complete(void *opaq= ue, int ret) trace_virtio_blk_zone_append_complete(vdev, req, append_sector, ret); =20 out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, err_status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(data); } =20 @@ -941,10 +921,8 @@ static int virtio_blk_handle_zone_append(VirtIOBlockRe= q *req, return 0; =20 out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, err_status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); return err_status; } =20 @@ -1134,7 +1112,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *= vq) MultiReqBuffer mrb =3D {}; bool suppress_notifications =3D virtio_queue_get_notification(vq); =20 - aio_context_acquire(blk_get_aio_context(s->blk)); defer_call_begin(); =20 do { @@ -1160,7 +1137,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *= vq) } =20 defer_call_end(); - aio_context_release(blk_get_aio_context(s->blk)); } =20 static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq) @@ -1188,7 
+1164,6 @@ static void virtio_blk_dma_restart_bh(void *opaque) =20 s->rq =3D NULL; =20 - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (req) { VirtIOBlockReq *next =3D req->next; if (virtio_blk_handle_request(req, &mrb)) { @@ -1212,8 +1187,6 @@ static void virtio_blk_dma_restart_bh(void *opaque) =20 /* Paired with inc in virtio_blk_dma_restart_cb() */ blk_dec_in_flight(s->conf.conf.blk); - - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 static void virtio_blk_dma_restart_cb(void *opaque, bool running, @@ -1235,11 +1208,8 @@ static void virtio_blk_dma_restart_cb(void *opaque, = bool running, static void virtio_blk_reset(VirtIODevice *vdev) { VirtIOBlock *s =3D VIRTIO_BLK(vdev); - AioContext *ctx; VirtIOBlockReq *req; =20 - ctx =3D blk_get_aio_context(s->blk); - aio_context_acquire(ctx); blk_drain(s->blk); =20 /* We drop queued requests after blk_drain() because blk_drain() itsel= f can @@ -1251,8 +1221,6 @@ static void virtio_blk_reset(VirtIODevice *vdev) virtio_blk_free_request(req); } =20 - aio_context_release(ctx); - assert(!s->dataplane_started); blk_set_enable_write_cache(s->blk, s->original_wce); } @@ -1268,10 +1236,6 @@ static void virtio_blk_update_config(VirtIODevice *v= dev, uint8_t *config) uint64_t capacity; int64_t length; int blk_size =3D conf->logical_block_size; - AioContext *ctx; - - ctx =3D blk_get_aio_context(s->blk); - aio_context_acquire(ctx); =20 blk_get_geometry(s->blk, &capacity); memset(&blkcfg, 0, sizeof(blkcfg)); @@ -1295,7 +1259,6 @@ static void virtio_blk_update_config(VirtIODevice *vd= ev, uint8_t *config) * per track (cylinder). */ length =3D blk_getlength(s->blk); - aio_context_release(ctx); if (length > 0 && length / conf->heads / conf->secs % blk_size) { blkcfg.geometry.sectors =3D conf->secs & ~s->sector_mask; } else { @@ -1362,9 +1325,7 @@ static void virtio_blk_set_config(VirtIODevice *vdev,= const uint8_t *config) =20 memcpy(&blkcfg, config, s->config_size); =20 - aio_context_acquire(blk_get_aio_context(s->blk)); blk_set_enable_write_cache(s->blk, blkcfg.wce !=3D 0); - aio_context_release(blk_get_aio_context(s->blk)); } =20 static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t featu= res, @@ -1432,11 +1393,9 @@ static void virtio_blk_set_status(VirtIODevice *vdev= , uint8_t status) * s->blk would erroneously be placed in writethrough mode. 
*/ if (!virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE)) { - aio_context_acquire(blk_get_aio_context(s->blk)); blk_set_enable_write_cache(s->blk, virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_WC= E)); - aio_context_release(blk_get_aio_context(s->blk)); } } =20 diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-sys= tem.c index 1473ab3d5e..73cced4626 100644 --- a/hw/core/qdev-properties-system.c +++ b/hw/core/qdev-properties-system.c @@ -120,9 +120,7 @@ static void set_drive_helper(Object *obj, Visitor *v, c= onst char *name, "node"); } =20 - aio_context_acquire(ctx); blk_replace_bs(blk, bs, errp); - aio_context_release(ctx); return; } =20 @@ -148,10 +146,7 @@ static void set_drive_helper(Object *obj, Visitor *v, = const char *name, 0, BLK_PERM_ALL); blk_created =3D true; =20 - aio_context_acquire(ctx); ret =3D blk_insert_bs(blk, bs, errp); - aio_context_release(ctx); - if (ret < 0) { goto fail; } @@ -207,12 +202,8 @@ static void release_drive(Object *obj, const char *nam= e, void *opaque) BlockBackend **ptr =3D object_field_prop_ptr(obj, prop); =20 if (*ptr) { - AioContext *ctx =3D blk_get_aio_context(*ptr); - - aio_context_acquire(ctx); blockdev_auto_del(*ptr); blk_detach_dev(*ptr, dev); - aio_context_release(ctx); } } =20 diff --git a/job.c b/job.c index 99a2e54b54..660ce22c56 100644 --- a/job.c +++ b/job.c @@ -464,12 +464,8 @@ void job_unref_locked(Job *job) assert(!job->txn); =20 if (job->driver->free) { - AioContext *aio_context =3D job->aio_context; job_unlock(); - /* FIXME: aiocontext lock is required because cb calls blk_unr= ef */ - aio_context_acquire(aio_context); job->driver->free(job); - aio_context_release(aio_context); job_lock(); } =20 @@ -840,12 +836,10 @@ static void job_clean(Job *job) =20 /* * Called with job_mutex held, but releases it temporarily. - * Takes AioContext lock internally to invoke a job->driver callback. */ static int job_finalize_single_locked(Job *job) { int job_ret; - AioContext *ctx =3D job->aio_context; =20 assert(job_is_completed_locked(job)); =20 @@ -854,7 +848,6 @@ static int job_finalize_single_locked(Job *job) =20 job_ret =3D job->ret; job_unlock(); - aio_context_acquire(ctx); =20 if (!job_ret) { job_commit(job); @@ -867,7 +860,6 @@ static int job_finalize_single_locked(Job *job) job->cb(job->opaque, job_ret); } =20 - aio_context_release(ctx); job_lock(); =20 /* Emit events only if we actually started */ @@ -886,17 +878,13 @@ static int job_finalize_single_locked(Job *job) =20 /* * Called with job_mutex held, but releases it temporarily. - * Takes AioContext lock internally to invoke a job->driver callback. */ static void job_cancel_async_locked(Job *job, bool force) { - AioContext *ctx =3D job->aio_context; GLOBAL_STATE_CODE(); if (job->driver->cancel) { job_unlock(); - aio_context_acquire(ctx); force =3D job->driver->cancel(job, force); - aio_context_release(ctx); job_lock(); } else { /* No .cancel() means the job will behave as if force-cancelled */ @@ -931,7 +919,6 @@ static void job_cancel_async_locked(Job *job, bool forc= e) =20 /* * Called with job_mutex held, but releases it temporarily. - * Takes AioContext lock internally to invoke a job->driver callback. 
*/ static void job_completed_txn_abort_locked(Job *job) { @@ -979,15 +966,12 @@ static void job_completed_txn_abort_locked(Job *job) static int job_prepare_locked(Job *job) { int ret; - AioContext *ctx =3D job->aio_context; =20 GLOBAL_STATE_CODE(); =20 if (job->ret =3D=3D 0 && job->driver->prepare) { job_unlock(); - aio_context_acquire(ctx); ret =3D job->driver->prepare(job); - aio_context_release(ctx); job_lock(); job->ret =3D ret; job_update_rc_locked(job); diff --git a/migration/block.c b/migration/block.c index a15f9bddcb..2bcfcbfdf6 100644 --- a/migration/block.c +++ b/migration/block.c @@ -66,7 +66,7 @@ typedef struct BlkMigDevState { /* Protected by block migration lock. */ int64_t completed_sectors; =20 - /* During migration this is protected by iothread lock / AioContext. + /* During migration this is protected by bdrv_dirty_bitmap_lock(). * Allocation and free happen during setup and cleanup respectively. */ BdrvDirtyBitmap *dirty_bitmap; @@ -101,7 +101,7 @@ typedef struct BlkMigState { int prev_progress; int bulk_completed; =20 - /* Lock must be taken _inside_ the iothread lock and any AioContexts. = */ + /* Lock must be taken _inside_ the iothread lock. */ QemuMutex lock; } BlkMigState; =20 @@ -270,7 +270,6 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevS= tate *bmds) =20 if (bmds->shared_base) { qemu_mutex_lock_iothread(); - aio_context_acquire(blk_get_aio_context(bb)); /* Skip unallocated sectors; intentionally treats failure or * partial sector as an allocated sector */ while (cur_sector < total_sectors && @@ -281,7 +280,6 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevS= tate *bmds) } cur_sector +=3D count >> BDRV_SECTOR_BITS; } - aio_context_release(blk_get_aio_context(bb)); qemu_mutex_unlock_iothread(); } =20 @@ -313,22 +311,10 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDe= vState *bmds) block_mig_state.submitted++; blk_mig_unlock(); =20 - /* We do not know if bs is under the main thread (and thus does - * not acquire the AioContext when doing AIO) or rather under - * dataplane. Thus acquire both the iothread mutex and the - * AioContext. - * - * This is ugly and will disappear when we make bdrv_* thread-safe, - * without the need to acquire the AioContext. - */ - qemu_mutex_lock_iothread(); - aio_context_acquire(blk_get_aio_context(bmds->blk)); bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, cur_sector * BDRV_SECTOR_S= IZE, nr_sectors * BDRV_SECTOR_SIZE); blk->aiocb =3D blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE, &blk-= >qiov, 0, blk_mig_read_cb, blk); - aio_context_release(blk_get_aio_context(bmds->blk)); - qemu_mutex_unlock_iothread(); =20 bmds->cur_sector =3D cur_sector + nr_sectors; return (bmds->cur_sector >=3D total_sectors); @@ -512,7 +498,7 @@ static void blk_mig_reset_dirty_cursor(void) } } =20 -/* Called with iothread lock and AioContext taken. */ +/* Called with iothread lock taken. 
*/ =20 static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds, int is_async) @@ -606,9 +592,7 @@ static int blk_mig_save_dirty_block(QEMUFile *f, int is= _async) int ret =3D 1; =20 QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) { - aio_context_acquire(blk_get_aio_context(bmds->blk)); ret =3D mig_save_device_dirty(f, bmds, is_async); - aio_context_release(blk_get_aio_context(bmds->blk)); if (ret <=3D 0) { break; } @@ -666,9 +650,9 @@ static int64_t get_remaining_dirty(void) int64_t dirty =3D 0; =20 QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) { - aio_context_acquire(blk_get_aio_context(bmds->blk)); + bdrv_dirty_bitmap_lock(bmds->dirty_bitmap); dirty +=3D bdrv_get_dirty_count(bmds->dirty_bitmap); - aio_context_release(blk_get_aio_context(bmds->blk)); + bdrv_dirty_bitmap_unlock(bmds->dirty_bitmap); } =20 return dirty; @@ -681,7 +665,6 @@ static void block_migration_cleanup_bmds(void) { BlkMigDevState *bmds; BlockDriverState *bs; - AioContext *ctx; =20 unset_dirty_tracking(); =20 @@ -693,13 +676,7 @@ static void block_migration_cleanup_bmds(void) bdrv_op_unblock_all(bs, bmds->blocker); } error_free(bmds->blocker); - - /* Save ctx, because bmds->blk can disappear during blk_unref. */ - ctx =3D blk_get_aio_context(bmds->blk); - aio_context_acquire(ctx); blk_unref(bmds->blk); - aio_context_release(ctx); - g_free(bmds->blk_name); g_free(bmds->aio_bitmap); g_free(bmds); diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c index 86ae832176..99710c8ffb 100644 --- a/migration/migration-hmp-cmds.c +++ b/migration/migration-hmp-cmds.c @@ -852,14 +852,11 @@ static void vm_completion(ReadLineState *rs, const ch= ar *str) =20 for (bs =3D bdrv_first(&it); bs; bs =3D bdrv_next(&it)) { SnapshotInfoList *snapshots, *snapshot; - AioContext *ctx =3D bdrv_get_aio_context(bs); bool ok =3D false; =20 - aio_context_acquire(ctx); if (bdrv_can_snapshot(bs)) { ok =3D bdrv_query_snapshot_info_list(bs, &snapshots, NULL) =3D= =3D 0; } - aio_context_release(ctx); if (!ok) { continue; } diff --git a/migration/savevm.c b/migration/savevm.c index eec5503a42..1b9ab7b8ee 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -3049,7 +3049,6 @@ bool save_snapshot(const char *name, bool overwrite, = const char *vmstate, int saved_vm_running; uint64_t vm_state_size; g_autoptr(GDateTime) now =3D g_date_time_new_now_local(); - AioContext *aio_context; =20 GLOBAL_STATE_CODE(); =20 @@ -3092,7 +3091,6 @@ bool save_snapshot(const char *name, bool overwrite, = const char *vmstate, if (bs =3D=3D NULL) { return false; } - aio_context =3D bdrv_get_aio_context(bs); =20 saved_vm_running =3D runstate_is_running(); =20 @@ -3101,8 +3099,6 @@ bool save_snapshot(const char *name, bool overwrite, = const char *vmstate, =20 bdrv_drain_all_begin(); =20 - aio_context_acquire(aio_context); - memset(sn, 0, sizeof(*sn)); =20 /* fill auxiliary fields */ @@ -3139,14 +3135,6 @@ bool save_snapshot(const char *name, bool overwrite,= const char *vmstate, goto the_end; } =20 - /* The bdrv_all_create_snapshot() call that follows acquires the AioCo= ntext - * for itself. BDRV_POLL_WHILE() does not support nested locking beca= use - * it only releases the lock once. Therefore synchronous I/O will dea= dlock - * unless we release the AioContext before bdrv_all_create_snapshot(). 
- */ - aio_context_release(aio_context); - aio_context =3D NULL; - ret =3D bdrv_all_create_snapshot(sn, bs, vm_state_size, has_devices, devices, errp); if (ret < 0) { @@ -3157,10 +3145,6 @@ bool save_snapshot(const char *name, bool overwrite,= const char *vmstate, ret =3D 0; =20 the_end: - if (aio_context) { - aio_context_release(aio_context); - } - bdrv_drain_all_end(); =20 if (saved_vm_running) { @@ -3258,7 +3242,6 @@ bool load_snapshot(const char *name, const char *vmst= ate, QEMUSnapshotInfo sn; QEMUFile *f; int ret; - AioContext *aio_context; MigrationIncomingState *mis =3D migration_incoming_get_current(); =20 if (!bdrv_all_can_snapshot(has_devices, devices, errp)) { @@ -3278,12 +3261,9 @@ bool load_snapshot(const char *name, const char *vms= tate, if (!bs_vm_state) { return false; } - aio_context =3D bdrv_get_aio_context(bs_vm_state); =20 /* Don't even try to load empty VM states */ - aio_context_acquire(aio_context); ret =3D bdrv_snapshot_find(bs_vm_state, &sn, name); - aio_context_release(aio_context); if (ret < 0) { return false; } else if (sn.vm_state_size =3D=3D 0) { @@ -3320,10 +3300,8 @@ bool load_snapshot(const char *name, const char *vms= tate, ret =3D -EINVAL; goto err_drain; } - aio_context_acquire(aio_context); ret =3D qemu_loadvm_state(f); migration_incoming_state_destroy(); - aio_context_release(aio_context); =20 bdrv_drain_all_end(); =20 diff --git a/net/colo-compare.c b/net/colo-compare.c index 7f9e6f89ce..f2dfc0ebdc 100644 --- a/net/colo-compare.c +++ b/net/colo-compare.c @@ -1439,12 +1439,10 @@ static void colo_compare_finalize(Object *obj) qemu_bh_delete(s->event_bh); =20 AioContext *ctx =3D iothread_get_aio_context(s->iothread); - aio_context_acquire(ctx); AIO_WAIT_WHILE(ctx, !s->out_sendco.done); if (s->notify_dev) { AIO_WAIT_WHILE(ctx, !s->notify_sendco.done); } - aio_context_release(ctx); =20 /* Release all unhandled packets after compare thead exited */ g_queue_foreach(&s->conn_list, colo_flush_packets, s); diff --git a/qemu-img.c b/qemu-img.c index 5a77f67719..7668f86769 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -960,7 +960,6 @@ static int img_commit(int argc, char **argv) Error *local_err =3D NULL; CommonBlockJobCBInfo cbi; bool image_opts =3D false; - AioContext *aio_context; int64_t rate_limit =3D 0; =20 fmt =3D NULL; @@ -1078,12 +1077,9 @@ static int img_commit(int argc, char **argv) .bs =3D bs, }; =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); commit_active_start("commit", bs, base_bs, JOB_DEFAULT, rate_limit, BLOCKDEV_ON_ERROR_REPORT, NULL, common_block_job_c= b, &cbi, false, &local_err); - aio_context_release(aio_context); if (local_err) { goto done; } diff --git a/qemu-io.c b/qemu-io.c index 050c70835f..6cb1e00385 100644 --- a/qemu-io.c +++ b/qemu-io.c @@ -414,15 +414,7 @@ static void prep_fetchline(void *opaque) =20 static int do_qemuio_command(const char *cmd) { - int ret; - AioContext *ctx =3D - qemuio_blk ? 
blk_get_aio_context(qemuio_blk) : qemu_get_aio_contex= t(); - - aio_context_acquire(ctx); - ret =3D qemuio_command(qemuio_blk, cmd); - aio_context_release(ctx); - - return ret; + return qemuio_command(qemuio_blk, cmd); } =20 static int command_loop(void) diff --git a/qemu-nbd.c b/qemu-nbd.c index 186e6468b1..bac0b5e3ec 100644 --- a/qemu-nbd.c +++ b/qemu-nbd.c @@ -1123,9 +1123,7 @@ int main(int argc, char **argv) qdict_put_str(raw_opts, "file", bs->node_name); qdict_put_int(raw_opts, "offset", dev_offset); =20 - aio_context_acquire(qemu_get_aio_context()); bs =3D bdrv_open(NULL, NULL, raw_opts, flags, &error_fatal); - aio_context_release(qemu_get_aio_context()); =20 blk_remove_bs(blk); blk_insert_bs(blk, bs, &error_fatal); diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c index 3e60549a4a..82c66fff26 100644 --- a/replay/replay-debugging.c +++ b/replay/replay-debugging.c @@ -144,7 +144,6 @@ static char *replay_find_nearest_snapshot(int64_t icoun= t, char *ret =3D NULL; int rv; int nb_sns, i; - AioContext *aio_context; =20 *snapshot_icount =3D -1; =20 @@ -152,11 +151,8 @@ static char *replay_find_nearest_snapshot(int64_t icou= nt, if (!bs) { goto fail; } - aio_context =3D bdrv_get_aio_context(bs); =20 - aio_context_acquire(aio_context); nb_sns =3D bdrv_snapshot_list(bs, &sn_tab); - aio_context_release(aio_context); =20 for (i =3D 0; i < nb_sns; i++) { rv =3D bdrv_all_has_snapshot(sn_tab[i].name, false, NULL, NULL); diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c index d9754dfebc..17830a69c1 100644 --- a/tests/unit/test-bdrv-drain.c +++ b/tests/unit/test-bdrv-drain.c @@ -179,13 +179,7 @@ static void do_drain_end(enum drain_type drain_type, B= lockDriverState *bs) =20 static void do_drain_begin_unlocked(enum drain_type drain_type, BlockDrive= rState *bs) { - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_acquire(bdrv_get_aio_context(bs)); - } do_drain_begin(drain_type, bs); - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_release(bdrv_get_aio_context(bs)); - } } =20 static BlockBackend * no_coroutine_fn test_setup(void) @@ -209,13 +203,7 @@ static BlockBackend * no_coroutine_fn test_setup(void) =20 static void do_drain_end_unlocked(enum drain_type drain_type, BlockDriverS= tate *bs) { - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_acquire(bdrv_get_aio_context(bs)); - } do_drain_end(drain_type, bs); - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_release(bdrv_get_aio_context(bs)); - } } =20 /* @@ -520,12 +508,8 @@ static void test_iothread_main_thread_bh(void *opaque) { struct test_iothread_data *data =3D opaque; =20 - /* Test that the AioContext is not yet locked in a random BH that is - * executed during drain, otherwise this would deadlock. 
*/ - aio_context_acquire(bdrv_get_aio_context(data->bs)); bdrv_flush(data->bs); bdrv_dec_in_flight(data->bs); /* incremented by test_iothread_common()= */ - aio_context_release(bdrv_get_aio_context(data->bs)); } =20 /* @@ -567,7 +551,6 @@ static void test_iothread_common(enum drain_type drain_= type, int drain_thread) blk_set_disable_request_queuing(blk, true); =20 blk_set_aio_context(blk, ctx_a, &error_abort); - aio_context_acquire(ctx_a); =20 s->bh_indirection_ctx =3D ctx_b; =20 @@ -582,8 +565,6 @@ static void test_iothread_common(enum drain_type drain_= type, int drain_thread) g_assert(acb !=3D NULL); g_assert_cmpint(aio_ret, =3D=3D, -EINPROGRESS); =20 - aio_context_release(ctx_a); - data =3D (struct test_iothread_data) { .bs =3D bs, .drain_type =3D drain_type, @@ -592,10 +573,6 @@ static void test_iothread_common(enum drain_type drain= _type, int drain_thread) =20 switch (drain_thread) { case 0: - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_acquire(ctx_a); - } - /* * Increment in_flight so that do_drain_begin() waits for * test_iothread_main_thread_bh(). This prevents the race between @@ -613,20 +590,10 @@ static void test_iothread_common(enum drain_type drai= n_type, int drain_thread) do_drain_begin(drain_type, bs); g_assert_cmpint(bs->in_flight, =3D=3D, 0); =20 - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_release(ctx_a); - } qemu_event_wait(&done_event); - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_acquire(ctx_a); - } =20 g_assert_cmpint(aio_ret, =3D=3D, 0); do_drain_end(drain_type, bs); - - if (drain_type !=3D BDRV_DRAIN_ALL) { - aio_context_release(ctx_a); - } break; case 1: co =3D qemu_coroutine_create(test_iothread_drain_co_entry, &data); @@ -637,9 +604,7 @@ static void test_iothread_common(enum drain_type drain_= type, int drain_thread) g_assert_not_reached(); } =20 - aio_context_acquire(ctx_a); blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); - aio_context_release(ctx_a); =20 bdrv_unref(bs); blk_unref(blk); @@ -757,7 +722,6 @@ static void test_blockjob_common_drain_node(enum drain_= type drain_type, BlockJob *job; TestBlockJob *tjob; IOThread *iothread =3D NULL; - AioContext *ctx; int ret; =20 src =3D bdrv_new_open_driver(&bdrv_test, "source", BDRV_O_RDWR, @@ -787,11 +751,11 @@ static void test_blockjob_common_drain_node(enum drai= n_type drain_type, } =20 if (use_iothread) { + AioContext *ctx; + iothread =3D iothread_new(); ctx =3D iothread_get_aio_context(iothread); blk_set_aio_context(blk_src, ctx, &error_abort); - } else { - ctx =3D qemu_get_aio_context(); } =20 target =3D bdrv_new_open_driver(&bdrv_test, "target", BDRV_O_RDWR, @@ -800,7 +764,6 @@ static void test_blockjob_common_drain_node(enum drain_= type drain_type, blk_insert_bs(blk_target, target, &error_abort); blk_set_allow_aio_context_change(blk_target, true); =20 - aio_context_acquire(ctx); tjob =3D block_job_create("job0", &test_job_driver, NULL, src, 0, BLK_PERM_ALL, 0, 0, NULL, NULL, &error_abort); @@ -821,7 +784,6 @@ static void test_blockjob_common_drain_node(enum drain_= type drain_type, tjob->prepare_ret =3D -EIO; break; } - aio_context_release(ctx); =20 job_start(&job->job); =20 @@ -912,12 +874,10 @@ static void test_blockjob_common_drain_node(enum drai= n_type drain_type, } g_assert_cmpint(ret, =3D=3D, (result =3D=3D TEST_JOB_SUCCESS ? 
0 : -EI= O)); =20 - aio_context_acquire(ctx); if (use_iothread) { blk_set_aio_context(blk_src, qemu_get_aio_context(), &error_abort); assert(blk_get_aio_context(blk_target) =3D=3D qemu_get_aio_context= ()); } - aio_context_release(ctx); =20 blk_unref(blk_src); blk_unref(blk_target); @@ -1401,9 +1361,7 @@ static void test_append_to_drained(void) g_assert_cmpint(base_s->drain_count, =3D=3D, 1); g_assert_cmpint(base->in_flight, =3D=3D, 0); =20 - aio_context_acquire(qemu_get_aio_context()); bdrv_append(overlay, base, &error_abort); - aio_context_release(qemu_get_aio_context()); =20 g_assert_cmpint(base->in_flight, =3D=3D, 0); g_assert_cmpint(overlay->in_flight, =3D=3D, 0); @@ -1438,16 +1396,11 @@ static void test_set_aio_context(void) =20 bdrv_drained_begin(bs); bdrv_try_change_aio_context(bs, ctx_a, NULL, &error_abort); - - aio_context_acquire(ctx_a); bdrv_drained_end(bs); =20 bdrv_drained_begin(bs); bdrv_try_change_aio_context(bs, ctx_b, NULL, &error_abort); - aio_context_release(ctx_a); - aio_context_acquire(ctx_b); bdrv_try_change_aio_context(bs, qemu_get_aio_context(), NULL, &error_a= bort); - aio_context_release(ctx_b); bdrv_drained_end(bs); =20 bdrv_unref(bs); diff --git a/tests/unit/test-bdrv-graph-mod.c b/tests/unit/test-bdrv-graph-= mod.c index 8ee6ef38d8..cafc023db4 100644 --- a/tests/unit/test-bdrv-graph-mod.c +++ b/tests/unit/test-bdrv-graph-mod.c @@ -142,10 +142,8 @@ static void test_update_perm_tree(void) BDRV_CHILD_DATA, &error_abort); bdrv_graph_wrunlock(); =20 - aio_context_acquire(qemu_get_aio_context()); ret =3D bdrv_append(filter, bs, NULL); g_assert_cmpint(ret, <, 0); - aio_context_release(qemu_get_aio_context()); =20 bdrv_unref(filter); blk_unref(root); @@ -211,9 +209,7 @@ static void test_should_update_child(void) bdrv_attach_child(filter, target, "target", &child_of_bds, BDRV_CHILD_DATA, &error_abort); bdrv_graph_wrunlock(); - aio_context_acquire(qemu_get_aio_context()); bdrv_append(filter, bs, &error_abort); - aio_context_release(qemu_get_aio_context()); =20 bdrv_graph_rdlock_main_loop(); g_assert(target->backing->bs =3D=3D bs); @@ -440,9 +436,7 @@ static void test_append_greedy_filter(void) &error_abort); bdrv_graph_wrunlock(); =20 - aio_context_acquire(qemu_get_aio_context()); bdrv_append(fl, base, &error_abort); - aio_context_release(qemu_get_aio_context()); bdrv_unref(fl); bdrv_unref(top); } diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothr= ead.c index 9b15d2768c..3766d5de6b 100644 --- a/tests/unit/test-block-iothread.c +++ b/tests/unit/test-block-iothread.c @@ -483,7 +483,6 @@ static void test_sync_op(const void *opaque) bdrv_graph_rdunlock_main_loop(); =20 blk_set_aio_context(blk, ctx, &error_abort); - aio_context_acquire(ctx); if (t->fn) { t->fn(c); } @@ -491,7 +490,6 @@ static void test_sync_op(const void *opaque) t->blkfn(blk); } blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); - aio_context_release(ctx); =20 bdrv_unref(bs); blk_unref(blk); @@ -576,9 +574,7 @@ static void test_attach_blockjob(void) aio_poll(qemu_get_aio_context(), false); } =20 - aio_context_acquire(ctx); blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); - aio_context_release(ctx); =20 tjob->n =3D 0; while (tjob->n =3D=3D 0) { @@ -595,9 +591,7 @@ static void test_attach_blockjob(void) WITH_JOB_LOCK_GUARD() { job_complete_sync_locked(&tjob->common.job, &error_abort); } - aio_context_acquire(ctx); blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); - aio_context_release(ctx); =20 bdrv_unref(bs); blk_unref(blk); @@ -654,9 +648,7 
@@ static void test_propagate_basic(void) =20 /* Switch the AioContext back */ main_ctx =3D qemu_get_aio_context(); - aio_context_acquire(ctx); blk_set_aio_context(blk, main_ctx, &error_abort); - aio_context_release(ctx); g_assert(blk_get_aio_context(blk) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(bs_a) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(bs_verify) =3D=3D main_ctx); @@ -732,9 +724,7 @@ static void test_propagate_diamond(void) =20 /* Switch the AioContext back */ main_ctx =3D qemu_get_aio_context(); - aio_context_acquire(ctx); blk_set_aio_context(blk, main_ctx, &error_abort); - aio_context_release(ctx); g_assert(blk_get_aio_context(blk) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(bs_verify) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(bs_a) =3D=3D main_ctx); @@ -764,13 +754,11 @@ static void test_propagate_mirror(void) &error_abort); =20 /* Start a mirror job */ - aio_context_acquire(main_ctx); mirror_start("job0", src, target, NULL, JOB_DEFAULT, 0, 0, 0, MIRROR_SYNC_MODE_NONE, MIRROR_OPEN_BACKING_CHAIN, false, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT, false, "filter_node", MIRROR_COPY_MODE_BACKGROUND, &error_abort); - aio_context_release(main_ctx); =20 WITH_JOB_LOCK_GUARD() { job =3D job_get_locked("job0"); @@ -785,9 +773,7 @@ static void test_propagate_mirror(void) g_assert(job->aio_context =3D=3D ctx); =20 /* Change the AioContext of target */ - aio_context_acquire(ctx); bdrv_try_change_aio_context(target, main_ctx, NULL, &error_abort); - aio_context_release(ctx); g_assert(bdrv_get_aio_context(src) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(target) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(filter) =3D=3D main_ctx); @@ -805,10 +791,8 @@ static void test_propagate_mirror(void) g_assert(bdrv_get_aio_context(filter) =3D=3D main_ctx); =20 /* ...unless we explicitly allow it */ - aio_context_acquire(ctx); blk_set_allow_aio_context_change(blk, true); bdrv_try_change_aio_context(target, ctx, NULL, &error_abort); - aio_context_release(ctx); =20 g_assert(blk_get_aio_context(blk) =3D=3D ctx); g_assert(bdrv_get_aio_context(src) =3D=3D ctx); @@ -817,10 +801,8 @@ static void test_propagate_mirror(void) =20 job_cancel_sync_all(); =20 - aio_context_acquire(ctx); blk_set_aio_context(blk, main_ctx, &error_abort); bdrv_try_change_aio_context(target, main_ctx, NULL, &error_abort); - aio_context_release(ctx); =20 blk_unref(blk); bdrv_unref(src); @@ -836,7 +818,6 @@ static void test_attach_second_node(void) BlockDriverState *bs, *filter; QDict *options; =20 - aio_context_acquire(main_ctx); blk =3D blk_new(ctx, BLK_PERM_ALL, BLK_PERM_ALL); bs =3D bdrv_new_open_driver(&bdrv_test, "base", BDRV_O_RDWR, &error_ab= ort); blk_insert_bs(blk, bs, &error_abort); @@ -846,15 +827,12 @@ static void test_attach_second_node(void) qdict_put_str(options, "file", "base"); =20 filter =3D bdrv_open(NULL, NULL, options, BDRV_O_RDWR, &error_abort); - aio_context_release(main_ctx); =20 g_assert(blk_get_aio_context(blk) =3D=3D ctx); g_assert(bdrv_get_aio_context(bs) =3D=3D ctx); g_assert(bdrv_get_aio_context(filter) =3D=3D ctx); =20 - aio_context_acquire(ctx); blk_set_aio_context(blk, main_ctx, &error_abort); - aio_context_release(ctx); g_assert(blk_get_aio_context(blk) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(bs) =3D=3D main_ctx); g_assert(bdrv_get_aio_context(filter) =3D=3D main_ctx); @@ -868,11 +846,9 @@ static void test_attach_preserve_blk_ctx(void) { IOThread *iothread =3D iothread_new(); AioContext *ctx =3D iothread_get_aio_context(iothread); - AioContext *main_ctx =3D 
qemu_get_aio_context(); BlockBackend *blk; BlockDriverState *bs; =20 - aio_context_acquire(main_ctx); blk =3D blk_new(ctx, BLK_PERM_ALL, BLK_PERM_ALL); bs =3D bdrv_new_open_driver(&bdrv_test, "base", BDRV_O_RDWR, &error_ab= ort); bs->total_sectors =3D 65536 / BDRV_SECTOR_SIZE; @@ -881,25 +857,18 @@ static void test_attach_preserve_blk_ctx(void) blk_insert_bs(blk, bs, &error_abort); g_assert(blk_get_aio_context(blk) =3D=3D ctx); g_assert(bdrv_get_aio_context(bs) =3D=3D ctx); - aio_context_release(main_ctx); =20 /* Remove the node again */ - aio_context_acquire(ctx); blk_remove_bs(blk); - aio_context_release(ctx); g_assert(blk_get_aio_context(blk) =3D=3D ctx); g_assert(bdrv_get_aio_context(bs) =3D=3D qemu_get_aio_context()); =20 /* Re-attach the node */ - aio_context_acquire(main_ctx); blk_insert_bs(blk, bs, &error_abort); - aio_context_release(main_ctx); g_assert(blk_get_aio_context(blk) =3D=3D ctx); g_assert(bdrv_get_aio_context(bs) =3D=3D ctx); =20 - aio_context_acquire(ctx); blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); - aio_context_release(ctx); bdrv_unref(bs); blk_unref(blk); } diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c index a130f6fefb..fe3e0d2d38 100644 --- a/tests/unit/test-blockjob.c +++ b/tests/unit/test-blockjob.c @@ -228,7 +228,6 @@ static void cancel_common(CancelJob *s) BlockJob *job =3D &s->common; BlockBackend *blk =3D s->blk; JobStatus sts =3D job->job.status; - AioContext *ctx =3D job->job.aio_context; =20 job_cancel_sync(&job->job, true); WITH_JOB_LOCK_GUARD() { @@ -240,9 +239,7 @@ static void cancel_common(CancelJob *s) job_unref_locked(&job->job); } =20 - aio_context_acquire(ctx); destroy_blk(blk); - aio_context_release(ctx); =20 } =20 @@ -391,132 +388,6 @@ static void test_cancel_concluded(void) cancel_common(s); } =20 -/* (See test_yielding_driver for the job description) */ -typedef struct YieldingJob { - BlockJob common; - bool should_complete; -} YieldingJob; - -static void yielding_job_complete(Job *job, Error **errp) -{ - YieldingJob *s =3D container_of(job, YieldingJob, common.job); - s->should_complete =3D true; - job_enter(job); -} - -static int coroutine_fn yielding_job_run(Job *job, Error **errp) -{ - YieldingJob *s =3D container_of(job, YieldingJob, common.job); - - job_transition_to_ready(job); - - while (!s->should_complete) { - job_yield(job); - } - - return 0; -} - -/* - * This job transitions immediately to the READY state, and then - * yields until it is to complete. - */ -static const BlockJobDriver test_yielding_driver =3D { - .job_driver =3D { - .instance_size =3D sizeof(YieldingJob), - .free =3D block_job_free, - .user_resume =3D block_job_user_resume, - .run =3D yielding_job_run, - .complete =3D yielding_job_complete, - }, -}; - -/* - * Test that job_complete_locked() works even on jobs that are in a paused - * state (i.e., STANDBY). - * - * To do this, run YieldingJob in an IO thread, get it into the READY - * state, then have a drained section. Before ending the section, - * acquire the context so the job will not be entered and will thus - * remain on STANDBY. - * - * job_complete_locked() should still work without error. - * - * Note that on the QMP interface, it is impossible to lock an IO - * thread before a drained section ends. In practice, the - * bdrv_drain_all_end() and the aio_context_acquire() will be - * reversed. However, that makes for worse reproducibility here: - * Sometimes, the job would no longer be in STANDBY then but already - * be started. 
We cannot prevent that, because the IO thread runs - * concurrently. We can only prevent it by taking the lock before - * ending the drained section, so we do that. - * - * (You can reverse the order of operations and most of the time the - * test will pass, but sometimes the assert(status =3D=3D STANDBY) will - * fail.) - */ -static void test_complete_in_standby(void) -{ - BlockBackend *blk; - IOThread *iothread; - AioContext *ctx; - Job *job; - BlockJob *bjob; - - /* Create a test drive, move it to an IO thread */ - blk =3D create_blk(NULL); - iothread =3D iothread_new(); - - ctx =3D iothread_get_aio_context(iothread); - blk_set_aio_context(blk, ctx, &error_abort); - - /* Create our test job */ - bjob =3D mk_job(blk, "job", &test_yielding_driver, true, - JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS); - job =3D &bjob->job; - assert_job_status_is(job, JOB_STATUS_CREATED); - - /* Wait for the job to become READY */ - job_start(job); - /* - * Here we are waiting for the status to change, so don't bother - * protecting the read every time. - */ - AIO_WAIT_WHILE_UNLOCKED(ctx, job->status !=3D JOB_STATUS_READY); - - /* Begin the drained section, pausing the job */ - bdrv_drain_all_begin(); - assert_job_status_is(job, JOB_STATUS_STANDBY); - - /* Lock the IO thread to prevent the job from being run */ - aio_context_acquire(ctx); - /* This will schedule the job to resume it */ - bdrv_drain_all_end(); - aio_context_release(ctx); - - WITH_JOB_LOCK_GUARD() { - /* But the job cannot run, so it will remain on standby */ - assert(job->status =3D=3D JOB_STATUS_STANDBY); - - /* Even though the job is on standby, this should work */ - job_complete_locked(job, &error_abort); - - /* The test is done now, clean up. */ - job_finish_sync_locked(job, NULL, &error_abort); - assert(job->status =3D=3D JOB_STATUS_PENDING); - - job_finalize_locked(job, &error_abort); - assert(job->status =3D=3D JOB_STATUS_CONCLUDED); - - job_dismiss_locked(&job, &error_abort); - } - - aio_context_acquire(ctx); - destroy_blk(blk); - aio_context_release(ctx); - iothread_join(iothread); -} - int main(int argc, char **argv) { qemu_init_main_loop(&error_abort); @@ -531,13 +402,5 @@ int main(int argc, char **argv) g_test_add_func("/blockjob/cancel/standby", test_cancel_standby); g_test_add_func("/blockjob/cancel/pending", test_cancel_pending); g_test_add_func("/blockjob/cancel/concluded", test_cancel_concluded); - - /* - * This test is flaky and sometimes fails in CI and otherwise: - * don't run unless user opts in via environment variable. 
- */ - if (getenv("QEMU_TEST_FLAKY_TESTS")) { - g_test_add_func("/blockjob/complete_in_standby", test_complete_in_= standby); - } return g_test_run(); } diff --git a/tests/unit/test-replication.c b/tests/unit/test-replication.c index afff908d77..5d2003b8ce 100644 --- a/tests/unit/test-replication.c +++ b/tests/unit/test-replication.c @@ -199,17 +199,13 @@ static BlockBackend *start_primary(void) static void teardown_primary(void) { BlockBackend *blk; - AioContext *ctx; =20 /* remove P_ID */ blk =3D blk_by_name(P_ID); assert(blk); =20 - ctx =3D blk_get_aio_context(blk); - aio_context_acquire(ctx); monitor_remove_blk(blk); blk_unref(blk); - aio_context_release(ctx); } =20 static void test_primary_read(void) @@ -345,27 +341,20 @@ static void teardown_secondary(void) { /* only need to destroy two BBs */ BlockBackend *blk; - AioContext *ctx; =20 /* remove S_LOCAL_DISK_ID */ blk =3D blk_by_name(S_LOCAL_DISK_ID); assert(blk); =20 - ctx =3D blk_get_aio_context(blk); - aio_context_acquire(ctx); monitor_remove_blk(blk); blk_unref(blk); - aio_context_release(ctx); =20 /* remove S_ID */ blk =3D blk_by_name(S_ID); assert(blk); =20 - ctx =3D blk_get_aio_context(blk); - aio_context_acquire(ctx); monitor_remove_blk(blk); blk_unref(blk); - aio_context_release(ctx); } =20 static void test_secondary_read(void) diff --git a/util/async.c b/util/async.c index 04ee83d220..dfd44ef612 100644 --- a/util/async.c +++ b/util/async.c @@ -562,12 +562,10 @@ static void co_schedule_bh_cb(void *opaque) Coroutine *co =3D QSLIST_FIRST(&straight); QSLIST_REMOVE_HEAD(&straight, co_scheduled_next); trace_aio_co_schedule_bh_cb(ctx, co); - aio_context_acquire(ctx); =20 /* Protected by write barrier in qemu_aio_coroutine_enter */ qatomic_set(&co->scheduled, NULL); qemu_aio_coroutine_enter(ctx, co); - aio_context_release(ctx); } } =20 @@ -707,9 +705,7 @@ void aio_co_enter(AioContext *ctx, Coroutine *co) assert(self !=3D co); QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next); } else { - aio_context_acquire(ctx); qemu_aio_coroutine_enter(ctx, co); - aio_context_release(ctx); } } =20 diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c index a9a48fffb8..3bfb1ad3ec 100644 --- a/util/vhost-user-server.c +++ b/util/vhost-user-server.c @@ -360,10 +360,7 @@ static void vu_accept(QIONetListener *listener, QIOCha= nnelSocket *sioc, =20 qio_channel_set_follow_coroutine_ctx(server->ioc, true); =20 - /* Attaching the AioContext starts the vu_client_trip coroutine */ - aio_context_acquire(server->ctx); vhost_user_server_attach_aio_context(server, server->ctx); - aio_context_release(server->ctx); } =20 /* server->ctx acquired by caller */ diff --git a/scripts/block-coroutine-wrapper.py b/scripts/block-coroutine-w= rapper.py index 38364fa557..c9c09fcacd 100644 --- a/scripts/block-coroutine-wrapper.py +++ b/scripts/block-coroutine-wrapper.py @@ -278,12 +278,9 @@ def gen_no_co_wrapper(func: FuncDecl) -> str: static void {name}_bh(void *opaque) {{ {struct_name} *s =3D opaque; - AioContext *ctx =3D {func.gen_ctx('s->')}; =20 {graph_lock} - aio_context_acquire(ctx); {func.get_result}{name}({ func.gen_list('s->{name}') }); - aio_context_release(ctx); {graph_unlock} =20 aio_co_wake(s->co); diff --git a/tests/tsan/suppressions.tsan b/tests/tsan/suppressions.tsan index d9a002a2ef..b3ef59c27c 100644 --- a/tests/tsan/suppressions.tsan +++ b/tests/tsan/suppressions.tsan @@ -4,7 +4,6 @@ =20 # TSan reports a double lock on RECURSIVE mutexes. # Since the recursive lock is intentional, we choose to ignore it. 
-mutex:aio_context_acquire mutex:pthread_mutex_lock =20 # TSan reports a race between pthread_mutex_init() and --=20 2.43.0
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi To: qemu-devel@nongnu.org Subject: [PATCH v2 07/14] block: remove bdrv_co_lock() Date: Tue, 5 Dec 2023 13:20:04 -0500 Message-ID: <20231205182011.1976568-8-stefanha@redhat.com> In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com> References: <20231205182011.1976568-1-stefanha@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
The bdrv_co_lock() and bdrv_co_unlock() functions are already no-ops. Remove them. Signed-off-by: Stefan Hajnoczi Reviewed-by: Kevin Wolf --- include/block/block-global-state.h | 14 -------------- block.c | 10 ---------- blockdev.c | 4 ---- 3 files changed, 28 deletions(-) diff --git a/include/block/block-global-state.h b/include/block/block-globa= l-state.h index 0327f1c605..4ec0b217f0 100644 --- a/include/block/block-global-state.h +++ b/include/block/block-global-state.h @@ -267,20 +267,6 @@ int bdrv_debug_remove_breakpoint(BlockDriverState *bs,= const char *tag); int bdrv_debug_resume(BlockDriverState *bs, const char *tag); bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag); =20 -/** - * Locks the AioContext of @bs if it's not the current AioContext. This av= oids - * double locking which could lead to deadlocks: This is a coroutine_fn, s= o we - * know we already own the lock of the current AioContext. - * - * May only be called in the main thread. - */ -void coroutine_fn bdrv_co_lock(BlockDriverState *bs); - -/** - * Unlocks the AioContext of @bs if it's not the current AioContext.
- */ -void coroutine_fn bdrv_co_unlock(BlockDriverState *bs); - bool bdrv_child_change_aio_context(BdrvChild *c, AioContext *ctx, GHashTable *visited, Transaction *tran, Error **errp); diff --git a/block.c b/block.c index 91ace5d2d5..434b7f4d72 100644 --- a/block.c +++ b/block.c @@ -7431,16 +7431,6 @@ void coroutine_fn bdrv_co_leave(BlockDriverState *bs= , AioContext *old_ctx) bdrv_dec_in_flight(bs); } =20 -void coroutine_fn bdrv_co_lock(BlockDriverState *bs) -{ - /* TODO removed in next patch */ -} - -void coroutine_fn bdrv_co_unlock(BlockDriverState *bs) -{ - /* TODO removed in next patch */ -} - static void bdrv_do_remove_aio_context_notifier(BdrvAioNotifier *ban) { GLOBAL_STATE_CODE(); diff --git a/blockdev.c b/blockdev.c index 8a1b28f830..3a5e7222ec 100644 --- a/blockdev.c +++ b/blockdev.c @@ -2264,18 +2264,14 @@ void coroutine_fn qmp_block_resize(const char *devi= ce, const char *node_name, return; } =20 - bdrv_co_lock(bs); bdrv_drained_begin(bs); - bdrv_co_unlock(bs); =20 old_ctx =3D bdrv_co_enter(bs); blk_co_truncate(blk, size, false, PREALLOC_MODE_OFF, 0, errp); bdrv_co_leave(bs, old_ctx); =20 - bdrv_co_lock(bs); bdrv_drained_end(bs); blk_co_unref(blk); - bdrv_co_unlock(bs); } =20 void qmp_block_stream(const char *job_id, const char *device, --=20 2.43.0
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi To: qemu-devel@nongnu.org Subject: [PATCH v2 08/14] scsi: remove AioContext locking Date: Tue, 5 Dec 2023 13:20:05 -0500 Message-ID: <20231205182011.1976568-9-stefanha@redhat.com> In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com> References: <20231205182011.1976568-1-stefanha@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
The AioContext lock no longer has any effect. Remove it.
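
A minimal standalone sketch of why dropping these pairs is safe (this is a model, not QEMU code; AioContextModel and the functions below are hypothetical stand-ins): once acquire/release have been reduced to no-ops, removing them leaves the wrapped body byte-for-byte identical.

/* Standalone model, not QEMU code: a no-op acquire/release pair can
 * be deleted without changing the behavior of the code it wraps. */
#include <stdio.h>

typedef struct { int unused; } AioContextModel;  /* hypothetical stand-in */

static void model_acquire(AioContextModel *ctx) { (void)ctx; /* no-op */ }
static void model_release(AioContextModel *ctx) { (void)ctx; /* no-op */ }

/* Before this series: handlers bracketed their work with the lock. */
static void handle_cmd_locked(AioContextModel *ctx)
{
    model_acquire(ctx);
    printf("handle command\n");
    model_release(ctx);
}

/* After: the bracketing is gone and the body is unchanged. */
static void handle_cmd(AioContextModel *ctx)
{
    (void)ctx;
    printf("handle command\n");
}

int main(void)
{
    AioContextModel ctx = { 0 };
    handle_cmd_locked(&ctx);  /* both calls produce identical output */
    handle_cmd(&ctx);
    return 0;
}
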
Signed-off-by: Stefan Hajnoczi Reviewed-by: Eric Blake Reviewed-by: Kevin Wolf --- include/hw/virtio/virtio-scsi.h | 14 -------------- hw/scsi/scsi-bus.c | 2 -- hw/scsi/scsi-disk.c | 31 +++++-------------------------- hw/scsi/virtio-scsi.c | 18 ------------------ 4 files changed, 5 insertions(+), 60 deletions(-) diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scs= i.h index da8cb928d9..7f0573b1bf 100644 --- a/include/hw/virtio/virtio-scsi.h +++ b/include/hw/virtio/virtio-scsi.h @@ -101,20 +101,6 @@ struct VirtIOSCSI { uint32_t host_features; }; =20 -static inline void virtio_scsi_acquire(VirtIOSCSI *s) -{ - if (s->ctx) { - aio_context_acquire(s->ctx); - } -} - -static inline void virtio_scsi_release(VirtIOSCSI *s) -{ - if (s->ctx) { - aio_context_release(s->ctx); - } -} - void virtio_scsi_common_realize(DeviceState *dev, VirtIOHandleOutput ctrl, VirtIOHandleOutput evt, diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c index f3ec11f892..df68a44b6a 100644 --- a/hw/scsi/scsi-bus.c +++ b/hw/scsi/scsi-bus.c @@ -1731,9 +1731,7 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCS= ISense sense) { scsi_device_for_each_req_async(sdev, scsi_device_purge_one_req, NULL); =20 - aio_context_acquire(blk_get_aio_context(sdev->conf.blk)); blk_drain(sdev->conf.blk); - aio_context_release(blk_get_aio_context(sdev->conf.blk)); scsi_device_set_ua(sdev, sense); } =20 diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c index a5048e0aaf..61be3d395a 100644 --- a/hw/scsi/scsi-disk.c +++ b/hw/scsi/scsi-disk.c @@ -2339,14 +2339,10 @@ static void scsi_disk_reset(DeviceState *dev) { SCSIDiskState *s =3D DO_UPCAST(SCSIDiskState, qdev.qdev, dev); uint64_t nb_sectors; - AioContext *ctx; =20 scsi_device_purge_requests(&s->qdev, SENSE_CODE(RESET)); =20 - ctx =3D blk_get_aio_context(s->qdev.conf.blk); - aio_context_acquire(ctx); blk_get_geometry(s->qdev.conf.blk, &nb_sectors); - aio_context_release(ctx); =20 nb_sectors /=3D s->qdev.blocksize / BDRV_SECTOR_SIZE; if (nb_sectors) { @@ -2545,15 +2541,13 @@ static void scsi_unrealize(SCSIDevice *dev) static void scsi_hd_realize(SCSIDevice *dev, Error **errp) { SCSIDiskState *s =3D DO_UPCAST(SCSIDiskState, qdev, dev); - AioContext *ctx =3D NULL; + /* can happen for devices without drive. 
The error message for missing * backend will be issued in scsi_realize */ if (s->qdev.conf.blk) { - ctx =3D blk_get_aio_context(s->qdev.conf.blk); - aio_context_acquire(ctx); if (!blkconf_blocksizes(&s->qdev.conf, errp)) { - goto out; + return; } } s->qdev.blocksize =3D s->qdev.conf.logical_block_size; @@ -2562,16 +2556,11 @@ static void scsi_hd_realize(SCSIDevice *dev, Error = **errp) s->product =3D g_strdup("QEMU HARDDISK"); } scsi_realize(&s->qdev, errp); -out: - if (ctx) { - aio_context_release(ctx); - } } =20 static void scsi_cd_realize(SCSIDevice *dev, Error **errp) { SCSIDiskState *s =3D DO_UPCAST(SCSIDiskState, qdev, dev); - AioContext *ctx; int ret; uint32_t blocksize =3D 2048; =20 @@ -2587,8 +2576,6 @@ static void scsi_cd_realize(SCSIDevice *dev, Error **= errp) blocksize =3D dev->conf.physical_block_size; } =20 - ctx =3D blk_get_aio_context(dev->conf.blk); - aio_context_acquire(ctx); s->qdev.blocksize =3D blocksize; s->qdev.type =3D TYPE_ROM; s->features |=3D 1 << SCSI_DISK_F_REMOVABLE; @@ -2596,7 +2583,6 @@ static void scsi_cd_realize(SCSIDevice *dev, Error **= errp) s->product =3D g_strdup("QEMU CD-ROM"); } scsi_realize(&s->qdev, errp); - aio_context_release(ctx); } =20 =20 @@ -2727,7 +2713,6 @@ static int get_device_type(SCSIDiskState *s) static void scsi_block_realize(SCSIDevice *dev, Error **errp) { SCSIDiskState *s =3D DO_UPCAST(SCSIDiskState, qdev, dev); - AioContext *ctx; int sg_version; int rc; =20 @@ -2742,9 +2727,6 @@ static void scsi_block_realize(SCSIDevice *dev, Error= **errp) "be removed in a future version"); } =20 - ctx =3D blk_get_aio_context(s->qdev.conf.blk); - aio_context_acquire(ctx); - /* check we are using a driver managing SG_IO (version 3 and after) */ rc =3D blk_ioctl(s->qdev.conf.blk, SG_GET_VERSION_NUM, &sg_version); if (rc < 0) { @@ -2752,18 +2734,18 @@ static void scsi_block_realize(SCSIDevice *dev, Err= or **errp) if (rc !=3D -EPERM) { error_append_hint(errp, "Is this a SCSI device?\n"); } - goto out; + return; } if (sg_version < 30000) { error_setg(errp, "scsi generic interface too old"); - goto out; + return; } =20 /* get device type from INQUIRY data */ rc =3D get_device_type(s); if (rc < 0) { error_setg(errp, "INQUIRY failed"); - goto out; + return; } =20 /* Make a guess for the block size, we'll fix it when the guest sends. 
@@ -2783,9 +2765,6 @@ static void scsi_block_realize(SCSIDevice *dev, Error= **errp) =20 scsi_realize(&s->qdev, errp); scsi_generic_read_device_inquiry(&s->qdev); - -out: - aio_context_release(ctx); } =20 typedef struct SCSIBlockReq { diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c index 4f8d35facc..ca365a70e9 100644 --- a/hw/scsi/virtio-scsi.c +++ b/hw/scsi/virtio-scsi.c @@ -642,9 +642,7 @@ static void virtio_scsi_handle_ctrl(VirtIODevice *vdev,= VirtQueue *vq) return; } =20 - virtio_scsi_acquire(s); virtio_scsi_handle_ctrl_vq(s, vq); - virtio_scsi_release(s); } =20 static void virtio_scsi_complete_cmd_req(VirtIOSCSIReq *req) @@ -882,9 +880,7 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, = VirtQueue *vq) return; } =20 - virtio_scsi_acquire(s); virtio_scsi_handle_cmd_vq(s, vq); - virtio_scsi_release(s); } =20 static void virtio_scsi_get_config(VirtIODevice *vdev, @@ -1031,9 +1027,7 @@ static void virtio_scsi_handle_event(VirtIODevice *vd= ev, VirtQueue *vq) return; } =20 - virtio_scsi_acquire(s); virtio_scsi_handle_event_vq(s, vq); - virtio_scsi_release(s); } =20 static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense se= nse) @@ -1052,9 +1046,7 @@ static void virtio_scsi_change(SCSIBus *bus, SCSIDevi= ce *dev, SCSISense sense) }, }; =20 - virtio_scsi_acquire(s); virtio_scsi_push_event(s, &info); - virtio_scsi_release(s); } } =20 @@ -1071,17 +1063,13 @@ static void virtio_scsi_hotplug(HotplugHandler *hot= plug_dev, DeviceState *dev, VirtIODevice *vdev =3D VIRTIO_DEVICE(hotplug_dev); VirtIOSCSI *s =3D VIRTIO_SCSI(vdev); SCSIDevice *sd =3D SCSI_DEVICE(dev); - AioContext *old_context; int ret; =20 if (s->ctx && !s->dataplane_fenced) { if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)= ) { return; } - old_context =3D blk_get_aio_context(sd->conf.blk); - aio_context_acquire(old_context); ret =3D blk_set_aio_context(sd->conf.blk, s->ctx, errp); - aio_context_release(old_context); if (ret < 0) { return; } @@ -1097,10 +1085,8 @@ static void virtio_scsi_hotplug(HotplugHandler *hotp= lug_dev, DeviceState *dev, }, }; =20 - virtio_scsi_acquire(s); virtio_scsi_push_event(s, &info); scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED)); - virtio_scsi_release(s); } } =20 @@ -1122,17 +1108,13 @@ static void virtio_scsi_hotunplug(HotplugHandler *h= otplug_dev, DeviceState *dev, qdev_simple_device_unplug_cb(hotplug_dev, dev, errp); =20 if (s->ctx) { - virtio_scsi_acquire(s); /* If other users keep the BlockBackend in the iothread, that's ok= */ blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL); - virtio_scsi_release(s); } =20 if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) { - virtio_scsi_acquire(s); virtio_scsi_push_event(s, &info); scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED)); - virtio_scsi_release(s); } } =20 --=20 2.43.0 From nobody Tue May 14 22:12:00 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1701800480; cv=none; d=zohomail.com; s=zohoarc; 
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 09/14] aio-wait: draw equivalence between AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED()
Date: Tue, 5 Dec 2023 13:20:06 -0500
Message-ID: <20231205182011.1976568-10-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

Now that the AioContext lock no longer exists, AIO_WAIT_WHILE() and
AIO_WAIT_WHILE_UNLOCKED() are equivalent.

A future patch will get rid of AIO_WAIT_WHILE_UNLOCKED().

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 include/block/aio-wait.h | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index 5449b6d742..157f105916 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -63,9 +63,6 @@ extern AioWait global_aio_wait;
  * @ctx: the aio context, or NULL if multiple aio contexts (for which the
  *       caller does not hold a lock) are involved in the polling condition.
  * @cond: wait while this conditional expression is true
- * @unlock: whether to unlock and then lock again @ctx. This applies
- * only when waiting for another AioContext from the main loop.
- * Otherwise it's ignored.
  *
  * Wait while a condition is true.  Use this to implement synchronous
  * operations that require event loop activity.
@@ -78,7 +75,7 @@ extern AioWait global_aio_wait;
  * wait on conditions between two IOThreads since that could lead to deadlock,
  * go via the main loop instead.
  */
-#define AIO_WAIT_WHILE_INTERNAL(ctx, cond, unlock) ({              \
+#define AIO_WAIT_WHILE_INTERNAL(ctx, cond) ({                      \
     bool waited_ = false;                                          \
     AioWait *wait_ = &global_aio_wait;                             \
     AioContext *ctx_ = (ctx);                                      \
@@ -95,13 +92,7 @@ extern AioWait global_aio_wait;
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
         while ((cond)) {                                           \
-            if (unlock && ctx_) {                                  \
-                aio_context_release(ctx_);                         \
-            }                                                      \
             aio_poll(qemu_get_aio_context(), true);                \
-            if (unlock && ctx_) {                                  \
-                aio_context_acquire(ctx_);                         \
-            }                                                      \
             waited_ = true;                                        \
         }                                                          \
     }                                                              \
@@ -109,10 +100,11 @@ extern AioWait global_aio_wait;
     waited_; })
 
 #define AIO_WAIT_WHILE(ctx, cond)                                  \
-    AIO_WAIT_WHILE_INTERNAL(ctx, cond, true)
+    AIO_WAIT_WHILE_INTERNAL(ctx, cond)
 
+/* TODO replace this with AIO_WAIT_WHILE() in a future patch */
 #define AIO_WAIT_WHILE_UNLOCKED(ctx, cond)                         \
-    AIO_WAIT_WHILE_INTERNAL(ctx, cond, false)
+    AIO_WAIT_WHILE_INTERNAL(ctx, cond)
 
 /**
  * aio_wait_kick:
-- 
2.43.0
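Since both macros now expand identically, a caller sketch shows why the
distinction can be retired (wait_for_requests() and the in_flight counter
are hypothetical; AIO_WAIT_WHILE() and qatomic_read() are real APIs):

    #include "qemu/osdep.h"
    #include "block/aio-wait.h"

    /*
     * Hypothetical main-loop helper: poll until an in-flight counter
     * drains. With the @unlock parameter gone there is nothing left to
     * release around aio_poll(), so AIO_WAIT_WHILE() and
     * AIO_WAIT_WHILE_UNLOCKED() behave the same.
     */
    static void wait_for_requests(AioContext *ctx, int *in_flight)
    {
        AIO_WAIT_WHILE(ctx, qatomic_read(in_flight) > 0);
    }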
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 10/14] aio: remove aio_context_acquire()/aio_context_release() API
Date: Tue, 5 Dec 2023 13:20:07 -0500
Message-ID: <20231205182011.1976568-11-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

Delete these functions because nothing calls them anymore.

I introduced these APIs in commit 98563fc3ec44 ("aio: add
aio_context_acquire() and aio_context_release()") in 2014. It's with a
sigh of relief that I delete these APIs almost 10 years later.
Thanks to Paolo Bonzini's vision for multi-queue QEMU, we got an
understanding of where the code needed to go in order to remove the
limitations of the original dataplane and the IOThread/AioContext
approach that followed it.

Emanuele Giuseppe Esposito had the splendid determination to convert
large parts of the codebase so that they no longer needed the
AioContext lock. This was a painstaking process, both in the actual
code changes required and the iterations of code review that Emanuele
eked out of Kevin and me over many months.

Kevin Wolf tackled multitudes of graph locking conversions to protect
in-flight I/O from run-time changes to the block graph as well as the
clang Thread Safety Analysis annotations that allow the compiler to
check whether the graph lock is being used correctly.

And me, well, I'm just here to add some pizzazz to the QEMU
multi-queue block layer :). Thank you to everyone who helped with this
effort, including Eric Blake, code reviewer extraordinaire, and others
whom I've forgotten to mention.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 include/block/aio.h | 17 -----------------
 util/async.c        | 10 ----------
 2 files changed, 27 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index f08b358077..af05512a7d 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -278,23 +278,6 @@ void aio_context_ref(AioContext *ctx);
  */
 void aio_context_unref(AioContext *ctx);
 
-/* Take ownership of the AioContext.  If the AioContext will be shared between
- * threads, and a thread does not want to be interrupted, it will have to
- * take ownership around calls to aio_poll().  Otherwise, aio_poll()
- * automatically takes care of calling aio_context_acquire and
- * aio_context_release.
- *
- * Note that this is separate from bdrv_drained_begin/bdrv_drained_end.  A
- * thread still has to call those to avoid being interrupted by the guest.
- *
- * Bottom halves, timers and callbacks can be created or removed without
- * acquiring the AioContext.
- */
-void aio_context_acquire(AioContext *ctx);
-
-/* Relinquish ownership of the AioContext. */
-void aio_context_release(AioContext *ctx);
-
 /**
  * aio_bh_schedule_oneshot_full: Allocate a new bottom half structure that will
  * run only once and as soon as possible.
diff --git a/util/async.c b/util/async.c
index dfd44ef612..460529057c 100644
--- a/util/async.c
+++ b/util/async.c
@@ -719,16 +719,6 @@ void aio_context_unref(AioContext *ctx)
     g_source_unref(&ctx->source);
 }
 
-void aio_context_acquire(AioContext *ctx)
-{
-    /* TODO remove this function */
-}
-
-void aio_context_release(AioContext *ctx)
-{
-    /* TODO remove this function */
-}
-
 QEMU_DEFINE_STATIC_CO_TLS(AioContext *, my_aiocontext)
 
 AioContext *qemu_get_current_aio_context(void)
-- 
2.43.0
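What takes the deleted API's place is ordinary fine-grained locking
around the specific state that needs it. A hedged sketch follows (the
Counter type and counter_add() are invented for illustration; QemuMutex
is the real primitive):

    #include "qemu/osdep.h"
    #include "qemu/thread.h"

    /* Hypothetical state that previously relied on the AioContext lock */
    typedef struct {
        QemuMutex lock;           /* protects in_flight_bytes */
        int64_t in_flight_bytes;
    } Counter;

    static void counter_add(Counter *c, int64_t n)
    {
        qemu_mutex_lock(&c->lock);
        c->in_flight_bytes += n;
        qemu_mutex_unlock(&c->lock);
    }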
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 11/14] docs: remove AioContext lock from IOThread docs
Date: Tue, 5 Dec 2023 13:20:08 -0500
Message-ID: <20231205182011.1976568-12-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

Encourage the use of locking primitives and stop mentioning the
AioContext lock since it is being removed.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
---
 docs/devel/multiple-iothreads.txt | 45 +++++++++++--------------------
 1 file changed, 15 insertions(+), 30 deletions(-)

diff --git a/docs/devel/multiple-iothreads.txt b/docs/devel/multiple-iothreads.txt
index a3e949f6b3..4865196bde 100644
--- a/docs/devel/multiple-iothreads.txt
+++ b/docs/devel/multiple-iothreads.txt
@@ -88,27 +88,18 @@ loop, depending on which AioContext instance the caller passes in.
 
 How to synchronize with an IOThread
 -----------------------------------
-AioContext is not thread-safe so some rules must be followed when using file
-descriptors, event notifiers, timers, or BHs across threads:
+Variables that can be accessed by multiple threads require some form of
+synchronization such as qemu_mutex_lock(), rcu_read_lock(), etc.
 
-1. AioContext functions can always be called safely.  They handle their
-own locking internally.
-
-2. Other threads wishing to access the AioContext must use
-aio_context_acquire()/aio_context_release() for mutual exclusion.  Once the
-context is acquired no other thread can access it or run event loop iterations
-in this AioContext.
-
-Legacy code sometimes nests aio_context_acquire()/aio_context_release() calls.
-Do not use nesting anymore, it is incompatible with the BDRV_POLL_WHILE() macro
-used in the block layer and can lead to hangs.
-
-There is currently no lock ordering rule if a thread needs to acquire multiple
-AioContexts simultaneously.  Therefore, it is only safe for code holding the
-QEMU global mutex to acquire other AioContexts.
+AioContext functions like aio_set_fd_handler(), aio_set_event_notifier(),
+aio_bh_new(), and aio_timer_new() are thread-safe. They can be used to trigger
+activity in an IOThread.
 
 Side note: the best way to schedule a function call across threads is to call
-aio_bh_schedule_oneshot(). No acquire/release or locking is needed.
+aio_bh_schedule_oneshot().
+
+The main loop thread can wait synchronously for a condition using
+AIO_WAIT_WHILE().
 
 AioContext and the block layer
 ------------------------------
@@ -124,22 +115,16 @@ Block layer code must therefore expect to run in an IOThread and avoid using
 old APIs that implicitly use the main loop.  See the "How to program for
 IOThreads" above for information on how to do that.
 
-If main loop code such as a QMP function wishes to access a BlockDriverState
-it must first call aio_context_acquire(bdrv_get_aio_context(bs)) to ensure
-that callbacks in the IOThread do not run in parallel.
-
 Code running in the monitor typically needs to ensure that past
 requests from the guest are completed.  When a block device is running
 in an IOThread, the IOThread can also process requests from the guest
 (via ioeventfd).  To achieve both objects, wrap the code between
 bdrv_drained_begin() and bdrv_drained_end(), thus creating a "drained
-section".  The functions must be called between aio_context_acquire()
-and aio_context_release().  You can freely release and re-acquire the
-AioContext within a drained section.
+section".
 
-Long-running jobs (usually in the form of coroutines) are best scheduled in
-the BlockDriverState's AioContext to avoid the need to acquire/release around
-each bdrv_*() call.  The functions bdrv_add/remove_aio_context_notifier,
-or alternatively blk_add/remove_aio_context_notifier if you use BlockBackends,
-can be used to get a notification whenever bdrv_try_change_aio_context() moves a
+Long-running jobs (usually in the form of coroutines) are often scheduled in
+the BlockDriverState's AioContext.  The functions
+bdrv_add/remove_aio_context_notifier, or alternatively
+blk_add/remove_aio_context_notifier if you use BlockBackends, can be used to
+get a notification whenever bdrv_try_change_aio_context() moves a
 BlockDriverState to a different AioContext.
-- 
2.43.0
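The cross-thread idiom the updated document recommends fits in a few
lines. A sketch under the same assumptions as the text above (do_work()
and kick_iothread() are hypothetical; aio_bh_schedule_oneshot() is the
documented API):

    #include "qemu/osdep.h"
    #include "block/aio.h"

    /* Runs later in the target AioContext's home thread; no lock needed */
    static void do_work(void *opaque)
    {
        /* ... touch state owned by that IOThread ... */
    }

    /* Hypothetical caller: schedule a function call across threads */
    static void kick_iothread(AioContext *iothread_ctx)
    {
        aio_bh_schedule_oneshot(iothread_ctx, do_work, NULL);
    }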
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 12/14] scsi: remove outdated AioContext lock comment
Date: Tue, 5 Dec 2023 13:20:09 -0500
Message-ID: <20231205182011.1976568-13-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

The SCSI subsystem no longer uses the AioContext lock. Request
processing runs exclusively in the BlockBackend's AioContext since
"scsi: only access SCSIDevice->requests from one thread" and hence the
lock is unnecessary.
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
---
 hw/scsi/scsi-disk.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 61be3d395a..2e7e1e9a1c 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -355,7 +355,6 @@ done:
     scsi_req_unref(&r->req);
 }
 
-/* Called with AioContext lock held */
 static void scsi_dma_complete(void *opaque, int ret)
 {
     SCSIDiskReq *r = (SCSIDiskReq *)opaque;
-- 
2.43.0
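The invariant that makes the comment removable can be written down as an
assertion. A hedged sketch (check_home_context() is a made-up name; both
accessor functions are real QEMU APIs):

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "sysemu/block-backend.h"

    /*
     * Hypothetical sanity check: completion callbacks such as
     * scsi_dma_complete() now run only in the BlockBackend's home
     * AioContext, so no lock needs to be held or documented.
     */
    static void check_home_context(BlockBackend *blk)
    {
        assert(blk_get_aio_context(blk) == qemu_get_current_aio_context());
    }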
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 13/14] job: remove outdated AioContext locking comments
Date: Tue, 5 Dec 2023 13:20:10 -0500
Message-ID: <20231205182011.1976568-14-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

The AioContext lock no longer exists.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
---
 include/qemu/job.h | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index e502787dd8..9ea98b5927 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -67,8 +67,6 @@ typedef struct Job {
 
     /**
      * The completion function that will be called when the job completes.
-     * Called with AioContext lock held, since many callback implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     BlockCompletionFunc *cb;
 
@@ -264,9 +262,6 @@ struct JobDriver {
      *
      * This callback will not be invoked if the job has already failed.
      * If it fails, abort and then clean will be called.
-     *
-     * Called with AioContext lock held, since many callbacs implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     int (*prepare)(Job *job);
 
@@ -277,9 +272,6 @@ struct JobDriver {
      *
      * All jobs will complete with a call to either .commit() or .abort() but
      * never both.
-     *
-     * Called with AioContext lock held, since many callback implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     void (*commit)(Job *job);
 
@@ -290,9 +282,6 @@ struct JobDriver {
      *
      * All jobs will complete with a call to either .commit() or .abort() but
      * never both.
-     *
-     * Called with AioContext lock held, since many callback implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     void (*abort)(Job *job);
 
@@ -301,9 +290,6 @@ struct JobDriver {
      * .commit() or .abort(). Regardless of which callback is invoked after
      * completion, .clean() will always be called, even if the job does not
      * belong to a transaction group.
-     *
-     * Called with AioContext lock held, since many callbacs implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     void (*clean)(Job *job);
 
@@ -318,17 +304,12 @@ struct JobDriver {
      * READY).
      * (If the callback is NULL, the job is assumed to terminate
      * without I/O.)
-     *
-     * Called with AioContext lock held, since many callback implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     bool (*cancel)(Job *job, bool force);
 
 
     /**
      * Called when the job is freed.
-     * Called with AioContext lock held, since many callback implementations
-     * use bdrv_* functions that require to hold the lock.
      */
     void (*free)(Job *job);
 };
@@ -424,7 +405,6 @@ void job_ref_locked(Job *job);
  * Release a reference that was previously acquired with job_ref_locked() or
  * job_create(). If it's the last reference to the object, it will be freed.
  *
- * Takes AioContext lock internally to invoke a job->driver callback.
  * Called with job lock held.
  */
 void job_unref_locked(Job *job);
-- 
2.43.0
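The lock that the remaining comments refer to is the job lock, not the
AioContext lock. A minimal sketch of the convention behind the _locked
suffix (drop_job_ref() is hypothetical; JOB_LOCK_GUARD() and
job_unref_locked() come from qemu/job.h):

    #include "qemu/osdep.h"
    #include "qemu/job.h"

    /* Hypothetical caller: release a job reference under the job lock */
    static void drop_job_ref(Job *job)
    {
        JOB_LOCK_GUARD();          /* lock dropped at end of scope */
        job_unref_locked(job);
    }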
From nobody Tue May 14 22:12:00 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 14/14] block: remove outdated AioContext locking comments
Date: Tue, 5 Dec 2023 13:20:11 -0500
Message-ID: <20231205182011.1976568-15-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

The AioContext lock no longer exists.

There is one noteworthy change:

- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.

I am not sure whether deadlocks are still possible. Maybe they have just
moved to the fine-grained locks that have replaced the AioContext. Since
I am not sure if the deadlocks are gone, I have kept the substance
unchanged and just removed mention of the AioContext.
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 include/block/block-common.h         |  3 --
 include/block/block-io.h             |  9 ++--
 include/block/block_int-common.h     |  2 -
 block.c                              | 73 ++++++----------------------
 block/block-backend.c                |  8 ---
 block/export/vhost-user-blk-server.c |  4 --
 tests/qemu-iotests/202               |  2 +-
 tests/qemu-iotests/203               |  3 +-
 8 files changed, 22 insertions(+), 82 deletions(-)

diff --git a/include/block/block-common.h b/include/block/block-common.h
index d7599564db..a846023a09 100644
--- a/include/block/block-common.h
+++ b/include/block/block-common.h
@@ -70,9 +70,6 @@
  * automatically takes the graph rdlock when calling the wrapped function. In
  * the same way, no_co_wrapper_bdrv_wrlock functions automatically take the
  * graph wrlock.
- *
- * If the first parameter of the function is a BlockDriverState, BdrvChild or
- * BlockBackend pointer, the AioContext lock for it is taken in the wrapper.
  */
 #define no_co_wrapper
 #define no_co_wrapper_bdrv_rdlock
diff --git a/include/block/block-io.h b/include/block/block-io.h
index 8eb39a858b..b49e0537dd 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -332,11 +332,10 @@ bdrv_co_copy_range(BdrvChild *src, int64_t src_offset,
  * "I/O or GS" API functions. These functions can run without
  * the BQL, but only in one specific iothread/main loop.
  *
- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.
  *
  * Therefore, these functions are not proper I/O, because they
  * can't run in *any* iothreads, but only in a specific one.
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 4e31d161c5..151279d481 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -1192,8 +1192,6 @@ struct BlockDriverState {
     /* The error object in use for blocking operations on backing_hd */
     Error *backing_blocker;
 
-    /* Protected by AioContext lock */
-
     /*
      * If we are reading a disk image, give its size in sectors.
      * Generally read-only; it is written to by load_snapshot and
diff --git a/block.c b/block.c
index 434b7f4d72..a097772238 100644
--- a/block.c
+++ b/block.c
@@ -1616,11 +1616,6 @@ out:
     g_free(gen_node_name);
 }
 
-/*
- * The caller must always hold @bs AioContext lock, because this function calls
- * bdrv_refresh_total_sectors() which polls when called from non-coroutine
- * context.
- */
 static int no_coroutine_fn GRAPH_UNLOCKED
 bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
                  QDict *options, int open_flags, Error **errp)
@@ -2901,7 +2896,7 @@ uint64_t bdrv_qapi_perm_to_blk_perm(BlockPermission qapi_perm)
  * Replaces the node that a BdrvChild points to without updating permissions.
  *
  * If @new_bs is non-NULL, the parent of @child must already be drained through
- * @child and the caller must hold the AioContext lock for @new_bs.
+ * @child.
  */
 static void GRAPH_WRLOCK
 bdrv_replace_child_noperm(BdrvChild *child, BlockDriverState *new_bs)
@@ -3041,9 +3036,8 @@ static TransactionActionDrv bdrv_attach_child_common_drv = {
  *
  * Returns new created child.
  *
- * The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
- * @child_bs can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * Both @parent_bs and @child_bs can move to a different AioContext in this
+ * function.
  */
 static BdrvChild * GRAPH_WRLOCK
 bdrv_attach_child_common(BlockDriverState *child_bs,
@@ -3142,9 +3136,8 @@ bdrv_attach_child_common(BlockDriverState *child_bs,
 /*
  * Function doesn't update permissions, caller is responsible for this.
  *
- * The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
- * @child_bs can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * Both @parent_bs and @child_bs can move to a different AioContext in this
+ * function.
  *
  * After calling this function, the transaction @tran may only be completed
  * while holding a writer lock for the graph.
@@ -3184,9 +3177,6 @@ bdrv_attach_child_noperm(BlockDriverState *parent_bs,
  *
  * On failure NULL is returned, errp is set and the reference to
  * child_bs is also dropped.
- *
- * The caller must hold the AioContext lock @child_bs, but not that of @ctx
- * (unless @child_bs is already in @ctx).
  */
 BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
                                   const char *child_name,
@@ -3226,9 +3216,6 @@ out:
  *
  * On failure NULL is returned, errp is set and the reference to
  * child_bs is also dropped.
- *
- * If @parent_bs and @child_bs are in different AioContexts, the caller must
- * hold the AioContext lock for @child_bs, but not for @parent_bs.
  */
 BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
                              BlockDriverState *child_bs,
@@ -3418,9 +3405,8 @@ static BdrvChildRole bdrv_backing_role(BlockDriverState *bs)
  *
  * Function doesn't update permissions, caller is responsible for this.
  *
- * The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
- * @child_bs can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * Both @parent_bs and @child_bs can move to a different AioContext in this
+ * function.
  *
  * After calling this function, the transaction @tran may only be completed
  * while holding a writer lock for the graph.
@@ -3513,9 +3499,8 @@ out:
 }
 
 /*
- * The caller must hold the AioContext lock for @backing_hd. Both @bs and
- * @backing_hd can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * Both @bs and @backing_hd can move to a different AioContext in this
+ * function.
  *
  * If a backing child is already present (i.e. we're detaching a node), that
  * child node must be drained.
@@ -3574,8 +3559,6 @@ int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
  * itself, all options starting with "${bdref_key}." are considered part of the
  * BlockdevRef.
  *
- * The caller must hold the main AioContext lock.
- *
  * TODO Can this be unified with bdrv_open_image()?
  */
 int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
@@ -3745,9 +3728,7 @@ done:
  *
  * The BlockdevRef will be removed from the options QDict.
  *
- * The caller must hold the lock of the main AioContext and no other AioContext.
- * @parent can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * @parent can move to a different AioContext in this function.
  */
 BdrvChild *bdrv_open_child(const char *filename,
                            QDict *options, const char *bdref_key,
@@ -3778,9 +3759,7 @@ BdrvChild *bdrv_open_child(const char *filename,
 /*
  * Wrapper on bdrv_open_child() for most popular case: open primary child of bs.
  *
- * The caller must hold the lock of the main AioContext and no other AioContext.
- * @parent can move to a different AioContext in this function. Callers must
- * make sure that their AioContext locking is still correct after this.
+ * @parent can move to a different AioContext in this function.
  */
 int bdrv_open_file_child(const char *filename,
                          QDict *options, const char *bdref_key,
@@ -3923,8 +3902,6 @@ out:
  * The reference parameter may be used to specify an existing block device which
  * should be opened. If specified, neither options nor a filename may be given,
  * nor can an existing BDS be reused (that is, *pbs has to be NULL).
- *
- * The caller must always hold the main AioContext lock.
  */
 static BlockDriverState * no_coroutine_fn
 bdrv_open_inherit(const char *filename, const char *reference, QDict *options,
@@ -4217,7 +4194,6 @@ close_and_fail:
     return NULL;
 }
 
-/* The caller must always hold the main AioContext lock. */
 BlockDriverState *bdrv_open(const char *filename, const char *reference,
                             QDict *options, int flags, Error **errp)
 {
@@ -4665,10 +4641,7 @@ int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only,
  *
 * Return 0 on success, otherwise return < 0 and set @errp.
 *
- * The caller must hold the AioContext lock of @reopen_state->bs.
 * @reopen_state->bs can move to a different AioContext in this function.
- * Callers must make sure that their AioContext locking is still correct after
- * this.
 */
 static int GRAPH_UNLOCKED
 bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
@@ -4801,8 +4774,6 @@ out_rdlock:
 * It is the responsibility of the caller to then call the abort() or
 * commit() for any other BDS that have been left in a prepare() state
 *
- * The caller must hold the AioContext lock of @reopen_state->bs.
- *
 * After calling this function, the transaction @change_child_tran may only be
 * completed while holding a writer lock for the graph.
 */
@@ -5437,8 +5408,6 @@ int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
 * child.
 *
 * This function does not create any image files.
- *
- * The caller must hold the AioContext lock for @bs_top.
 */
 int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
                 Error **errp)
@@ -5545,9 +5514,8 @@ static void bdrv_delete(BlockDriverState *bs)
 * after the call (even on failure), so if the caller intends to reuse the
 * dictionary, it needs to use qobject_ref() before calling bdrv_open.
 *
- * The caller holds the AioContext lock for @bs. It must make sure that @bs
- * stays in the same AioContext, i.e. @options must not refer to nodes in a
- * different AioContext.
+ * The caller must make sure that @bs stays in the same AioContext, i.e.
+ * @options must not refer to nodes in a different AioContext.
 */
 BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
                                    int flags, Error **errp)
@@ -7565,10 +7533,6 @@ static TransactionActionDrv set_aio_context = {
 *
 * Must be called from the main AioContext.
  *
- * The caller must own the AioContext lock for the old AioContext of bs, but it
- * must not own the AioContext lock for new_context (unless new_context is the
- * same as the current context of bs).
- *
  * @visited will accumulate all visited BdrvChild objects. The caller is
  * responsible for freeing the list afterwards.
  */
@@ -7621,13 +7585,6 @@ static bool bdrv_change_aio_context(BlockDriverState *bs, AioContext *ctx,
  *
  * If ignore_child is not NULL, that child (and its subgraph) will not
  * be touched.
- *
- * This function still requires the caller to take the bs current
- * AioContext lock, otherwise draining will fail since AIO_WAIT_WHILE
- * assumes the lock is always held if bs is in another AioContext.
- * For the same reason, it temporarily also holds the new AioContext, since
- * bdrv_drained_end calls BDRV_POLL_WHILE that assumes the lock is taken too.
- * Therefore the new AioContext lock must not be taken by the caller.
  */
 int bdrv_try_change_aio_context(BlockDriverState *bs, AioContext *ctx,
                                 BdrvChild *ignore_child, Error **errp)
@@ -7653,8 +7610,8 @@ int bdrv_try_change_aio_context(BlockDriverState *bs, AioContext *ctx,
 
     /*
      * Linear phase: go through all callbacks collected in the transaction.
-     * Run all callbacks collected in the recursion to switch all nodes
-     * AioContext lock (transaction commit), or undo all changes done in the
+     * Run all callbacks collected in the recursion to switch every node's
+     * AioContext (transaction commit), or undo all changes done in the
      * recursion (transaction abort).
      */
 
diff --git a/block/block-backend.c b/block/block-backend.c
index f412bed274..209eb07528 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -390,8 +390,6 @@ BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm)
  * Both sets of permissions can be changed later using blk_set_perm().
  *
  * Return the new BlockBackend on success, null on failure.
- *
- * Callers must hold the AioContext lock of @bs.
  */
 BlockBackend *blk_new_with_bs(BlockDriverState *bs, uint64_t perm,
                               uint64_t shared_perm, Error **errp)
@@ -416,8 +414,6 @@ BlockBackend *blk_new_with_bs(BlockDriverState *bs, uint64_t perm,
  * Just as with bdrv_open(), after having called this function the reference to
  * @options belongs to the block layer (even on failure).
  *
- * Called without holding an AioContext lock.
- *
  * TODO: Remove @filename and @flags; it should be possible to specify a whole
  * BDS tree just by specifying the @options QDict (or @reference,
  * alternatively). At the time of adding this function, this is not possible,
@@ -872,8 +868,6 @@ BlockBackend *blk_by_public(BlockBackendPublic *public)
 
 /*
  * Disassociates the currently associated BlockDriverState from @blk.
- *
- * The caller must hold the AioContext lock for the BlockBackend.
  */
 void blk_remove_bs(BlockBackend *blk)
 {
@@ -915,8 +909,6 @@ void blk_remove_bs(BlockBackend *blk)
 
 /*
  * Associates a new BlockDriverState with @blk.
- *
- * Callers must hold the AioContext lock of @bs.
  */
 int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
 {
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 16f48388d3..50c358e8cd 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -278,7 +278,6 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
-/* Called with vexp->export.ctx acquired */
 static void vu_blk_drained_begin(void *opaque)
 {
     VuBlkExport *vexp = opaque;
@@ -287,7 +286,6 @@ static void vu_blk_drained_begin(void *opaque)
     vhost_user_server_detach_aio_context(&vexp->vu_server);
 }
 
-/* Called with vexp->export.blk AioContext acquired */
 static void vu_blk_drained_end(void *opaque)
 {
     VuBlkExport *vexp = opaque;
@@ -300,8 +298,6 @@ static void vu_blk_drained_end(void *opaque)
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete
  * and the server->co_trip coroutine has terminated. It will be restarted in
  * vhost_user_server_attach_aio_context().
- *
- * Called with vexp->export.ctx acquired.
  */
 static bool vu_blk_drained_poll(void *opaque)
 {
diff --git a/tests/qemu-iotests/202 b/tests/qemu-iotests/202
index b784dcd791..13304242e5 100755
--- a/tests/qemu-iotests/202
+++ b/tests/qemu-iotests/202
@@ -21,7 +21,7 @@
 # Check that QMP 'transaction' blockdev-snapshot-sync with multiple drives on a
 # single IOThread completes successfully. This particular command triggered a
 # hang due to recursive AioContext locking and BDRV_POLL_WHILE(). Protect
-# against regressions.
+# against regressions even though the AioContext lock no longer exists.
 
 import iotests
 
diff --git a/tests/qemu-iotests/203 b/tests/qemu-iotests/203
index ab80fd0e44..1ba878522b 100755
--- a/tests/qemu-iotests/203
+++ b/tests/qemu-iotests/203
@@ -21,7 +21,8 @@
 # Check that QMP 'migrate' with multiple drives on a single IOThread completes
 # successfully. This particular command triggered a hang in the source QEMU
 # process due to recursive AioContext locking in bdrv_invalidate_all() and
-# BDRV_POLL_WHILE(). Protect against regressions even though the AioContext
+# lock no longer exists.
 
 import iotests
 
-- 
2.43.0
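
As an aside for readers following this series: below is a minimal sketch of
what a graph-modifying caller looks like once the AioContext lock is gone,
i.e. the convention the updated comments describe, where the block graph
writer lock alone protects the change. This is an illustration, not part of
the patch: attach_data_child() is a hypothetical helper, and the no-argument
bdrv_graph_wrlock()/bdrv_graph_wrunlock() variants are assumed (older trees
passed a BlockDriverState * argument), so check the signatures in the tree
you build against. bdrv_attach_child(), child_of_bds and BDRV_CHILD_DATA are
the real identifiers touched by the diff above.

/*
 * Illustrative sketch only: attach @child_bs as the "file" child of
 * @parent_bs under the block graph writer lock. With the AioContext lock
 * removed, no aio_context_acquire()/release() pair is needed around the
 * graph change.
 */
#include "qemu/osdep.h"
#include "block/block_int.h"

static BdrvChild *attach_data_child(BlockDriverState *parent_bs,
                                    BlockDriverState *child_bs,
                                    Error **errp)
{
    BdrvChild *child;

    bdrv_graph_wrlock();    /* the writer lock replaces the AioContext lock */
    child = bdrv_attach_child(parent_bs, child_bs, "file",
                              &child_of_bds, BDRV_CHILD_DATA, errp);
    bdrv_graph_wrunlock();
    return child;
}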