From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:16 -0800
Message-Id: <20170201120533.13838-2-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 01/18] block: move AioContext and QEMUTimer to libqemuutil
Cc: famz@redhat.com, stefanha@redhat.com

AioContext is fairly self-contained; its only dependency is QEMUTimer,
which in turn doesn't need anything else.  So move both out of
block-obj-y to avoid introducing a dependency from io/ to block-obj-y.

main-loop and its dependency iohandler also need to be moved, because
later in this series io/ will call iohandler_get_aio_context.
Signed-off-by: Paolo Bonzini
Reviewed-by: Stefan Hajnoczi
---
        v2->v3: moved earlier and expanded to include main-loop/iohandler due to rebase

 Makefile.objs                       |  4 ---
 block/io.c                          | 29 -------------------
 stubs/Makefile.objs                 |  1 +
 stubs/linux-aio.c                   | 32 +++++++++++++++++++++
 stubs/set-fd-handler.c              | 11 --------
 tests/Makefile.include              | 10 ++++---
 util/Makefile.objs                  |  6 +++-
 aio-posix.c => util/aio-posix.c     |  0
 aio-win32.c => util/aio-win32.c     |  0
 util/aiocb.c                        | 55 +++++++++++++++++++++++++++++++++++
 async.c => util/async.c             |  3 +-
 iohandler.c => util/iohandler.c     |  0
 main-loop.c => util/main-loop.c     |  0
 qemu-timer.c => util/qemu-timer.c   |  0
 thread-pool.c => util/thread-pool.c |  0
 15 files changed, 101 insertions(+), 50 deletions(-)
 create mode 100644 stubs/linux-aio.c
 rename aio-posix.c => util/aio-posix.c (100%)
 rename aio-win32.c => util/aio-win32.c (100%)
 create mode 100644 util/aiocb.c
 rename async.c => util/async.c (99%)
 rename iohandler.c => util/iohandler.c (100%)
 rename main-loop.c => util/main-loop.c (100%)
 rename qemu-timer.c => util/qemu-timer.c (100%)
 rename thread-pool.c => util/thread-pool.c (100%)

diff --git a/Makefile.objs b/Makefile.objs
index 01cef86..40510a2 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -7,12 +7,8 @@ util-obj-y += qmp-introspect.o qapi-types.o qapi-visit.o qapi-event.o
 #######################################################################
 # block-obj-y is code used by both qemu system emulation and qemu-img
 
-block-obj-y = async.o thread-pool.o
 block-obj-y += nbd/
 block-obj-y += block.o blockjob.o
-block-obj-y += main-loop.o iohandler.o qemu-timer.o
-block-obj-$(CONFIG_POSIX) += aio-posix.o
-block-obj-$(CONFIG_WIN32) += aio-win32.o
 block-obj-y += block/
 block-obj-y += qemu-io-cmds.o
 block-obj-$(CONFIG_REPLICATION) += replication.o

diff --git a/block/io.c b/block/io.c
index c42b34a..76dfaf4 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2239,35 +2239,6 @@ BlockAIOCB *bdrv_aio_flush(BlockDriverState *bs,
     return &acb->common;
 }
 
-void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
-                   BlockCompletionFunc *cb, void *opaque)
-{
-    BlockAIOCB *acb;
-
-    acb = g_malloc(aiocb_info->aiocb_size);
-    acb->aiocb_info = aiocb_info;
-    acb->bs = bs;
-    acb->cb = cb;
-    acb->opaque = opaque;
-    acb->refcnt = 1;
-    return acb;
-}
-
-void qemu_aio_ref(void *p)
-{
-    BlockAIOCB *acb = p;
-    acb->refcnt++;
-}
-
-void qemu_aio_unref(void *p)
-{
-    BlockAIOCB *acb = p;
-    assert(acb->refcnt > 0);
-    if (--acb->refcnt == 0) {
-        g_free(acb);
-    }
-}
-
 /**************************************************************/
 /* Coroutine block device emulation */
 
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index a187295..aa6050f 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -16,6 +16,7 @@ stub-obj-y += get-vm-name.o
 stub-obj-y += iothread.o
 stub-obj-y += iothread-lock.o
 stub-obj-y += is-daemonized.o
+stub-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 stub-obj-y += machine-init-done.o
 stub-obj-y += migr-blocker.o
 stub-obj-y += monitor.o

diff --git a/stubs/linux-aio.c b/stubs/linux-aio.c
new file mode 100644
index 0000000..ed47bd4
--- /dev/null
+++ b/stubs/linux-aio.c
@@ -0,0 +1,32 @@
+/*
+ * Linux native AIO support.
+ *
+ * Copyright (C) 2009 IBM, Corp.
+ * Copyright (C) 2009 Red Hat, Inc.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "block/aio.h"
+#include "block/raw-aio.h"
+
+void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
+{
+    abort();
+}
+
+void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
+{
+    abort();
+}
+
+LinuxAioState *laio_init(void)
+{
+    abort();
+}
+
+void laio_cleanup(LinuxAioState *s)
+{
+    abort();
+}

diff --git a/stubs/set-fd-handler.c b/stubs/set-fd-handler.c
index acbe65c..26965de 100644
--- a/stubs/set-fd-handler.c
+++ b/stubs/set-fd-handler.c
@@ -9,14 +9,3 @@ void qemu_set_fd_handler(int fd,
 {
     abort();
 }
-
-void aio_set_fd_handler(AioContext *ctx,
-                        int fd,
-                        bool is_external,
-                        IOHandler *io_read,
-                        IOHandler *io_write,
-                        AioPollFn *io_poll,
-                        void *opaque)
-{
-    abort();
-}

diff --git a/tests/Makefile.include b/tests/Makefile.include
index 33b4f88..bb570bb 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -45,6 +45,9 @@ check-unit-y += tests/test-visitor-serialization$(EXESUF)
 check-unit-y += tests/test-iov$(EXESUF)
 gcov-files-test-iov-y = util/iov.c
 check-unit-y += tests/test-aio$(EXESUF)
+gcov-files-test-aio-y = util/async.c util/qemu-timer.o
+gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c
+gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c
 check-unit-y += tests/test-throttle$(EXESUF)
 gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c
 gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c
@@ -510,7 +513,7 @@ tests/check-qjson$(EXESUF): tests/check-qjson.o $(test-util-obj-y)
 tests/check-qom-interface$(EXESUF): tests/check-qom-interface.o $(test-qom-obj-y)
 tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y)
 
-tests/test-char$(EXESUF): tests/test-char.o qemu-char.o qemu-timer.o $(test-util-obj-y) $(qtest-obj-y) $(test-block-obj-y)
+tests/test-char$(EXESUF): tests/test-char.o qemu-char.o $(qtest-obj-y) $(test-io-obj-y) $(test-util-obj-y)
 tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y)
 tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y)
 tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y)
@@ -543,8 +546,7 @@ tests/test-vmstate$(EXESUF): tests/test-vmstate.o \
 	migration/vmstate.o migration/qemu-file.o \
 	migration/qemu-file-channel.o migration/qjson.o \
 	$(test-io-obj-y)
-tests/test-timed-average$(EXESUF): tests/test-timed-average.o qemu-timer.o \
-	$(test-util-obj-y)
+tests/test-timed-average$(EXESUF): tests/test-timed-average.o $(test-util-obj-y)
 tests/test-base64$(EXESUF): tests/test-base64.o \
 	libqemuutil.a libqemustub.a
 tests/ptimer-test$(EXESUF): tests/ptimer-test.o tests/ptimer-test-stubs.o hw/core/ptimer.o libqemustub.a
@@ -703,7 +705,7 @@ tests/usb-hcd-ehci-test$(EXESUF): tests/usb-hcd-ehci-test.o $(libqos-usb-obj-y)
 tests/usb-hcd-xhci-test$(EXESUF): tests/usb-hcd-xhci-test.o $(libqos-usb-obj-y)
 tests/pc-cpu-test$(EXESUF): tests/pc-cpu-test.o
 tests/postcopy-test$(EXESUF): tests/postcopy-test.o
-tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o qemu-char.o qemu-timer.o $(qtest-obj-y) $(test-io-obj-y) $(libqos-virtio-obj-y) $(libqos-pc-obj-y)
+tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o qemu-char.o $(qtest-obj-y) $(test-io-obj-y) $(libqos-virtio-obj-y) $(libqos-pc-obj-y)
 tests/qemu-iotests/socket_scm_helper$(EXESUF): tests/qemu-iotests/socket_scm_helper.o
 tests/test-qemu-opts$(EXESUF): tests/test-qemu-opts.o $(test-util-obj-y)
 tests/test-write-threshold$(EXESUF): tests/test-write-threshold.o $(test-block-obj-y)

diff --git a/util/Makefile.objs b/util/Makefile.objs
index c1f247d..8c62e22 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -1,14 +1,18 @@
 util-obj-y = osdep.o cutils.o unicode.o qemu-timer-common.o
 util-obj-y += bufferiszero.o
 util-obj-y += lockcnt.o
+util-obj-y += aiocb.o async.o thread-pool.o qemu-timer.o
+util-obj-y += main-loop.o iohandler.o
+util-obj-$(CONFIG_POSIX) += aio-posix.o
 util-obj-$(CONFIG_POSIX) += compatfd.o
 util-obj-$(CONFIG_POSIX) += event_notifier-posix.o
 util-obj-$(CONFIG_POSIX) += mmap-alloc.o
 util-obj-$(CONFIG_POSIX) += oslib-posix.o
 util-obj-$(CONFIG_POSIX) += qemu-openpty.o
 util-obj-$(CONFIG_POSIX) += qemu-thread-posix.o
-util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
 util-obj-$(CONFIG_POSIX) += memfd.o
+util-obj-$(CONFIG_WIN32) += aio-win32.o
+util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
 util-obj-$(CONFIG_WIN32) += oslib-win32.o
 util-obj-$(CONFIG_WIN32) += qemu-thread-win32.o
 util-obj-y += envlist.o path.o module.o

diff --git a/aio-posix.c b/util/aio-posix.c
similarity index 100%
rename from aio-posix.c
rename to util/aio-posix.c
diff --git a/aio-win32.c b/util/aio-win32.c
similarity index 100%
rename from aio-win32.c
rename to util/aio-win32.c
diff --git a/util/aiocb.c b/util/aiocb.c
new file mode 100644
index 0000000..305a9cf
--- /dev/null
+++ b/util/aiocb.c
@@ -0,0 +1,55 @@
+/*
+ * BlockAIOCB allocation
+ *
+ * Copyright (c) 2003-2017 Fabrice Bellard and the QEMU team
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "block/aio.h"
+
+void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
+                   BlockCompletionFunc *cb, void *opaque)
+{
+    BlockAIOCB *acb;
+
+    acb = g_malloc(aiocb_info->aiocb_size);
+    acb->aiocb_info = aiocb_info;
+    acb->bs = bs;
+    acb->cb = cb;
+    acb->opaque = opaque;
+    acb->refcnt = 1;
+    return acb;
+}
+
+void qemu_aio_ref(void *p)
+{
+    BlockAIOCB *acb = p;
+    acb->refcnt++;
+}
+
+void qemu_aio_unref(void *p)
+{
+    BlockAIOCB *acb = p;
+    assert(acb->refcnt > 0);
+    if (--acb->refcnt == 0) {
+        g_free(acb);
+    }
+}

diff --git a/async.c b/util/async.c
similarity index 99%
rename from async.c
rename to util/async.c
index 0d218ab..75519e2
--- a/async.c
+++ b/util/async.c
@@ -1,7 +1,8 @@
 /*
- * QEMU System Emulator
+ * Data plane event loop
  *
  * Copyright (c) 2003-2008 Fabrice Bellard
+ * Copyright (c) 2009-2017 the QEMU team
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal

diff --git a/iohandler.c b/util/iohandler.c
similarity index 100%
rename from iohandler.c
rename to util/iohandler.c
diff --git a/main-loop.c b/util/main-loop.c
similarity index 100%
rename from main-loop.c
rename to util/main-loop.c
diff --git a/qemu-timer.c b/util/qemu-timer.c
similarity index 100%
rename from qemu-timer.c
rename to util/qemu-timer.c
diff --git a/thread-pool.c b/util/thread-pool.c
similarity index 100%
rename from thread-pool.c
rename to util/thread-pool.c
-- 
2.9.3
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:17 -0800
Message-Id: <20170201120533.13838-3-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 02/18] aio: introduce aio_co_schedule and aio_co_wake
Cc: famz@redhat.com, stefanha@redhat.com

aio_co_wake provides the infrastructure to start a coroutine on a "home"
AioContext.  It will be used by CoMutex and CoQueue, so that coroutines
don't jump from one context to another when they go to sleep on a mutex
or waitqueue.  However, it can also be used as a more efficient alternative
to one-shot bottom halves, and saves the effort of tracking which
AioContext a coroutine is running on.

aio_co_schedule is the part of aio_co_wake that starts a coroutine on a
remote AioContext, but it is also useful on its own, e.g. to implement
bdrv_set_aio_context callbacks.

The implementation of aio_co_schedule is based on a lock-free
multiple-producer, single-consumer queue.  The multiple producers use
cmpxchg to add to a LIFO stack.  The consumer (a per-AioContext bottom
half) grabs all items added so far, inverts the list to make it FIFO,
and goes through it one item at a time until it's empty.  The data
structure was inspired by OSv, which uses it in the very code we'll
"port" to QEMU for the thread-safe CoMutex.

Most of the new code is really tests.
Signed-off-by: Paolo Bonzini
Reviewed-by: Stefan Hajnoczi
---
        v2->v3: add coroutine_fn annotation in test-aio-multithread.c [Stefan]
                don't remove yield_until_fd_readable, will resubmit to trivial [Stefan]

 include/block/aio.h          |  32 +++++++
 include/qemu/coroutine_int.h |  11 ++-
 tests/Makefile.include       |   8 +-
 tests/iothread.c             |  91 ++++++++++++++++++
 tests/iothread.h             |  25 +++++
 tests/test-aio-multithread.c | 213 +++++++++++++++++++++++++++++++++++++++++++
 trace-events                 |   4 +
 util/async.c                 |  65 +++++++++++++
 util/qemu-coroutine.c        |   8 ++
 9 files changed, 453 insertions(+), 4 deletions(-)
 create mode 100644 tests/iothread.c
 create mode 100644 tests/iothread.h
 create mode 100644 tests/test-aio-multithread.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 7df271d..614cbc6 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -47,6 +47,7 @@ typedef void QEMUBHFunc(void *opaque);
 typedef bool AioPollFn(void *opaque);
 typedef void IOHandler(void *opaque);
 
+struct Coroutine;
 struct ThreadPool;
 struct LinuxAioState;
 
@@ -108,6 +109,9 @@ struct AioContext {
     bool notified;
     EventNotifier notifier;
 
+    QSLIST_HEAD(, Coroutine) scheduled_coroutines;
+    QEMUBH *co_schedule_bh;
+
     /* Thread pool for performing work and receiving completion callbacks.
      * Has its own locking.
      */
@@ -483,6 +487,34 @@ static inline bool aio_node_check(AioContext *ctx, bool is_external)
 }
 
 /**
+ * aio_co_schedule:
+ * @ctx: the aio context
+ * @co: the coroutine
+ *
+ * Start a coroutine on a remote AioContext.
+ *
+ * The coroutine must not be entered by anyone else while aio_co_schedule()
+ * is active.  In addition the coroutine must have yielded unless ctx
+ * is the context in which the coroutine is running (i.e. the value of
+ * qemu_get_current_aio_context() from the coroutine itself).
+ */
+void aio_co_schedule(AioContext *ctx, struct Coroutine *co);
+
+/**
+ * aio_co_wake:
+ * @co: the coroutine
+ *
+ * Restart a coroutine on the AioContext where it was running last, thus
+ * preventing coroutines from jumping from one context to another when they
+ * go to sleep.
+ *
+ * aio_co_wake may be executed either in coroutine or non-coroutine
+ * context.  The coroutine must not be entered by anyone else while
+ * aio_co_wake() is active.
+ */
+void aio_co_wake(struct Coroutine *co);
+
+/**
  * Return the AioContext whose event loop runs in the current thread.
  *
  * If called from an IOThread this will be the IOThread's AioContext.  If

diff --git a/include/qemu/coroutine_int.h b/include/qemu/coroutine_int.h
index 14d4f1d..cb98892 100644
--- a/include/qemu/coroutine_int.h
+++ b/include/qemu/coroutine_int.h
@@ -40,12 +40,21 @@ struct Coroutine {
     CoroutineEntry *entry;
     void *entry_arg;
     Coroutine *caller;
+
+    /* Only used when the coroutine has terminated. */
     QSLIST_ENTRY(Coroutine) pool_next;
+
     size_t locks_held;
 
-    /* Coroutines that should be woken up when we yield or terminate */
+    /* Coroutines that should be woken up when we yield or terminate.
+     * Only used when the coroutine is running.
+     */
     QSIMPLEQ_HEAD(, Coroutine) co_queue_wakeup;
+
+    /* Only used when the coroutine has yielded.  */
+    AioContext *ctx;
     QSIMPLEQ_ENTRY(Coroutine) co_queue_next;
+    QSLIST_ENTRY(Coroutine) co_scheduled_next;
 };
 
 Coroutine *qemu_coroutine_new(void);

diff --git a/tests/Makefile.include b/tests/Makefile.include
index bb570bb..eb7d14a 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -48,9 +48,10 @@ check-unit-y += tests/test-aio$(EXESUF)
 gcov-files-test-aio-y = util/async.c util/qemu-timer.o
 gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c
 gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c
+check-unit-y += tests/test-aio-multithread$(EXESUF)
+gcov-files-test-aio-multithread-y = $(gcov-files-test-aio-y)
+gcov-files-test-aio-multithread-y += util/qemu-coroutine.c tests/iothread.c
 check-unit-y += tests/test-throttle$(EXESUF)
-gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c
-gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c
 check-unit-y += tests/test-thread-pool$(EXESUF)
 gcov-files-test-thread-pool-y = thread-pool.c
 gcov-files-test-hbitmap-y = util/hbitmap.c
@@ -501,7 +502,7 @@ test-qapi-obj-y = tests/test-qapi-visit.o tests/test-qapi-types.o \
 	$(test-qom-obj-y)
 test-crypto-obj-y = $(crypto-obj-y) $(test-qom-obj-y)
 test-io-obj-y = $(io-obj-y) $(test-crypto-obj-y)
-test-block-obj-y = $(block-obj-y) $(test-io-obj-y)
+test-block-obj-y = $(block-obj-y) $(test-io-obj-y) tests/iothread.o
 
 tests/check-qint$(EXESUF): tests/check-qint.o $(test-util-obj-y)
 tests/check-qstring$(EXESUF): tests/check-qstring.o $(test-util-obj-y)
@@ -516,6 +517,7 @@ tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y)
 tests/test-char$(EXESUF): tests/test-char.o qemu-char.o $(qtest-obj-y) $(test-io-obj-y) $(test-util-obj-y)
 tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y)
 tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y)
+tests/test-aio-multithread$(EXESUF): tests/test-aio-multithread.o $(test-block-obj-y)
 tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y)
 tests/test-blockjob$(EXESUF): tests/test-blockjob.o $(test-block-obj-y) $(test-util-obj-y)
 tests/test-blockjob-txn$(EXESUF): tests/test-blockjob-txn.o $(test-block-obj-y) $(test-util-obj-y)

diff --git a/tests/iothread.c b/tests/iothread.c
new file mode 100644
index 0000000..777d9ee
--- /dev/null
+++ b/tests/iothread.c
@@ -0,0 +1,91 @@
+/*
+ * Event loop thread implementation for unit tests
+ *
+ * Copyright Red Hat Inc., 2013, 2016
+ *
+ * Authors:
+ *   Stefan Hajnoczi
+ *   Paolo Bonzini
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "block/aio.h"
+#include "qemu/main-loop.h"
+#include "qemu/rcu.h"
+#include "iothread.h"
+
+struct IOThread {
+    AioContext *ctx;
+
+    QemuThread thread;
+    QemuMutex init_done_lock;
+    QemuCond init_done_cond;    /* is thread initialization done? */
+    bool stopping;
+};
+
+static __thread IOThread *my_iothread;
+
+AioContext *qemu_get_current_aio_context(void)
+{
+    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
+}
+
+static void *iothread_run(void *opaque)
+{
+    IOThread *iothread = opaque;
+
+    rcu_register_thread();
+
+    my_iothread = iothread;
+    qemu_mutex_lock(&iothread->init_done_lock);
+    iothread->ctx = aio_context_new(&error_abort);
+    qemu_cond_signal(&iothread->init_done_cond);
+    qemu_mutex_unlock(&iothread->init_done_lock);
+
+    while (!atomic_read(&iothread->stopping)) {
+        aio_poll(iothread->ctx, true);
+    }
+
+    rcu_unregister_thread();
+    return NULL;
+}
+
+void iothread_join(IOThread *iothread)
+{
+    iothread->stopping = true;
+    aio_notify(iothread->ctx);
+    qemu_thread_join(&iothread->thread);
+    qemu_cond_destroy(&iothread->init_done_cond);
+    qemu_mutex_destroy(&iothread->init_done_lock);
+    aio_context_unref(iothread->ctx);
+    g_free(iothread);
+}
+
+IOThread *iothread_new(void)
+{
+    IOThread *iothread = g_new0(IOThread, 1);
+
+    qemu_mutex_init(&iothread->init_done_lock);
+    qemu_cond_init(&iothread->init_done_cond);
+    qemu_thread_create(&iothread->thread, NULL, iothread_run,
+                       iothread, QEMU_THREAD_JOINABLE);
+
+    /* Wait for initialization to complete */
+    qemu_mutex_lock(&iothread->init_done_lock);
+    while (iothread->ctx == NULL) {
+        qemu_cond_wait(&iothread->init_done_cond,
+                       &iothread->init_done_lock);
+    }
+    qemu_mutex_unlock(&iothread->init_done_lock);
+    return iothread;
+}
+
+AioContext *iothread_get_aio_context(IOThread *iothread)
+{
+    return iothread->ctx;
+}

diff --git a/tests/iothread.h b/tests/iothread.h
new file mode 100644
index 0000000..4877cea
--- /dev/null
+++ b/tests/iothread.h
@@ -0,0 +1,25 @@
+/*
+ * Event loop thread implementation for unit tests
+ *
+ * Copyright Red Hat Inc., 2013, 2016
+ *
+ * Authors:
+ *   Stefan Hajnoczi
+ *   Paolo Bonzini
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef TEST_IOTHREAD_H
+#define TEST_IOTHREAD_H
+
+#include "block/aio.h"
+#include "qemu/thread.h"
+
+typedef struct IOThread IOThread;
+
+IOThread *iothread_new(void);
+void iothread_join(IOThread *iothread);
+AioContext *iothread_get_aio_context(IOThread *iothread);
+
+#endif

diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c
new file mode 100644
index 0000000..534807d
--- /dev/null
+++ b/tests/test-aio-multithread.c
@@ -0,0 +1,213 @@
+/*
+ * AioContext multithreading tests
+ *
+ * Copyright Red Hat, Inc. 2016
+ *
+ * Authors:
+ *   Paolo Bonzini
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include <glib.h>
+#include "block/aio.h"
+#include "qapi/error.h"
+#include "qemu/coroutine.h"
+#include "qemu/thread.h"
+#include "qemu/error-report.h"
+#include "iothread.h"
+
+/* AioContext management */
+
+#define NUM_CONTEXTS 5
+
+static IOThread *threads[NUM_CONTEXTS];
+static AioContext *ctx[NUM_CONTEXTS];
+static __thread int id = -1;
+
+static QemuEvent done_event;
+
+/* Run a function synchronously on a remote iothread. */
+
+typedef struct CtxRunData {
+    QEMUBHFunc *cb;
+    void *arg;
+} CtxRunData;
+
+static void ctx_run_bh_cb(void *opaque)
+{
+    CtxRunData *data = opaque;
+
+    data->cb(data->arg);
+    qemu_event_set(&done_event);
+}
+
+static void ctx_run(int i, QEMUBHFunc *cb, void *opaque)
+{
+    CtxRunData data = {
+        .cb = cb,
+        .arg = opaque
+    };
+
+    qemu_event_reset(&done_event);
+    aio_bh_schedule_oneshot(ctx[i], ctx_run_bh_cb, &data);
+    qemu_event_wait(&done_event);
+}
+
+/* Starting the iothreads. */
+
+static void set_id_cb(void *opaque)
+{
+    int *i = opaque;
+
+    id = *i;
+}
+
+static void create_aio_contexts(void)
+{
+    int i;
+
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        threads[i] = iothread_new();
+        ctx[i] = iothread_get_aio_context(threads[i]);
+    }
+
+    qemu_event_init(&done_event, false);
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        ctx_run(i, set_id_cb, &i);
+    }
+}
+
+/* Stopping the iothreads. */
+
+static void join_aio_contexts(void)
+{
+    int i;
+
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        aio_context_ref(ctx[i]);
+    }
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        iothread_join(threads[i]);
+    }
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        aio_context_unref(ctx[i]);
+    }
+    qemu_event_destroy(&done_event);
+}
+
+/* Basic test for the stuff above. */
+
+static void test_lifecycle(void)
+{
+    create_aio_contexts();
+    join_aio_contexts();
+}
+
+/* aio_co_schedule test.  */
+
+static Coroutine *to_schedule[NUM_CONTEXTS];
+
+static bool now_stopping;
+
+static int count_retry;
+static int count_here;
+static int count_other;
+
+static bool schedule_next(int n)
+{
+    Coroutine *co;
+
+    co = atomic_xchg(&to_schedule[n], NULL);
+    if (!co) {
+        atomic_inc(&count_retry);
+        return false;
+    }
+
+    if (n == id) {
+        atomic_inc(&count_here);
+    } else {
+        atomic_inc(&count_other);
+    }
+
+    aio_co_schedule(ctx[n], co);
+    return true;
+}
+
+static void finish_cb(void *opaque)
+{
+    schedule_next(id);
+}
+
+static coroutine_fn void test_multi_co_schedule_entry(void *opaque)
+{
+    g_assert(to_schedule[id] == NULL);
+    atomic_mb_set(&to_schedule[id], qemu_coroutine_self());
+
+    while (!atomic_mb_read(&now_stopping)) {
+        int n;
+
+        n = g_test_rand_int_range(0, NUM_CONTEXTS);
+        schedule_next(n);
+        qemu_coroutine_yield();
+
+        g_assert(to_schedule[id] == NULL);
+        atomic_mb_set(&to_schedule[id], qemu_coroutine_self());
+    }
+}
+
+
+static void test_multi_co_schedule(int seconds)
+{
+    int i;
+
+    count_here = count_other = count_retry = 0;
+    now_stopping = false;
+
+    create_aio_contexts();
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        Coroutine *co1 = qemu_coroutine_create(test_multi_co_schedule_entry, NULL);
+        aio_co_schedule(ctx[i], co1);
+    }
+
+    g_usleep(seconds * 1000000);
+
+    atomic_mb_set(&now_stopping, true);
+    for (i = 0; i < NUM_CONTEXTS; i++) {
+        ctx_run(i, finish_cb, NULL);
+        to_schedule[i] = NULL;
+    }
+
+    join_aio_contexts();
+    g_test_message("scheduled %d, queued %d, retry %d, total %d\n",
+                   count_other, count_here, count_retry,
+                   count_here + count_other + count_retry);
+}
+
+static void test_multi_co_schedule_1(void)
+{
+    test_multi_co_schedule(1);
+}
+
+static void test_multi_co_schedule_10(void)
+{
+    test_multi_co_schedule(10);
+}
+
+/* End of tests.  */
+
+int main(int argc, char **argv)
+{
+    init_clocks();
+
+    g_test_init(&argc, &argv, NULL);
+    g_test_add_func("/aio/multi/lifecycle", test_lifecycle);
+    if (g_test_quick()) {
+        g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1);
+    } else {
+        g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10);
+    }
+    return g_test_run();
+}

diff --git a/trace-events b/trace-events
index 839a9d0..149cf99 100644
--- a/trace-events
+++ b/trace-events
@@ -85,6 +85,10 @@ xen_map_cache(uint64_t phys_addr) "want %#"PRIx64
 xen_remap_bucket(uint64_t index) "index %#"PRIx64
 xen_map_cache_return(void* ptr) "%p"
 
+# async.c
+aio_co_schedule(void *ctx, void *co) "ctx %p co %p"
+aio_co_schedule_bh_cb(void *ctx, void *co) "ctx %p co %p"
+
 # monitor.c
 handle_qmp_command(void *mon, const char *cmd_name) "mon %p cmd_name \"%s\""
 monitor_protocol_event_handler(uint32_t event, void *qdict) "event=%d data=%p"

diff --git a/util/async.c b/util/async.c
index 75519e2..44c9c3b 100644
--- a/util/async.c
+++ b/util/async.c
@@ -31,6 +31,8 @@
 #include "qemu/main-loop.h"
 #include "qemu/atomic.h"
 #include "block/raw-aio.h"
+#include "trace/generated-tracers.h"
+#include "qemu/coroutine_int.h"
 
 /***********************************************************/
 /* bottom halves (can be seen as timers which expire ASAP) */
@@ -275,6 +277,9 @@ aio_ctx_finalize(GSource *source)
     }
 #endif
 
+    assert(QSLIST_EMPTY(&ctx->scheduled_coroutines));
+    qemu_bh_delete(ctx->co_schedule_bh);
+
     qemu_lockcnt_lock(&ctx->list_lock);
     assert(!qemu_lockcnt_count(&ctx->list_lock));
     while (ctx->first_bh) {
@@ -364,6 +369,28 @@ static bool event_notifier_poll(void *opaque)
     return atomic_read(&ctx->notified);
 }
 
+static void co_schedule_bh_cb(void *opaque)
+{
+    AioContext *ctx = opaque;
+    QSLIST_HEAD(, Coroutine) straight, reversed;
+
+    QSLIST_MOVE_ATOMIC(&reversed, &ctx->scheduled_coroutines);
+    QSLIST_INIT(&straight);
+
+    while (!QSLIST_EMPTY(&reversed)) {
+        Coroutine *co = QSLIST_FIRST(&reversed);
+        QSLIST_REMOVE_HEAD(&reversed, co_scheduled_next);
+        QSLIST_INSERT_HEAD(&straight, co, co_scheduled_next);
+    }
+
+    while (!QSLIST_EMPTY(&straight)) {
+        Coroutine *co = QSLIST_FIRST(&straight);
+        QSLIST_REMOVE_HEAD(&straight, co_scheduled_next);
+        trace_aio_co_schedule_bh_cb(ctx, co);
+        qemu_coroutine_enter(co);
+    }
+}
+
 AioContext *aio_context_new(Error **errp)
 {
     int ret;
@@ -379,6 +406,10 @@ AioContext *aio_context_new(Error **errp)
     }
     g_source_set_can_recurse(&ctx->source, true);
     qemu_lockcnt_init(&ctx->list_lock);
+
+    ctx->co_schedule_bh = aio_bh_new(ctx, co_schedule_bh_cb, ctx);
+    QSLIST_INIT(&ctx->scheduled_coroutines);
+
     aio_set_event_notifier(ctx, &ctx->notifier,
                            false,
                            (EventNotifierHandler *)
@@ -402,6 +433,40 @@ fail:
     return NULL;
 }
 
+void aio_co_schedule(AioContext *ctx, Coroutine *co)
+{
+    trace_aio_co_schedule(ctx, co);
+    QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
+                              co, co_scheduled_next);
+    qemu_bh_schedule(ctx->co_schedule_bh);
+}
+
+void aio_co_wake(struct Coroutine *co)
+{
+    AioContext *ctx;
+
+    /* Read coroutine before co->ctx.  Matches smp_wmb in
+     * qemu_coroutine_enter.
+     */
+    smp_read_barrier_depends();
+    ctx = atomic_read(&co->ctx);
+
+    if (ctx != qemu_get_current_aio_context()) {
+        aio_co_schedule(ctx, co);
+        return;
+    }
+
+    if (qemu_in_coroutine()) {
+        Coroutine *self = qemu_coroutine_self();
+        assert(self != co);
+        QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next);
+    } else {
+        aio_context_acquire(ctx);
+        qemu_coroutine_enter(co);
+        aio_context_release(ctx);
+    }
+}
+
 void aio_context_ref(AioContext *ctx)
 {
     g_source_ref(&ctx->source);

diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index a5d2f6c..415600d 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -19,6 +19,7 @@
 #include "qemu/atomic.h"
 #include "qemu/coroutine.h"
 #include "qemu/coroutine_int.h"
+#include "block/aio.h"
 
 enum {
     POOL_BATCH_SIZE = 64,
@@ -114,6 +115,13 @@ void qemu_coroutine_enter(Coroutine *co)
     }
 
     co->caller = self;
+    co->ctx = qemu_get_current_aio_context();
+
+    /* Store co->ctx before anything that stores co.  Matches
+     * barrier in aio_co_wake.
+ */ + smp_wmb(); + ret =3D qemu_coroutine_switch(self, co, COROUTINE_ENTER); =20 qemu_co_queue_run_restart(co); --=20 2.9.3 From nobody Thu May 2 06:36:59 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1485950885264429.3724814268055; Wed, 1 Feb 2017 04:08:05 -0800 (PST) Received: from localhost ([::1]:50166 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1cYthg-0002Cv-Dc for importer@patchew.org; Wed, 01 Feb 2017 07:08:00 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:50571) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1cYtfZ-0000yO-4b for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:05:50 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1cYtfY-0002HM-D5 for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:05:49 -0500 Received: from mx1.redhat.com ([209.132.183.28]:59524) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1cYtfY-0002HD-89 for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:05:48 -0500 Received: from int-mx09.intmail.prod.int.phx2.redhat.com (int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 5377881129 for ; Wed, 1 Feb 2017 12:05:48 +0000 (UTC) Received: from donizetti.redhat.com (ovpn-117-148.ams2.redhat.com [10.36.117.148]) by int-mx09.intmail.prod.int.phx2.redhat.com 
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:18 -0800
Message-Id: <20170201120533.13838-4-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 03/18] block-backend: allow blk_prw from coroutine context
Cc: famz@redhat.com, stefanha@redhat.com

qcow2_create2 calls this.  Do not run a nested event loop, as that
breaks when aio_co_wake tries to queue the coroutine on the
co_queue_wakeup list of the currently running one.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 block/block-backend.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index efbf398..1177598 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -880,7 +880,6 @@ static int blk_prw(BlockBackend *blk, int64_t offset, uint8_t *buf,
 {
     QEMUIOVector qiov;
     struct iovec iov;
-    Coroutine *co;
     BlkRwCo rwco;
 
     iov = (struct iovec) {
@@ -897,9 +896,14 @@ static int blk_prw(BlockBackend *blk, int64_t offset, uint8_t *buf,
         .ret    = NOT_DONE,
     };
 
-    co = qemu_coroutine_create(co_entry, &rwco);
-    qemu_coroutine_enter(co);
-    BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE);
+    if (qemu_in_coroutine()) {
+        /* Fast-path if already in coroutine context */
+        co_entry(&rwco);
+    } else {
+        Coroutine *co = qemu_coroutine_create(co_entry, &rwco);
+        qemu_coroutine_enter(co);
+        BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE);
+    }
 
     return rwco.ret;
 }
-- 
2.9.3

From nobody Thu May  2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:19 -0800
Message-Id: <20170201120533.13838-5-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 04/18] test-thread-pool: use generic AioContext infrastructure
Cc: famz@redhat.com, stefanha@redhat.com
Once the thread pool starts using aio_co_wake, it will also need
qemu_get_current_aio_context().  Make test-thread-pool create
an AioContext with qemu_init_main_loop, so that stubs/iothread.c
and tests/iothread.c can provide the rest.

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 tests/test-thread-pool.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index 8dbf66a..91b4ec5 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -6,6 +6,7 @@
 #include "qapi/error.h"
 #include "qemu/timer.h"
 #include "qemu/error-report.h"
+#include "qemu/main-loop.h"
 
 static AioContext *ctx;
 static ThreadPool *pool;
@@ -224,15 +225,9 @@ static void test_cancel_async(void)
 int main(int argc, char **argv)
 {
     int ret;
-    Error *local_error = NULL;
 
-    init_clocks();
-
-    ctx = aio_context_new(&local_error);
-    if (!ctx) {
-        error_reportf_err(local_error, "Failed to create AIO Context: ");
-        exit(1);
-    }
+    qemu_init_main_loop(&error_abort);
+    ctx = qemu_get_current_aio_context();
     pool = aio_get_thread_pool(ctx);
 
     g_test_init(&argc, &argv, NULL);
@@ -245,6 +240,5 @@ int main(int argc, char **argv)
 
     ret = g_test_run();
 
-    aio_context_unref(ctx);
     return ret;
 }
-- 
2.9.3

From nobody Thu May  2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:20 -0800
Message-Id: <20170201120533.13838-6-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 05/18] io: add methods to set I/O handlers on AioContext
Cc: famz@redhat.com, stefanha@redhat.com

This is in preparation for making qio_channel_yield work on
AioContexts other than the main one.

Reviewed-by: Daniel P. Berrange
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 include/io/channel.h | 25 +++++++++++++++++++++++++
 io/channel-command.c | 13 +++++++++++++
 io/channel-file.c    | 11 +++++++++++
 io/channel-socket.c  | 16 +++++++++++-----
 io/channel-tls.c     | 12 ++++++++++++
 io/channel-watch.c   |  6 ++++++
 io/channel.c         | 11 +++++++++++
 7 files changed, 89 insertions(+), 5 deletions(-)

diff --git a/include/io/channel.h b/include/io/channel.h
index 32a9470..0bc7c3f 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -23,6 +23,7 @@
 
 #include "qemu-common.h"
 #include "qom/object.h"
+#include "block/aio.h"
 
 #define TYPE_QIO_CHANNEL "qio-channel"
 #define QIO_CHANNEL(obj) \
@@ -132,6 +133,11 @@ struct QIOChannelClass {
                     off_t offset,
                     int whence,
                     Error **errp);
+    void (*io_set_aio_fd_handler)(QIOChannel *ioc,
+                                  AioContext *ctx,
+                                  IOHandler *io_read,
+                                  IOHandler *io_write,
+                                  void *opaque);
 };
 
 /* General I/O handling functions */
@@ -525,4 +531,23 @@ void qio_channel_yield(QIOChannel *ioc,
 void qio_channel_wait(QIOChannel *ioc,
                       GIOCondition condition);
 
+/**
+ * qio_channel_set_aio_fd_handler:
+ * @ioc: the channel object
+ * @ctx: the AioContext to set the handlers on
+ * @io_read: the read handler
+ * @io_write: the write handler
+ * @opaque: the opaque value passed to the handler
+ *
+ * This is used internally by qio_channel_yield().  It can
+ * be used by channel implementations to forward the handlers
+ * to another channel (e.g. from #QIOChannelTLS to the
+ * underlying socket).
+ */
+void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
+                                    AioContext *ctx,
+                                    IOHandler *io_read,
+                                    IOHandler *io_write,
+                                    void *opaque);
+
 #endif /* QIO_CHANNEL_H */
diff --git a/io/channel-command.c b/io/channel-command.c
index ad25313..319c5ed 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -328,6 +328,18 @@ static int qio_channel_command_close(QIOChannel *ioc,
 }
 
 
+static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
+                                                   AioContext *ctx,
+                                                   IOHandler *io_read,
+                                                   IOHandler *io_write,
+                                                   void *opaque)
+{
+    QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
+    aio_set_fd_handler(ctx, cioc->readfd, false, io_read, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, false, NULL, io_write, NULL, opaque);
+}
+
+
 static GSource *qio_channel_command_create_watch(QIOChannel *ioc,
                                                  GIOCondition condition)
 {
@@ -349,6 +361,7 @@ static void qio_channel_command_class_init(ObjectClass *klass,
     ioc_klass->io_set_blocking = qio_channel_command_set_blocking;
     ioc_klass->io_close = qio_channel_command_close;
     ioc_klass->io_create_watch = qio_channel_command_create_watch;
+    ioc_klass->io_set_aio_fd_handler = qio_channel_command_set_aio_fd_handler;
 }
 
 static const TypeInfo qio_channel_command_info = {
diff --git a/io/channel-file.c b/io/channel-file.c
index e1da243..b383273 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -186,6 +186,16 @@ static int qio_channel_file_close(QIOChannel *ioc,
 }
 
 
+static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
+                                                AioContext *ctx,
+                                                IOHandler *io_read,
+                                                IOHandler *io_write,
+                                                void *opaque)
+{
+    QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
+    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write, NULL, opaque);
+}
+
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
                                               GIOCondition condition)
 {
@@ -206,6 +216,7 @@ static void qio_channel_file_class_init(ObjectClass *klass,
     ioc_klass->io_seek = qio_channel_file_seek;
     ioc_klass->io_close = qio_channel_file_close;
     ioc_klass->io_create_watch = qio_channel_file_create_watch;
+    ioc_klass->io_set_aio_fd_handler = qio_channel_file_set_aio_fd_handler;
 }
 
 static const TypeInfo qio_channel_file_info = {
diff --git a/io/channel-socket.c b/io/channel-socket.c
index f385233..f546c68 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -649,11 +649,6 @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
         qemu_set_block(sioc->fd);
     } else {
         qemu_set_nonblock(sioc->fd);
-#ifdef WIN32
-        WSAEventSelect(sioc->fd, ioc->event,
-                       FD_READ | FD_ACCEPT | FD_CLOSE |
-                       FD_CONNECT | FD_WRITE | FD_OOB);
-#endif
     }
     return 0;
 }
@@ -733,6 +728,16 @@ qio_channel_socket_shutdown(QIOChannel *ioc,
     return 0;
 }
 
+static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
+                                                  AioContext *ctx,
+                                                  IOHandler *io_read,
+                                                  IOHandler *io_write,
+                                                  void *opaque)
+{
+    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, NULL, opaque);
+}
+
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
                                                 GIOCondition condition)
 {
@@ -755,6 +760,7 @@ static void qio_channel_socket_class_init(ObjectClass *klass,
     ioc_klass->io_set_cork = qio_channel_socket_set_cork;
     ioc_klass->io_set_delay = qio_channel_socket_set_delay;
     ioc_klass->io_create_watch = qio_channel_socket_create_watch;
+    ioc_klass->io_set_aio_fd_handler = qio_channel_socket_set_aio_fd_handler;
 }
 
 static const TypeInfo qio_channel_socket_info = {
diff --git a/io/channel-tls.c b/io/channel-tls.c
index f25ab0a..6182702 100644
--- a/io/channel-tls.c
+++ b/io/channel-tls.c
@@ -345,6 +345,17 @@ static int qio_channel_tls_close(QIOChannel *ioc,
     return qio_channel_close(tioc->master, errp);
 }
 
+static void qio_channel_tls_set_aio_fd_handler(QIOChannel *ioc,
+                                               AioContext *ctx,
+                                               IOHandler *io_read,
+                                               IOHandler *io_write,
+                                               void *opaque)
+{
+    QIOChannelTLS *tioc = QIO_CHANNEL_TLS(ioc);
+
+    qio_channel_set_aio_fd_handler(tioc->master, ctx, io_read, io_write, opaque);
+}
+
 static GSource *qio_channel_tls_create_watch(QIOChannel *ioc,
                                              GIOCondition condition)
 {
@@ -372,6 +383,7 @@ static void qio_channel_tls_class_init(ObjectClass *klass,
     ioc_klass->io_close = qio_channel_tls_close;
     ioc_klass->io_shutdown = qio_channel_tls_shutdown;
     ioc_klass->io_create_watch = qio_channel_tls_create_watch;
+    ioc_klass->io_set_aio_fd_handler = qio_channel_tls_set_aio_fd_handler;
 }
 
 static const TypeInfo qio_channel_tls_info = {
diff --git a/io/channel-watch.c b/io/channel-watch.c
index cf1cdff..8640d1c 100644
--- a/io/channel-watch.c
+++ b/io/channel-watch.c
@@ -285,6 +285,12 @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc,
     GSource *source;
     QIOChannelSocketSource *ssource;
 
+#ifdef WIN32
+    WSAEventSelect(socket, ioc->event,
+                   FD_READ | FD_ACCEPT | FD_CLOSE |
+                   FD_CONNECT | FD_WRITE | FD_OOB);
+#endif
+
     source = g_source_new(&qio_channel_socket_source_funcs,
                           sizeof(QIOChannelSocketSource));
     ssource = (QIOChannelSocketSource *)source;
diff --git a/io/channel.c b/io/channel.c
index 80924c1..ce470d7 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -154,6 +154,17 @@ GSource *qio_channel_create_watch(QIOChannel *ioc,
 }
 
 
+void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
+                                    AioContext *ctx,
+                                    IOHandler *io_read,
+                                    IOHandler *io_write,
+                                    void *opaque)
+{
+    QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
+
+    klass->io_set_aio_fd_handler(ioc, ctx, io_read, io_write, opaque);
+}
+
 guint qio_channel_add_watch(QIOChannel *ioc,
                             GIOCondition condition,
                             QIOChannelFunc func,
-- 
2.9.3

From nobody Thu May  2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:21 -0800
Message-Id: <20170201120533.13838-7-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 06/18] io: make qio_channel_yield aware of AioContexts
Cc: famz@redhat.com, stefanha@redhat.com

Support separate coroutines for reading and writing, and place the
read/write handlers on the AioContext that the QIOChannel is registered
with.

Reviewed-by: Daniel P. Berrange
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 include/io/channel.h | 47 ++++++++++++++++++++++++++--
 io/channel.c         | 86 +++++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 109 insertions(+), 24 deletions(-)

diff --git a/include/io/channel.h b/include/io/channel.h
index 0bc7c3f..5d48906 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -23,6 +23,7 @@
 
 #include "qemu-common.h"
 #include "qom/object.h"
+#include "qemu/coroutine.h"
 #include "block/aio.h"
 
 #define TYPE_QIO_CHANNEL "qio-channel"
@@ -81,6 +82,9 @@ struct QIOChannel {
     Object parent;
     unsigned int features; /* bitmask of QIOChannelFeatures */
     char *name;
+    AioContext *ctx;
+    Coroutine *read_coroutine;
+    Coroutine *write_coroutine;
 #ifdef _WIN32
     HANDLE event; /* For use with GSource on Win32 */
 #endif
@@ -503,13 +507,50 @@ guint qio_channel_add_watch(QIOChannel *ioc,
 
 
 /**
+ * qio_channel_attach_aio_context:
+ * @ioc: the channel object
+ * @ctx: the #AioContext to set the handlers on
+ *
+ * Request that qio_channel_yield() sets I/O handlers on
+ * the given #AioContext.  If @ctx is %NULL, qio_channel_yield()
+ * uses QEMU's main thread event loop.
+ *
+ * You can move a #QIOChannel from one #AioContext to another even if
+ * I/O handlers are set for a coroutine.  However, #QIOChannel provides
+ * no synchronization between the calls to qio_channel_yield() and
+ * qio_channel_attach_aio_context().
+ *
+ * Therefore you should first call qio_channel_detach_aio_context()
+ * to ensure that the coroutine is not entered concurrently.  Then,
+ * while the coroutine has yielded, call qio_channel_attach_aio_context(),
+ * and then aio_co_schedule() to place the coroutine on the new
+ * #AioContext.  The calls to qio_channel_detach_aio_context()
+ * and qio_channel_attach_aio_context() should be protected with
+ * aio_context_acquire() and aio_context_release().
+ */
+void qio_channel_attach_aio_context(QIOChannel *ioc,
+                                    AioContext *ctx);
+
+/**
+ * qio_channel_detach_aio_context:
+ * @ioc: the channel object
+ *
+ * Disable any I/O handlers set by qio_channel_yield().  With the
+ * help of aio_co_schedule(), this allows moving a coroutine that was
+ * paused by qio_channel_yield() to another context.
+ */
+void qio_channel_detach_aio_context(QIOChannel *ioc);
+
+/**
  * qio_channel_yield:
  * @ioc: the channel object
  * @condition: the I/O condition to wait for
  *
- * Yields execution from the current coroutine until
- * the condition indicated by @condition becomes
- * available.
+ * Yields execution from the current coroutine until the condition
+ * indicated by @condition becomes available.  @condition must
+ * be either %G_IO_IN or %G_IO_OUT; it cannot contain both.  In
+ * addition, no two coroutine can be waiting on the same condition
+ * and channel at the same time.
  *
  * This must only be called from coroutine context
  */
diff --git a/io/channel.c b/io/channel.c
index ce470d7..cdf7454 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -21,7 +21,7 @@
 #include "qemu/osdep.h"
 #include "io/channel.h"
 #include "qapi/error.h"
-#include "qemu/coroutine.h"
+#include "qemu/main-loop.h"
 
 bool qio_channel_has_feature(QIOChannel *ioc,
                              QIOChannelFeature feature)
@@ -238,36 +238,80 @@ off_t qio_channel_io_seek(QIOChannel *ioc,
 }
 
 
-typedef struct QIOChannelYieldData QIOChannelYieldData;
-struct QIOChannelYieldData {
-    QIOChannel *ioc;
-    Coroutine *co;
-};
+static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc);
+
+static void qio_channel_restart_read(void *opaque)
+{
+    QIOChannel *ioc = opaque;
+    Coroutine *co = ioc->read_coroutine;
 
+    ioc->read_coroutine = NULL;
+    qio_channel_set_aio_fd_handlers(ioc);
+    aio_co_wake(co);
+}
 
-static gboolean qio_channel_yield_enter(QIOChannel *ioc,
-                                        GIOCondition condition,
-                                        gpointer opaque)
+static void qio_channel_restart_write(void *opaque)
 {
-    QIOChannelYieldData *data = opaque;
-    qemu_coroutine_enter(data->co);
-    return FALSE;
+    QIOChannel *ioc = opaque;
+    Coroutine *co = ioc->write_coroutine;
+
+    ioc->write_coroutine = NULL;
+    qio_channel_set_aio_fd_handlers(ioc);
+    aio_co_wake(co);
 }
 
+static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc)
+{
+    IOHandler *rd_handler = NULL, *wr_handler = NULL;
+    AioContext *ctx;
+
+    if (ioc->read_coroutine) {
+        rd_handler = qio_channel_restart_read;
+    }
+    if (ioc->write_coroutine) {
+        wr_handler = qio_channel_restart_write;
+    }
+
+    ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context();
+    qio_channel_set_aio_fd_handler(ioc, ctx, rd_handler, wr_handler, ioc);
+}
+
+void qio_channel_attach_aio_context(QIOChannel *ioc,
+                                    AioContext *ctx)
+{
+    AioContext *old_ctx;
+    if (ioc->ctx == ctx) {
+        return;
+    }
+
+    old_ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context();
+    qio_channel_set_aio_fd_handler(ioc, old_ctx, NULL, NULL, NULL);
+    ioc->ctx = ctx;
+    qio_channel_set_aio_fd_handlers(ioc);
+}
+
+void qio_channel_detach_aio_context(QIOChannel *ioc)
+{
+    ioc->read_coroutine = NULL;
+    ioc->write_coroutine = NULL;
+    qio_channel_set_aio_fd_handlers(ioc);
+    ioc->ctx = NULL;
+}
 
 void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                     GIOCondition condition)
 {
-    QIOChannelYieldData data;
-
     assert(qemu_in_coroutine());
-    data.ioc = ioc;
-    data.co = qemu_coroutine_self();
-    qio_channel_add_watch(ioc,
-                          condition,
-                          qio_channel_yield_enter,
-                          &data,
-                          NULL);
+    if (condition == G_IO_IN) {
+        assert(!ioc->read_coroutine);
+        ioc->read_coroutine = qemu_coroutine_self();
+    } else if (condition == G_IO_OUT) {
+        assert(!ioc->write_coroutine);
+        ioc->write_coroutine = qemu_coroutine_self();
+    } else {
+        abort();
+    }
+    qio_channel_set_aio_fd_handlers(ioc);
     qemu_coroutine_yield();
 }
 
-- 
2.9.3

From nobody Thu May  2 06:36:59 2024
client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1485951404334774.6371228664823; Wed, 1 Feb 2017 04:16:44 -0800 (PST) Received: from localhost ([::1]:50214 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1cYtq5-00039o-Ff for importer@patchew.org; Wed, 01 Feb 2017 07:16:41 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:50674) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1cYtfm-0001B3-Gg for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:06:04 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1cYtfh-0002Ji-I3 for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:06:02 -0500 Received: from mx1.redhat.com ([209.132.183.28]:39866) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1cYtfh-0002Jb-AM for qemu-devel@nongnu.org; Wed, 01 Feb 2017 07:05:57 -0500 Received: from int-mx09.intmail.prod.int.phx2.redhat.com (int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 8ADF361B84 for ; Wed, 1 Feb 2017 12:05:57 +0000 (UTC) Received: from donizetti.redhat.com (ovpn-117-148.ams2.redhat.com [10.36.117.148]) by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id v11C5bLg019474; Wed, 1 Feb 2017 07:05:55 -0500 From: Paolo Bonzini To: qemu-devel@nongnu.org Date: Wed, 1 Feb 2017 04:05:22 -0800 Message-Id: <20170201120533.13838-8-pbonzini@redhat.com> In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com> References: 
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 07/18] nbd: convert to use qio_channel_yield

In the client, read the reply headers from a coroutine, switching the read
side between the "read header" coroutine and the I/O coroutine that reads
the body of the reply.

In the server, if the server can read more requests, it creates a new
"read request" coroutine as soon as a request has been read.  Otherwise,
the new coroutine is created in nbd_request_put.
Signed-off-by: Paolo Bonzini
Reviewed-by: Stefan Hajnoczi
---
v2->v3: add comments about wake/yield sequence [Stefan]

 block/nbd-client.c | 117 ++++++++++++++++++++++++-----------------------------
 block/nbd-client.h |   2 +-
 nbd/client.c       |   2 +-
 nbd/common.c       |   9 +----
 nbd/server.c       |  94 +++++++++++++-----------------------------
 5 files changed, 83 insertions(+), 141 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 06f1532..10fcc9e 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -33,8 +33,9 @@
 #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
 #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ ((uint64_t)(intptr_t)bs))
 
-static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
+static void nbd_recv_coroutines_enter_all(BlockDriverState *bs)
 {
+    NBDClientSession *s = nbd_get_client_session(bs);
     int i;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
@@ -42,6 +43,7 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
             qemu_coroutine_enter(s->recv_coroutine[i]);
         }
     }
+    BDRV_POLL_WHILE(bs, s->read_reply_co);
 }
 
 static void nbd_teardown_connection(BlockDriverState *bs)
@@ -56,7 +58,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     qio_channel_shutdown(client->ioc,
                          QIO_CHANNEL_SHUTDOWN_BOTH,
                          NULL);
-    nbd_recv_coroutines_enter_all(client);
+    nbd_recv_coroutines_enter_all(bs);
 
     nbd_client_detach_aio_context(bs);
     object_unref(OBJECT(client->sioc));
@@ -65,54 +67,43 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     client->ioc = NULL;
 }
 
-static void nbd_reply_ready(void *opaque)
+static coroutine_fn void nbd_read_reply_entry(void *opaque)
 {
-    BlockDriverState *bs = opaque;
-    NBDClientSession *s = nbd_get_client_session(bs);
+    NBDClientSession *s = opaque;
     uint64_t i;
     int ret;
 
-    if (!s->ioc) { /* Already closed */
-        return;
-    }
-
-    if (s->reply.handle == 0) {
-        /* No reply already in flight.  Fetch a header.  It is possible
-         * that another thread has done the same thing in parallel, so
-         * the socket is not readable anymore.
-         */
+    for (;;) {
+        assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->ioc, &s->reply);
-        if (ret == -EAGAIN) {
-            return;
-        }
         if (ret < 0) {
-            s->reply.handle = 0;
-            goto fail;
+            break;
         }
-    }
 
-    /* There's no need for a mutex on the receive side, because the
-     * handler acts as a synchronization point and ensures that only
-     * one coroutine is called until the reply finishes. */
-    i = HANDLE_TO_INDEX(s, s->reply.handle);
-    if (i >= MAX_NBD_REQUESTS) {
-        goto fail;
-    }
+        /* There's no need for a mutex on the receive side, because the
+         * handler acts as a synchronization point and ensures that only
+         * one coroutine is called until the reply finishes.
+         */
+        i = HANDLE_TO_INDEX(s, s->reply.handle);
+        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+            break;
+        }
 
-    if (s->recv_coroutine[i]) {
-        qemu_coroutine_enter(s->recv_coroutine[i]);
-        return;
+        /* We're woken up by the recv_coroutine itself.  Note that there
+         * is no race between yielding and reentering read_reply_co.  This
+         * is because:
+         *
+         * - if recv_coroutine[i] runs on the same AioContext, it is only
+         *   entered after we yield
+         *
+         * - if recv_coroutine[i] runs on a different AioContext, reentering
+         *   read_reply_co happens through a bottom half, which can only
+         *   run after we yield.
+         */
+        aio_co_wake(s->recv_coroutine[i]);
+        qemu_coroutine_yield();
     }
-
-fail:
-    nbd_teardown_connection(bs);
-}
-
-static void nbd_restart_write(void *opaque)
-{
-    BlockDriverState *bs = opaque;
-
-    qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine);
+    s->read_reply_co = NULL;
 }
 
 static int nbd_co_send_request(BlockDriverState *bs,
@@ -120,7 +111,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
                                QEMUIOVector *qiov)
 {
     NBDClientSession *s = nbd_get_client_session(bs);
-    AioContext *aio_context;
     int rc, ret, i;
 
     qemu_co_mutex_lock(&s->send_mutex);
@@ -141,11 +131,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
         return -EPIPE;
     }
 
-    s->send_coroutine = qemu_coroutine_self();
-    aio_context = bdrv_get_aio_context(bs);
-
-    aio_set_fd_handler(aio_context, s->sioc->fd, false,
-                       nbd_reply_ready, nbd_restart_write, NULL, bs);
     if (qiov) {
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
@@ -160,9 +145,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
     } else {
         rc = nbd_send_request(s->ioc, request);
     }
-    aio_set_fd_handler(aio_context, s->sioc->fd, false,
-                       nbd_reply_ready, NULL, NULL, bs);
-    s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
 }
@@ -174,8 +156,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
 {
     int ret;
 
-    /* Wait until we're woken up by the read handler.  TODO: perhaps
-     * peek at the next reply and avoid yielding if it's ours?  */
+    /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
     *reply = s->reply;
     if (reply->handle != request->handle ||
@@ -209,13 +190,19 @@ static void nbd_coroutine_start(NBDClientSession *s,
     /* s->recv_coroutine[i] is set as soon as we get the send_lock.  */
 }
 
-static void nbd_coroutine_end(NBDClientSession *s,
+static void nbd_coroutine_end(BlockDriverState *bs,
                               NBDRequest *request)
 {
+    NBDClientSession *s = nbd_get_client_session(bs);
     int i = HANDLE_TO_INDEX(s, request->handle);
+
     s->recv_coroutine[i] = NULL;
-    if (s->in_flight-- == MAX_NBD_REQUESTS) {
-        qemu_co_queue_next(&s->free_sema);
+    s->in_flight--;
+    qemu_co_queue_next(&s->free_sema);
+
+    /* Kick the read_reply_co to get the next reply.  */
+    if (s->read_reply_co) {
+        aio_co_wake(s->read_reply_co);
     }
 }
 
@@ -241,7 +228,7 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, qiov);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -271,7 +258,7 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -306,7 +293,7 @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -331,7 +318,7 @@ int nbd_client_co_flush(BlockDriverState *bs)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -357,23 +344,23 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 
 }
 
 void nbd_client_detach_aio_context(BlockDriverState *bs)
 {
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       nbd_get_client_session(bs)->sioc->fd,
-                       false, NULL, NULL, NULL, NULL);
+    NBDClientSession *client = nbd_get_client_session(bs);
+    qio_channel_detach_aio_context(QIO_CHANNEL(client->sioc));
 }
 
 void nbd_client_attach_aio_context(BlockDriverState *bs,
                                    AioContext *new_context)
 {
-    aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd,
-                       false, nbd_reply_ready, NULL, NULL, bs);
+    NBDClientSession *client = nbd_get_client_session(bs);
+    qio_channel_attach_aio_context(QIO_CHANNEL(client->sioc), new_context);
+    aio_co_schedule(new_context, client->read_reply_co);
 }
 
 void nbd_client_close(BlockDriverState *bs)
@@ -434,7 +421,7 @@ int nbd_client_init(BlockDriverState *bs,
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
-
+    client->read_reply_co = qemu_coroutine_create(nbd_read_reply_entry, client);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
diff --git a/block/nbd-client.h b/block/nbd-client.h
index f8d6006..8cdfc92 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -25,7 +25,7 @@ typedef struct NBDClientSession {
 
     CoMutex send_mutex;
     CoQueue free_sema;
-    Coroutine *send_coroutine;
+    Coroutine *read_reply_co;
     int in_flight;
 
     Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
diff --git a/nbd/client.c b/nbd/client.c
index ffb0743..5c9dee3 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -778,7 +778,7 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply)
     ssize_t ret;
 
     ret = read_sync(ioc, buf, sizeof(buf));
-    if (ret < 0) {
+    if (ret <= 0) {
         return ret;
     }
 
diff --git a/nbd/common.c b/nbd/common.c
index a5f39ea..dccbb8e 100644
--- a/nbd/common.c
+++ b/nbd/common.c
@@ -43,14 +43,7 @@ ssize_t nbd_wr_syncv(QIOChannel *ioc,
         }
         if (len == QIO_CHANNEL_ERR_BLOCK) {
             if (qemu_in_coroutine()) {
-                /* XXX figure out if we can create a variant on
-                 * qio_channel_yield() that works with AIO contexts
-                 * and consider using that in this branch */
-                qemu_coroutine_yield();
-            } else if (done) {
-                /* XXX this is needed by nbd_reply_ready.  */
-                qio_channel_wait(ioc,
-                                 do_read ? G_IO_IN : G_IO_OUT);
+                qio_channel_yield(ioc, do_read ? G_IO_IN : G_IO_OUT);
             } else {
                 return -EAGAIN;
             }
diff --git a/nbd/server.c b/nbd/server.c
index efe5cb8..ac92fa0 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -95,8 +95,6 @@ struct NBDClient {
     CoMutex send_lock;
     Coroutine *send_coroutine;
 
-    bool can_read;
-
     QTAILQ_ENTRY(NBDClient) next;
     int nb_requests;
     bool closing;
@@ -104,9 +102,7 @@ struct NBDClient {
 
 /* That's all folks */
 
-static void nbd_set_handlers(NBDClient *client);
-static void nbd_unset_handlers(NBDClient *client);
-static void nbd_update_can_read(NBDClient *client);
+static void nbd_client_receive_next_request(NBDClient *client);
 
 static gboolean nbd_negotiate_continue(QIOChannel *ioc,
                                        GIOCondition condition,
@@ -785,7 +781,7 @@ void nbd_client_put(NBDClient *client)
          */
         assert(client->closing);
 
-        nbd_unset_handlers(client);
+        qio_channel_detach_aio_context(client->ioc);
         object_unref(OBJECT(client->sioc));
         object_unref(OBJECT(client->ioc));
         if (client->tlscreds) {
@@ -826,7 +822,6 @@ static NBDRequestData *nbd_request_get(NBDClient *client)
 
     assert(client->nb_requests <= MAX_NBD_REQUESTS - 1);
     client->nb_requests++;
-    nbd_update_can_read(client);
 
     req = g_new0(NBDRequestData, 1);
     nbd_client_get(client);
@@ -844,7 +839,8 @@ static void nbd_request_put(NBDRequestData *req)
     g_free(req);
 
     client->nb_requests--;
-    nbd_update_can_read(client);
+    nbd_client_receive_next_request(client);
+
     nbd_client_put(client);
 }
 
@@ -858,7 +854,13 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
     exp->ctx = ctx;
 
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        nbd_set_handlers(client);
+        qio_channel_attach_aio_context(client->ioc, ctx);
+        if (client->recv_coroutine) {
+            aio_co_schedule(ctx, client->recv_coroutine);
+        }
+        if (client->send_coroutine) {
+            aio_co_schedule(ctx, client->send_coroutine);
+        }
     }
 }
 
@@ -870,7 +872,7 @@ static void blk_aio_detach(void *opaque)
     TRACE("Export %s: Detaching clients from AIO context %p\n", exp->name, exp->ctx);
 
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        nbd_unset_handlers(client);
+        qio_channel_detach_aio_context(client->ioc);
     }
 
     exp->ctx = NULL;
@@ -1045,7 +1047,6 @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply,
     g_assert(qemu_in_coroutine());
     qemu_co_mutex_lock(&client->send_lock);
     client->send_coroutine = qemu_coroutine_self();
-    nbd_set_handlers(client);
 
     if (!len) {
         rc = nbd_send_reply(client->ioc, reply);
@@ -1062,7 +1063,6 @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply,
     }
 
     client->send_coroutine = NULL;
-    nbd_set_handlers(client);
     qemu_co_mutex_unlock(&client->send_lock);
     return rc;
 }
@@ -1079,9 +1079,7 @@ static ssize_t nbd_co_receive_request(NBDRequestData *req,
     ssize_t rc;
 
     g_assert(qemu_in_coroutine());
-    client->recv_coroutine = qemu_coroutine_self();
-    nbd_update_can_read(client);
-
+    assert(client->recv_coroutine == qemu_coroutine_self());
     rc = nbd_receive_request(client->ioc, request);
     if (rc < 0) {
         if (rc != -EAGAIN) {
@@ -1163,23 +1161,25 @@ static ssize_t nbd_co_receive_request(NBDRequestData *req,
 
 out:
     client->recv_coroutine = NULL;
-    nbd_update_can_read(client);
+    nbd_client_receive_next_request(client);
 
     return rc;
 }
 
-static void nbd_trip(void *opaque)
+/* Owns a reference to the NBDClient passed as opaque.  */
+static coroutine_fn void nbd_trip(void *opaque)
 {
     NBDClient *client = opaque;
     NBDExport *exp = client->exp;
     NBDRequestData *req;
-    NBDRequest request;
+    NBDRequest request = { 0 };    /* GCC thinks it can be used uninitialized */
    NBDReply reply;
     ssize_t ret;
     int flags;
 
     TRACE("Reading request.");
     if (client->closing) {
+        nbd_client_put(client);
         return;
     }
 
@@ -1338,60 +1338,21 @@ static void nbd_trip(void *opaque)
 
 done:
     nbd_request_put(req);
+    nbd_client_put(client);
     return;
 
 out:
     nbd_request_put(req);
     client_close(client);
+    nbd_client_put(client);
 }
 
-static void nbd_read(void *opaque)
-{
-    NBDClient *client = opaque;
-
-    if (client->recv_coroutine) {
-        qemu_coroutine_enter(client->recv_coroutine);
-    } else {
-        qemu_coroutine_enter(qemu_coroutine_create(nbd_trip, client));
-    }
-}
-
-static void nbd_restart_write(void *opaque)
-{
-    NBDClient *client = opaque;
-
-    qemu_coroutine_enter(client->send_coroutine);
-}
-
-static void nbd_set_handlers(NBDClient *client)
-{
-    if (client->exp && client->exp->ctx) {
-        aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true,
-                           client->can_read ? nbd_read : NULL,
-                           client->send_coroutine ? nbd_restart_write : NULL,
-                           NULL, client);
-    }
-}
-
-static void nbd_unset_handlers(NBDClient *client)
-{
-    if (client->exp && client->exp->ctx) {
-        aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true, NULL,
-                           NULL, NULL, NULL);
-    }
-}
-
-static void nbd_update_can_read(NBDClient *client)
+static void nbd_client_receive_next_request(NBDClient *client)
 {
-    bool can_read = client->recv_coroutine ||
-                    client->nb_requests < MAX_NBD_REQUESTS;
-
-    if (can_read != client->can_read) {
-        client->can_read = can_read;
-        nbd_set_handlers(client);
-
-        /* There is no need to invoke aio_notify(), since aio_set_fd_handler()
-         * in nbd_set_handlers() will have taken care of that */
+    if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS) {
+        nbd_client_get(client);
+        client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
+        aio_co_schedule(client->exp->ctx, client->recv_coroutine);
     }
 }
 
@@ -1409,11 +1370,13 @@ static coroutine_fn void nbd_co_client_start(void *opaque)
         goto out;
     }
     qemu_co_mutex_init(&client->send_lock);
-    nbd_set_handlers(client);
 
     if (exp) {
         QTAILQ_INSERT_TAIL(&exp->clients, client, next);
     }
+
+    nbd_client_receive_next_request(client);
+
 out:
     g_free(data);
 }
@@ -1439,7 +1402,6 @@ void nbd_client_new(NBDExport *exp,
     object_ref(OBJECT(client->sioc));
     client->ioc = QIO_CHANNEL(sioc);
     object_ref(OBJECT(client->ioc));
-    client->can_read = true;
     client->close = close_fn;
 
     data->client = client;
--
2.9.3
From nobody Thu May 2 06:36:59 2024
Received: from lists.gnu.org
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:23 -0800
Message-Id: <20170201120533.13838-9-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 08/18] coroutine-lock: reschedule coroutine on the AioContext it was running on

As a small step towards the introduction of multiqueue, we want
coroutines to remain on the same AioContext that started them,
unless they are moved explicitly with e.g. aio_co_schedule.  This patch
prevents coroutines from switching AioContext when they use a CoMutex.
For now it does not make much of a difference, because the CoMutex is
not thread-safe and the AioContext itself is used to protect the CoMutex
from concurrent access.  However, this is going to change.

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 util/qemu-coroutine-lock.c | 5 ++---
 util/trace-events          | 1 -
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index 14cf9ce..e6afd1a 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -27,6 +27,7 @@
 #include "qemu/coroutine.h"
 #include "qemu/coroutine_int.h"
 #include "qemu/queue.h"
+#include "block/aio.h"
 #include "trace.h"
 
 void qemu_co_queue_init(CoQueue *queue)
@@ -63,7 +64,6 @@ void qemu_co_queue_run_restart(Coroutine *co)
 
 static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
 {
-    Coroutine *self = qemu_coroutine_self();
     Coroutine *next;
 
     if (QSIMPLEQ_EMPTY(&queue->entries)) {
@@ -72,8 +72,7 @@ static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
 
     while ((next = QSIMPLEQ_FIRST(&queue->entries)) != NULL) {
         QSIMPLEQ_REMOVE_HEAD(&queue->entries, co_queue_next);
-        QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, next, co_queue_next);
-        trace_qemu_co_queue_next(next);
+        aio_co_wake(next);
         if (single) {
             break;
         }
diff --git a/util/trace-events b/util/trace-events
index 2b8aa30..65705c4 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -13,7 +13,6 @@ qemu_coroutine_terminate(void *co) "self %p"
 
 # util/qemu-coroutine-lock.c
 qemu_co_queue_run_restart(void *co) "co %p"
-qemu_co_queue_next(void *nxt) "next %p"
 qemu_co_mutex_lock_entry(void *mutex, void *self) "mutex %p self %p"
 qemu_co_mutex_lock_return(void *mutex, void *self) "mutex %p self %p"
 qemu_co_mutex_unlock_entry(void *mutex, void *self) "mutex %p self %p"
--
2.9.3
From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:24 -0800
Message-Id: <20170201120533.13838-10-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 09/18] blkdebug: reschedule coroutine on the AioContext it is running on

Keep the coroutine on the same AioContext.  Without this change,
there would be a race between yielding the coroutine and reentering it.
While the race cannot happen now, because the code only runs from a
single AioContext, this will change with multiqueue support in the
block layer.
While doing the change, replace the custom bottom half with
aio_co_schedule.

Signed-off-by: Paolo Bonzini
Reviewed-by: Stefan Hajnoczi
---
v2->v3: new patch

 block/blkdebug.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/blkdebug.c b/block/blkdebug.c
index acccf85..d8eee1b 100644
--- a/block/blkdebug.c
+++ b/block/blkdebug.c
@@ -405,12 +405,6 @@ out:
     return ret;
 }
 
-static void error_callback_bh(void *opaque)
-{
-    Coroutine *co = opaque;
-    qemu_coroutine_enter(co);
-}
-
 static int inject_error(BlockDriverState *bs, BlkdebugRule *rule)
 {
     BDRVBlkdebugState *s = bs->opaque;
@@ -423,8 +417,7 @@ static int inject_error(BlockDriverState *bs, BlkdebugRule *rule)
     }
 
     if (!immediately) {
-        aio_bh_schedule_oneshot(bdrv_get_aio_context(bs), error_callback_bh,
-                                qemu_coroutine_self());
+        aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
         qemu_coroutine_yield();
     }
 
--
2.9.3
From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:25 -0800
Message-Id: <20170201120533.13838-11-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 10/18] qed: introduce qed_aio_start_io and qed_aio_next_io_cb

qed_aio_start_io and
qed_aio_next_io will not have to acquire/release the AioContext,
while qed_aio_next_io_cb will.  Split the functionality and gain a
little type-safety in the process.

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 block/qed.c | 39 +++++++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 14 deletions(-)

diff --git a/block/qed.c b/block/qed.c
index 1a7ef0a..7f1c508 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -273,7 +273,19 @@ static CachedL2Table *qed_new_l2_table(BDRVQEDState *s)
     return l2_table;
 }
 
-static void qed_aio_next_io(void *opaque, int ret);
+static void qed_aio_next_io(QEDAIOCB *acb, int ret);
+
+static void qed_aio_start_io(QEDAIOCB *acb)
+{
+    qed_aio_next_io(acb, 0);
+}
+
+static void qed_aio_next_io_cb(void *opaque, int ret)
+{
+    QEDAIOCB *acb = opaque;
+
+    qed_aio_next_io(acb, ret);
+}
 
 static void qed_plug_allocating_write_reqs(BDRVQEDState *s)
 {
@@ -292,7 +304,7 @@ static void qed_unplug_allocating_write_reqs(BDRVQEDState *s)
 
     acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs);
     if (acb) {
-        qed_aio_next_io(acb, 0);
+        qed_aio_start_io(acb);
     }
 }
 
@@ -959,7 +971,7 @@ static void qed_aio_complete(QEDAIOCB *acb, int ret)
         QSIMPLEQ_REMOVE_HEAD(&s->allocating_write_reqs, next);
         acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs);
         if (acb) {
-            qed_aio_next_io(acb, 0);
+            qed_aio_start_io(acb);
         } else if (s->header.features & QED_F_NEED_CHECK) {
             qed_start_need_check_timer(s);
         }
@@ -984,7 +996,7 @@ static void qed_commit_l2_update(void *opaque, int ret)
     acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
     assert(acb->request.l2_table != NULL);
 
-    qed_aio_next_io(opaque, ret);
+    qed_aio_next_io(acb, ret);
 }
 
 /**
@@ -1032,11 +1044,11 @@ static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
     if (need_alloc) {
         /* Write out the whole new L2 table */
         qed_write_l2_table(s, &acb->request, 0, s->table_nelems, true,
-                            qed_aio_write_l1_update, acb);
+                           qed_aio_write_l1_update, acb);
     } else {
         /* Write out only the updated part of the L2 table */
         qed_write_l2_table(s, &acb->request, index, acb->cur_nclusters, false,
-                           qed_aio_next_io, acb);
+                           qed_aio_next_io_cb, acb);
     }
     return;
 
@@ -1088,7 +1100,7 @@ static void qed_aio_write_main(void *opaque, int ret)
     }
 
     if (acb->find_cluster_ret == QED_CLUSTER_FOUND) {
-        next_fn = qed_aio_next_io;
+        next_fn = qed_aio_next_io_cb;
     } else {
         if (s->bs->backing) {
             next_fn = qed_aio_write_flush_before_l2_update;
@@ -1201,7 +1213,7 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
     if (acb->flags & QED_AIOCB_ZERO) {
         /* Skip ahead if the clusters are already zero */
         if (acb->find_cluster_ret == QED_CLUSTER_ZERO) {
-            qed_aio_next_io(acb, 0);
+            qed_aio_start_io(acb);
             return;
         }
 
@@ -1321,18 +1333,18 @@ static void qed_aio_read_data(void *opaque, int ret,
     /* Handle zero cluster and backing file reads */
     if (ret == QED_CLUSTER_ZERO) {
         qemu_iovec_memset(&acb->cur_qiov, 0, 0, acb->cur_qiov.size);
-        qed_aio_next_io(acb, 0);
+        qed_aio_start_io(acb);
         return;
     } else if (ret != QED_CLUSTER_FOUND) {
         qed_read_backing_file(s, acb->cur_pos, &acb->cur_qiov,
-                              &acb->backing_qiov, qed_aio_next_io, acb);
+                              &acb->backing_qiov, qed_aio_next_io_cb, acb);
         return;
     }
 
     BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
     bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
                    &acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
-                   qed_aio_next_io, acb);
+                   qed_aio_next_io_cb, acb);
     return;
 
err:
@@ -1342,9 +1354,8 @@ err:
 /**
  * Begin next I/O or complete the request
  */
-static void qed_aio_next_io(void *opaque, int ret)
+static void qed_aio_next_io(QEDAIOCB *acb, int ret)
 {
-    QEDAIOCB *acb = opaque;
     BDRVQEDState *s = acb_to_s(acb);
     QEDFindClusterFunc *io_fn = (acb->flags & QED_AIOCB_WRITE) ?
qed_aio_write_data : qed_aio_read_data; @@ -1400,7 +1411,7 @@ static BlockAIOCB *qed_aio_setup(BlockDriverState *bs, qemu_iovec_init(&acb->cur_qiov, qiov->niov); =20 /* Start request */ - qed_aio_next_io(acb, 0); + qed_aio_start_io(acb); return &acb->common; } =20 --=20 2.9.3

From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:26 -0800
Message-Id: <20170201120533.13838-12-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 11/18] aio: push aio_context_acquire/release down to dispatching

The AioContext data structures are now protected by list_lock and/or they are walked with FOREACH_RCU primitives. There is no need anymore to acquire the AioContext for the entire duration of aio_dispatch. Instead, just acquire it before and after invoking the callbacks. The next step is then to push it further down.
Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini --- util/aio-posix.c | 25 +++++++++++-------------- util/aio-win32.c | 15 +++++++-------- util/async.c | 2 ++ 3 files changed, 20 insertions(+), 22 deletions(-) diff --git a/util/aio-posix.c b/util/aio-posix.c index a8d7090..b590c5a 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -402,7 +402,9 @@ static bool aio_dispatch_handlers(AioContext *ctx) (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && aio_node_check(ctx, node->is_external) && node->io_read) { + aio_context_acquire(ctx); node->io_read(node->opaque); + aio_context_release(ctx); =20 /* aio_notify() does not count as progress */ if (node->opaque !=3D &ctx->notifier) { @@ -413,7 +415,9 @@ static bool aio_dispatch_handlers(AioContext *ctx) (revents & (G_IO_OUT | G_IO_ERR)) && aio_node_check(ctx, node->is_external) && node->io_write) { + aio_context_acquire(ctx); node->io_write(node->opaque); + aio_context_release(ctx); progress =3D true; } =20 @@ -450,7 +454,9 @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds) } =20 /* Run our timers */ + aio_context_acquire(ctx); progress |=3D timerlistgroup_run_timers(&ctx->tlg); + aio_context_release(ctx); =20 return progress; } @@ -597,9 +603,6 @@ bool aio_poll(AioContext *ctx, bool blocking) int64_t timeout; int64_t start =3D 0; =20 - aio_context_acquire(ctx); - progress =3D false; - /* aio_notify can avoid the expensive event_notifier_set if * everything (file descriptors, bottom halves, timers) will * be re-evaluated before the next blocking poll(). This is @@ -617,9 +620,11 @@ bool aio_poll(AioContext *ctx, bool blocking) start =3D qemu_clock_get_ns(QEMU_CLOCK_REALTIME); } =20 - if (try_poll_mode(ctx, blocking)) { - progress =3D true; - } else { + aio_context_acquire(ctx); + progress =3D try_poll_mode(ctx, blocking); + aio_context_release(ctx); + + if (!progress) { assert(npfd =3D=3D 0); =20 /* fill pollfds */ @@ -636,9 +641,6 @@ bool aio_poll(AioContext *ctx, bool blocking) timeout =3D blocking ? 
aio_compute_timeout(ctx) : 0; =20 /* wait until next event */ - if (timeout) { - aio_context_release(ctx); - } if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) { AioHandler epoll_handler; =20 @@ -650,9 +652,6 @@ bool aio_poll(AioContext *ctx, bool blocking) } else { ret =3D qemu_poll_ns(pollfds, npfd, timeout); } - if (timeout) { - aio_context_acquire(ctx); - } } =20 if (blocking) { @@ -717,8 +716,6 @@ bool aio_poll(AioContext *ctx, bool blocking) progress =3D true; } =20 - aio_context_release(ctx); - return progress; } =20 diff --git a/util/aio-win32.c b/util/aio-win32.c index 900524c..ab6d0e5 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -266,7 +266,9 @@ static bool aio_dispatch_handlers(AioContext *ctx, HAND= LE event) (revents || event_notifier_get_handle(node->e) =3D=3D event) && node->io_notify) { node->pfd.revents =3D 0; + aio_context_acquire(ctx); node->io_notify(node->e); + aio_context_release(ctx); =20 /* aio_notify() does not count as progress */ if (node->e !=3D &ctx->notifier) { @@ -278,11 +280,15 @@ static bool aio_dispatch_handlers(AioContext *ctx, HA= NDLE event) (node->io_read || node->io_write)) { node->pfd.revents =3D 0; if ((revents & G_IO_IN) && node->io_read) { + aio_context_acquire(ctx); node->io_read(node->opaque); + aio_context_release(ctx); progress =3D true; } if ((revents & G_IO_OUT) && node->io_write) { + aio_context_acquire(ctx); node->io_write(node->opaque); + aio_context_release(ctx); progress =3D true; } =20 @@ -329,7 +335,6 @@ bool aio_poll(AioContext *ctx, bool blocking) int count; int timeout; =20 - aio_context_acquire(ctx); progress =3D false; =20 /* aio_notify can avoid the expensive event_notifier_set if @@ -371,17 +376,11 @@ bool aio_poll(AioContext *ctx, bool blocking) =20 timeout =3D blocking && !have_select_revents ? 
qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0; - if (timeout) { - aio_context_release(ctx); - } ret =3D WaitForMultipleObjects(count, events, FALSE, timeout); if (blocking) { assert(first); atomic_sub(&ctx->notify_me, 2); } - if (timeout) { - aio_context_acquire(ctx); - } =20 if (first) { aio_notify_accept(ctx); @@ -404,8 +403,8 @@ bool aio_poll(AioContext *ctx, bool blocking) progress |=3D aio_dispatch_handlers(ctx, event); } while (count > 0); =20 + aio_context_acquire(ctx); progress |=3D timerlistgroup_run_timers(&ctx->tlg); - aio_context_release(ctx); return progress; } diff --git a/util/async.c b/util/async.c index 44c9c3b..8e65e4b 100644 --- a/util/async.c +++ b/util/async.c @@ -114,7 +114,9 @@ int aio_bh_poll(AioContext *ctx) ret =3D 1; } bh->idle =3D 0; + aio_context_acquire(ctx); aio_bh_call(bh); + aio_context_release(ctx); } if (bh->deleted) { deleted =3D true; --=20 2.9.3

From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:27 -0800
Message-Id: <20170201120533.13838-13-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 12/18] block: explicitly acquire aiocontext in timers that need it

Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini ---
block/curl.c | 2 ++ block/io.c | 5 +++++ block/iscsi.c | 8 ++++++-- block/null.c | 4 ++++ block/qed.c | 12 ++++++++++++ block/qed.h | 3 +++ block/throttle-groups.c | 2 ++ util/aio-posix.c | 2 -- util/aio-win32.c | 2 -- util/qemu-coroutine-sleep.c | 2 +- 10 files changed, 35 insertions(+), 7 deletions(-) diff --git a/block/curl.c b/block/curl.c index 792fef8..65e6da1 100644 --- a/block/curl.c +++ b/block/curl.c @@ -424,9 +424,11 @@ static void curl_multi_timeout_do(void *arg) return; } =20 + aio_context_acquire(s->aio_context); curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running); =20 curl_multi_check_completion(s); + aio_context_release(s->aio_context); #else abort(); #endif diff --git a/block/io.c b/block/io.c index 76dfaf4..dd6c74f 100644 --- a/block/io.c +++ b/block/io.c @@ -2080,6 +2080,11 @@ void bdrv_aio_cancel(BlockAIOCB *acb) if (acb->aiocb_info->get_aio_context) { aio_poll(acb->aiocb_info->get_aio_context(acb), true); } else if (acb->bs) { + /* qemu_aio_ref and qemu_aio_unref are not thread-safe, so + * assert that we're not using an I/O thread. Thread-safe + * code should use bdrv_aio_cancel_async exclusively. + */ + assert(bdrv_get_aio_context(acb->bs) =3D=3D qemu_get_aio_conte= xt()); aio_poll(bdrv_get_aio_context(acb->bs), true); } else { abort(); diff --git a/block/iscsi.c b/block/iscsi.c index 1860f1b..664b71a 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -174,7 +174,7 @@ static void iscsi_retry_timer_expired(void *opaque) struct IscsiTask *iTask =3D opaque; iTask->complete =3D 1; if (iTask->co) { - qemu_coroutine_enter(iTask->co); + aio_co_wake(iTask->co); } } =20 @@ -1392,16 +1392,20 @@ static void iscsi_nop_timed_event(void *opaque) { IscsiLun *iscsilun =3D opaque; =20 + aio_context_acquire(iscsilun->aio_context); if (iscsi_get_nops_in_flight(iscsilun->iscsi) >=3D MAX_NOP_FAILURES) { error_report("iSCSI: NOP timeout. 
Reconnecting..."); iscsilun->request_timed_out =3D true; } else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != =3D 0) { error_report("iSCSI: failed to sent NOP-Out. Disabling NOP message= s."); - return; + goto out; } =20 timer_mod(iscsilun->nop_timer, qemu_clock_get_ms(QEMU_CLOCK_REALTIME) = + NOP_INTERVAL); iscsi_set_events(iscsilun); + +out: + aio_context_release(iscsilun->aio_context); } =20 static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp) diff --git a/block/null.c b/block/null.c index b300390..356209a 100644 --- a/block/null.c +++ b/block/null.c @@ -141,7 +141,11 @@ static void null_bh_cb(void *opaque) static void null_timer_cb(void *opaque) { NullAIOCB *acb =3D opaque; + AioContext *ctx =3D bdrv_get_aio_context(acb->common.bs); + + aio_context_acquire(ctx); acb->common.cb(acb->common.opaque, 0); + aio_context_release(ctx); timer_deinit(&acb->timer); qemu_aio_unref(acb); } diff --git a/block/qed.c b/block/qed.c index 7f1c508..a21d025 100644 --- a/block/qed.c +++ b/block/qed.c @@ -345,10 +345,22 @@ static void qed_need_check_timer_cb(void *opaque) =20 trace_qed_need_check_timer_cb(s); =20 + qed_acquire(s); qed_plug_allocating_write_reqs(s); =20 /* Ensure writes are on disk before clearing flag */ bdrv_aio_flush(s->bs->file->bs, qed_clear_need_check, s); + qed_release(s); +} + +void qed_acquire(BDRVQEDState *s) +{ + aio_context_acquire(bdrv_get_aio_context(s->bs)); +} + +void qed_release(BDRVQEDState *s) +{ + aio_context_release(bdrv_get_aio_context(s->bs)); } =20 static void qed_start_need_check_timer(BDRVQEDState *s) diff --git a/block/qed.h b/block/qed.h index 9676ab9..ce8c314 100644 --- a/block/qed.h +++ b/block/qed.h @@ -198,6 +198,9 @@ enum { */ typedef void QEDFindClusterFunc(void *opaque, int ret, uint64_t offset, si= ze_t len); =20 +void qed_acquire(BDRVQEDState *s); +void qed_release(BDRVQEDState *s); + /** * Generic callback for chaining async callbacks */ diff --git a/block/throttle-groups.c 
b/block/throttle-groups.c index 17b2efb..aade5de 100644 --- a/block/throttle-groups.c +++ b/block/throttle-groups.c @@ -416,7 +416,9 @@ static void timer_cb(BlockBackend *blk, bool is_write) qemu_mutex_unlock(&tg->lock); =20 /* Run the request that was waiting for this timer */ + aio_context_acquire(blk_get_aio_context(blk)); empty_queue =3D !qemu_co_enter_next(&blkp->throttled_reqs[is_write]); + aio_context_release(blk_get_aio_context(blk)); =20 /* If the request queue was empty then we have to take care of * scheduling the next one */ diff --git a/util/aio-posix.c b/util/aio-posix.c index b590c5a..4dc597c 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -454,9 +454,7 @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds) } =20 /* Run our timers */ - aio_context_acquire(ctx); progress |=3D timerlistgroup_run_timers(&ctx->tlg); - aio_context_release(ctx); =20 return progress; } diff --git a/util/aio-win32.c b/util/aio-win32.c index ab6d0e5..810e1c6 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -403,9 +403,7 @@ bool aio_poll(AioContext *ctx, bool blocking) progress |=3D aio_dispatch_handlers(ctx, event); } while (count > 0); =20 - aio_context_acquire(ctx); progress |=3D timerlistgroup_run_timers(&ctx->tlg); - aio_context_release(ctx); return progress; } =20 diff --git a/util/qemu-coroutine-sleep.c b/util/qemu-coroutine-sleep.c index 25de3ed..9c56550 100644 --- a/util/qemu-coroutine-sleep.c +++ b/util/qemu-coroutine-sleep.c @@ -25,7 +25,7 @@ static void co_sleep_cb(void *opaque) { CoSleepCB *sleep_cb =3D opaque; =20 - qemu_coroutine_enter(sleep_cb->co); + aio_co_wake(sleep_cb->co); } =20 void coroutine_fn co_aio_sleep_ns(AioContext *ctx, QEMUClockType type, --=20 2.9.3

From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:28 -0800
Message-Id: <20170201120533.13838-14-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 13/18] block: explicitly acquire aiocontext in callbacks that need it

This covers both file descriptor callbacks and polling callbacks, since they execute related code. Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini --- block/curl.c | 16 +++++++++++++--- block/iscsi.c | 4 ++++ block/linux-aio.c | 4 ++++ block/nfs.c | 6 ++++++ block/sheepdog.c | 29 +++++++++++++++-------------- block/ssh.c | 29 +++++++++-------------------- block/win32-aio.c | 10 ++++++---- hw/block/virtio-blk.c | 5 ++++- hw/scsi/virtio-scsi.c | 6 ++++++ util/aio-posix.c | 7 ------- util/aio-win32.c | 6 ------ 11 files changed, 67 insertions(+), 55 deletions(-) diff --git a/block/curl.c b/block/curl.c index 65e6da1..05b9ca3 100644 --- a/block/curl.c +++ b/block/curl.c @@ -386,9 +386,8 @@ static void curl_multi_check_completion(BDRVCURLState *= s) } } =20 -static void curl_multi_do(void *arg) +static void curl_multi_do_locked(CURLState *s) { - CURLState *s =3D (CURLState *)arg; CURLSocket *socket, *next_socket; int running; int r; @@ -406,12 +405,23 @@ static void curl_multi_do(void *arg) } } =20 +static void curl_multi_do(void *arg) +{ + CURLState *s =3D (CURLState *)arg; + + aio_context_acquire(s->s->aio_context); + curl_multi_do_locked(s); + aio_context_release(s->s->aio_context); +} + static void
curl_multi_read(void *arg) { CURLState *s =3D (CURLState *)arg; =20 - curl_multi_do(arg); + aio_context_acquire(s->s->aio_context); + curl_multi_do_locked(s); curl_multi_check_completion(s->s); + aio_context_release(s->s->aio_context); } =20 static void curl_multi_timeout_do(void *arg) diff --git a/block/iscsi.c b/block/iscsi.c index 664b71a..303b108 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -394,8 +394,10 @@ iscsi_process_read(void *arg) IscsiLun *iscsilun =3D arg; struct iscsi_context *iscsi =3D iscsilun->iscsi; =20 + aio_context_acquire(iscsilun->aio_context); iscsi_service(iscsi, POLLIN); iscsi_set_events(iscsilun); + aio_context_release(iscsilun->aio_context); } =20 static void @@ -404,8 +406,10 @@ iscsi_process_write(void *arg) IscsiLun *iscsilun =3D arg; struct iscsi_context *iscsi =3D iscsilun->iscsi; =20 + aio_context_acquire(iscsilun->aio_context); iscsi_service(iscsi, POLLOUT); iscsi_set_events(iscsilun); + aio_context_release(iscsilun->aio_context); } =20 static int64_t sector_lun2qemu(int64_t sector, IscsiLun *iscsilun) diff --git a/block/linux-aio.c b/block/linux-aio.c index 03ab741..277c016 100644 --- a/block/linux-aio.c +++ b/block/linux-aio.c @@ -251,7 +251,9 @@ static void qemu_laio_completion_cb(EventNotifier *e) LinuxAioState *s =3D container_of(e, LinuxAioState, e); =20 if (event_notifier_test_and_clear(&s->e)) { + aio_context_acquire(s->aio_context); qemu_laio_process_completions_and_submit(s); + aio_context_release(s->aio_context); } } =20 @@ -265,7 +267,9 @@ static bool qemu_laio_poll_cb(void *opaque) return false; } =20 + aio_context_acquire(s->aio_context); qemu_laio_process_completions_and_submit(s); + aio_context_release(s->aio_context); return true; } =20 diff --git a/block/nfs.c b/block/nfs.c index a564340..803faf9 100644 --- a/block/nfs.c +++ b/block/nfs.c @@ -207,15 +207,21 @@ static void nfs_set_events(NFSClient *client) static void nfs_process_read(void *arg) { NFSClient *client =3D arg; + + 
aio_context_acquire(client->aio_context); nfs_service(client->context, POLLIN); nfs_set_events(client); + aio_context_release(client->aio_context); } =20 static void nfs_process_write(void *arg) { NFSClient *client =3D arg; + + aio_context_acquire(client->aio_context); nfs_service(client->context, POLLOUT); nfs_set_events(client); + aio_context_release(client->aio_context); } =20 static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task) diff --git a/block/sheepdog.c b/block/sheepdog.c index f757157..32c4e4c 100644 --- a/block/sheepdog.c +++ b/block/sheepdog.c @@ -575,13 +575,6 @@ static coroutine_fn int send_co_req(int sockfd, Sheepd= ogReq *hdr, void *data, return ret; } =20 -static void restart_co_req(void *opaque) -{ - Coroutine *co =3D opaque; - - qemu_coroutine_enter(co); -} - typedef struct SheepdogReqCo { int sockfd; BlockDriverState *bs; @@ -592,12 +585,19 @@ typedef struct SheepdogReqCo { unsigned int *rlen; int ret; bool finished; + Coroutine *co; } SheepdogReqCo; =20 +static void restart_co_req(void *opaque) +{ + SheepdogReqCo *srco =3D opaque; + + aio_co_wake(srco->co); +} + static coroutine_fn void do_co_req(void *opaque) { int ret; - Coroutine *co; SheepdogReqCo *srco =3D opaque; int sockfd =3D srco->sockfd; SheepdogReq *hdr =3D srco->hdr; @@ -605,9 +605,9 @@ static coroutine_fn void do_co_req(void *opaque) unsigned int *wlen =3D srco->wlen; unsigned int *rlen =3D srco->rlen; =20 - co =3D qemu_coroutine_self(); + srco->co =3D qemu_coroutine_self(); aio_set_fd_handler(srco->aio_context, sockfd, false, - NULL, restart_co_req, NULL, co); + NULL, restart_co_req, NULL, srco); =20 ret =3D send_co_req(sockfd, hdr, data, wlen); if (ret < 0) { @@ -615,7 +615,7 @@ static coroutine_fn void do_co_req(void *opaque) } =20 aio_set_fd_handler(srco->aio_context, sockfd, false, - restart_co_req, NULL, NULL, co); + restart_co_req, NULL, NULL, srco); =20 ret =3D qemu_co_recv(sockfd, hdr, sizeof(*hdr)); if (ret !=3D sizeof(*hdr)) { @@ -643,6 +643,7 @@ out: 
aio_set_fd_handler(srco->aio_context, sockfd, false, NULL, NULL, NULL, NULL); =20 + srco->co =3D NULL; srco->ret =3D ret; srco->finished =3D true; if (srco->bs) { @@ -866,7 +867,7 @@ static void coroutine_fn aio_read_response(void *opaque) * We've finished all requests which belong to the AIOCB, so * we can switch back to sd_co_readv/writev now. */ - qemu_coroutine_enter(acb->coroutine); + aio_co_wake(acb->coroutine); } =20 return; @@ -883,14 +884,14 @@ static void co_read_response(void *opaque) s->co_recv =3D qemu_coroutine_create(aio_read_response, opaque); } =20 - qemu_coroutine_enter(s->co_recv); + aio_co_wake(s->co_recv); } =20 static void co_write_request(void *opaque) { BDRVSheepdogState *s =3D opaque; =20 - qemu_coroutine_enter(s->co_send); + aio_co_wake(s->co_send); } =20 /* diff --git a/block/ssh.c b/block/ssh.c index e0edf20..835932e 100644 --- a/block/ssh.c +++ b/block/ssh.c @@ -889,10 +889,14 @@ static void restart_coroutine(void *opaque) =20 DPRINTF("co=3D%p", co); =20 - qemu_coroutine_enter(co); + aio_co_wake(co); } =20 -static coroutine_fn void set_fd_handler(BDRVSSHState *s, BlockDriverState = *bs) +/* A non-blocking call returned EAGAIN, so yield, ensuring the + * handlers are set up so that we'll be rescheduled when there is an + * interesting event on the socket. 
+ */ +static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs) { int r; IOHandler *rd_handler =3D NULL, *wr_handler =3D NULL; @@ -912,25 +916,10 @@ static coroutine_fn void set_fd_handler(BDRVSSHState = *s, BlockDriverState *bs) =20 aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, false, rd_handler, wr_handler, NULL, co); -} - -static coroutine_fn void clear_fd_handler(BDRVSSHState *s, - BlockDriverState *bs) -{ - DPRINTF("s->sock=3D%d", s->sock); - aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, - false, NULL, NULL, NULL, NULL); -} - -/* A non-blocking call returned EAGAIN, so yield, ensuring the - * handlers are set up so that we'll be rescheduled when there is an - * interesting event on the socket. - */ -static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs) -{ - set_fd_handler(s, bs); qemu_coroutine_yield(); - clear_fd_handler(s, bs); + DPRINTF("s->sock=3D%d - back", s->sock); + aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, false, + NULL, NULL, NULL, NULL); } =20 /* SFTP has a function `libssh2_sftp_seek64' which seeks to a position diff --git a/block/win32-aio.c b/block/win32-aio.c index 8cdf73b..c3f8f1a 100644 --- a/block/win32-aio.c +++ b/block/win32-aio.c @@ -41,7 +41,7 @@ struct QEMUWin32AIOState { HANDLE hIOCP; EventNotifier e; int count; - bool is_aio_context_attached; + AioContext *aio_ctx; }; =20 typedef struct QEMUWin32AIOCB { @@ -88,7 +88,9 @@ static void win32_aio_process_completion(QEMUWin32AIOStat= e *s, } =20 =20 + aio_context_acquire(s->aio_ctx); waiocb->common.cb(waiocb->common.opaque, ret); + aio_context_release(s->aio_ctx); qemu_aio_unref(waiocb); } =20 @@ -176,13 +178,13 @@ void win32_aio_detach_aio_context(QEMUWin32AIOState *= aio, AioContext *old_context) { aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL); - aio->is_aio_context_attached =3D false; + aio->aio_ctx =3D NULL; } =20 void win32_aio_attach_aio_context(QEMUWin32AIOState *aio, AioContext *new_context) { - 
aio->is_aio_context_attached =3D true; + aio->aio_ctx =3D new_context; aio_set_event_notifier(new_context, &aio->e, false, win32_aio_completion_cb, NULL); } @@ -212,7 +214,7 @@ out_free_state: =20 void win32_aio_cleanup(QEMUWin32AIOState *aio) { - assert(!aio->is_aio_context_attached); + assert(!aio->aio_ctx); CloseHandle(aio->hIOCP); event_notifier_cleanup(&aio->e); g_free(aio); diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 702eda8..a00ee38 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -150,7 +150,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int= status) { VirtIOBlockIoctlReq *ioctl_req =3D opaque; VirtIOBlockReq *req =3D ioctl_req->req; - VirtIODevice *vdev =3D VIRTIO_DEVICE(req->dev); + VirtIOBlock *s =3D req->dev; + VirtIODevice *vdev =3D VIRTIO_DEVICE(s); struct virtio_scsi_inhdr *scsi; struct sg_io_hdr *hdr; =20 @@ -586,6 +587,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) VirtIOBlockReq *req; MultiReqBuffer mrb =3D {}; =20 + aio_context_acquire(blk_get_aio_context(s->blk)); blk_io_plug(s->blk); =20 do { @@ -607,6 +609,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) } =20 blk_io_unplug(s->blk); + aio_context_release(blk_get_aio_context(s->blk)); } =20 static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq) diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c index ce19eff..5d9718a 100644 --- a/hw/scsi/virtio-scsi.c +++ b/hw/scsi/virtio-scsi.c @@ -440,9 +440,11 @@ void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQue= ue *vq) { VirtIOSCSIReq *req; =20 + virtio_scsi_acquire(s); while ((req =3D virtio_scsi_pop_req(s, vq))) { virtio_scsi_handle_ctrl_req(s, req); } + virtio_scsi_release(s); } =20 static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq) @@ -598,6 +600,7 @@ void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue= *vq) =20 QTAILQ_HEAD(, VirtIOSCSIReq) reqs =3D QTAILQ_HEAD_INITIALIZER(reqs); =20 + virtio_scsi_acquire(s); do { 
         virtio_queue_set_notification(vq, 0);
 
@@ -624,6 +627,7 @@ void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
     QTAILQ_FOREACH_SAFE(req, &reqs, next, next) {
         virtio_scsi_handle_cmd_req_submit(s, req);
     }
+    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
@@ -754,9 +758,11 @@ out:
 
 void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
+    virtio_scsi_acquire(s);
     if (s->events_dropped) {
         virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
     }
+    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 4dc597c..84cee43 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -402,9 +402,7 @@ static bool aio_dispatch_handlers(AioContext *ctx)
             (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
             aio_node_check(ctx, node->is_external) &&
             node->io_read) {
-            aio_context_acquire(ctx);
             node->io_read(node->opaque);
-            aio_context_release(ctx);
 
             /* aio_notify() does not count as progress */
             if (node->opaque != &ctx->notifier) {
@@ -415,9 +413,7 @@ static bool aio_dispatch_handlers(AioContext *ctx)
             (revents & (G_IO_OUT | G_IO_ERR)) &&
             aio_node_check(ctx, node->is_external) &&
             node->io_write) {
-            aio_context_acquire(ctx);
             node->io_write(node->opaque);
-            aio_context_release(ctx);
             progress = true;
         }
 
@@ -618,10 +614,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     }
 
-    aio_context_acquire(ctx);
     progress = try_poll_mode(ctx, blocking);
-    aio_context_release(ctx);
-
     if (!progress) {
         assert(npfd == 0);
 
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 810e1c6..20b63ce 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -266,9 +266,7 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
         (revents || event_notifier_get_handle(node->e) == event) &&
         node->io_notify) {
         node->pfd.revents = 0;
-        aio_context_acquire(ctx);
         node->io_notify(node->e);
-        aio_context_release(ctx);
 
         /* aio_notify() does not count as progress */
         if (node->e != &ctx->notifier) {
@@ -280,15 +278,11 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
         (node->io_read || node->io_write)) {
         node->pfd.revents = 0;
         if ((revents & G_IO_IN) && node->io_read) {
-            aio_context_acquire(ctx);
             node->io_read(node->opaque);
-            aio_context_release(ctx);
             progress = true;
         }
         if ((revents & G_IO_OUT) && node->io_write) {
-            aio_context_acquire(ctx);
             node->io_write(node->opaque);
-            aio_context_release(ctx);
             progress = true;
         }
 
--
2.9.3

From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:29 -0800
Message-Id: <20170201120533.13838-15-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Cc: famz@redhat.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 14/18] block: explicitly acquire aiocontext in bottom halves that need it

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 block/archipelago.c   |  3 +++
 block/blkreplay.c     |  2 +-
 block/block-backend.c |  6 ++++++
 block/curl.c          | 26 ++++++++++++++++++--------
 block/gluster.c       |  9 +--------
 block/io.c            |  6 +++++-
 block/iscsi.c         |  6 +++++-
 block/linux-aio.c     | 15 +++++++++------
 block/nfs.c           |  3 ++-
 block/null.c          |  4 ++++
 block/qed.c           |  3 +++
 block/rbd.c           |  4 ++++
 dma-helpers.c         |  2 ++
 hw/block/virtio-blk.c |  2 ++
 hw/scsi/scsi-bus.c    |  2 ++
 util/async.c          |  4 ++--
 util/thread-pool.c    |  2 ++
 17 files changed, 71 insertions(+), 28 deletions(-)
diff --git a/block/archipelago.c b/block/archipelago.c
index 2449cfc..a624390 100644
--- a/block/archipelago.c
+++ b/block/archipelago.c
@@ -310,8 +310,11 @@ static void qemu_archipelago_complete_aio(void *opaque)
 {
     AIORequestData *reqdata = (AIORequestData *) opaque;
     ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb;
+    AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs);
 
+    aio_context_acquire(ctx);
     aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret);
+    aio_context_release(ctx);
     aio_cb->status = 0;
 
     qemu_aio_unref(aio_cb);
diff --git a/block/blkreplay.c b/block/blkreplay.c
index a741654..cfc8c5b 100755
--- a/block/blkreplay.c
+++ b/block/blkreplay.c
@@ -60,7 +60,7 @@ static int64_t blkreplay_getlength(BlockDriverState *bs)
 static void blkreplay_bh_cb(void *opaque)
 {
     Request *req = opaque;
-    qemu_coroutine_enter(req->co);
+    aio_co_wake(req->co);
     qemu_bh_delete(req->bh);
     g_free(req);
 }
diff --git a/block/block-backend.c b/block/block-backend.c
index 1177598..bfc0e6b 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -939,9 +939,12 @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
 static void error_callback_bh(void *opaque)
 {
     struct BlockBackendAIOCB *acb = opaque;
+    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
 
     bdrv_dec_in_flight(acb->common.bs);
+    aio_context_acquire(ctx);
     acb->common.cb(acb->common.opaque, acb->ret);
+    aio_context_release(ctx);
     qemu_aio_unref(acb);
 }
 
@@ -983,9 +986,12 @@ static void blk_aio_complete(BlkAioEmAIOCB *acb)
 static void blk_aio_complete_bh(void *opaque)
 {
     BlkAioEmAIOCB *acb = opaque;
+    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
 
     assert(acb->has_returned);
+    aio_context_acquire(ctx);
     blk_aio_complete(acb);
+    aio_context_release(ctx);
 }
 
 static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes,
diff --git a/block/curl.c b/block/curl.c
index 05b9ca3..f3f063b 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -796,13 +796,18 @@ static void curl_readv_bh_cb(void *p)
 {
     CURLState *state;
     int running;
+    int ret = -EINPROGRESS;
 
     CURLAIOCB *acb = p;
-    BDRVCURLState *s = acb->common.bs->opaque;
+    BlockDriverState *bs = acb->common.bs;
+    BDRVCURLState *s = bs->opaque;
+    AioContext *ctx = bdrv_get_aio_context(bs);
 
     size_t start = acb->sector_num * BDRV_SECTOR_SIZE;
     size_t end;
 
+    aio_context_acquire(ctx);
+
     // In case we have the requested data already (e.g. read-ahead),
     // we can just call the callback and be done.
     switch (curl_find_buf(s, start, acb->nb_sectors * BDRV_SECTOR_SIZE, acb)) {
@@ -810,7 +815,7 @@ static void curl_readv_bh_cb(void *p)
         qemu_aio_unref(acb);
         // fall through
     case FIND_RET_WAIT:
-        return;
+        goto out;
     default:
         break;
     }
 
@@ -818,9 +823,8 @@
     // No cache found, so let's start a new request
     state = curl_init_state(acb->common.bs, s);
     if (!state) {
-        acb->common.cb(acb->common.opaque, -EIO);
-        qemu_aio_unref(acb);
-        return;
+        ret = -EIO;
+        goto out;
     }
 
@@ -834,9 +838,8 @@
     state->orig_buf = g_try_malloc(state->buf_len);
     if (state->buf_len && state->orig_buf == NULL) {
         curl_clean_state(state);
-        acb->common.cb(acb->common.opaque, -ENOMEM);
-        qemu_aio_unref(acb);
-        return;
+        ret = -ENOMEM;
+        goto out;
     }
     state->acb[0] = acb;
 
@@ -849,6 +852,13 @@
 
     /* Tell curl it needs to kick things off */
     curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
+
+out:
+    if (ret != -EINPROGRESS) {
+        acb->common.cb(acb->common.opaque, ret);
+        qemu_aio_unref(acb);
+    }
+    aio_context_release(ctx);
 }
 
 static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
diff --git a/block/gluster.c b/block/gluster.c
index 1a22f29..56b4abe 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -698,13 +698,6 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
     return qemu_gluster_glfs_init(gconf, errp);
 }
 
-static void qemu_gluster_complete_aio(void *opaque)
-{
-    GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
-
-    qemu_coroutine_enter(acb->coroutine);
-}
-
 /*
  * AIO callback routine called from GlusterFS thread.
  */
@@ -720,7 +713,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         acb->ret = -EIO; /* Partial read/write - fail it */
     }
 
-    aio_bh_schedule_oneshot(acb->aio_context, qemu_gluster_complete_aio, acb);
+    aio_co_schedule(acb->aio_context, acb->coroutine);
 }
 
 static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
diff --git a/block/io.c b/block/io.c
index dd6c74f..8486e27 100644
--- a/block/io.c
+++ b/block/io.c
@@ -189,7 +189,7 @@ static void bdrv_co_drain_bh_cb(void *opaque)
     bdrv_dec_in_flight(bs);
     bdrv_drained_begin(bs);
     data->done = true;
-    qemu_coroutine_enter(co);
+    aio_co_wake(co);
 }
 
 static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
@@ -2152,9 +2152,13 @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb)
 static void bdrv_co_em_bh(void *opaque)
 {
     BlockAIOCBCoroutine *acb = opaque;
+    BlockDriverState *bs = acb->common.bs;
+    AioContext *ctx = bdrv_get_aio_context(bs);
 
     assert(!acb->need_bh);
+    aio_context_acquire(ctx);
     bdrv_co_complete(acb);
+    aio_context_release(ctx);
 }
 
 static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb)
diff --git a/block/iscsi.c b/block/iscsi.c
index 303b108..4fb43c2 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -136,13 +136,16 @@
 static void iscsi_bh_cb(void *p)
 {
     IscsiAIOCB *acb = p;
+    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
 
     qemu_bh_delete(acb->bh);
 
     g_free(acb->buf);
     acb->buf = NULL;
 
+    aio_context_acquire(ctx);
     acb->common.cb(acb->common.opaque, acb->status);
+    aio_context_release(ctx);
 
     if (acb->task != NULL) {
         scsi_free_scsi_task(acb->task);
@@ -165,8 +168,9 @@ iscsi_schedule_bh(IscsiAIOCB *acb)
 static void iscsi_co_generic_bh_cb(void *opaque)
 {
     struct IscsiTask *iTask = opaque;
+
     iTask->complete = 1;
-    qemu_coroutine_enter(iTask->co);
+    aio_co_wake(iTask->co);
 }
 
 static void iscsi_retry_timer_expired(void *opaque)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 277c016..f7ae38a 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -54,10 +54,10 @@ struct LinuxAioState {
     io_context_t ctx;
     EventNotifier e;
 
-    /* io queue for submit at batch */
+    /* io queue for submit at batch.  Protected by AioContext lock. */
     LaioQueue io_q;
 
-    /* I/O completion processing */
+    /* I/O completion processing.  Only runs in I/O thread. */
     QEMUBH *completion_bh;
     int event_idx;
     int event_max;
@@ -75,6 +75,7 @@ static inline ssize_t io_event_ret(struct io_event *ev)
 */
 static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
 {
+    LinuxAioState *s = laiocb->ctx;
     int ret;
 
     ret = laiocb->ret;
@@ -93,6 +94,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
     }
 
     laiocb->ret = ret;
+    aio_context_acquire(s->aio_context);
     if (laiocb->co) {
         /* If the coroutine is already entered it must be in ioq_submit() and
          * will notice laio->ret has been filled in when it eventually runs
@@ -106,6 +108,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
         laiocb->common.cb(laiocb->common.opaque, ret);
         qemu_aio_unref(laiocb);
     }
+    aio_context_release(s->aio_context);
 }
 
 /**
@@ -234,9 +237,12 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
     qemu_laio_process_completions(s);
+
+    aio_context_acquire(s->aio_context);
     if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
+    aio_context_release(s->aio_context);
 }
 
 static void qemu_laio_completion_bh(void *opaque)
@@ -251,9 +257,7 @@ static void qemu_laio_completion_cb(EventNotifier *e)
     LinuxAioState *s = container_of(e, LinuxAioState, e);
 
     if (event_notifier_test_and_clear(&s->e)) {
-        aio_context_acquire(s->aio_context);
         qemu_laio_process_completions_and_submit(s);
-        aio_context_release(s->aio_context);
     }
 }
 
@@ -267,9 +271,7 @@ static bool qemu_laio_poll_cb(void *opaque)
         return false;
     }
 
-    aio_context_acquire(s->aio_context);
     qemu_laio_process_completions_and_submit(s);
-    aio_context_release(s->aio_context);
     return true;
 }
 
@@ -459,6 +461,7 @@ void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
     aio_set_event_notifier(old_context, &s->e, false, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
+    s->aio_context = NULL;
 }
 
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
diff --git a/block/nfs.c b/block/nfs.c
index 803faf9..32631bb 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -236,8 +236,9 @@ static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task)
 static void nfs_co_generic_bh_cb(void *opaque)
 {
     NFSRPC *task = opaque;
+
     task->complete = 1;
-    qemu_coroutine_enter(task->co);
+    aio_co_wake(task->co);
 }
 
 static void
diff --git a/block/null.c b/block/null.c
index 356209a..5eb2038 100644
--- a/block/null.c
+++ b/block/null.c
@@ -134,7 +134,11 @@ static const AIOCBInfo null_aiocb_info = {
 static void null_bh_cb(void *opaque)
 {
     NullAIOCB *acb = opaque;
+    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
+
+    aio_context_acquire(ctx);
     acb->common.cb(acb->common.opaque, 0);
+    aio_context_release(ctx);
     qemu_aio_unref(acb);
 }
 
diff --git a/block/qed.c b/block/qed.c
index a21d025..db8295d 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -942,6 +942,7 @@ static void qed_update_l2_table(BDRVQEDState *s, QEDTable *table, int index,
 static void qed_aio_complete_bh(void *opaque)
 {
     QEDAIOCB *acb = opaque;
+    BDRVQEDState *s = acb_to_s(acb);
     BlockCompletionFunc *cb = acb->common.cb;
     void *user_opaque = acb->common.opaque;
     int ret = acb->bh_ret;
@@ -949,7 +950,9 @@ static void qed_aio_complete_bh(void *opaque)
     qemu_aio_unref(acb);
 
     /* Invoke callback */
+    qed_acquire(s);
     cb(user_opaque, ret);
+    qed_release(s);
 }
 
 static void qed_aio_complete(QEDAIOCB *acb, int ret)
diff --git a/block/rbd.c b/block/rbd.c
index a57b3e3..2cb2cb4 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -413,6 +413,7 @@ shutdown:
 static void qemu_rbd_complete_aio(RADOSCB *rcb)
 {
     RBDAIOCB *acb = rcb->acb;
+    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
     int64_t r;
 
     r = rcb->ret;
@@ -445,7 +446,10 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
         qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
     }
     qemu_vfree(acb->bounce);
+
+    aio_context_acquire(ctx);
     acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
+    aio_context_release(ctx);
 
     qemu_aio_unref(acb);
 }
diff --git a/dma-helpers.c b/dma-helpers.c
index 6f9d47c..39d4802 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -166,8 +166,10 @@ static void dma_blk_cb(void *opaque, int ret)
                                 QEMU_ALIGN_DOWN(dbs->iov.size, dbs->align));
     }
 
+    aio_context_acquire(dbs->ctx);
     dbs->acb = dbs->io_func(dbs->offset, &dbs->iov, dma_blk_cb, dbs,
                             dbs->io_func_opaque);
+    aio_context_release(dbs->ctx);
     assert(dbs->acb);
 }
 
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index a00ee38..af652f3 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -639,6 +639,7 @@ static void virtio_blk_dma_restart_bh(void *opaque)
 
     s->rq = NULL;
 
+    aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     while (req) {
         VirtIOBlockReq *next = req->next;
         if (virtio_blk_handle_request(req, &mrb)) {
@@ -659,6 +660,7 @@ static void virtio_blk_dma_restart_bh(void *opaque)
     if (mrb.num_reqs) {
         virtio_blk_submit_multireq(s->blk, &mrb);
     }
+    aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
 static void virtio_blk_dma_restart_cb(void *opaque, int running,
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 5940cb1..c9f0ac0 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -105,6 +105,7 @@ static void scsi_dma_restart_bh(void *opaque)
     qemu_bh_delete(s->bh);
     s->bh = NULL;
 
+    aio_context_acquire(blk_get_aio_context(s->conf.blk));
     QTAILQ_FOREACH_SAFE(req, &s->requests, next, next) {
         scsi_req_ref(req);
         if (req->retry) {
@@ -122,6 +123,7 @@ static void scsi_dma_restart_bh(void *opaque)
         }
         scsi_req_unref(req);
     }
+    aio_context_release(blk_get_aio_context(s->conf.blk));
 }
 
 void scsi_req_retry(SCSIRequest *req)
diff --git a/util/async.c b/util/async.c
index 8e65e4b..99b9d7e 100644
--- a/util/async.c
+++ b/util/async.c
@@ -114,9 +114,7 @@ int aio_bh_poll(AioContext *ctx)
                 ret = 1;
             }
             bh->idle = 0;
-            aio_context_acquire(ctx);
             aio_bh_call(bh);
-            aio_context_release(ctx);
         }
         if (bh->deleted) {
             deleted = true;
@@ -389,7 +387,9 @@ static void co_schedule_bh_cb(void *opaque)
         Coroutine *co = QSLIST_FIRST(&straight);
         QSLIST_REMOVE_HEAD(&straight, co_scheduled_next);
         trace_aio_co_schedule_bh_cb(ctx, co);
+        aio_context_acquire(ctx);
         qemu_coroutine_enter(co);
+        aio_context_release(ctx);
     }
 }
 
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 6fba913..7c9cec5 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -165,6 +165,7 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
+    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -191,6 +192,7 @@ restart:
             qemu_aio_unref(elem);
         }
     }
+    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
--
2.9.3

From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:30 -0800
Message-Id: <20170201120533.13838-16-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Cc: famz@redhat.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 15/18] block: explicitly acquire aiocontext in aio callbacks that need it

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 block/archipelago.c    |  3 ---
 block/block-backend.c  |  7 -------
 block/curl.c           |  2 +-
 block/io.c             |  6 +-----
 block/iscsi.c          |  3 ---
 block/linux-aio.c      |  5 +----
 block/mirror.c         | 12 +++++++++---
 block/null.c           |  8 --------
 block/qed-cluster.c    |  2 ++
 block/qed-table.c      | 12 ++++++++++--
 block/qed.c            |  4 ++--
 block/rbd.c            |  4 ----
 block/win32-aio.c      |  3 ---
 hw/block/virtio-blk.c  | 12 +++++++++++-
 hw/scsi/scsi-disk.c    | 15 +++++++++++++++
 hw/scsi/scsi-generic.c | 20 +++++++++++++++++---
 util/thread-pool.c     |  4 +++-
 17 files changed, 72 insertions(+), 50 deletions(-)
diff --git a/block/archipelago.c b/block/archipelago.c
index a624390..2449cfc 100644
--- a/block/archipelago.c
+++ b/block/archipelago.c
@@ -310,11 +310,8 @@ static void qemu_archipelago_complete_aio(void *opaque)
 {
     AIORequestData *reqdata = (AIORequestData *) opaque;
     ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb;
-    AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs);
 
-    aio_context_acquire(ctx);
     aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret);
-    aio_context_release(ctx);
     aio_cb->status = 0;
 
     qemu_aio_unref(aio_cb);
diff --git a/block/block-backend.c b/block/block-backend.c
index bfc0e6b..819f272 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -939,12 +939,9 @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
 static void error_callback_bh(void *opaque)
 {
     struct BlockBackendAIOCB *acb = opaque;
-    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
 
     bdrv_dec_in_flight(acb->common.bs);
-    aio_context_acquire(ctx);
     acb->common.cb(acb->common.opaque, acb->ret);
-    aio_context_release(ctx);
     qemu_aio_unref(acb);
 }
 
@@ -986,12 +983,8 @@ static void blk_aio_complete(BlkAioEmAIOCB *acb)
 static void blk_aio_complete_bh(void *opaque)
 {
     BlkAioEmAIOCB *acb = opaque;
-    AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
-
     assert(acb->has_returned);
-    aio_context_acquire(ctx);
     blk_aio_complete(acb);
-    aio_context_release(ctx);
 }
 
 static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes,
diff --git a/block/curl.c b/block/curl.c
index f3f063b..2939cc7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -854,11 +854,11 @@ static void curl_readv_bh_cb(void *p)
     curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
 
 out:
+    aio_context_release(ctx);
     if (ret != -EINPROGRESS) {
         acb->common.cb(acb->common.opaque, ret);
         qemu_aio_unref(acb);
     }
-    aio_context_release(ctx);
 }
 
 static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
diff --git a/block/io.c b/block/io.c
index 8486e27..a5c7d36 100644
--- a/block/io.c
+++ b/block/io.c
@@ -813,7 +813,7 @@ static void bdrv_co_io_em_complete(void *opaque, int ret)
     CoroutineIOCompletion *co = opaque;
 
     co->ret = ret;
-    qemu_coroutine_enter(co->coroutine);
+    aio_co_wake(co->coroutine);
 }
 
 static int coroutine_fn bdrv_driver_preadv(BlockDriverState *bs,
@@ -2152,13 +2152,9 @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb)
 static void bdrv_co_em_bh(void *opaque)
 {
BlockAIOCBCoroutine *acb =3D opaque; - BlockDriverState *bs =3D acb->common.bs; - AioContext *ctx =3D bdrv_get_aio_context(bs); =20 assert(!acb->need_bh); - aio_context_acquire(ctx); bdrv_co_complete(acb); - aio_context_release(ctx); } =20 static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb) diff --git a/block/iscsi.c b/block/iscsi.c index 4fb43c2..2561be9 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -136,16 +136,13 @@ static void iscsi_bh_cb(void *p) { IscsiAIOCB *acb =3D p; - AioContext *ctx =3D bdrv_get_aio_context(acb->common.bs); =20 qemu_bh_delete(acb->bh); =20 g_free(acb->buf); acb->buf =3D NULL; =20 - aio_context_acquire(ctx); acb->common.cb(acb->common.opaque, acb->status); - aio_context_release(ctx); =20 if (acb->task !=3D NULL) { scsi_free_scsi_task(acb->task); diff --git a/block/linux-aio.c b/block/linux-aio.c index f7ae38a..88b8d55 100644 --- a/block/linux-aio.c +++ b/block/linux-aio.c @@ -75,7 +75,6 @@ static inline ssize_t io_event_ret(struct io_event *ev) */ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) { - LinuxAioState *s =3D laiocb->ctx; int ret; =20 ret =3D laiocb->ret; @@ -94,7 +93,6 @@ static void qemu_laio_process_completion(struct qemu_laio= cb *laiocb) } =20 laiocb->ret =3D ret; - aio_context_acquire(s->aio_context); if (laiocb->co) { /* If the coroutine is already entered it must be in ioq_submit() = and * will notice laio->ret has been filled in when it eventually runs @@ -102,13 +100,12 @@ static void qemu_laio_process_completion(struct qemu_= laiocb *laiocb) * that! 
*/ if (!qemu_coroutine_entered(laiocb->co)) { - qemu_coroutine_enter(laiocb->co); + aio_co_wake(laiocb->co); } } else { laiocb->common.cb(laiocb->common.opaque, ret); qemu_aio_unref(laiocb); } - aio_context_release(s->aio_context); } =20 /** diff --git a/block/mirror.c b/block/mirror.c index 301ba92..698a54e 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -132,6 +132,8 @@ static void mirror_write_complete(void *opaque, int ret) { MirrorOp *op =3D opaque; MirrorBlockJob *s =3D op->s; + + aio_context_acquire(blk_get_aio_context(s->common.blk)); if (ret < 0) { BlockErrorAction action; =20 @@ -142,12 +144,15 @@ static void mirror_write_complete(void *opaque, int r= et) } } mirror_iteration_done(op, ret); + aio_context_release(blk_get_aio_context(s->common.blk)); } =20 static void mirror_read_complete(void *opaque, int ret) { MirrorOp *op =3D opaque; MirrorBlockJob *s =3D op->s; + + aio_context_acquire(blk_get_aio_context(s->common.blk)); if (ret < 0) { BlockErrorAction action; =20 @@ -158,10 +163,11 @@ static void mirror_read_complete(void *opaque, int re= t) } =20 mirror_iteration_done(op, ret); - return; + } else { + blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op-= >qiov, + 0, mirror_write_complete, op); } - blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qio= v, - 0, mirror_write_complete, op); + aio_context_release(blk_get_aio_context(s->common.blk)); } =20 static inline void mirror_clip_sectors(MirrorBlockJob *s, diff --git a/block/null.c b/block/null.c index 5eb2038..b300390 100644 --- a/block/null.c +++ b/block/null.c @@ -134,22 +134,14 @@ static const AIOCBInfo null_aiocb_info =3D { static void null_bh_cb(void *opaque) { NullAIOCB *acb =3D opaque; - AioContext *ctx =3D bdrv_get_aio_context(acb->common.bs); - - aio_context_acquire(ctx); acb->common.cb(acb->common.opaque, 0); - aio_context_release(ctx); qemu_aio_unref(acb); } =20 static void null_timer_cb(void *opaque) { NullAIOCB *acb =3D opaque; - AioContext *ctx =3D 
bdrv_get_aio_context(acb->common.bs); - - aio_context_acquire(ctx); acb->common.cb(acb->common.opaque, 0); - aio_context_release(ctx); timer_deinit(&acb->timer); qemu_aio_unref(acb); } diff --git a/block/qed-cluster.c b/block/qed-cluster.c index c24e756..8f5da74 100644 --- a/block/qed-cluster.c +++ b/block/qed-cluster.c @@ -83,6 +83,7 @@ static void qed_find_cluster_cb(void *opaque, int ret) unsigned int index; unsigned int n; =20 + qed_acquire(s); if (ret) { goto out; } @@ -109,6 +110,7 @@ static void qed_find_cluster_cb(void *opaque, int ret) =20 out: find_cluster_cb->cb(find_cluster_cb->opaque, ret, offset, len); + qed_release(s); g_free(find_cluster_cb); } =20 diff --git a/block/qed-table.c b/block/qed-table.c index ed443e2..b12c298 100644 --- a/block/qed-table.c +++ b/block/qed-table.c @@ -31,6 +31,7 @@ static void qed_read_table_cb(void *opaque, int ret) { QEDReadTableCB *read_table_cb =3D opaque; QEDTable *table =3D read_table_cb->table; + BDRVQEDState *s =3D read_table_cb->s; int noffsets =3D read_table_cb->qiov.size / sizeof(uint64_t); int i; =20 @@ -40,13 +41,15 @@ static void qed_read_table_cb(void *opaque, int ret) } =20 /* Byteswap offsets */ + qed_acquire(s); for (i =3D 0; i < noffsets; i++) { table->offsets[i] =3D le64_to_cpu(table->offsets[i]); } + qed_release(s); =20 out: /* Completion */ - trace_qed_read_table_cb(read_table_cb->s, read_table_cb->table, ret); + trace_qed_read_table_cb(s, read_table_cb->table, ret); gencb_complete(&read_table_cb->gencb, ret); } =20 @@ -84,8 +87,9 @@ typedef struct { static void qed_write_table_cb(void *opaque, int ret) { QEDWriteTableCB *write_table_cb =3D opaque; + BDRVQEDState *s =3D write_table_cb->s; =20 - trace_qed_write_table_cb(write_table_cb->s, + trace_qed_write_table_cb(s, write_table_cb->orig_table, write_table_cb->flush, ret); @@ -97,8 +101,10 @@ static void qed_write_table_cb(void *opaque, int ret) if (write_table_cb->flush) { /* We still need to flush first */ write_table_cb->flush =3D false; + 
qed_acquire(s); bdrv_aio_flush(write_table_cb->s->bs, qed_write_table_cb, write_table_cb); + qed_release(s); return; } =20 @@ -213,6 +219,7 @@ static void qed_read_l2_table_cb(void *opaque, int ret) CachedL2Table *l2_table =3D request->l2_table; uint64_t l2_offset =3D read_l2_table_cb->l2_offset; =20 + qed_acquire(s); if (ret) { /* can't trust loaded L2 table anymore */ qed_unref_l2_cache_entry(l2_table); @@ -228,6 +235,7 @@ static void qed_read_l2_table_cb(void *opaque, int ret) request->l2_table =3D qed_find_l2_cache_entry(&s->l2_cache, l2_off= set); assert(request->l2_table !=3D NULL); } + qed_release(s); =20 gencb_complete(&read_l2_table_cb->gencb, ret); } diff --git a/block/qed.c b/block/qed.c index db8295d..0b62c77 100644 --- a/block/qed.c +++ b/block/qed.c @@ -745,7 +745,7 @@ static void qed_is_allocated_cb(void *opaque, int ret, = uint64_t offset, size_t l } =20 if (cb->co) { - qemu_coroutine_enter(cb->co); + aio_co_wake(cb->co); } } =20 @@ -1462,7 +1462,7 @@ static void coroutine_fn qed_co_pwrite_zeroes_cb(void= *opaque, int ret) cb->done =3D true; cb->ret =3D ret; if (cb->co) { - qemu_coroutine_enter(cb->co); + aio_co_wake(cb->co); } } =20 diff --git a/block/rbd.c b/block/rbd.c index 2cb2cb4..a57b3e3 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -413,7 +413,6 @@ shutdown: static void qemu_rbd_complete_aio(RADOSCB *rcb) { RBDAIOCB *acb =3D rcb->acb; - AioContext *ctx =3D bdrv_get_aio_context(acb->common.bs); int64_t r; =20 r =3D rcb->ret; @@ -446,10 +445,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb) qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size); } qemu_vfree(acb->bounce); - - aio_context_acquire(ctx); acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 
0 : acb->ret)); - aio_context_release(ctx); =20 qemu_aio_unref(acb); } diff --git a/block/win32-aio.c b/block/win32-aio.c index c3f8f1a..3be8f45 100644 --- a/block/win32-aio.c +++ b/block/win32-aio.c @@ -87,10 +87,7 @@ static void win32_aio_process_completion(QEMUWin32AIOSta= te *s, qemu_vfree(waiocb->buf); } =20 - - aio_context_acquire(s->aio_ctx); waiocb->common.cb(waiocb->common.opaque, ret); - aio_context_release(s->aio_ctx); qemu_aio_unref(waiocb); } =20 diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index af652f3..39516e8 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -89,7 +89,9 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req= , int error, static void virtio_blk_rw_complete(void *opaque, int ret) { VirtIOBlockReq *next =3D opaque; + VirtIOBlock *s =3D next->dev; =20 + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (next) { VirtIOBlockReq *req =3D next; next =3D req->mr_next; @@ -122,21 +124,27 @@ static void virtio_blk_rw_complete(void *opaque, int = ret) block_acct_done(blk_get_stats(req->dev->blk), &req->acct); virtio_blk_free_request(req); } + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 static void virtio_blk_flush_complete(void *opaque, int ret) { VirtIOBlockReq *req =3D opaque; + VirtIOBlock *s =3D req->dev; =20 + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0)) { - return; + goto out; } } =20 virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); block_acct_done(blk_get_stats(req->dev->blk), &req->acct); virtio_blk_free_request(req); + +out: + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } =20 #ifdef __linux__ @@ -183,8 +191,10 @@ static void virtio_blk_ioctl_complete(void *opaque, in= t status) virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len); =20 out: + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, status); virtio_blk_free_request(req); + 
aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(ioctl_req); } diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c index cc06fe5..bbfb5dc 100644 --- a/hw/scsi/scsi-disk.c +++ b/hw/scsi/scsi-disk.c @@ -207,6 +207,7 @@ static void scsi_aio_complete(void *opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (scsi_disk_req_check_error(r, ret, true)) { goto done; } @@ -215,6 +216,7 @@ static void scsi_aio_complete(void *opaque, int ret) scsi_req_complete(&r->req, GOOD); done: + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); scsi_req_unref(&r->req); } @@ -290,12 +292,14 @@ static void scsi_dma_complete(void *opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (ret < 0) { block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); } else { block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); } scsi_dma_complete_noio(r, ret); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } static void scsi_read_complete(void * opaque, int ret) @@ -306,6 +310,7 @@ static void scsi_read_complete(void * opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (scsi_disk_req_check_error(r, ret, true)) { goto done; } @@ -320,6 +325,7 @@ done: scsi_req_unref(&r->req); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } /* Actually issue a read to the block device.
*/ @@ -364,12 +370,14 @@ static void scsi_do_read_cb(void *opaque, int ret) assert (r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (ret < 0) { block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); } else { block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); } scsi_do_read(opaque, ret); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } /* Read more data from scsi device into buffer. */ @@ -489,12 +497,14 @@ static void scsi_write_complete(void * opaque, int ret) assert (r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (ret < 0) { block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); } else { block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); } scsi_write_complete_noio(r, ret); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } static void scsi_write_data(SCSIRequest *req) @@ -1625,11 +1635,14 @@ static void scsi_unmap_complete(void *opaque, int ret) { UnmapCBData *data = opaque; SCSIDiskReq *r = data->r; + SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev); assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); scsi_unmap_complete_noio(data, ret); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } static void scsi_disk_emulate_unmap(SCSIDiskReq *r, uint8_t *inbuf) @@ -1696,6 +1709,7 @@ static void scsi_write_same_complete(void *opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); if (scsi_disk_req_check_error(r, ret, true)) { goto done; } @@ -1724,6 +1738,7 @@ done: scsi_req_unref(&r->req); qemu_vfree(data->iov.iov_base); g_free(data); + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); } static void scsi_disk_emulate_write_same(SCSIDiskReq *r, uint8_t
*inbuf) diff --git a/hw/scsi/scsi-generic.c b/hw/scsi/scsi-generic.c index 92f091a..2933119 100644 --- a/hw/scsi/scsi-generic.c +++ b/hw/scsi/scsi-generic.c @@ -143,10 +143,14 @@ done: static void scsi_command_complete(void *opaque, int ret) { SCSIGenericReq *r = (SCSIGenericReq *)opaque; + SCSIDevice *s = r->req.dev; assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + + aio_context_acquire(blk_get_aio_context(s->conf.blk)); scsi_command_complete_noio(r, ret); + aio_context_release(blk_get_aio_context(s->conf.blk)); } static int execute_command(BlockBackend *blk, @@ -182,9 +186,11 @@ static void scsi_read_complete(void * opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->conf.blk)); + if (ret || r->req.io_canceled) { scsi_command_complete_noio(r, ret); - return; + goto done; } len = r->io_header.dxfer_len - r->io_header.resid; @@ -193,7 +199,7 @@ static void scsi_read_complete(void * opaque, int ret) r->len = -1; if (len == 0) { scsi_command_complete_noio(r, 0); - return; + goto done; } /* Snoop READ CAPACITY output to set the blocksize. */ @@ -237,6 +243,9 @@ static void scsi_read_complete(void * opaque, int ret) } scsi_req_data(&r->req, len); scsi_req_unref(&r->req); + +done: + aio_context_release(blk_get_aio_context(s->conf.blk)); } /* Read more data from scsi device into buffer.
*/ @@ -272,9 +281,11 @@ static void scsi_write_complete(void * opaque, int ret) assert(r->req.aiocb != NULL); r->req.aiocb = NULL; + aio_context_acquire(blk_get_aio_context(s->conf.blk)); + if (ret || r->req.io_canceled) { scsi_command_complete_noio(r, ret); - return; + goto done; } if (r->req.cmd.buf[0] == MODE_SELECT && r->req.cmd.buf[4] == 12 && @@ -284,6 +295,9 @@ static void scsi_write_complete(void * opaque, int ret) } scsi_command_complete_noio(r, ret); + +done: + aio_context_release(blk_get_aio_context(s->conf.blk)); } /* Write data to a scsi device. Returns nonzero on failure. diff --git a/util/thread-pool.c b/util/thread-pool.c index 7c9cec5..ce6cd30 100644 --- a/util/thread-pool.c +++ b/util/thread-pool.c @@ -185,7 +185,9 @@ restart: */ qemu_bh_schedule(pool->completion_bh); + aio_context_release(pool->ctx); elem->common.cb(elem->common.opaque, elem->ret); + aio_context_acquire(pool->ctx); qemu_aio_unref(elem); goto restart; } else { @@ -269,7 +271,7 @@ static void thread_pool_co_cb(void *opaque, int ret) ThreadPoolCo *co = opaque; co->ret = ret; - qemu_coroutine_enter(co->co); + aio_co_wake(co->co); } int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func, -- 2.9.3
From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:31 -0800
Message-Id: <20170201120533.13838-17-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 16/18] aio-posix: partially inline aio_dispatch into aio_poll
Cc: famz@redhat.com, stefanha@redhat.com

This patch prepares for the removal of unnecessary lockcnt inc/dec pairs. Extract the dispatching loop for file descriptor handlers into a new function aio_dispatch_handlers, and then inline aio_dispatch into aio_poll. aio_dispatch can now become void. Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini --- include/block/aio.h | 6 +----- util/aio-posix.c | 44 ++++++++++++++------------------------ util/aio-win32.c | 13 ++++--------- util/async.c | 2 +- 4 files changed, 20 insertions(+), 45 deletions(-) diff --git a/include/block/aio.h b/include/block/aio.h index 614cbc6..677b6ff 100644 --- a/include/block/aio.h +++ b/include/block/aio.h @@ -310,12 +310,8 @@ bool aio_pending(AioContext *ctx); /* Dispatch any pending callbacks from the GSource attached to the AioContext. * * This is used internally in the implementation of the GSource. - * - * @dispatch_fds: true to process fds, false to skip them - * (can be used as an optimization by callers that know there - * are no fds ready) */ -bool aio_dispatch(AioContext *ctx, bool dispatch_fds); +void aio_dispatch(AioContext *ctx); /* Progress in completing AIO work to occur. This can issue new pending * aio as a result of executing I/O completion or bh callbacks. diff --git a/util/aio-posix.c b/util/aio-posix.c index 84cee43..2173378 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -386,12 +386,6 @@ static bool aio_dispatch_handlers(AioContext *ctx) AioHandler *node, *tmp; bool progress = false; - /* - * We have to walk very carefully in case aio_set_fd_handler is - * called while we're walking.
- */ - qemu_lockcnt_inc(&ctx->list_lock); - QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) { int revents; @@ -426,33 +420,18 @@ static bool aio_dispatch_handlers(AioContext *ctx) } } - qemu_lockcnt_dec(&ctx->list_lock); return progress; } -/* - * Note that dispatch_fds == false has the side-effect of post-poning the - * freeing of deleted handlers. - */ -bool aio_dispatch(AioContext *ctx, bool dispatch_fds) +void aio_dispatch(AioContext *ctx) { - bool progress; - - /* - * If there are callbacks left that have been queued, we need to call them. - * Do not call select in this case, because it is possible that the caller - * does not need a complete flush (as is the case for aio_poll loops). - */ - progress = aio_bh_poll(ctx); + aio_bh_poll(ctx); - if (dispatch_fds) { - progress |= aio_dispatch_handlers(ctx); - } - - /* Run our timers */ - progress |= timerlistgroup_run_timers(&ctx->tlg); + qemu_lockcnt_inc(&ctx->list_lock); + aio_dispatch_handlers(ctx); + qemu_lockcnt_dec(&ctx->list_lock); - return progress; + timerlistgroup_run_timers(&ctx->tlg); } /* These thread-local variables are used only in a small part of aio_poll @@ -702,11 +681,16 @@ bool aio_poll(AioContext *ctx, bool blocking) npfd = 0; qemu_lockcnt_dec(&ctx->list_lock); - /* Run dispatch even if there were no readable fds to run timers */ - if (aio_dispatch(ctx, ret > 0)) { - progress = true; + progress |= aio_bh_poll(ctx); + + if (ret > 0) { + qemu_lockcnt_inc(&ctx->list_lock); + progress |= aio_dispatch_handlers(ctx); + qemu_lockcnt_dec(&ctx->list_lock); } + progress |= timerlistgroup_run_timers(&ctx->tlg); + return progress; } diff --git a/util/aio-win32.c b/util/aio-win32.c index 20b63ce..442a179 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -309,16 +309,11 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) return progress; } -bool aio_dispatch(AioContext *ctx, bool dispatch_fds) +void
aio_dispatch(AioContext *ctx) { - bool progress; - - progress = aio_bh_poll(ctx); - if (dispatch_fds) { - progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); - } - progress |= timerlistgroup_run_timers(&ctx->tlg); - return progress; + aio_bh_poll(ctx); + aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); + timerlistgroup_run_timers(&ctx->tlg); } bool aio_poll(AioContext *ctx, bool blocking) diff --git a/util/async.c b/util/async.c index 99b9d7e..cc40735 100644 --- a/util/async.c +++ b/util/async.c @@ -258,7 +258,7 @@ aio_ctx_dispatch(GSource *source, AioContext *ctx = (AioContext *) source; assert(callback == NULL); - aio_dispatch(ctx, true); + aio_dispatch(ctx); return true; } -- 2.9.3
From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:32 -0800
Message-Id: <20170201120533.13838-18-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 17/18] async: remove unnecessary inc/dec pairs
Cc: famz@redhat.com, stefanha@redhat.com

Pull the increment/decrement pair out of aio_bh_poll and into the callers.
Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini --- util/aio-posix.c | 8 +++----- util/aio-win32.c | 8 ++++---- util/async.c | 12 ++++++------ 3 files changed, 13 insertions(+), 15 deletions(-) diff --git a/util/aio-posix.c b/util/aio-posix.c index 2173378..2d51239 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -425,9 +425,8 @@ static bool aio_dispatch_handlers(AioContext *ctx) void aio_dispatch(AioContext *ctx) { - aio_bh_poll(ctx); - qemu_lockcnt_inc(&ctx->list_lock); + aio_bh_poll(ctx); aio_dispatch_handlers(ctx); qemu_lockcnt_dec(&ctx->list_lock); @@ -679,16 +678,15 @@ bool aio_poll(AioContext *ctx, bool blocking) } npfd = 0; - qemu_lockcnt_dec(&ctx->list_lock); progress |= aio_bh_poll(ctx); if (ret > 0) { - qemu_lockcnt_inc(&ctx->list_lock); progress |= aio_dispatch_handlers(ctx); - qemu_lockcnt_dec(&ctx->list_lock); } + qemu_lockcnt_dec(&ctx->list_lock); + progress |= timerlistgroup_run_timers(&ctx->tlg); return progress; diff --git a/util/aio-win32.c b/util/aio-win32.c index 442a179..bca496a 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -253,8 +253,6 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) bool progress = false; AioHandler *tmp; - qemu_lockcnt_inc(&ctx->list_lock); - /* * We have to walk very carefully in case aio_set_fd_handler is * called while we're walking. @@ -305,14 +303,15 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) } } - qemu_lockcnt_dec(&ctx->list_lock); return progress; } void aio_dispatch(AioContext *ctx) { + qemu_lockcnt_inc(&ctx->list_lock); aio_bh_poll(ctx); aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); + qemu_lockcnt_dec(&ctx->list_lock); timerlistgroup_run_timers(&ctx->tlg); } @@ -349,7 +348,6 @@ bool aio_poll(AioContext *ctx, bool blocking) } } - qemu_lockcnt_dec(&ctx->list_lock); first = true; /* ctx->notifier is always registered.
*/ @@ -392,6 +390,8 @@ bool aio_poll(AioContext *ctx, bool blocking) progress |= aio_dispatch_handlers(ctx, event); } while (count > 0); + qemu_lockcnt_dec(&ctx->list_lock); + progress |= timerlistgroup_run_timers(&ctx->tlg); return progress; } diff --git a/util/async.c b/util/async.c index cc40735..9c3ce6a 100644 --- a/util/async.c +++ b/util/async.c @@ -90,15 +90,16 @@ void aio_bh_call(QEMUBH *bh) bh->cb(bh->opaque); } -/* Multiple occurrences of aio_bh_poll cannot be called concurrently */ +/* Multiple occurrences of aio_bh_poll cannot be called concurrently. + * The count in ctx->list_lock is incremented before the call, and is + * not affected by the call. + */ int aio_bh_poll(AioContext *ctx) { QEMUBH *bh, **bhp, *next; int ret; bool deleted = false; - qemu_lockcnt_inc(&ctx->list_lock); - ret = 0; for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) { next = atomic_rcu_read(&bh->next); @@ -123,11 +124,10 @@ int aio_bh_poll(AioContext *ctx) /* remove deleted bhs */ if (!deleted) { - qemu_lockcnt_dec(&ctx->list_lock); return ret; } - if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) { + if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) { bhp = &ctx->first_bh; while (*bhp) { bh = *bhp; @@ -138,7 +138,7 @@ int aio_bh_poll(AioContext *ctx) bhp = &bh->next; } } - qemu_lockcnt_unlock(&ctx->list_lock); + qemu_lockcnt_inc_and_unlock(&ctx->list_lock); } return ret; } -- 2.9.3
From nobody Thu May 2 06:36:59 2024
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 1 Feb 2017 04:05:33 -0800
Message-Id: <20170201120533.13838-19-pbonzini@redhat.com>
In-Reply-To: <20170201120533.13838-1-pbonzini@redhat.com>
References: <20170201120533.13838-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 18/18] block: document fields protected by AioContext lock
Cc: famz@redhat.com, stefanha@redhat.com

Reviewed-by: Stefan Hajnoczi Signed-off-by: Paolo Bonzini --- include/block/block_int.h | 64 +++++++++++++++++++++++++----------------- include/sysemu/block-backend.h | 14 ++++++--- 2 files changed, 49 insertions(+), 29 deletions(-) diff --git a/include/block/block_int.h b/include/block/block_int.h index 2d92d7e..1670941 100644 --- a/include/block/block_int.h +++ b/include/block/block_int.h @@ -430,8 +430,9 @@ struct BdrvChild { * copied as well. */ struct BlockDriverState { - int64_t total_sectors; /* if we are reading a disk image, give its size in sectors */ + /* Protected by big QEMU lock or read-only after opening. No special + * locking needed during I/O... + */ int open_flags; /* flags used to open the file, re-used for re-open */ bool read_only; /* if true, the media is read only */ bool encrypted; /* if true, the media is encrypted */ @@ -439,14 +440,6 @@ struct BlockDriverState { bool sg; /* if true, the device is a /dev/sg* */ bool probed; /* if true, format was probed rather than specified */ - int copy_on_read; /* if nonzero, copy read backing sectors into image. note this is a reference count */ - - CoQueue flush_queue; /* Serializing flush queue */ - bool active_flush_req; /* Flush request in flight?
*/ - unsigned int write_gen; /* Current data generation */ - unsigned int flushed_gen; /* Flushed write generation */ - BlockDriver *drv; /* NULL means no media */ void *opaque; @@ -468,18 +461,6 @@ struct BlockDriverState { BdrvChild *backing; BdrvChild *file; - /* Callback before write request is processed */ - NotifierWithReturnList before_write_notifiers; - - /* number of in-flight requests; overall and serialising */ - unsigned int in_flight; - unsigned int serialising_in_flight; - - bool wakeup; - - /* Offset after the highest byte written to */ - uint64_t wr_highest_offset; - /* I/O Limits */ BlockLimits bl; @@ -497,11 +478,8 @@ struct BlockDriverState { QTAILQ_ENTRY(BlockDriverState) bs_list; /* element of the list of monitor-owned BDS */ QTAILQ_ENTRY(BlockDriverState) monitor_list; - QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps; int refcnt; - QLIST_HEAD(, BdrvTrackedRequest) tracked_requests; - /* operation blockers */ QLIST_HEAD(, BdrvOpBlocker) op_blockers[BLOCK_OP_TYPE_MAX]; @@ -522,6 +500,31 @@ struct BlockDriverState { /* The error object in use for blocking operations on backing_hd */ Error *backing_blocker; + /* Protected by AioContext lock */ + + /* If true, copy read backing sectors into image. Can be >1 if more + * than one client has requested copy-on-read. + */ + int copy_on_read; + + /* If we are reading a disk image, give its size in sectors. + * Generally read-only; it is written to by load_vmstate and save_vmstate, + * but the block layer is quiescent during those. + */ + int64_t total_sectors; + + /* Callback before write request is processed */ + NotifierWithReturnList before_write_notifiers; + + /* number of in-flight requests; overall and serialising */ + unsigned int in_flight; + unsigned int serialising_in_flight; + + bool wakeup; + + /* Offset after the highest byte written to */ + uint64_t wr_highest_offset; + /* threshold limit for writes, in bytes. "High water mark".
*/ uint64_t write_threshold_offset; NotifierWithReturn write_threshold_notifier; @@ -529,6 +532,17 @@ struct BlockDriverState { /* counter for nested bdrv_io_plug */ unsigned io_plugged; + QLIST_HEAD(, BdrvTrackedRequest) tracked_requests; + CoQueue flush_queue; /* Serializing flush queue */ + bool active_flush_req; /* Flush request in flight? */ + unsigned int write_gen; /* Current data generation */ + unsigned int flushed_gen; /* Flushed write generation */ + + QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps; + + /* do we need to tell the guest if we have a volatile write cache? */ + int enable_write_cache; + int quiesce_counter; }; diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h index 6444e41..f365a51 100644 --- a/include/sysemu/block-backend.h +++ b/include/sysemu/block-backend.h @@ -64,14 +64,20 @@ typedef struct BlockDevOps { * fields that must be public. This is in particular for QLIST_ENTRY() and * friends so that BlockBackends can be kept in lists outside block-backend.c */ typedef struct BlockBackendPublic { - /* I/O throttling. - * throttle_state tells us if this BlockBackend has I/O limits configured. - * io_limits_disabled tells us if they are currently being enforced */ + /* I/O throttling has its own locking, but also some fields are + * protected by the AioContext lock. + */ + + /* Protected by AioContext lock. */ CoQueue throttled_reqs[2]; + + /* Nonzero if the I/O limits are currently being ignored; generally + * it is zero. */ unsigned int io_limits_disabled; /* The following fields are protected by the ThrottleGroup lock. - * See the ThrottleGroup documentation for details. */ + * See the ThrottleGroup documentation for details. + * throttle_state tells us if I/O limits are configured. */ ThrottleState *throttle_state; ThrottleTimers throttle_timers; unsigned pending_reqs[2]; -- 2.9.3