From nobody Tue May 14 14:17:52 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Ilya Maximets, Philippe Mathieu-Daudé, Kevin Wolf,
    xen-devel@lists.xenproject.org, Anthony Perard, Paolo Bonzini,
    Stefan Hajnoczi, Julia Suvorova, Aarushi Mehta, Paul Durrant,
    "Michael S. Tsirkin", Fam Zheng, Stefano Garzarella, Hanna Reitz
Subject: [PATCH v3 1/4] block: rename blk_io_plug_call() API to defer_call()
Date: Wed, 13 Sep 2023 16:00:42 -0400
Message-ID: <20230913200045.1024233-2-stefanha@redhat.com>
In-Reply-To: <20230913200045.1024233-1-stefanha@redhat.com>
References: <20230913200045.1024233-1-stefanha@redhat.com>

Prepare to move the blk_io_plug_call() API out of the block layer so
that other subsystems can use this deferred call mechanism. Rename it
to defer_call() but leave the code in block/plug.c.

The next commit will move the code out of the block layer.

Suggested-by: Ilya Maximets
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Paul Durrant
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Michael S. Tsirkin
---
 include/sysemu/block-backend-io.h |   6 +-
 block/blkio.c                     |   8 +--
 block/io_uring.c                  |   4 +-
 block/linux-aio.c                 |   4 +-
 block/nvme.c                      |   4 +-
 block/plug.c                      | 109 +++++++++++++++----------
 hw/block/dataplane/xen-block.c    |  10 +--
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 9 files changed, 76 insertions(+), 79 deletions(-)

diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index be4dcef59d..cfcfd85c1d 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -100,9 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
-void blk_io_plug(void);
-void blk_io_unplug(void);
-void blk_io_plug_call(void (*fn)(void *), void *opaque);
+void defer_call_begin(void);
+void defer_call_end(void);
+void defer_call(void (*fn)(void *), void *opaque);
 
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
diff --git a/block/blkio.c b/block/blkio.c
index 1dd495617c..7cf6d61f47 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -312,10 +312,10 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
 }
 
 /*
- * Called by blk_io_unplug() or immediately if not plugged. Called without
- * blkio_lock.
+ * Called by defer_call_end() or immediately if not in a deferred section.
+ * Called without blkio_lock.
  */
-static void blkio_unplug_fn(void *opaque)
+static void blkio_deferred_fn(void *opaque)
 {
     BDRVBlkioState *s = opaque;
 
@@ -332,7 +332,7 @@ static void blkio_submit_io(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    blk_io_plug_call(blkio_unplug_fn, s);
+    defer_call(blkio_deferred_fn, s);
 }
 
 static int coroutine_fn
diff --git a/block/io_uring.c b/block/io_uring.c
index 69d9820928..8429f341be 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -306,7 +306,7 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }
 
-static void luring_unplug_fn(void *opaque)
+static void luring_deferred_fn(void *opaque)
 {
     LuringState *s = opaque;
     trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
@@ -367,7 +367,7 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
             return ret;
         }
 
-        blk_io_plug_call(luring_unplug_fn, s);
+        defer_call(luring_deferred_fn, s);
     }
     return 0;
 }
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 1a51503271..49a37174c2 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -353,7 +353,7 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-static void laio_unplug_fn(void *opaque)
+static void laio_deferred_fn(void *opaque)
 {
     LinuxAioState *s = opaque;
 
@@ -393,7 +393,7 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
     if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
         ioq_submit(s);
     } else {
-        blk_io_plug_call(laio_unplug_fn, s);
+        defer_call(laio_deferred_fn, s);
     }
 }
 
diff --git a/block/nvme.c b/block/nvme.c
index b6e95f0b7e..dfbd1085fd 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -476,7 +476,7 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
-static void nvme_unplug_fn(void *opaque)
+static void nvme_deferred_fn(void *opaque)
 {
     NVMeQueuePair *q = opaque;
 
@@ -503,7 +503,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
     q->need_kick++;
     qemu_mutex_unlock(&q->lock);
 
-    blk_io_plug_call(nvme_unplug_fn, q);
+    defer_call(nvme_deferred_fn, q);
 }
 
 static void nvme_admin_cmd_sync_cb(void *opaque, int ret)
diff --git a/block/plug.c b/block/plug.c
index 98a155d2f4..f26173559c 100644
--- a/block/plug.c
+++ b/block/plug.c
@@ -1,24 +1,21 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 /*
- * Block I/O plugging
+ * Deferred calls
  *
  * Copyright Red Hat.
  *
- * This API defers a function call within a blk_io_plug()/blk_io_unplug()
+ * This API defers a function call within a defer_call_begin()/defer_call_end()
  * section, allowing multiple calls to batch up. This is a performance
  * optimization that is used in the block layer to submit several I/O requests
  * at once instead of individually:
  *
- *   blk_io_plug(); <-- start of plugged region
+ *   defer_call_begin(); <-- start of section
  *   ...
- *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
- *   blk_io_plug_call(my_func, my_obj); <-- another
- *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   defer_call(my_func, my_obj); <-- deferred my_func(my_obj) call
+ *   defer_call(my_func, my_obj); <-- another
+ *   defer_call(my_func, my_obj); <-- another
 *   ...
- *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
- *
- * This code is actually generic and not tied to the block layer. If another
- * subsystem needs this functionality, it could be renamed.
+ *   defer_call_end(); <-- end of section, my_func(my_obj) is called once
  */
 
 #include "qemu/osdep.h"
@@ -27,66 +24,66 @@
 #include "qemu/thread.h"
 #include "sysemu/block-backend.h"
 
-/* A function call that has been deferred until unplug() */
+/* A function call that has been deferred until defer_call_end() */
 typedef struct {
     void (*fn)(void *);
     void *opaque;
-} UnplugFn;
+} DeferredCall;
 
 /* Per-thread state */
 typedef struct {
-    unsigned count;       /* how many times has plug() been called? */
-    GArray *unplug_fns;   /* functions to call at unplug time */
-} Plug;
+    unsigned nesting_level;
+    GArray *deferred_call_array;
+} DeferCallThreadState;
 
-/* Use get_ptr_plug() to fetch this thread-local value */
-QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
+/* Use get_ptr_defer_call_thread_state() to fetch this thread-local value */
+QEMU_DEFINE_STATIC_CO_TLS(DeferCallThreadState, defer_call_thread_state);
 
 /* Called at thread cleanup time */
-static void blk_io_plug_atexit(Notifier *n, void *value)
+static void defer_call_atexit(Notifier *n, void *value)
 {
-    Plug *plug = get_ptr_plug();
-    g_array_free(plug->unplug_fns, TRUE);
+    DeferCallThreadState *thread_state = get_ptr_defer_call_thread_state();
+    g_array_free(thread_state->deferred_call_array, TRUE);
 }
 
 /* This won't involve coroutines, so use __thread */
-static __thread Notifier blk_io_plug_atexit_notifier;
+static __thread Notifier defer_call_atexit_notifier;
 
 /**
- * blk_io_plug_call:
+ * defer_call:
  * @fn: a function pointer to be invoked
  * @opaque: a user-defined argument to @fn()
  *
- * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
- * section.
+ * Call @fn(@opaque) immediately if not within a
+ * defer_call_begin()/defer_call_end() section.
  *
  * Otherwise defer the call until the end of the outermost
- * blk_io_plug()/blk_io_unplug() section in this thread. If the same
+ * defer_call_begin()/defer_call_end() section in this thread. If the same
  * @fn/@opaque pair has already been deferred, it will only be called once upon
- * blk_io_unplug() so that accumulated calls are batched into a single call.
+ * defer_call_end() so that accumulated calls are batched into a single call.
 *
 * The caller must ensure that @opaque is not freed before @fn() is invoked.
 */
-void blk_io_plug_call(void (*fn)(void *), void *opaque)
+void defer_call(void (*fn)(void *), void *opaque)
 {
-    Plug *plug = get_ptr_plug();
+    DeferCallThreadState *thread_state = get_ptr_defer_call_thread_state();
 
-    /* Call immediately if we're not plugged */
-    if (plug->count == 0) {
+    /* Call immediately if we're not deferring calls */
+    if (thread_state->nesting_level == 0) {
         fn(opaque);
         return;
     }
 
-    GArray *array = plug->unplug_fns;
+    GArray *array = thread_state->deferred_call_array;
     if (!array) {
-        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
-        plug->unplug_fns = array;
-        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
-        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
+        array = g_array_new(FALSE, FALSE, sizeof(DeferredCall));
+        thread_state->deferred_call_array = array;
+        defer_call_atexit_notifier.notify = defer_call_atexit;
+        qemu_thread_atexit_add(&defer_call_atexit_notifier);
     }
 
-    UnplugFn *fns = (UnplugFn *)array->data;
-    UnplugFn new_fn = {
+    DeferredCall *fns = (DeferredCall *)array->data;
+    DeferredCall new_fn = {
         .fn = fn,
         .opaque = opaque,
     };
@@ -106,46 +103,46 @@ void blk_io_plug_call(void (*fn)(void *), void *opaque)
 }
 
 /**
- * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
+ * defer_call_begin: Defer defer_call() functions until defer_call_end()
 *
- * blk_io_plug/unplug are thread-local operations. This means that multiple
- * threads can simultaneously call plug/unplug, but the caller must ensure that
- * each unplug() is called in the same thread of the matching plug().
+ * defer_call_begin() and defer_call_end() are thread-local operations. The
+ * caller must ensure that each defer_call_begin() has a matching
+ * defer_call_end() in the same thread.
 *
- * Nesting is supported. blk_io_plug_call() functions are only called at the
- * outermost blk_io_unplug().
+ * Nesting is supported. defer_call() functions are only called at the
+ * outermost defer_call_end().
 */
-void blk_io_plug(void)
+void defer_call_begin(void)
 {
-    Plug *plug = get_ptr_plug();
+    DeferCallThreadState *thread_state = get_ptr_defer_call_thread_state();
 
-    assert(plug->count < UINT32_MAX);
+    assert(thread_state->nesting_level < UINT32_MAX);
 
-    plug->count++;
+    thread_state->nesting_level++;
 }
 
 /**
- * blk_io_unplug: Run any pending blk_io_plug_call() functions
+ * defer_call_end: Run any pending defer_call() functions
 *
- * There must have been a matching blk_io_plug() call in the same thread prior
- * to this blk_io_unplug() call.
+ * There must have been a matching defer_call_begin() call in the same thread
+ * prior to this defer_call_end() call.
 */
-void blk_io_unplug(void)
+void defer_call_end(void)
 {
-    Plug *plug = get_ptr_plug();
+    DeferCallThreadState *thread_state = get_ptr_defer_call_thread_state();
 
-    assert(plug->count > 0);
+    assert(thread_state->nesting_level > 0);
 
-    if (--plug->count > 0) {
+    if (--thread_state->nesting_level > 0) {
         return;
     }
 
-    GArray *array = plug->unplug_fns;
+    GArray *array = thread_state->deferred_call_array;
     if (!array) {
         return;
     }
 
-    UnplugFn *fns = (UnplugFn *)array->data;
+    DeferredCall *fns = (DeferredCall *)array->data;
 
     for (guint i = 0; i < array->len; i++) {
         fns[i].fn(fns[i].opaque);
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 3b6f2b0aa2..e9dd8f8a99 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -509,7 +509,7 @@ static int xen_block_get_request(XenBlockDataPlane *dataplane,
 
 /*
  * Threshold of in-flight requests above which we will start using
- * blk_io_plug()/blk_io_unplug() to batch requests.
+ * defer_call_begin()/defer_call_end() to batch requests.
  */
 #define IO_PLUG_THRESHOLD 1
 
@@ -537,7 +537,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
      * is below us.
      */
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_plug();
+        defer_call_begin();
     }
     while (rc != rp) {
         /* pull request from ring */
@@ -577,12 +577,12 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
         if (inflight_atstart > IO_PLUG_THRESHOLD &&
             batched >= inflight_atstart) {
-            blk_io_unplug();
+            defer_call_end();
         }
         xen_block_do_aio(request);
         if (inflight_atstart > IO_PLUG_THRESHOLD) {
             if (batched >= inflight_atstart) {
-                blk_io_plug();
+                defer_call_begin();
                 batched = 0;
             } else {
                 batched++;
@@ -590,7 +590,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
         }
     }
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_unplug();
+        defer_call_end();
     }
 
     return done_something;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 39e7f23fab..6a45033d15 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1134,7 +1134,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     bool suppress_notifications = virtio_queue_get_notification(vq);
 
     aio_context_acquire(blk_get_aio_context(s->blk));
-    blk_io_plug();
+    defer_call_begin();
 
     do {
         if (suppress_notifications) {
@@ -1158,7 +1158,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
         virtio_blk_submit_multireq(s, &mrb);
     }
 
-    blk_io_unplug();
+    defer_call_end();
     aio_context_release(blk_get_aio_context(s->blk));
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 45b95ea070..c2465e3e88 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -799,7 +799,7 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
         return -ENOBUFS;
     }
     scsi_req_ref(req->sreq);
-    blk_io_plug();
+    defer_call_begin();
     object_unref(OBJECT(d));
     return 0;
 }
@@ -810,7 +810,7 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
     if (scsi_req_enqueue(sreq)) {
         scsi_req_continue(sreq);
     }
-    blk_io_unplug();
+    defer_call_end();
     scsi_req_unref(sreq);
 }
 
@@ -836,7 +836,7 @@ static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
     while (!QTAILQ_EMPTY(&reqs)) {
         req = QTAILQ_FIRST(&reqs);
         QTAILQ_REMOVE(&reqs, req, next);
-        blk_io_unplug();
+        defer_call_end();
         scsi_req_unref(req->sreq);
         virtqueue_detach_element(req->vq, &req->elem, 0);
         virtio_scsi_free_req(req);
-- 
2.41.0
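
The begin/call/end semantics documented in block/plug.c above can be seen
end to end in a small standalone program. The sketch below is not QEMU
code: it re-implements the three functions for a single thread (the real
implementation keeps per-thread state in coroutine TLS and grows a GArray),
purely to demonstrate the nesting and fn/opaque deduplication rules:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Single-threaded re-implementation for demonstration only */
    typedef struct {
        void (*fn)(void *);
        void *opaque;
    } DeferredCall;

    static unsigned nesting_level;
    static DeferredCall deferred[16];
    static size_t num_deferred;

    static void defer_call_begin(void)
    {
        nesting_level++;
    }

    static void defer_call(void (*fn)(void *), void *opaque)
    {
        if (nesting_level == 0) {
            fn(opaque); /* outside a section: call immediately */
            return;
        }
        for (size_t i = 0; i < num_deferred; i++) {
            if (deferred[i].fn == fn && deferred[i].opaque == opaque) {
                return; /* same fn/opaque pair: batched into one call */
            }
        }
        assert(num_deferred < sizeof(deferred) / sizeof(deferred[0]));
        deferred[num_deferred++] = (DeferredCall){ .fn = fn, .opaque = opaque };
    }

    static void defer_call_end(void)
    {
        assert(nesting_level > 0);
        if (--nesting_level > 0) {
            return; /* only the outermost end runs the deferred calls */
        }
        for (size_t i = 0; i < num_deferred; i++) {
            deferred[i].fn(deferred[i].opaque);
        }
        num_deferred = 0;
    }

    static void submit_io(void *opaque)
    {
        printf("submit batched I/O for %s\n", (const char *)opaque);
    }

    int main(void)
    {
        char q0[] = "queue0";

        defer_call_begin();
        defer_call_begin();        /* nesting is allowed */
        defer_call(submit_io, q0);
        defer_call(submit_io, q0); /* deduplicated */
        defer_call_end();          /* inner end: nothing runs yet */
        defer_call(submit_io, q0); /* still the same fn/opaque pair */
        defer_call_end();          /* submit_io(q0) prints exactly once */
        return 0;
    }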
From nobody Tue May 14 14:17:52 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Ilya Maximets, Philippe Mathieu-Daudé, Kevin Wolf,
    xen-devel@lists.xenproject.org, Anthony Perard, Paolo Bonzini,
    Stefan Hajnoczi, Julia Suvorova, Aarushi Mehta, Paul Durrant,
    "Michael S. Tsirkin", Fam Zheng, Stefano Garzarella, Hanna Reitz
Subject: [PATCH v3 2/4] util/defer-call: move defer_call() to util/
Date: Wed, 13 Sep 2023 16:00:43 -0400
Message-ID: <20230913200045.1024233-3-stefanha@redhat.com>
In-Reply-To: <20230913200045.1024233-1-stefanha@redhat.com>
References: <20230913200045.1024233-1-stefanha@redhat.com>

The networking subsystem may wish to use defer_call(), so move the code
to util/ where it can be reused.

As a reminder of what defer_call() does:

This API defers a function call within a defer_call_begin()/defer_call_end()
section, allowing multiple calls to batch up. This is a performance
optimization that is used in the block layer to submit several I/O requests
at once instead of individually:

  defer_call_begin(); <-- start of section
  ...
  defer_call(my_func, my_obj); <-- deferred my_func(my_obj) call
  defer_call(my_func, my_obj); <-- another
  defer_call(my_func, my_obj); <-- another
  ...
  defer_call_end(); <-- end of section, my_func(my_obj) is called once

Suggested-by: Ilya Maximets
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Michael S. Tsirkin
---
 MAINTAINERS                       |  3 ++-
 include/qemu/defer-call.h         | 16 ++++++++++++++++
 include/sysemu/block-backend-io.h |  4 ----
 block/blkio.c                     |  1 +
 block/io_uring.c                  |  1 +
 block/linux-aio.c                 |  1 +
 block/nvme.c                      |  1 +
 hw/block/dataplane/xen-block.c    |  1 +
 hw/block/virtio-blk.c             |  1 +
 hw/scsi/virtio-scsi.c             |  1 +
 block/plug.c => util/defer-call.c |  2 +-
 block/meson.build                 |  1 -
 util/meson.build                  |  1 +
 13 files changed, 27 insertions(+), 7 deletions(-)
 create mode 100644 include/qemu/defer-call.h
 rename block/plug.c => util/defer-call.c (99%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 00562f924f..acda735326 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2685,12 +2685,13 @@ S: Supported
 F: util/async.c
 F: util/aio-*.c
 F: util/aio-*.h
+F: util/defer-call.c
 F: util/fdmon-*.c
 F: block/io.c
-F: block/plug.c
 F: migration/block*
 F: include/block/aio.h
 F: include/block/aio-wait.h
+F: include/qemu/defer-call.h
 F: scripts/qemugdb/aio.py
 F: tests/unit/test-fdmon-epoll.c
 T: git https://github.com/stefanha/qemu.git block
diff --git a/include/qemu/defer-call.h b/include/qemu/defer-call.h
new file mode 100644
index 0000000000..e2c1d24572
--- /dev/null
+++ b/include/qemu/defer-call.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Deferred calls
+ *
+ * Copyright Red Hat.
+ */
+
+#ifndef QEMU_DEFER_CALL_H
+#define QEMU_DEFER_CALL_H
+
+/* See documentation in util/defer-call.c */
+void defer_call_begin(void);
+void defer_call_end(void);
+void defer_call(void (*fn)(void *), void *opaque);
+
+#endif /* QEMU_DEFER_CALL_H */
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index cfcfd85c1d..d174275a5c 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -100,10 +100,6 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
-void defer_call_begin(void);
-void defer_call_end(void);
-void defer_call(void (*fn)(void *), void *opaque);
-
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
 void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
diff --git a/block/blkio.c b/block/blkio.c
index 7cf6d61f47..0a0a6c0f5f 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -13,6 +13,7 @@
 #include "block/block_int.h"
 #include "exec/memory.h"
 #include "exec/cpu-common.h" /* for qemu_ram_get_fd() */
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "qapi/qmp/qdict.h"
diff --git a/block/io_uring.c b/block/io_uring.c
index 8429f341be..3a1e1f45b3 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -15,6 +15,7 @@
 #include "block/block.h"
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"
 #include "trace.h"
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 49a37174c2..a2670b3e46 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -14,6 +14,7 @@
 #include "block/raw-aio.h"
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"
 
diff --git a/block/nvme.c b/block/nvme.c
index dfbd1085fd..96b3f8f2fa 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -16,6 +16,7 @@
 #include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index e9dd8f8a99..c4bb28c66f 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -19,6 +19,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/memalign.h"
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 6a45033d15..a1f8e15522 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -12,6 +12,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "qemu/iov.h"
 #include "qemu/module.h"
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index c2465e3e88..83c154e74e 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -18,6 +18,7 @@
 #include "standard-headers/linux/virtio_ids.h"
 #include "hw/virtio/virtio-scsi.h"
 #include "migration/qemu-file-types.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/iov.h"
 #include "qemu/module.h"
diff --git a/block/plug.c b/util/defer-call.c
similarity index 99%
rename from block/plug.c
rename to util/defer-call.c
index f26173559c..037dc0abf0 100644
--- a/block/plug.c
+++ b/util/defer-call.c
@@ -22,7 +22,7 @@
 #include "qemu/coroutine-tls.h"
 #include "qemu/notify.h"
 #include "qemu/thread.h"
-#include "sysemu/block-backend.h"
+#include "qemu/defer-call.h"
 
 /* A function call that has been deferred until defer_call_end() */
 typedef struct {
diff --git a/block/meson.build b/block/meson.build
index f351b9d0d3..59ff6d380c 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -21,7 +21,6 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
-  'plug.c',
   'preallocate.c',
   'progress_meter.c',
   'qapi.c',
diff --git a/util/meson.build b/util/meson.build
index c4827fd70a..769b24f2e0 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -28,6 +28,7 @@ util_ss.add(when: 'CONFIG_WIN32', if_true: pathcch)
 if glib_has_gslice
   util_ss.add(files('qtree.c'))
 endif
+util_ss.add(files('defer-call.c'))
 util_ss.add(files('envlist.c', 'path.c', 'module.c'))
 util_ss.add(files('host-utils.c'))
 util_ss.add(files('bitmap.c', 'bitops.c'))
-- 
2.41.0
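
With the implementation in util/ and the declarations in qemu/defer-call.h,
a subsystem outside the block layer only needs the new header. Here is a
hypothetical sketch of such a caller; the NetQueue type, net_flush_queue(),
and net_queue_send_batch() are invented names for illustration and do not
appear in this series:

    #include "qemu/osdep.h"
    #include "qemu/defer-call.h"

    typedef struct NetQueue NetQueue; /* hypothetical subsystem type */

    /* Flush everything queued on @opaque with a single syscall (sketch) */
    static void net_flush_queue(void *opaque)
    {
        NetQueue *q = opaque;
        /* ... e.g. one sendmsg() or eventfd write for all queued packets ... */
        (void)q;
    }

    static void net_queue_send_batch(NetQueue *q, void *pkts[], size_t n)
    {
        defer_call_begin();
        for (size_t i = 0; i < n; i++) {
            (void)pkts[i]; /* ... enqueue pkts[i] on q ... */
            defer_call(net_flush_queue, q); /* deduplicated per queue */
        }
        defer_call_end(); /* net_flush_queue(q) runs once here */
    }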
From nobody Tue May 14 14:17:52 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Ilya Maximets, Philippe Mathieu-Daudé, Kevin Wolf,
    xen-devel@lists.xenproject.org, Anthony Perard, Paolo Bonzini,
    Stefan Hajnoczi, Julia Suvorova, Aarushi Mehta, Paul Durrant,
    "Michael S. Tsirkin", Fam Zheng, Stefano Garzarella, Hanna Reitz,
    Eric Blake
Subject: [PATCH v3 3/4] virtio: use defer_call() in virtio_irqfd_notify()
Date: Wed, 13 Sep 2023 16:00:44 -0400
Message-ID: <20230913200045.1024233-4-stefanha@redhat.com>
In-Reply-To: <20230913200045.1024233-1-stefanha@redhat.com>
References: <20230913200045.1024233-1-stefanha@redhat.com>

virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used
Buffer Notifications from an IOThread. This involves an eventfd
write(2) syscall. Calling this repeatedly when completing multiple I/O
requests in a row is wasteful.

Use the defer_call() API to batch together virtio_irqfd_notify() calls
made during thread pool (aio=threads), Linux AIO (aio=native), and
io_uring (aio=io_uring) completion processing.

Behavior is unchanged for emulated devices that do not use
defer_call_begin()/defer_call_end() since defer_call() immediately
invokes the callback when called outside a
defer_call_begin()/defer_call_end() region.

fio rw=randread bs=4k iodepth=64 numjobs=8 IOPS increases by ~9% with a
single IOThread and 8 vCPUs. iodepth=1 decreases by ~1% but this could
be noise. Detailed performance data and configuration specifics are
available here:
https://gitlab.com/stefanha/virt-playbooks/-/tree/blk_io_plug-irqfd

This duplicates the BH that virtio-blk uses for batching. The next
commit will remove it.

Reviewed-by: Eric Blake
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Michael S. Tsirkin
---
 block/io_uring.c       |  6 ++++++
 block/linux-aio.c      |  4 ++++
 hw/virtio/virtio.c     | 13 ++++++++++++-
 util/thread-pool.c     |  5 +++++
 hw/virtio/trace-events |  1 +
 5 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/block/io_uring.c b/block/io_uring.c
index 3a1e1f45b3..7cdd00e9f1 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -125,6 +125,9 @@ static void luring_process_completions(LuringState *s)
 {
     struct io_uring_cqe *cqes;
     int total_bytes;
+
+    defer_call_begin();
+
     /*
      * Request completion callbacks can run the nested event loop.
      * Schedule ourselves so the nested event loop will "see" remaining
@@ -217,7 +220,10 @@ end:
             aio_co_wake(luringcb->co);
         }
     }
+
     qemu_bh_cancel(s->completion_bh);
+
+    defer_call_end();
 }
 
 static int ioq_submit(LuringState *s)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index a2670b3e46..ec05d946f3 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -205,6 +205,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 {
     struct io_event *events;
 
+    defer_call_begin();
+
     /* Reschedule so nested event loops see currently pending completions */
     qemu_bh_schedule(s->completion_bh);
 
@@ -231,6 +233,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
      * own `for` loop. If we are the last all counters dropped to zero.
      */
     s->event_max = 0;
     s->event_idx = 0;
+
+    defer_call_end();
 }
 
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 969c25f4cf..d9aeed7012 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -15,6 +15,7 @@
 #include "qapi/error.h"
 #include "qapi/qapi-commands-virtio.h"
 #include "trace.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/log.h"
 #include "qemu/main-loop.h"
@@ -2426,6 +2427,16 @@ static bool virtio_should_notify(VirtIODevice *vdev, VirtQueue *vq)
     }
 }
 
+/* Batch irqs while inside a defer_call_begin()/defer_call_end() section */
+static void virtio_notify_irqfd_deferred_fn(void *opaque)
+{
+    EventNotifier *notifier = opaque;
+    VirtQueue *vq = container_of(notifier, VirtQueue, guest_notifier);
+
+    trace_virtio_notify_irqfd_deferred_fn(vq->vdev, vq);
+    event_notifier_set(notifier);
+}
+
 void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
 {
     WITH_RCU_READ_LOCK_GUARD() {
@@ -2452,7 +2463,7 @@ void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
      * to an atomic operation.
      */
     virtio_set_isr(vq->vdev, 0x1);
-    event_notifier_set(&vq->guest_notifier);
+    defer_call(virtio_notify_irqfd_deferred_fn, &vq->guest_notifier);
 }
 
 static void virtio_irq(VirtQueue *vq)
diff --git a/util/thread-pool.c b/util/thread-pool.c
index e3d8292d14..d84961779a 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -15,6 +15,7 @@
  * GNU GPL, version 2 or (at your option) any later version.
  */
 #include "qemu/osdep.h"
+#include "qemu/defer-call.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
 #include "qemu/coroutine.h"
@@ -175,6 +176,8 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
+    defer_call_begin(); /* cb() may use defer_call() to coalesce work */
+
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -208,6 +211,8 @@ restart:
             qemu_aio_unref(elem);
         }
     }
+
+    defer_call_end();
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 7109cf1a3b..29f4f543ad 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -73,6 +73,7 @@ virtqueue_fill(void *vq, const void *elem, unsigned int len, unsigned int idx) "
 virtqueue_flush(void *vq, unsigned int count) "vq %p count %u"
 virtqueue_pop(void *vq, void *elem, unsigned int in_num, unsigned int out_num) "vq %p elem %p in_num %u out_num %u"
 virtio_queue_notify(void *vdev, int n, void *vq) "vdev %p n %d vq %p"
+virtio_notify_irqfd_deferred_fn(void *vdev, void *vq) "vdev %p vq %p"
 virtio_notify_irqfd(void *vdev, void *vq) "vdev %p vq %p"
 virtio_notify(void *vdev, void *vq) "vdev %p vq %p"
 virtio_set_status(void *vdev, uint8_t val) "vdev %p val %u"
-- 
2.41.0
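
The speedup comes from defer_call()'s deduplication: every completion in a
batch defers the same fn/&vq->guest_notifier pair, so the outermost
defer_call_end() issues one event_notifier_set(), and therefore one eventfd
write(2), per virtqueue instead of one per request. The following standalone
model of the patched completion path shows the effect; it builds on the
defer_call() sketch after patch 1, and notify_guest() and
process_completions() are invented stand-ins rather than code from this
patch:

    /* Model of the patched completion path, reusing the defer_call()
     * sketch shown after patch 1 of this series. */
    static unsigned eventfd_writes; /* stands in for write(2) on the irqfd */

    static void notify_guest(void *opaque)
    {
        (void)opaque;     /* would be &vq->guest_notifier in QEMU */
        eventfd_writes++; /* stands in for event_notifier_set() */
    }

    static void process_completions(void *vq, int num_completed)
    {
        defer_call_begin();                /* added by this patch */
        for (int i = 0; i < num_completed; i++) {
            /* ... complete request i, which calls virtio_notify_irqfd() ... */
            defer_call(notify_guest, vq);  /* same fn/opaque pair each time */
        }
        defer_call_end(); /* eventfd_writes rises by 1 per batch, not per request */
    }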
From nobody Tue May 14 14:17:52 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Ilya Maximets, Philippe Mathieu-Daudé, Kevin Wolf,
    xen-devel@lists.xenproject.org, Anthony Perard, Paolo Bonzini,
    Stefan Hajnoczi, Julia Suvorova, Aarushi Mehta, Paul Durrant,
    "Michael S. Tsirkin", Fam Zheng, Stefano Garzarella, Hanna Reitz,
    Eric Blake
Subject: [PATCH v3 4/4] virtio-blk: remove batch notification BH
Date: Wed, 13 Sep 2023 16:00:45 -0400
Message-ID: <20230913200045.1024233-5-stefanha@redhat.com>
In-Reply-To: <20230913200045.1024233-1-stefanha@redhat.com>
References: <20230913200045.1024233-1-stefanha@redhat.com>

There is a batching mechanism for virtio-blk Used Buffer Notifications
that is no longer needed because the previous commit added batching to
virtio_notify_irqfd().

Note that this mechanism was rarely used in practice because it is only
enabled when EVENT_IDX is not negotiated by the driver. Modern drivers
enable EVENT_IDX.

Reviewed-by: Eric Blake
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Michael S. Tsirkin
---
 hw/block/dataplane/virtio-blk.c | 48 +--------------------------------
 1 file changed, 1 insertion(+), 47 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index da36fcfd0b..f83bb0f116 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -31,9 +31,6 @@ struct VirtIOBlockDataPlane {
 
     VirtIOBlkConf *conf;
     VirtIODevice *vdev;
-    QEMUBH *bh;                     /* bh for guest notification */
-    unsigned long *batch_notify_vqs;
-    bool batch_notifications;
 
     /* Note that these EventNotifiers are assigned by value.  This is
      * fine as long as you do not call event_notifier_cleanup on them
@@ -47,36 +44,7 @@ struct VirtIOBlockDataPlane {
 /* Raise an interrupt to signal guest, if necessary */
 void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
 {
-    if (s->batch_notifications) {
-        set_bit(virtio_get_queue_index(vq), s->batch_notify_vqs);
-        qemu_bh_schedule(s->bh);
-    } else {
-        virtio_notify_irqfd(s->vdev, vq);
-    }
-}
-
-static void notify_guest_bh(void *opaque)
-{
-    VirtIOBlockDataPlane *s = opaque;
-    unsigned nvqs = s->conf->num_queues;
-    unsigned long bitmap[BITS_TO_LONGS(nvqs)];
-    unsigned j;
-
-    memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap));
-    memset(s->batch_notify_vqs, 0, sizeof(bitmap));
-
-    for (j = 0; j < nvqs; j += BITS_PER_LONG) {
-        unsigned long bits = bitmap[j / BITS_PER_LONG];
-
-        while (bits != 0) {
-            unsigned i = j + ctzl(bits);
-            VirtQueue *vq = virtio_get_queue(s->vdev, i);
-
-            virtio_notify_irqfd(s->vdev, vq);
-
-            bits &= bits - 1; /* clear right-most bit */
-        }
-    }
+    virtio_notify_irqfd(s->vdev, vq);
 }
 
 /* Context: QEMU global mutex held */
@@ -126,9 +94,6 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
-                               &DEVICE(vdev)->mem_reentrancy_guard);
-    s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
 
@@ -146,8 +111,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 
     vblk = VIRTIO_BLK(s->vdev);
     assert(!vblk->dataplane_started);
-    g_free(s->batch_notify_vqs);
-    qemu_bh_delete(s->bh);
     if (s->iothread) {
         object_unref(OBJECT(s->iothread));
     }
@@ -173,12 +136,6 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
 
     s->starting = true;
 
-    if (!virtio_vdev_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX)) {
-        s->batch_notifications = true;
-    } else {
-        s->batch_notifications = false;
-    }
-
     /* Set up guest notifier (irq) */
     r = k->set_guest_notifiers(qbus->parent, nvqs, true);
    if (r != 0) {
@@ -370,9 +327,6 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 
     aio_context_release(s->ctx);
 
-    qemu_bh_cancel(s->bh);
-    notify_guest_bh(s); /* final chance to notify guest */
-
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, nvqs, false);
 
-- 
2.41.0
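
Taken together, the series leaves a notification path that is easiest to
see in one place. The condensed view below is assembled from the hunks
above; it is abbreviated for reading and not compilable as-is:

    /* hw/block/dataplane/virtio-blk.c: no BH, no batch_notify_vqs bitmap */
    void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
    {
        virtio_notify_irqfd(s->vdev, vq);
    }

    /* hw/virtio/virtio.c: virtio_notify_irqfd() now ends by deferring the
     * eventfd write instead of performing it directly */
        virtio_set_isr(vq->vdev, 0x1);
        defer_call(virtio_notify_irqfd_deferred_fn, &vq->guest_notifier);

    /* The completion loops in util/thread-pool.c, block/linux-aio.c, and
     * block/io_uring.c bracket their work with defer_call_begin() and
     * defer_call_end(), so the guest is interrupted once per virtqueue
     * per completion batch. */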