From: Tim Smith
Date: Fri, 2 Nov 2018 10:00:59 +0000
Subject: [Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour
Message-ID: <154115285942.11300.11718576813181760505.stgit@dhcp-3-135.uk.xensource.com>
In-Reply-To: <154115285434.11300.8459925605672823399.stgit@dhcp-3-135.uk.xensource.com>
Cc: Anthony Perard, Kevin Wolf, Paul Durrant, Stefano Stabellini, Max Reitz
User-Agent: StGit/0.17.1-dirty

When I/O consists of many small requests, performance is improved by
batching them together in a single io_submit() call. When there are
relatively few requests, the extra overhead is not worth it.

This introduces a check to start batching I/O requests via
blk_io_plug()/blk_io_unplug() in an amount proportional to the number
that were already in flight at the time we started reading the ring.

Signed-off-by: Tim Smith
Acked-by: Anthony PERARD
Reviewed-by: Paul Durrant
---
 hw/block/xen_disk.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 36eff94f84..cb2881b7e6 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -101,6 +101,9 @@ struct XenBlkDev {
     AioContext *ctx;
 };
 
+/* Threshold of in-flight requests above which we will start using
+ * blk_io_plug()/blk_io_unplug() to batch requests */
+#define IO_PLUG_THRESHOLD 1
 /* ------------------------------------------------------------- */
 
 static void ioreq_reset(struct ioreq *ioreq)
@@ -542,6 +545,8 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
 {
     RING_IDX rc, rp;
     struct ioreq *ioreq;
+    int inflight_atstart = blkdev->requests_inflight;
+    int batched = 0;
 
     blkdev->more_work = 0;
 
@@ -550,6 +555,16 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
 
     blk_send_response_all(blkdev);
+    /* If there were more than IO_PLUG_THRESHOLD ioreqs in flight
+     * when we got here, this is an indication that the bottleneck
+     * is below us, so it's worth beginning to batch up I/O requests
+     * rather than submitting them immediately. The maximum number
+     * of requests we're willing to batch is the number already in
+     * flight, so it can grow up to max_requests when the bottleneck
+     * is below us */
+    if (inflight_atstart > IO_PLUG_THRESHOLD) {
+        blk_io_plug(blkdev->blk);
+    }
     while (rc != rp) {
         /* pull request from ring */
         if (RING_REQUEST_CONS_OVERFLOW(&blkdev->rings.common, rc)) {
@@ -589,7 +604,22 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
             continue;
         }
 
+        if (inflight_atstart > IO_PLUG_THRESHOLD &&
+            batched >= inflight_atstart) {
+            blk_io_unplug(blkdev->blk);
+        }
         ioreq_runio_qemu_aio(ioreq);
+        if (inflight_atstart > IO_PLUG_THRESHOLD) {
+            if (batched >= inflight_atstart) {
+                blk_io_plug(blkdev->blk);
+                batched = 0;
+            } else {
+                batched++;
+            }
+        }
+    }
+    if (inflight_atstart > IO_PLUG_THRESHOLD) {
+        blk_io_unplug(blkdev->blk);
     }
 
     if (blkdev->more_work && blkdev->requests_inflight < blkdev->max_requests) {