From: Tim Smith
To:
Date: Fri, 2 Nov 2018 09:29:50 +0000
Message-ID: <154115099006.664.2982181181564452215.stgit@dhcp-3-135.uk.xensource.com>
In-Reply-To: <154115098499.664.15585399091081300567.stgit@dhcp-3-135.uk.xensource.com>
References: <154115098499.664.15585399091081300567.stgit@dhcp-3-135.uk.xensource.com>
User-Agent: StGit/0.17.1-dirty
Subject: [Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour
Cc: Paul Durrant

When I/O consists of many small requests, performance is improved by
batching them together in a single io_submit() call. When there are
relatively few requests, the extra overhead is not worth it.

This introduces a check to start batching I/O requests via
blk_io_plug()/blk_io_unplug() in an amount proportional to the number
that were already in flight at the time we started reading the ring.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 36eff94f84..6cb40d66fa 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -101,6 +101,9 @@ struct XenBlkDev {
     AioContext *ctx;
 };
 
+/* Threshold of in-flight requests above which we will start using
+ * blk_io_plug()/blk_io_unplug() to batch requests */
+#define IO_PLUG_THRESHOLD 1
 /* ------------------------------------------------------------- */
 
 static void ioreq_reset(struct ioreq *ioreq)
@@ -542,6 +545,8 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
 {
     RING_IDX rc, rp;
     struct ioreq *ioreq;
+    int inflight_atstart = blkdev->requests_inflight;
+    int batched = 0;
 
     blkdev->more_work = 0;
 
@@ -550,6 +555,16 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
 
     blk_send_response_all(blkdev);
+    /* If there were more than IO_PLUG_THRESHOLD ioreqs in flight
+     * when we got here, this is an indication that the bottleneck
+     * is below us, so it's worth beginning to batch up I/O requests
+     * rather than submitting them immediately. The maximum number
+     * of requests we're willing to batch is the number already in
+     * flight, so it can grow up to max_requests when the bottleneck
+     * is below us */
+    if (inflight_atstart > IO_PLUG_THRESHOLD) {
+        blk_io_plug(blkdev->blk);
+    }
     while (rc != rp) {
         /* pull request from ring */
         if (RING_REQUEST_CONS_OVERFLOW(&blkdev->rings.common, rc)) {
@@ -589,7 +604,21 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
             continue;
         }
 
+        if (inflight_atstart > IO_PLUG_THRESHOLD && batched >= inflight_atstart) {
+            blk_io_unplug(blkdev->blk);
+        }
         ioreq_runio_qemu_aio(ioreq);
+        if (inflight_atstart > IO_PLUG_THRESHOLD) {
+            if (batched >= inflight_atstart) {
+                blk_io_plug(blkdev->blk);
+                batched = 0;
+            } else {
+                batched++;
+            }
+        }
+    }
+    if (inflight_atstart > IO_PLUG_THRESHOLD) {
+        blk_io_unplug(blkdev->blk);
     }
 
     if (blkdev->more_work && blkdev->requests_inflight < blkdev->max_requests) {