From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Date: Fri, 3 Feb 2017 14:37:53 +0000
Message-Id: <20170203143753.9903-2-stefanha@redhat.com>
In-Reply-To: <20170203143753.9903-1-stefanha@redhat.com>
References: <20170203143753.9903-1-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 1/1] iothread: enable AioContext polling by default
Cc: Peter Maydell, Karl Rister, Christian Borntraeger, Stefan Hajnoczi,
 Paolo Bonzini

IOThread AioContexts are likely to consist only of event sources like
virtqueue ioeventfds and LinuxAIO completion eventfds that are pollable
from userspace (without system calls).

We recently merged the AioContext polling feature but didn't enable it
by default yet.  I have gone back over the performance data on the
mailing list and picked a default polling value that gave good results.

Let's enable AioContext polling by default so users don't have another
switch they need to set manually.
If performance regressions are found we can still
disable this for the QEMU 2.9 release.

Cc: Paolo Bonzini
Cc: Christian Borntraeger
Cc: Karl Rister
Signed-off-by: Stefan Hajnoczi
Message-id: 20170126170119.27876-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi
---
 iothread.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/iothread.c b/iothread.c
index 7bedde8..257b01d 100644
--- a/iothread.c
+++ b/iothread.c
@@ -30,6 +30,12 @@ typedef ObjectClass IOThreadClass;
 #define IOTHREAD_CLASS(klass) \
    OBJECT_CLASS_CHECK(IOThreadClass, klass, TYPE_IOTHREAD)
 
+/* Benchmark results from 2016 on NVMe SSD drives show max polling times around
+ * 16-32 microseconds yield IOPS improvements for both iodepth=1 and iodepth=32
+ * workloads.
+ */
+#define IOTHREAD_POLL_MAX_NS_DEFAULT 32768ULL
+
 static __thread IOThread *my_iothread;
 
 AioContext *qemu_get_current_aio_context(void)
@@ -71,6 +77,13 @@ static int iothread_stop(Object *object, void *opaque)
     return 0;
 }
 
+static void iothread_instance_init(Object *obj)
+{
+    IOThread *iothread = IOTHREAD(obj);
+
+    iothread->poll_max_ns = IOTHREAD_POLL_MAX_NS_DEFAULT;
+}
+
 static void iothread_instance_finalize(Object *obj)
 {
     IOThread *iothread = IOTHREAD(obj);
@@ -215,6 +228,7 @@ static const TypeInfo iothread_info = {
     .parent = TYPE_OBJECT,
     .class_init = iothread_class_init,
     .instance_size = sizeof(IOThread),
+    .instance_init = iothread_instance_init,
     .instance_finalize = iothread_instance_finalize,
     .interfaces = (InterfaceInfo[]) {
         {TYPE_USER_CREATABLE},
-- 
2.9.3
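
For users who do want to opt out of the new default, the poll-max-ns
property on the iothread object should still allow tuning or disabling
polling per IOThread.  A minimal sketch of such an invocation follows;
the disk image path and the iothread0/drive0 IDs are hypothetical, and
it assumes the standard -object iothread / virtio-blk-pci iothread=
syntax, with poll-max-ns=0 disabling polling for that IOThread:

  qemu-system-x86_64 \
      -object iothread,id=iothread0,poll-max-ns=0 \
      -drive if=none,id=drive0,file=disk.img,format=raw \
      -device virtio-blk-pci,drive=drive0,iothread=iothread0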