From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH v3 11/15] qemu: Wire up new virDomainSetIOThreadParams parameters
Date: Wed, 8 Jun 2022 15:43:05 +0200

The previous commit introduced the VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN and
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX parameters; the QEMU driver now needs to
be taught how to set them on a given IOThread. Fortunately, this is fairly
trivial to do, and since these two parameters are also exposed in the domain
XML, updating the inactive XML can be wired up too.
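As a hedged aside (not part of the patch itself), the typed-parameter pattern the driver relies on below — each integer value paired with a `set_` flag so that "parameter absent" can be distinguished from "parameter provided as 0" — can be sketched standalone. All struct and function names here (`ExampleParam`, `ExampleIOThreadInfo`, `example_get_int`, `example_parse`) are invented for illustration; the real code uses `virTypedParamsGetInt` and `qemuMonitorIOThreadInfo`.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a virTypedParameter entry. */
typedef struct {
    const char *field;
    int value;
} ExampleParam;

/* Hypothetical stand-in for qemuMonitorIOThreadInfo: each value carries
 * a set_ flag so "not provided" differs from "provided as 0". */
typedef struct {
    int thread_pool_min;
    int thread_pool_max;
    bool set_thread_pool_min;
    bool set_thread_pool_max;
} ExampleIOThreadInfo;

/* Returns 1 when the parameter was found (mirroring the convention of
 * virTypedParamsGetInt), 0 when it is absent. */
static int
example_get_int(const ExampleParam *params, size_t nparams,
                const char *field, int *value)
{
    for (size_t i = 0; i < nparams; i++) {
        if (strcmp(params[i].field, field) == 0) {
            *value = params[i].value;
            return 1;
        }
    }
    return 0;
}

/* Fill the info struct, raising the set_ flag only for fields that
 * were actually present in the parameter list. */
static void
example_parse(const ExampleParam *params, size_t nparams,
              ExampleIOThreadInfo *info)
{
    if (example_get_int(params, nparams, "thread-pool-min",
                        &info->thread_pool_min) == 1)
        info->set_thread_pool_min = true;

    if (example_get_int(params, nparams, "thread-pool-max",
                        &info->thread_pool_max) == 1)
        info->set_thread_pool_max = true;
}
```

The same presence-flag shape is what the patch adds to `struct _qemuMonitorIOThreadInfo` (`set_thread_pool_min`/`set_thread_pool_max`), letting later code forward only the properties the caller actually specified.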
Signed-off-by: Michal Privoznik
Reviewed-by: Peter Krempa
---
 src/qemu/qemu_driver.c       | 140 +++++++++++++++++++++++++++++++++--
 src/qemu/qemu_monitor.h      |   4 +
 src/qemu/qemu_monitor_json.c |   2 +
 3 files changed, 141 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ee1adb0300..ded34e97cd 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5323,6 +5323,26 @@ qemuDomainHotplugModIOThread(virQEMUDriver *driver,
 }
 
 
+static int
+qemuDomainHotplugModIOThreadIDDef(virDomainIOThreadIDDef *def,
+                                  qemuMonitorIOThreadInfo mondef)
+{
+    /* These have no representation in domain XML */
+    if (mondef.set_poll_grow ||
+        mondef.set_poll_max_ns ||
+        mondef.set_poll_shrink)
+        return -1;
+
+    if (mondef.set_thread_pool_min)
+        def->thread_pool_min = mondef.thread_pool_min;
+
+    if (mondef.set_thread_pool_max)
+        def->thread_pool_max = mondef.thread_pool_max;
+
+    return 0;
+}
+
+
 static int
 qemuDomainHotplugDelIOThread(virQEMUDriver *driver,
                              virDomainObj *vm,
@@ -5430,6 +5450,10 @@ qemuDomainIOThreadParseParams(virTypedParameterPtr params,
                                VIR_TYPED_PARAM_UINT,
                                VIR_DOMAIN_IOTHREAD_POLL_SHRINK,
                                VIR_TYPED_PARAM_UINT,
+                               VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN,
+                               VIR_TYPED_PARAM_INT,
+                               VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX,
+                               VIR_TYPED_PARAM_INT,
                                NULL) < 0)
         return -1;
 
@@ -5454,6 +5478,20 @@ qemuDomainIOThreadParseParams(virTypedParameterPtr params,
     if (rc == 1)
         iothread->set_poll_shrink = true;
 
+    if ((rc = virTypedParamsGetInt(params, nparams,
+                                   VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN,
+                                   &iothread->thread_pool_min)) < 0)
+        return -1;
+    if (rc == 1)
+        iothread->set_thread_pool_min = true;
+
+    if ((rc = virTypedParamsGetInt(params, nparams,
+                                   VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX,
+                                   &iothread->thread_pool_max)) < 0)
+        return -1;
+    if (rc == 1)
+        iothread->set_thread_pool_max = true;
+
     if (iothread->set_poll_max_ns && iothread->poll_max_ns > INT_MAX) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                        _("poll-max-ns (%llu) must be less than or equal to %d"),
@@ -5475,6 +5513,78 @@ qemuDomainIOThreadParseParams(virTypedParameterPtr params,
         return -1;
     }
 
+    if (iothread->set_thread_pool_min && iothread->thread_pool_min < -1) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("thread_pool_min (%d) must be equal to or greater than -1"),
+                       iothread->thread_pool_min);
+        return -1;
+    }
+
+    if (iothread->set_thread_pool_max &&
+        (iothread->thread_pool_max < -1 || iothread->thread_pool_max == 0)) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("thread_pool_max (%d) must be a positive number or -1"),
+                       iothread->thread_pool_max);
+        return -1;
+    }
+
+    return 0;
+}
+
+
+/**
+ * qemuDomainIOThreadValidate:
+ * iothreaddef: IOThread definition in domain XML
+ * iothread: new values to set
+ * live: whether this is update of active domain
+ *
+ * Validate that changes to be made to an IOThread (as expressed by @iothread)
+ * are consistent with the current state of the IOThread (@iothreaddef).
+ * For instance, that thread_pool_min won't end up greater than thread_pool_max.
+ *
+ * Returns: 0 on success,
+ *         -1 on error, with error message reported.
+ */
+static int
+qemuDomainIOThreadValidate(virDomainIOThreadIDDef *iothreaddef,
+                           qemuMonitorIOThreadInfo iothread,
+                           bool live)
+{
+    int thread_pool_min = iothreaddef->thread_pool_min;
+    int thread_pool_max = iothreaddef->thread_pool_max;
+
+    /* For live change we don't have a way to let QEMU return to its
+     * defaults. Therefore, deny setting -1.
+     */
+
+    if (iothread.set_thread_pool_min) {
+        if (live && iothread.thread_pool_min < 0) {
+            virReportError(VIR_ERR_OPERATION_INVALID,
+                           _("thread_pool_min (%d) must be equal to or greater than 0 for live change"),
+                           iothread.thread_pool_min);
+            return -1;
+        }
+
+        thread_pool_min = iothread.thread_pool_min;
+    }
+
+    if (iothread.set_thread_pool_max) {
+        if (live && iothread.thread_pool_max < 0) {
+            virReportError(VIR_ERR_OPERATION_INVALID,
+                           _("thread_pool_max (%d) must be equal to or greater than 0 for live change"),
+                           iothread.thread_pool_max);
+            return -1;
+        }
+
+        thread_pool_max = iothread.thread_pool_max;
+    }
+
+    if (thread_pool_min > thread_pool_max) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       _("thread_pool_min (%d) can't be greater than thread_pool_max (%d)"),
+                       thread_pool_min, thread_pool_max);
+        return -1;
+    }
+
     return 0;
 }
 
@@ -5496,6 +5606,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver,
     qemuDomainObjPrivate *priv;
     virDomainDef *def;
     virDomainDef *persistentDef;
+    virDomainIOThreadIDDef *iothreaddef = NULL;
     int ret = -1;
 
     cfg = virQEMUDriverGetConfig(driver);
@@ -5535,16 +5646,22 @@ qemuDomainChgIOThread(virQEMUDriver *driver,
             break;
 
         case VIR_DOMAIN_IOTHREAD_ACTION_MOD:
-            if (!(virDomainIOThreadIDFind(def, iothread.iothread_id))) {
+            iothreaddef = virDomainIOThreadIDFind(def, iothread.iothread_id);
+
+            if (!iothreaddef) {
                 virReportError(VIR_ERR_INVALID_ARG,
                                _("cannot find IOThread '%u' in iothreadids"),
                                iothread.iothread_id);
                 goto endjob;
             }
 
+            if (qemuDomainIOThreadValidate(iothreaddef, iothread, true) < 0)
+                goto endjob;
+
             if (qemuDomainHotplugModIOThread(driver, vm, iothread) < 0)
                 goto endjob;
 
+            qemuDomainHotplugModIOThreadIDDef(iothreaddef, iothread);
             break;
 
         }
@@ -5572,10 +5689,23 @@ qemuDomainChgIOThread(virQEMUDriver *driver,
             break;
 
         case VIR_DOMAIN_IOTHREAD_ACTION_MOD:
-            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-                           _("configuring persistent polling values is "
-                             "not supported"));
-            goto endjob;
+            iothreaddef = virDomainIOThreadIDFind(persistentDef, iothread.iothread_id);
+
+            if (!iothreaddef) {
+                virReportError(VIR_ERR_INVALID_ARG,
+                               _("cannot find IOThread '%u' in iothreadids"),
+                               iothread.iothread_id);
+                goto endjob;
+            }
+
+            if (qemuDomainIOThreadValidate(iothreaddef, iothread, false) < 0)
+                goto endjob;
+
+            if (qemuDomainHotplugModIOThreadIDDef(iothreaddef, iothread) < 0) {
+                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                               _("configuring persistent polling values is not supported"));
+                goto endjob;
+            }
 
             break;
         }
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 91f2d0941c..06822d6642 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1314,9 +1314,13 @@ struct _qemuMonitorIOThreadInfo {
     unsigned long long poll_max_ns;
     unsigned int poll_grow;
     unsigned int poll_shrink;
+    int thread_pool_min;
+    int thread_pool_max;
     bool set_poll_max_ns;
     bool set_poll_grow;
     bool set_poll_shrink;
+    bool set_thread_pool_min;
+    bool set_thread_pool_max;
 };
 int qemuMonitorGetIOThreads(qemuMonitor *mon,
                             qemuMonitorIOThreadInfo ***iothreads,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 8b81a07429..6b3acab0d2 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -7447,6 +7447,8 @@ qemuMonitorJSONSetIOThread(qemuMonitor *mon,
     VIR_IOTHREAD_SET_PROP("poll-max-ns", poll_max_ns);
     VIR_IOTHREAD_SET_PROP("poll-grow", poll_grow);
    VIR_IOTHREAD_SET_PROP("poll-shrink", poll_shrink);
+    VIR_IOTHREAD_SET_PROP("thread-pool-min", thread_pool_min);
+    VIR_IOTHREAD_SET_PROP("thread-pool-max", thread_pool_max);
 
 #undef VIR_IOTHREAD_SET_PROP
 
-- 
2.35.1
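For reviewers, the rules enforced by the new qemuDomainIOThreadValidate helper can be condensed into a standalone sketch: -1 means "use the default", which is rejected for live changes (QEMU offers no way to return to its defaults at runtime), and the resulting min must never exceed the resulting max. The function name and plain-int interface here are illustrative, not the driver's actual API.

```c
#include <stdbool.h>

/* Condensed restatement of the checks in qemuDomainIOThreadValidate.
 * cur_min/cur_max are the IOThread's current XML values; set_min/set_max
 * say whether a new value was supplied. Returns 0 on success, -1 on a
 * constraint violation. */
static int
example_validate(int cur_min, int cur_max,
                 bool set_min, int new_min,
                 bool set_max, int new_max,
                 bool live)
{
    if (set_min) {
        if (live && new_min < 0)
            return -1;      /* no way to restore QEMU's default live */
        cur_min = new_min;
    }

    if (set_max) {
        if (live && new_max < 0)
            return -1;      /* same restriction for the upper bound */
        cur_max = new_max;
    }

    if (cur_min > cur_max)
        return -1;          /* min may not end up above max */

    return 0;
}
```

Note that the cross-check runs on the merged state (current values overlaid with the requested ones), so setting only thread_pool_min can still fail if it would climb past the IOThread's existing thread_pool_max.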