From: Peter Krempa
To: libvir-list@redhat.com
Date: Thu, 19 Sep 2019 19:13:20 +0200
Subject: [libvirt] [PATCH 17/22] qemu: Use virTypedParamList in the bulk stats gathering functions

The bulk stats functions are special in that they pass the parameter list
around many sub-functions, and a substantial number of the entries use
formatted names for indexing purposes. This makes them ideal candidates
for conversion to the new virTypedParamList helpers. Unfortunately, given
how the functions are used, this requires a big-bang rewrite of all of
the calls that add entries to the parameter list.

Since a substantial simplification is achieved, and a pretty significant
change to the original code is required anyway, macros that were used
only sporadically were replaced by inline calls directly, rather than
being tweaked first and deleted later.

Signed-off-by: Peter Krempa
Reviewed-by: Ján Tomko
---
 src/qemu/qemu_driver.c | 472 ++++++++++++-----------------------------
 1 file changed, 139 insertions(+), 333 deletions(-)
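A note for reviewers who have not seen the new helpers yet: the conversion
replaces the pattern of formatting the parameter name into a stack buffer
with snprintf() and then threading the record/nparams/maxparams bookkeeping
through virTypedParamsAdd*, with printf-style helpers that format the name
internally. A minimal before/after sketch follows; the two wrapper functions
are illustrative only and not part of this patch, but the API calls are the
same ones visible in the diff below.

/* Before: every call site formats the name itself and passes three
 * pieces of bookkeeping state explicitly. */
static int
addVcpuTimeOld(virDomainStatsRecordPtr record, int *maxparams,
               unsigned int number, unsigned long long cpuTime)
{
    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];

    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
             "vcpu.%u.time", number);
    return virTypedParamsAddULLong(&record->params, &record->nparams,
                                   maxparams, param_name, cpuTime);
}

/* After: the list owns its bookkeeping and the helper formats the name
 * from the trailing printf-style arguments. */
static int
addVcpuTimeNew(virTypedParamListPtr params,
               unsigned int number, unsigned long long cpuTime)
{
    return virTypedParamListAddULL(params, cpuTime, "vcpu.%u.time", number);
}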
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9c24e435e9..c33fd6824c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -20907,22 +20907,13 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 static int
 qemuDomainGetStatsState(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
                         virDomainObjPtr dom,
-                        virDomainStatsRecordPtr record,
-                        int *maxparams,
+                        virTypedParamListPtr params,
                         unsigned int privflags ATTRIBUTE_UNUSED)
 {
-    if (virTypedParamsAddInt(&record->params,
-                             &record->nparams,
-                             maxparams,
-                             "state.state",
-                             dom->state.state) < 0)
+    if (virTypedParamListAddI(params, dom->state.state, "state.state") < 0)
         return -1;

-    if (virTypedParamsAddInt(&record->params,
-                             &record->nparams,
-                             maxparams,
-                             "state.reason",
-                             dom->state.reason) < 0)
+    if (virTypedParamListAddI(params, dom->state.reason, "state.reason") < 0)
         return -1;

     return 0;
@@ -21063,10 +21054,8 @@ qemuDomainGetResctrlMonData(virQEMUDriverPtr driver,
 static int
 qemuDomainGetStatsCpuCache(virQEMUDriverPtr driver,
                            virDomainObjPtr dom,
-                           virDomainStatsRecordPtr record,
-                           int *maxparams)
+                           virTypedParamListPtr params)
 {
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];
     virQEMUResctrlMonDataPtr *resdata = NULL;
     size_t nresdata = 0;
     size_t i = 0;
@@ -21080,49 +21069,29 @@ qemuDomainGetStatsCpuCache(virQEMUDriverPtr driver,
                                     VIR_RESCTRL_MONITOR_TYPE_CACHE) < 0)
         goto cleanup;

-    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-             "cpu.cache.monitor.count");
-    if (virTypedParamsAddUInt(&record->params, &record->nparams,
-                              maxparams, param_name, nresdata) < 0)
+    if (virTypedParamListAddUI(params, nresdata, "cpu.cache.monitor.count") < 0)
         goto cleanup;

     for (i = 0; i < nresdata; i++) {
-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "cpu.cache.monitor.%zu.name", i);
-        if (virTypedParamsAddString(&record->params,
-                                    &record->nparams,
-                                    maxparams,
-                                    param_name,
-                                    resdata[i]->name) < 0)
+        if (virTypedParamListAddS(params, resdata[i]->name,
+                                  "cpu.cache.monitor.%zu.name", i) < 0)
             goto cleanup;

-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "cpu.cache.monitor.%zu.vcpus", i);
-        if (virTypedParamsAddString(&record->params, &record->nparams,
-                                    maxparams, param_name,
-                                    resdata[i]->vcpus) < 0)
+        if (virTypedParamListAddS(params, resdata[i]->vcpus,
+                                  "cpu.cache.monitor.%zu.vcpus", i) < 0)
             goto cleanup;

-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "cpu.cache.monitor.%zu.bank.count", i);
-        if (virTypedParamsAddUInt(&record->params, &record->nparams,
-                                  maxparams, param_name,
-                                  resdata[i]->nstats) < 0)
+        if (virTypedParamListAddUI(params, resdata[i]->nstats,
+                                   "cpu.cache.monitor.%zu.bank.count", i) < 0)
             goto cleanup;

         for (j = 0; j < resdata[i]->nstats; j++) {
-            snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                     "cpu.cache.monitor.%zu.bank.%zu.id", i, j);
-            if (virTypedParamsAddUInt(&record->params, &record->nparams,
-                                      maxparams, param_name,
-                                      resdata[i]->stats[j]->id) < 0)
+            if (virTypedParamListAddUI(params, resdata[i]->stats[j]->id,
+                                       "cpu.cache.monitor.%zu.bank.%zu.id", i, j) < 0)
                 goto cleanup;

-            snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                     "cpu.cache.monitor.%zu.bank.%zu.bytes", i, j);
-            if (virTypedParamsAddUInt(&record->params, &record->nparams,
-                                      maxparams, param_name,
-                                      resdata[i]->stats[j]->vals[0]) < 0)
+            if (virTypedParamListAddUI(params, resdata[i]->stats[j]->vals[0],
+                                       "cpu.cache.monitor.%zu.bank.%zu.bytes", i, j) < 0)
                 goto cleanup;
         }
     }
@@ -21138,8 +21107,7 @@ qemuDomainGetStatsCpuCache(virQEMUDriverPtr driver,

 static int
 qemuDomainGetStatsCpuCgroup(virDomainObjPtr dom,
-                            virDomainStatsRecordPtr record,
-                            int *maxparams)
+                            virTypedParamListPtr params)
 {
     qemuDomainObjPrivatePtr priv = dom->privateData;
     unsigned long long cpu_time = 0;
@@ -21151,25 +21119,13 @@ qemuDomainGetStatsCpuCgroup(virDomainObjPtr dom,
         return 0;

     err = virCgroupGetCpuacctUsage(priv->cgroup, &cpu_time);
-    if (!err && virTypedParamsAddULLong(&record->params,
-                                        &record->nparams,
-                                        maxparams,
-                                        "cpu.time",
-                                        cpu_time) < 0)
+    if (!err && virTypedParamListAddULL(params, cpu_time, "cpu.time") < 0)
         return -1;

     err = virCgroupGetCpuacctStat(priv->cgroup, &user_time, &sys_time);
-    if (!err && virTypedParamsAddULLong(&record->params,
-                                        &record->nparams,
-                                        maxparams,
-                                        "cpu.user",
-                                        user_time) < 0)
+    if (!err && virTypedParamListAddULL(params, user_time, "cpu.user") < 0)
         return -1;
-    if (!err && virTypedParamsAddULLong(&record->params,
-                                        &record->nparams,
-                                        maxparams,
-                                        "cpu.system",
-                                        sys_time) < 0)
+    if (!err && virTypedParamListAddULL(params, sys_time, "cpu.system") < 0)
         return -1;

     return 0;
@@ -21179,14 +21135,13 @@ qemuDomainGetStatsCpuCgroup(virDomainObjPtr dom,
 static int
 qemuDomainGetStatsCpu(virQEMUDriverPtr driver,
                       virDomainObjPtr dom,
-                      virDomainStatsRecordPtr record,
-                      int *maxparams,
+                      virTypedParamListPtr params,
                       unsigned int privflags ATTRIBUTE_UNUSED)
 {
-    if (qemuDomainGetStatsCpuCgroup(dom, record, maxparams) < 0)
+    if (qemuDomainGetStatsCpuCgroup(dom, params) < 0)
         return -1;

-    if (qemuDomainGetStatsCpuCache(driver, dom, record, maxparams) < 0)
+    if (qemuDomainGetStatsCpuCache(driver, dom, params) < 0)
         return -1;

     return 0;
@@ -21196,8 +21151,7 @@ qemuDomainGetStatsCpu(virQEMUDriverPtr driver,
 static int
 qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
                           virDomainObjPtr dom,
-                          virDomainStatsRecordPtr record,
-                          int *maxparams,
+                          virTypedParamListPtr params,
                           unsigned int privflags)
 {
     virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
@@ -21211,18 +21165,11 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
         cur_balloon = dom->def->mem.cur_balloon;
     }

-    if (virTypedParamsAddULLong(&record->params,
-                                &record->nparams,
-                                maxparams,
-                                "balloon.current",
-                                cur_balloon) < 0)
+    if (virTypedParamListAddULL(params, cur_balloon, "balloon.current") < 0)
         return -1;

-    if (virTypedParamsAddULLong(&record->params,
-                                &record->nparams,
-                                maxparams,
-                                "balloon.maximum",
-                                virDomainDefGetMemoryTotal(dom->def)) < 0)
+    if (virTypedParamListAddULL(params, virDomainDefGetMemoryTotal(dom->def),
+                                "balloon.maximum") < 0)
         return -1;

     if (!HAVE_JOB(privflags) || !virDomainObjIsActive(dom))
@@ -21235,11 +21182,7 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,

 #define STORE_MEM_RECORD(TAG, NAME) \
     if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_ ##TAG) \
-        if (virTypedParamsAddULLong(&record->params, \
-                                    &record->nparams, \
-                                    maxparams, \
-                                    "balloon." NAME, \
-                                    stats[i].val) < 0) \
+        if (virTypedParamListAddULL(params, stats[i].val, "balloon." NAME) < 0) \
             return -1;

     for (i = 0; i < nr_stats; i++) {
@@ -21266,30 +21209,22 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
 static int
 qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
                        virDomainObjPtr dom,
-                       virDomainStatsRecordPtr record,
-                       int *maxparams,
+                       virTypedParamListPtr params,
                        unsigned int privflags)
 {
     virDomainVcpuDefPtr vcpu;
     qemuDomainVcpuPrivatePtr vcpupriv;
     size_t i;
     int ret = -1;
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];
     virVcpuInfoPtr cpuinfo = NULL;
     unsigned long long *cpuwait = NULL;

-    if (virTypedParamsAddUInt(&record->params,
-                              &record->nparams,
-                              maxparams,
-                              "vcpu.current",
-                              virDomainDefGetVcpus(dom->def)) < 0)
+    if (virTypedParamListAddUI(params, virDomainDefGetVcpus(dom->def),
+                               "vcpu.current") < 0)
         return -1;

-    if (virTypedParamsAddUInt(&record->params,
-                              &record->nparams,
-                              maxparams,
-                              "vcpu.maximum",
-                              virDomainDefGetVcpusMax(dom->def)) < 0)
+    if (virTypedParamListAddUI(params, virDomainDefGetVcpusMax(dom->def),
+                               "vcpu.maximum") < 0)
         return -1;

     if (VIR_ALLOC_N(cpuinfo, virDomainDefGetVcpus(dom->def)) < 0 ||
@@ -21312,34 +21247,20 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
     }

     for (i = 0; i < virDomainDefGetVcpus(dom->def); i++) {
-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "vcpu.%u.state", cpuinfo[i].number);
-        if (virTypedParamsAddInt(&record->params,
-                                 &record->nparams,
-                                 maxparams,
-                                 param_name,
-                                 cpuinfo[i].state) < 0)
+        if (virTypedParamListAddI(params, cpuinfo[i].state,
+                                  "vcpu.%u.state", cpuinfo[i].number) < 0)
             goto cleanup;

         /* stats below are available only if the VM is alive */
         if (!virDomainObjIsActive(dom))
             continue;

-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "vcpu.%u.time", cpuinfo[i].number);
-        if (virTypedParamsAddULLong(&record->params,
-                                    &record->nparams,
-                                    maxparams,
-                                    param_name,
-                                    cpuinfo[i].cpuTime) < 0)
+        if (virTypedParamListAddULL(params, cpuinfo[i].cpuTime,
+                                    "vcpu.%u.time", cpuinfo[i].number) < 0)
             goto cleanup;
-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                 "vcpu.%u.wait", cpuinfo[i].number);
-        if (virTypedParamsAddULLong(&record->params,
-                                    &record->nparams,
-                                    maxparams,
-                                    param_name,
-                                    cpuwait[i]) < 0)
+
+        if (virTypedParamListAddULL(params, cpuwait[i],
+                                    "vcpu.%u.wait", cpuinfo[i].number) < 0)
             goto cleanup;

         /* state below is extracted from the individual vcpu structs */
@@ -21349,13 +21270,10 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
         vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu);

         if (vcpupriv->halted != VIR_TRISTATE_BOOL_ABSENT) {
-            snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
-                     "vcpu.%u.halted", cpuinfo[i].number);
-            if (virTypedParamsAddBoolean(&record->params,
-                                         &record->nparams,
-                                         maxparams,
-                                         param_name,
-                                         vcpupriv->halted == VIR_TRISTATE_BOOL_YES) < 0)
+            if (virTypedParamListAddB(params,
+                                      vcpupriv->halted == VIR_TRISTATE_BOOL_YES,
+                                      "vcpu.%u.halted",
+                                      cpuinfo[i].number) < 0)
                 goto cleanup;
         }
     }
@@ -21368,49 +21286,15 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
     return ret;
 }

-#define QEMU_ADD_COUNT_PARAM(record, maxparams, type, count) \
-do { \
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
VIR_TYPED_PARAM_FIELD_LENGTH, "%s.count", type); \ - if (virTypedParamsAddUInt(&(record)->params, \ - &(record)->nparams, \ - maxparams, \ - param_name, \ - count) < 0) \ - goto cleanup; \ -} while (0) - -#define QEMU_ADD_NAME_PARAM(record, maxparams, type, subtype, num, name) \ -do { \ - char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \ - snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \ - "%s.%zu.%s", type, num, subtype); \ - if (virTypedParamsAddString(&(record)->params, \ - &(record)->nparams, \ - maxparams, \ - param_name, \ - name) < 0) \ - goto cleanup; \ -} while (0) - -#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \ -do { \ - char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \ - snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \ - "net.%zu.%s", num, name); \ - if (value >=3D 0 && virTypedParamsAddULLong(&(record)->params, \ - &(record)->nparams, \ - maxparams, \ - param_name, \ - value) < 0) \ - return -1; \ -} while (0) +#define QEMU_ADD_NET_PARAM(params, num, name, value) \ + if (value >=3D 0 && \ + virTypedParamListAddULL((params), (value), "net.%zu.%s", (num), (n= ame)) < 0) \ + return -1; static int qemuDomainGetStatsInterface(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainObjPtr dom, - virDomainStatsRecordPtr record, - int *maxparams, + virTypedParamListPtr params, unsigned int privflags ATTRIBUTE_UNUSED) { size_t i; @@ -21420,7 +21304,8 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driver= ATTRIBUTE_UNUSED, if (!virDomainObjIsActive(dom)) return 0; - QEMU_ADD_COUNT_PARAM(record, maxparams, "net", dom->def->nnets); + if (virTypedParamListAddUI(params, dom->def->nnets, "net.count") < 0) + goto cleanup; /* Check the path is one of the domain's network interfaces. */ for (i =3D 0; i < dom->def->nnets; i++) { @@ -21434,8 +21319,8 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driver= ATTRIBUTE_UNUSED, actualType =3D virDomainNetGetActualType(net); - QEMU_ADD_NAME_PARAM(record, maxparams, - "net", "name", i, net->ifname); + if (virTypedParamListAddS(params, net->ifname, "net.%zu.name", i) = < 0) + goto cleanup; if (actualType =3D=3D VIR_DOMAIN_NET_TYPE_VHOSTUSER) { if (virNetDevOpenvswitchInterfaceStats(net->ifname, &tmp) < 0)= { @@ -21450,21 +21335,21 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driv= er ATTRIBUTE_UNUSED, } } - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "rx.bytes", tmp.rx_bytes); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "rx.pkts", tmp.rx_packets); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "rx.errs", tmp.rx_errs); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "rx.drop", tmp.rx_drop); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "tx.bytes", tmp.tx_bytes); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "tx.pkts", tmp.tx_packets); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "tx.errs", tmp.tx_errs); - QEMU_ADD_NET_PARAM(record, maxparams, i, + QEMU_ADD_NET_PARAM(params, i, "tx.drop", tmp.tx_drop); } @@ -21475,39 +21360,16 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driv= er ATTRIBUTE_UNUSED, #undef QEMU_ADD_NET_PARAM -#define QEMU_ADD_BLOCK_PARAM_UI(record, maxparams, num, name, value) \ - do { \ - char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \ - snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \ - "block.%zu.%s", num, name); \ - if (virTypedParamsAddUInt(&(record)->params, \ - &(record)->nparams, \ - maxparams, \ - 
-                                  param_name, \
-                                  value) < 0) \
-            goto cleanup; \
-    } while (0)
-
-#define QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, num, name, value) \
-do { \
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
-    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
-             "block.%zu.%s", num, name); \
-    if (virTypedParamsAddULLong(&(record)->params, \
-                                &(record)->nparams, \
-                                maxparams, \
-                                param_name, \
-                                value) < 0) \
-        goto cleanup; \
-} while (0)
+#define QEMU_ADD_BLOCK_PARAM_ULL(params, num, name, value) \
+    if (virTypedParamListAddULL((params), (value), "block.%zu.%s", (num), (name)) < 0) \
+        goto cleanup

 /* refresh information by opening images on the disk */
 static int
 qemuDomainGetStatsOneBlockFallback(virQEMUDriverPtr driver,
                                    virQEMUDriverConfigPtr cfg,
                                    virDomainObjPtr dom,
-                                   virDomainStatsRecordPtr record,
-                                   int *maxparams,
+                                   virTypedParamListPtr params,
                                    virStorageSourcePtr src,
                                    size_t block_idx)
 {
@@ -21522,13 +21384,13 @@ qemuDomainGetStatsOneBlockFallback(virQEMUDriverPtr driver,
     }

     if (src->allocation)
-        QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+        QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                  "allocation", src->allocation);
     if (src->capacity)
-        QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+        QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                  "capacity", src->capacity);
     if (src->physical)
-        QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+        QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                  "physical", src->physical);
     ret = 0;
 cleanup:
@@ -21576,8 +21438,7 @@ static int
 qemuDomainGetStatsOneBlock(virQEMUDriverPtr driver,
                            virQEMUDriverConfigPtr cfg,
                            virDomainObjPtr dom,
-                           virDomainStatsRecordPtr record,
-                           int *maxparams,
+                           virTypedParamListPtr params,
                            const char *entryname,
                            virStorageSourcePtr src,
                            size_t block_idx,
@@ -21589,8 +21450,8 @@ qemuDomainGetStatsOneBlock(virQEMUDriverPtr driver,

     /* the VM is offline so we have to go and load the stast from the disk by
      * ourselves */
     if (!virDomainObjIsActive(dom)) {
-        ret = qemuDomainGetStatsOneBlockFallback(driver, cfg, dom, record,
-                                                 maxparams, src, block_idx);
+        ret = qemuDomainGetStatsOneBlockFallback(driver, cfg, dom, params,
+                                                 src, block_idx);
         goto cleanup;
     }
@@ -21602,18 +21463,18 @@ qemuDomainGetStatsOneBlock(virQEMUDriverPtr driver,
         goto cleanup;
     }

-    QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+    QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                              "allocation", entry->wr_highest_offset);

     if (entry->capacity)
-        QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+        QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                  "capacity", entry->capacity);
     if (entry->physical) {
-        QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+        QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                  "physical", entry->physical);
     } else {
         if (qemuDomainStorageUpdatePhysical(driver, cfg, dom, src) == 0)
-            QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, block_idx,
+            QEMU_ADD_BLOCK_PARAM_ULL(params, block_idx,
                                      "physical", src->physical);
     }

@@ -21627,8 +21488,7 @@ static int
 qemuDomainGetStatsBlockExportBackendStorage(const char *entryname,
                                             virHashTablePtr stats,
                                             size_t recordnr,
-                                            virDomainStatsRecordPtr records,
-                                            int *nrecords)
+                                            virTypedParamListPtr params)
 {
     qemuBlockStats *entry;
     int ret = -1;
@@ -21639,7 +21499,7 @@ qemuDomainGetStatsBlockExportBackendStorage(const char *entryname,
     }

     if (entry->write_threshold)
-        QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "threshold",
+        QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "threshold",
                                  entry->write_threshold);

     ret = 0;
@@ -21652,8 +21512,7 @@ static int
 qemuDomainGetStatsBlockExportFrontend(const char *frontendname,
                                       virHashTablePtr stats,
                                       size_t recordnr,
-                                      virDomainStatsRecordPtr records,
-                                      int *nrecords)
+                                      virTypedParamListPtr params)
 {
     qemuBlockStats *entry;
     int ret = -1;
@@ -21666,14 +21525,14 @@ qemuDomainGetStatsBlockExportFrontend(const char *frontendname,
         goto cleanup;
     }

-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "rd.reqs", entry->rd_req);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "rd.bytes", entry->rd_bytes);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "rd.times", entry->rd_total_times);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "wr.reqs", entry->wr_req);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "wr.bytes", entry->wr_bytes);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "wr.times", entry->wr_total_times);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "fl.reqs", entry->flush_req);
-    QEMU_ADD_BLOCK_PARAM_ULL(records, nrecords, recordnr, "fl.times", entry->flush_total_times);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "rd.reqs", entry->rd_req);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "rd.bytes", entry->rd_bytes);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "rd.times", entry->rd_total_times);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "wr.reqs", entry->wr_req);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "wr.bytes", entry->wr_bytes);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "wr.times", entry->wr_total_times);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "fl.reqs", entry->flush_req);
+    QEMU_ADD_BLOCK_PARAM_ULL(params, recordnr, "fl.times", entry->flush_total_times);

     ret = 0;
 cleanup:
@@ -21685,18 +21544,20 @@ static int
 qemuDomainGetStatsBlockExportHeader(virDomainDiskDefPtr disk,
                                     virStorageSourcePtr src,
                                     size_t recordnr,
-                                    virDomainStatsRecordPtr records,
-                                    int *nrecords)
+                                    virTypedParamListPtr params)
 {
     int ret = -1;

-    QEMU_ADD_NAME_PARAM(records, nrecords, "block", "name", recordnr, disk->dst);
+    if (virTypedParamListAddS(params, disk->dst, "block.%zu.name", recordnr) < 0)
+        goto cleanup;
+
+    if (virStorageSourceIsLocalStorage(src) && src->path &&
+        virTypedParamListAddS(params, src->path, "block.%zu.path", recordnr) < 0)
+        goto cleanup;

-    if (virStorageSourceIsLocalStorage(src) && src->path)
-        QEMU_ADD_NAME_PARAM(records, nrecords, "block", "path", recordnr, src->path);
-    if (src->id)
-        QEMU_ADD_BLOCK_PARAM_UI(records, nrecords, recordnr, "backingIndex",
-                                src->id);
+    if (src->id &&
+        virTypedParamListAddUI(params, src->id, "block.%zu.backingIndex", recordnr) < 0)
+        goto cleanup;

     ret = 0;
 cleanup:
@@ -21708,8 +21569,7 @@ static int
 qemuDomainGetStatsBlockExportDisk(virDomainDiskDefPtr disk,
                                   virHashTablePtr stats,
                                   virHashTablePtr nodestats,
-                                  virDomainStatsRecordPtr records,
-                                  int *nrecords,
+                                  virTypedParamListPtr params,
                                   size_t *recordnr,
                                   bool visitBacking,
                                   virQEMUDriverPtr driver,
@@ -21736,7 +21596,7 @@ qemuDomainGetStatsBlockExportDisk(virDomainDiskDefPtr disk,
                  "skip getting stats", disk->dst);

         return qemuDomainGetStatsBlockExportHeader(disk, disk->src, *recordnr,
-                                                   records, nrecords);
+                                                   params);
     }

     for (n = disk->src; virStorageSourceIsBacking(n); n = n->backingStore) {
@@ -21757,25 +21617,24 @@ qemuDomainGetStatsBlockExportDisk(virDomainDiskDefPtr disk,
             backendstoragealias = alias;
         }

-        if (qemuDomainGetStatsBlockExportHeader(disk, n, *recordnr,
-                                                records, nrecords) < 0)
+        if (qemuDomainGetStatsBlockExportHeader(disk, n, *recordnr, params) < 0)
             goto cleanup;

         /* The following stats make sense only for the frontend device */
         if (n == disk->src) {
             if (qemuDomainGetStatsBlockExportFrontend(frontendalias, stats, *recordnr,
-                                                      records, nrecords) < 0)
+                                                      params) < 0)
                 goto cleanup;
         }

-        if (qemuDomainGetStatsOneBlock(driver, cfg, dom, records, nrecords,
+        if (qemuDomainGetStatsOneBlock(driver, cfg, dom, params,
                                        backendalias, n, *recordnr, stats) < 0)
             goto cleanup;

         if (qemuDomainGetStatsBlockExportBackendStorage(backendstoragealias,
                                                         stats, *recordnr,
-                                                        records, nrecords) < 0)
+                                                        params) < 0)
             goto cleanup;

         VIR_FREE(alias);
@@ -21796,8 +21655,7 @@ qemuDomainGetStatsBlockExportDisk(virDomainDiskDefPtr disk,
 static int
 qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
                         virDomainObjPtr dom,
-                        virDomainStatsRecordPtr record,
-                        int *maxparams,
+                        virTypedParamListPtr params,
                         unsigned int privflags)
 {
     size_t i;
@@ -21846,18 +21704,19 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
     /* When listing backing chains, it's easier to fix up the count
      * after the iteration than it is to iterate twice; but we still
      * want count listed first. */
-    count_index = record->nparams;
-    QEMU_ADD_COUNT_PARAM(record, maxparams, "block", 0);
+    count_index = params->npar;
+    if (virTypedParamListAddUI(params, 0, "block.count") < 0)
+        goto cleanup;

     for (i = 0; i < dom->def->ndisks; i++) {
         if (qemuDomainGetStatsBlockExportDisk(dom->def->disks[i], stats, nodestats,
-                                              record, maxparams, &visited,
+                                              params, &visited,
                                               visitBacking, driver, cfg, dom,
                                               blockdev) < 0)
             goto cleanup;
     }

-    record->params[count_index].value.ui = visited;
+    params->par[count_index].value.ui = visited;

     ret = 0;
 cleanup:
@@ -21870,39 +21729,10 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,

 #undef QEMU_ADD_BLOCK_PARAM_ULL

-#undef QEMU_ADD_NAME_PARAM
-
-#define QEMU_ADD_IOTHREAD_PARAM_UI(record, maxparams, id, name, value) \
-    do { \
-        char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
-        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
-                 "iothread.%u.%s", id, name); \
-        if (virTypedParamsAddUInt(&(record)->params, \
-                                  &(record)->nparams, \
-                                  maxparams, \
-                                  param_name, \
-                                  value) < 0) \
-            goto cleanup; \
-    } while (0)
-
-#define QEMU_ADD_IOTHREAD_PARAM_ULL(record, maxparams, id, name, value) \
-do { \
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
-    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
-             "iothread.%u.%s", id, name); \
-    if (virTypedParamsAddULLong(&(record)->params, \
-                                &(record)->nparams, \
-                                maxparams, \
-                                param_name, \
-                                value) < 0) \
-        goto cleanup; \
-} while (0)
-
 static int
 qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
                            virDomainObjPtr dom,
-                           virDomainStatsRecordPtr record,
-                           int *maxparams,
+                           virTypedParamListPtr params,
                            unsigned int privflags ATTRIBUTE_UNUSED)
 {
     qemuDomainObjPrivatePtr priv = dom->privateData;
@@ -21923,22 +21753,20 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
     if (niothreads == 0)
         return 0;

-    QEMU_ADD_COUNT_PARAM(record, maxparams, "iothread", niothreads);
+    if (virTypedParamListAddUI(params, niothreads, "iothread.count") < 0)
+        goto cleanup;

     for (i = 0; i < niothreads; i++) {
         if (iothreads[i]->poll_valid) {
-            QEMU_ADD_IOTHREAD_PARAM_ULL(record, maxparams,
-                                        iothreads[i]->iothread_id,
-                                        "poll-max-ns",
-                                        iothreads[i]->poll_max_ns);
-            QEMU_ADD_IOTHREAD_PARAM_UI(record, maxparams,
-                                       iothreads[i]->iothread_id,
-                                       "poll-grow",
-                                       iothreads[i]->poll_grow);
-            QEMU_ADD_IOTHREAD_PARAM_UI(record, maxparams,
-                                       iothreads[i]->iothread_id,
-                                       "poll-shrink",
-                                       iothreads[i]->poll_shrink);
+            if (virTypedParamListAddULL(params, iothreads[i]->poll_max_ns,
+                                        "iothread.%zu.poll-max-ns", i) < 0)
+                goto cleanup;
+            if (virTypedParamListAddUI(params, iothreads[i]->poll_grow,
+                                       "iothread.%zu.poll-grow", i) < 0)
+                goto cleanup;
+            if (virTypedParamListAddUI(params, iothreads[i]->poll_shrink,
+                                       "iothread.%zu.poll-shrink", i) < 0)
+                goto cleanup;
         }
     }

@@ -21952,32 +21780,19 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
     return ret;
 }

-#undef QEMU_ADD_IOTHREAD_PARAM_UI
-
-#undef QEMU_ADD_IOTHREAD_PARAM_ULL
-
-#undef QEMU_ADD_COUNT_PARAM

 static int
 qemuDomainGetStatsPerfOneEvent(virPerfPtr perf,
                                virPerfEventType type,
-                               virDomainStatsRecordPtr record,
-                               int *maxparams)
+                               virTypedParamListPtr params)
 {
-    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];
     uint64_t value = 0;

     if (virPerfReadEvent(perf, type, &value) < 0)
         return -1;

-    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, "perf.%s",
-             virPerfEventTypeToString(type));
-
-    if (virTypedParamsAddULLong(&record->params,
-                                &record->nparams,
-                                maxparams,
-                                param_name,
-                                value) < 0)
+    if (virTypedParamListAddULL(params, value, "perf.%s",
+                                virPerfEventTypeToString(type)) < 0)
         return -1;

     return 0;
@@ -21986,8 +21801,7 @@ qemuDomainGetStatsPerfOneEvent(virPerfPtr perf,
 static int
 qemuDomainGetStatsPerf(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
                        virDomainObjPtr dom,
-                       virDomainStatsRecordPtr record,
-                       int *maxparams,
+                       virTypedParamListPtr params,
                        unsigned int privflags ATTRIBUTE_UNUSED)
 {
     size_t i;
@@ -21998,8 +21812,7 @@ qemuDomainGetStatsPerf(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
         if (!virPerfEventIsEnabled(priv->perf, i))
             continue;

-        if (qemuDomainGetStatsPerfOneEvent(priv->perf, i,
-                                           record, maxparams) < 0)
+        if (qemuDomainGetStatsPerfOneEvent(priv->perf, i, params) < 0)
             goto cleanup;
     }

@@ -22012,8 +21825,7 @@ qemuDomainGetStatsPerf(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
                           virDomainObjPtr dom,
-                          virDomainStatsRecordPtr record,
-                          int *maxparams,
+                          virTypedParamListPtr list,
                           unsigned int flags);

 struct qemuDomainGetStatsWorker {
@@ -22084,37 +21896,31 @@ qemuDomainGetStats(virConnectPtr conn,
                    virDomainStatsRecordPtr *record,
                    unsigned int flags)
 {
-    int maxparams = 0;
-    virDomainStatsRecordPtr tmp;
+    VIR_AUTOFREE(virDomainStatsRecordPtr) tmp = NULL;
+    VIR_AUTOPTR(virTypedParamList) params = NULL;
     size_t i;
-    int ret = -1;

-    if (VIR_ALLOC(tmp) < 0)
-        goto cleanup;
+    if (VIR_ALLOC(params) < 0)
+        return -1;

     for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
         if (stats & qemuDomainGetStatsWorkers[i].stats) {
-            if (qemuDomainGetStatsWorkers[i].func(conn->privateData, dom, tmp,
-                                                  &maxparams, flags) < 0)
-                goto cleanup;
+            if (qemuDomainGetStatsWorkers[i].func(conn->privateData, dom, params,
+                                                  flags) < 0)
+                return -1;
         }
     }

+    if (VIR_ALLOC(tmp) < 0)
+        return -1;
+
     if (!(tmp->dom = virGetDomain(conn, dom->def->name,
                                   dom->def->uuid, dom->def->id)))
-        goto cleanup;
-
-    *record = tmp;
-    tmp = NULL;
-    ret = 0;
-
- cleanup:
-    if (tmp) {
-        virTypedParamsFree(tmp->params, tmp->nparams);
-        VIR_FREE(tmp);
-    }
+        return -1;

-    return ret;
+    tmp->nparams = virTypedParamListStealParams(params, &tmp->params);
+    VIR_STEAL_PTR(*record, tmp);
+    return 0;
 }
-- 
2.21.0
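One more note on the final hunk: it shows the lifecycle the helpers assume.
The list is allocated up front, every worker appends to it and propagates
failure as -1, and virTypedParamListStealParams() finally moves the
accumulated array into the virDomainStatsRecord. The helper implementation
itself is introduced earlier in this series; purely to illustrate the
mechanics, a printf-style append helper could be shaped roughly like the
sketch below. This is a hypothetical simplification, not the code added by
the series: the 'par' and 'npar' field names match the accesses in
qemuDomainGetStatsBlock() above, but the 'par_alloc' field and the 'sketch'
identifiers are invented for the example.

#include <stdarg.h>
#include <stdio.h>

typedef struct {
    virTypedParameterPtr par;  /* accumulated parameter array */
    size_t npar;               /* number of entries in use */
    size_t par_alloc;          /* allocated length of 'par' (invented name) */
} sketchTypedParamList;

static int
sketchTypedParamListAddULL(sketchTypedParamList *list,
                           unsigned long long value,
                           const char *namefmt, ...)
{
    char name[VIR_TYPED_PARAM_FIELD_LENGTH];
    int npar = list->npar;
    int maxpar = list->par_alloc;
    va_list ap;
    int rc;

    /* format the field name here so call sites lose the snprintf boilerplate */
    va_start(ap, namefmt);
    rc = vsnprintf(name, sizeof(name), namefmt, ap);
    va_end(ap);
    if (rc < 0 || rc >= (int) sizeof(name))
        return -1;

    /* delegate the actual append to the existing public API */
    if (virTypedParamsAddULLong(&list->par, &npar, &maxpar, name, value) < 0)
        return -1;

    list->npar = npar;
    list->par_alloc = maxpar;
    return 0;
}

Whether a failure is reported immediately, as sketched here, or cached
inside the list and reported once when the parameters are stolen, is an
implementation choice left to the real helpers.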