From: John Ferlan
To: libvir-list@redhat.com
Subject: [PATCH v2] qemu: Pass / fill niothreads for qemuMonitorGetIOThreads
Date: Wed, 2 Dec 2020 12:34:24 -0500
Message-Id:
 <20201202173424.817246-1-jferlan@redhat.com>
List-Id: Development discussions about the libvirt library & tools
Content-Type: text/plain; charset="utf-8"

Let's pass along / fill @niothreads rather than trying to make dual use
of it as both a return value and a thread count. This resolves a
Coverity issue detected in qemuDomainGetIOThreadsMon where, if
qemuDomainObjExitMonitor failed, -1 was returned and overwrote
@niothreads, causing a memory leak.

Signed-off-by: John Ferlan
Reviewed-by: Michal Privoznik
---
Since v1, updated the logic to pass @niothreads around rather than rely
on the dual meaning. Took the full plunge.
 src/qemu/qemu_driver.c       | 23 +++++++++++------------
 src/qemu/qemu_monitor.c      |  8 +++++---
 src/qemu/qemu_monitor.h      |  3 ++-
 src/qemu/qemu_monitor_json.c |  6 ++++--
 src/qemu/qemu_monitor_json.h |  3 ++-
 src/qemu/qemu_process.c      |  4 ++--
 tests/qemumonitorjsontest.c  |  4 ++--
 7 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index bca1c84630..65725b2ef2 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4972,17 +4972,18 @@ qemuDomainGetMaxVcpus(virDomainPtr dom)
 static int
 qemuDomainGetIOThreadsMon(virQEMUDriverPtr driver,
                           virDomainObjPtr vm,
-                          qemuMonitorIOThreadInfoPtr **iothreads)
+                          qemuMonitorIOThreadInfoPtr **iothreads,
+                          int *niothreads)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    int niothreads = 0;
+    int ret = -1;

     qemuDomainObjEnterMonitor(driver, vm);
-    niothreads = qemuMonitorGetIOThreads(priv->mon, iothreads);
-    if (qemuDomainObjExitMonitor(driver, vm) < 0 || niothreads < 0)
+    ret = qemuMonitorGetIOThreads(priv->mon, iothreads, niothreads);
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
         return -1;

-    return niothreads;
+    return ret;
 }


@@ -5014,7 +5015,7 @@ qemuDomainGetIOThreadsLive(virQEMUDriverPtr driver,
         goto endjob;
     }

-    if ((niothreads = qemuDomainGetIOThreadsMon(driver, vm, &iothreads)) < 0)
+    if ((ret = qemuDomainGetIOThreadsMon(driver, vm, &iothreads, &niothreads)) < 0)
         goto endjob;

     /* Nothing to do */
@@ -5314,8 +5315,7 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver,
      * IOThreads thread_id's, adjust the cgroups, thread affinity,
      * and add the thread_id to the vm->def->iothreadids list.
      */
-    if ((new_niothreads = qemuMonitorGetIOThreads(priv->mon,
-                                                  &new_iothreads)) < 0)
+    if (qemuMonitorGetIOThreads(priv->mon, &new_iothreads, &new_niothreads) < 0)
         goto exit_monitor;

     if (qemuDomainObjExitMonitor(driver, vm) < 0)
@@ -5425,8 +5425,7 @@ qemuDomainHotplugDelIOThread(virQEMUDriverPtr driver,
     if (rc < 0)
         goto exit_monitor;

-    if ((new_niothreads = qemuMonitorGetIOThreads(priv->mon,
-                                                  &new_iothreads)) < 0)
+    if (qemuMonitorGetIOThreads(priv->mon, &new_iothreads, &new_niothreads) < 0)
         goto exit_monitor;

     if (qemuDomainObjExitMonitor(driver, vm) < 0)
@@ -18507,7 +18506,7 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
     qemuDomainObjPrivatePtr priv = dom->privateData;
     size_t i;
     qemuMonitorIOThreadInfoPtr *iothreads = NULL;
-    int niothreads;
+    int niothreads = 0;
     int ret = -1;

     if (!HAVE_JOB(privflags) || !virDomainObjIsActive(dom))
@@ -18516,7 +18515,7 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
     if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD))
         return 0;

-    if ((niothreads = qemuDomainGetIOThreadsMon(driver, dom, &iothreads)) < 0)
+    if (qemuDomainGetIOThreadsMon(driver, dom, &iothreads, &niothreads) < 0)
         return -1;

     /* qemuDomainGetIOThreadsMon returns a NULL-terminated list, so we must free
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index ce1a06c4c8..551b65e778 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4211,22 +4211,24 @@ qemuMonitorRTCResetReinjection(qemuMonitorPtr mon)
  * qemuMonitorGetIOThreads:
  * @mon: Pointer to the monitor
  * @iothreads: Location to return array of IOThreadInfo data
+ * @niothreads: Count of the number of IOThreads in the array
  *
  * Issue query-iothreads command.
  * Retrieve the list of iothreads defined/running for the machine
  *
- * Returns count of IOThreadInfo structures on success
+ * Returns 0 on success
  *        -1 on error.
  */
 int
 qemuMonitorGetIOThreads(qemuMonitorPtr mon,
-                        qemuMonitorIOThreadInfoPtr **iothreads)
+                        qemuMonitorIOThreadInfoPtr **iothreads,
+                        int *niothreads)
 {
     VIR_DEBUG("iothreads=%p", iothreads);

     QEMU_CHECK_MONITOR(mon);

-    return qemuMonitorJSONGetIOThreads(mon, iothreads);
+    return qemuMonitorJSONGetIOThreads(mon, iothreads, niothreads);
 }


diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 8bc092870b..49be2d5412 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1365,7 +1365,8 @@ struct _qemuMonitorIOThreadInfo {
     bool set_poll_shrink;
 };
 int qemuMonitorGetIOThreads(qemuMonitorPtr mon,
-                            qemuMonitorIOThreadInfoPtr **iothreads);
+                            qemuMonitorIOThreadInfoPtr **iothreads,
+                            int *niothreads);
 int qemuMonitorSetIOThread(qemuMonitorPtr mon,
                            qemuMonitorIOThreadInfoPtr iothreadInfo);

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 5acc1a10aa..f70490d9b0 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8072,7 +8072,8 @@ qemuMonitorJSONRTCResetReinjection(qemuMonitorPtr mon)
  */
 int
 qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon,
-                            qemuMonitorIOThreadInfoPtr **iothreads)
+                            qemuMonitorIOThreadInfoPtr **iothreads,
+                            int *niothreads)
 {
     int ret = -1;
     virJSONValuePtr cmd;
@@ -8149,9 +8150,10 @@ qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon,
         info->poll_valid = true;
     }

-    ret = n;
+    *niothreads = n;
     *iothreads = infolist;
     infolist = NULL;
+    ret = 0;

 cleanup:
     if (infolist) {
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index d2928b0ffc..4eb0f667a2 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -550,7 +550,8 @@ int qemuMonitorJSONGetGuestCPU(qemuMonitorPtr mon,
 int qemuMonitorJSONRTCResetReinjection(qemuMonitorPtr mon);

 int qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon,
-                                qemuMonitorIOThreadInfoPtr **iothreads)
+                                qemuMonitorIOThreadInfoPtr **iothreads,
+                                int *niothreads)
     ATTRIBUTE_NONNULL(2);

 int qemuMonitorJSONSetIOThread(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 20e90026e1..01afe66ec9 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2498,10 +2498,10 @@ qemuProcessDetectIOThreadPIDs(virQEMUDriverPtr driver,
     /* Get the list of IOThreads from qemu */
     if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
         goto cleanup;
-    niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads);
+    ret = qemuMonitorGetIOThreads(priv->mon, &iothreads, &niothreads);
     if (qemuDomainObjExitMonitor(driver, vm) < 0)
         goto cleanup;
-    if (niothreads < 0)
+    if (ret < 0)
         goto cleanup;

     if (niothreads != vm->def->niothreadids) {
diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
index 79ef2a545e..d0c37967d5 100644
--- a/tests/qemumonitorjsontest.c
+++ b/tests/qemumonitorjsontest.c
@@ -2377,8 +2377,8 @@ testQemuMonitorJSONGetIOThreads(const void *opaque)
                        "}") < 0)
         goto cleanup;

-    if ((ninfo = qemuMonitorGetIOThreads(qemuMonitorTestGetMonitor(test),
-                                         &info)) < 0)
+    if (qemuMonitorGetIOThreads(qemuMonitorTestGetMonitor(test),
+                                &info, &ninfo) < 0)
         goto cleanup;

     if (ninfo != 2) {
-- 
2.28.0