From: Peter Krempa <pkrempa@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 33/33] qemu: process: Extract code for submitting event handling to separate thread
Date: Wed, 21 Jul 2021 12:42:45 +0200
Message-Id: <578b37592d8f0d5004e92a092a22632f0ccabb5b.1626864132.git.pkrempa@redhat.com>

Submitting an event to the helper thread involves a verbose cleanup
path which was duplicated in all the event handlers. Simplify it by
extracting the code into a helper named 'qemuProcessEventSubmit' and
reusing it where appropriate.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_process.c | 113 +++++++++++++++------------------------
 1 file changed, 43 insertions(+), 70 deletions(-)
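Note for reviewers: the helper relies on a steal-and-clear ownership
convention — the caller unconditionally hands over its pointer, and the
helper guarantees the event is either consumed by the worker pool or
freed, with the caller's variable NULLed either way. Below is a minimal
self-contained sketch of that convention in plain C, not libvirt code;
'struct job', 'job_free', 'pool_send' and 'submit_job' are illustrative
stand-ins for qemuProcessEvent, qemuProcessEventFree,
virThreadPoolSendJob and qemuProcessEventSubmit.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for struct qemuProcessEvent. */
struct job {
    int id;
};

static void
job_free(struct job *job)
{
    free(job);
}

/*
 * Stand-in for virThreadPoolSendJob(): on success the pool (here, a fake
 * inline "worker") takes ownership of the job and frees it after
 * processing; odd ids simulate a submission failure.
 */
static int
pool_send(struct job *job)
{
    if (job->id % 2 != 0)
        return -1;

    printf("worker processes job %d\n", job->id);
    job_free(job);
    return 0;
}

/*
 * The steal-and-clear pattern: this helper always consumes *job. On
 * submission failure it frees the job itself; either way *job is NULLed
 * so the caller can neither double-free nor reuse it.
 */
static void
submit_job(struct job **job)
{
    if (!*job)
        return;

    if (pool_send(*job) < 0) {
        fprintf(stderr, "submitting job %d failed, freeing it\n", (*job)->id);
        job_free(*job);
    }

    *job = NULL;
}

int
main(void)
{
    for (int i = 0; i < 4; i++) {
        struct job *job = calloc(1, sizeof(*job));

        if (!job)
            return 1;

        job->id = i;
        submit_job(&job);
        /* job is NULL here regardless of the outcome. */
    }

    return 0;
}

Because the pointer is cleared unconditionally, call sites need neither
conditional frees nor 'error:'/'goto cleanup' paths, which is what
allows the deletions in the hunks below.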
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1bdfbce697..4c1130945f 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -280,6 +280,33 @@ qemuConnectAgent(virQEMUDriver *driver, virDomainObj *vm)
 }
 
 
+/**
+ * qemuProcessEventSubmit:
+ * @driver: QEMU driver object
+ * @event: pointer to the variable holding the event processing data (stolen and cleared)
+ *
+ * Submits @event to be processed by the asynchronous event handling thread.
+ * If submission fails, @event is properly freed and cleared. If
+ * (*event)->vm is non-NULL, the domain object is unref'd before freeing
+ * @event.
+ */
+static void
+qemuProcessEventSubmit(virQEMUDriver *driver,
+                       struct qemuProcessEvent **event)
+{
+    if (!*event)
+        return;
+
+    if (virThreadPoolSendJob(driver->workerPool, 0, *event) < 0) {
+        if ((*event)->vm)
+            virObjectUnref((*event)->vm);
+        qemuProcessEventFree(*event);
+    }
+
+    *event = NULL;
+}
+
+
 /*
  * This is a callback registered with a qemuMonitor *instance,
  * and to be invoked when the monitor console hits an end of file
@@ -310,11 +337,7 @@ qemuProcessHandleMonitorEOF(qemuMonitor *mon,
     processEvent->eventType = QEMU_PROCESS_EVENT_MONITOR_EOF;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        qemuProcessEventFree(processEvent);
-        goto cleanup;
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
     /* We don't want this EOF handler to be called over and over while the
      * thread is waiting for a job.
@@ -833,10 +856,8 @@ qemuProcessHandleWatchdog(qemuMonitor *mon G_GNUC_UNUSED,
          * deleted before handling watchdog event is finished.
          */
         processEvent->vm = virObjectRef(vm);
-        if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-            virObjectUnref(vm);
-            qemuProcessEventFree(processEvent);
-        }
+
+        qemuProcessEventSubmit(driver, &processEvent);
     }
 
     virObjectUnlock(vm);
@@ -925,7 +946,6 @@ qemuProcessHandleBlockJob(qemuMonitor *mon G_GNUC_UNUSED,
 {
     qemuDomainObjPrivate *priv;
     virQEMUDriver *driver = opaque;
-    struct qemuProcessEvent *processEvent = NULL;
     virDomainDiskDef *disk;
     g_autoptr(qemuBlockJobData) job = NULL;
     char *data = NULL;
@@ -954,7 +974,7 @@ qemuProcessHandleBlockJob(qemuMonitor *mon G_GNUC_UNUSED,
         virDomainObjBroadcast(vm);
     } else {
         /* there is no waiting SYNC API, dispatch the update to a thread */
-        processEvent = g_new0(struct qemuProcessEvent, 1);
+        struct qemuProcessEvent *processEvent = g_new0(struct qemuProcessEvent, 1);
 
         processEvent->eventType = QEMU_PROCESS_EVENT_BLOCK_JOB;
         data = g_strdup(diskAlias);
@@ -963,16 +983,10 @@ qemuProcessHandleBlockJob(qemuMonitor *mon G_GNUC_UNUSED,
         processEvent->action = type;
         processEvent->status = status;
 
-        if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-            virObjectUnref(vm);
-            goto cleanup;
-        }
-
-        processEvent = NULL;
+        qemuProcessEventSubmit(driver, &processEvent);
     }
 
  cleanup:
-    qemuProcessEventFree(processEvent);
     virObjectUnlock(vm);
 }
 
@@ -986,7 +1000,6 @@ qemuProcessHandleJobStatusChange(qemuMonitor *mon G_GNUC_UNUSED,
 {
     virQEMUDriver *driver = opaque;
     qemuDomainObjPrivate *priv;
-    struct qemuProcessEvent *processEvent = NULL;
     qemuBlockJobData *job = NULL;
     int jobnewstate;
 
@@ -1016,23 +1029,18 @@ qemuProcessHandleJobStatusChange(qemuMonitor *mon G_GNUC_UNUSED,
         VIR_DEBUG("job '%s' handled synchronously", jobname);
         virDomainObjBroadcast(vm);
     } else {
+        struct qemuProcessEvent *processEvent = g_new0(struct qemuProcessEvent, 1);
+
         VIR_DEBUG("job '%s' handled by event thread", jobname);
 
-        processEvent = g_new0(struct qemuProcessEvent, 1);
         processEvent->eventType = QEMU_PROCESS_EVENT_JOB_STATUS_CHANGE;
         processEvent->vm = virObjectRef(vm);
         processEvent->data = virObjectRef(job);
 
-        if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-            virObjectUnref(vm);
-            goto cleanup;
-        }
-
-        processEvent = NULL;
+        qemuProcessEventSubmit(driver, &processEvent);
     }
 
  cleanup:
-    qemuProcessEventFree(processEvent);
    virObjectUnlock(vm);
 }
 
@@ -1288,10 +1296,7 @@ qemuProcessHandleGuestPanic(qemuMonitor *mon G_GNUC_UNUSED,
      */
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        qemuProcessEventFree(processEvent);
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
     virObjectUnlock(vm);
 }
@@ -1323,17 +1328,10 @@ qemuProcessHandleDeviceDeleted(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->data = data;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        goto error;
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
  cleanup:
     virObjectUnlock(vm);
-    return;
- error:
-    qemuProcessEventFree(processEvent);
-    goto cleanup;
 }
 
 
@@ -1503,17 +1501,9 @@ qemuProcessHandleNicRxFilterChanged(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->data = data;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        goto error;
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
- cleanup:
     virObjectUnlock(vm);
-    return;
- error:
-    qemuProcessEventFree(processEvent);
-    goto cleanup;
 }
 
 
@@ -1541,17 +1531,9 @@ qemuProcessHandleSerialChanged(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->action = connected;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        goto error;
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
- cleanup:
     virObjectUnlock(vm);
-    return;
- error:
-    qemuProcessEventFree(processEvent);
-    goto cleanup;
 }
 
 
@@ -1740,11 +1722,8 @@ qemuProcessHandlePRManagerStatusChanged(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->eventType = QEMU_PROCESS_EVENT_PR_DISCONNECT;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        qemuProcessEventFree(processEvent);
-        virObjectUnref(vm);
-        goto cleanup;
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
+
 
  cleanup:
     virObjectUnlock(vm);
@@ -1783,10 +1762,7 @@ qemuProcessHandleRdmaGidStatusChanged(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->vm = virObjectRef(vm);
     processEvent->data = g_steal_pointer(&info);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        qemuProcessEventFree(processEvent);
-        virObjectUnref(vm);
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
     virObjectUnlock(vm);
 }
@@ -1806,10 +1782,7 @@ qemuProcessHandleGuestCrashloaded(qemuMonitor *mon G_GNUC_UNUSED,
     processEvent->eventType = QEMU_PROCESS_EVENT_GUEST_CRASHLOADED;
     processEvent->vm = virObjectRef(vm);
 
-    if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) {
-        virObjectUnref(vm);
-        qemuProcessEventFree(processEvent);
-    }
+    qemuProcessEventSubmit(driver, &processEvent);
 
     virObjectUnlock(vm);
 }
-- 
2.31.1