From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 1/3] qemu: use generalized virDomainJobData instead of qemuDomainJobInfo
Date: Thu, 20 Jan 2022 17:59:48 +0100
Message-Id: <28ae0da6f03d1cf9eaa9a3e9007f035d4fc80a02.1642697880.git.khanicov@redhat.com>

This patch includes:
* introducing new files: src/hypervisor/domain_job.c and
  src/hypervisor/domain_job.h
* a new struct virDomainJobData, which is almost the same as
  qemuDomainJobInfo - the only differences are that the QEMU-specific job
  stats move into qemuDomainJobDataPrivate and a jobType attribute is
  added (possibly more attributes in the future if needed)
* moving qemuDomainJobStatus to domain_job.h and renaming it
  virDomainJobStatus
* moving and renaming qemuDomainJobStatusToType
* adding a callback struct virDomainJobDataPrivateDataCallbacks that
  takes care of allocating, copying and freeing the private data of
  virDomainJobData
* adding implementations of virDomainJobDataPrivateDataCallbacks for the
  QEMU hypervisor
* adding 'public' (shared between the different hypervisors) functions
  taking care of init, copy and free of virDomainJobData
* renaming every occurrence of qemuDomainJobInfo *info to
  virDomainJobData *data

Signed-off-by: Kristina Hanicova
---
 src/hypervisor/domain_job.c      |  78 +++++++++++
 src/hypervisor/domain_job.h      |  72 ++++++++++
 src/hypervisor/meson.build       |   1 +
 src/libvirt_private.syms         |   7 +
 src/qemu/qemu_backup.c           |  40 +++---
 src/qemu/qemu_backup.h           |   4 +-
 src/qemu/qemu_domainjob.c        | 227 +++++++++++++++----------------
 src/qemu/qemu_domainjob.h        |  54 ++------
 src/qemu/qemu_driver.c           | 111 ++++++++-------
 src/qemu/qemu_migration.c        | 188 +++++++++++++------------
 src/qemu/qemu_migration.h        |   4 +-
 src/qemu/qemu_migration_cookie.c |  60 ++++----
 src/qemu/qemu_migration_cookie.h |   2 +-
 src/qemu/qemu_process.c          |  24 ++--
 src/qemu/qemu_snapshot.c         |   4 +-
 15 files changed, 517 insertions(+), 359 deletions(-)
 create mode 100644 src/hypervisor/domain_job.c
 create mode 100644 src/hypervisor/domain_job.h

diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
new file mode 100644
index 0000000000..daccbe4a69
--- /dev/null
+++ b/src/hypervisor/domain_job.c
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2022 Red Hat, Inc.
+ * SPDX-License-Identifier: LGPL-2.1-or-later
+ */
+
+#include <config.h>
+#include <glib.h>
+
+#include "domain_job.h"
+
+
+virDomainJobData *
+virDomainJobDataInit(virDomainJobDataPrivateDataCallbacks *cb)
+{
+    virDomainJobData *ret = g_new0(virDomainJobData, 1);
+
+    ret->privateDataCb = cb;
+
+    if (ret->privateDataCb && ret->privateDataCb->allocPrivateData)
+        ret->privateData = ret->privateDataCb->allocPrivateData();
+
+    return ret;
+}
+
+virDomainJobData *
+virDomainJobDataCopy(virDomainJobData *data)
+{
+    virDomainJobData *ret = g_new0(virDomainJobData, 1);
+
+    memcpy(ret, data, sizeof(*data));
+
+    if (ret->privateDataCb && data->privateDataCb->copyPrivateData)
+        ret->privateData = data->privateDataCb->copyPrivateData(data->privateData);
+
+    ret->errmsg = g_strdup(data->errmsg);
+
+    return ret;
+}
+
+void
+virDomainJobDataFree(virDomainJobData *data)
+{
+    if (!data)
+        return;
+
+    if (data->privateDataCb && data->privateDataCb->freePrivateData)
+        data->privateDataCb->freePrivateData(data->privateData);
+
+    g_free(data->errmsg);
+    g_free(data);
+}
+
+virDomainJobType
+virDomainJobStatusToType(virDomainJobStatus status)
+{
+    switch (status) {
+    case VIR_DOMAIN_JOB_STATUS_NONE:
+        break;
+
+    case VIR_DOMAIN_JOB_STATUS_ACTIVE:
+    case VIR_DOMAIN_JOB_STATUS_MIGRATING:
+    case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED:
+    case VIR_DOMAIN_JOB_STATUS_POSTCOPY:
+    case VIR_DOMAIN_JOB_STATUS_PAUSED:
+        return VIR_DOMAIN_JOB_UNBOUNDED;
+
+    case VIR_DOMAIN_JOB_STATUS_COMPLETED:
+        return VIR_DOMAIN_JOB_COMPLETED;
+
+    case VIR_DOMAIN_JOB_STATUS_FAILED:
+        return VIR_DOMAIN_JOB_FAILED;
+
+    case VIR_DOMAIN_JOB_STATUS_CANCELED:
+        return VIR_DOMAIN_JOB_CANCELLED;
+    }
+
+    return VIR_DOMAIN_JOB_NONE;
+}
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
new file mode 100644
index 0000000000..257ef067e4
--- /dev/null
+++ b/src/hypervisor/domain_job.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2022 Red Hat, Inc.
+ * SPDX-License-Identifier: LGPL-2.1-or-later
+ */
+
+#pragma once
+
+#include "internal.h"
+
+typedef enum {
+    VIR_DOMAIN_JOB_STATUS_NONE = 0,
+    VIR_DOMAIN_JOB_STATUS_ACTIVE,
+    VIR_DOMAIN_JOB_STATUS_MIGRATING,
+    VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED,
+    VIR_DOMAIN_JOB_STATUS_PAUSED,
+    VIR_DOMAIN_JOB_STATUS_POSTCOPY,
+    VIR_DOMAIN_JOB_STATUS_COMPLETED,
+    VIR_DOMAIN_JOB_STATUS_FAILED,
+    VIR_DOMAIN_JOB_STATUS_CANCELED,
+} virDomainJobStatus;
+
+typedef void *(*virDomainJobDataPrivateDataAlloc) (void);
+typedef void *(*virDomainJobDataPrivateDataCopy) (void *);
+typedef void (*virDomainJobDataPrivateDataFree) (void *);
+
+typedef struct _virDomainJobDataPrivateDataCallbacks virDomainJobDataPrivateDataCallbacks;
+struct _virDomainJobDataPrivateDataCallbacks {
+    virDomainJobDataPrivateDataAlloc allocPrivateData;
+    virDomainJobDataPrivateDataCopy copyPrivateData;
+    virDomainJobDataPrivateDataFree freePrivateData;
+};
+
+typedef struct _virDomainJobData virDomainJobData;
+struct _virDomainJobData {
+    virDomainJobType jobType;
+
+    virDomainJobStatus status;
+    virDomainJobOperation operation;
+    unsigned long long started; /* When the async job started */
+    unsigned long long stopped; /* When the domain's CPUs were stopped */
+    unsigned long long sent; /* When the source sent status info to the
+                                destination (only for migrations). */
+    unsigned long long received; /* When the destination host received status
+                                    info from the source (migrations only). */
+    /* Computed values */
+    unsigned long long timeElapsed;
+    long long timeDelta; /* delta = received - sent, i.e., the difference between
+                            the source and the destination time plus the time
+                            between the end of Perform phase on the source and
+                            the beginning of Finish phase on the destination. */
+    bool timeDeltaSet;
+
+    char *errmsg; /* optional error message for failed completed jobs */
+
+    void *privateData; /* private data of hypervisors */
+    virDomainJobDataPrivateDataCallbacks *privateDataCb; /* callbacks of private data, hypervisor based */
+};
+
+
+virDomainJobData *
+virDomainJobDataInit(virDomainJobDataPrivateDataCallbacks *cb);
+
+void
+virDomainJobDataFree(virDomainJobData *data);
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainJobData, virDomainJobDataFree);
+
+virDomainJobData *
+virDomainJobDataCopy(virDomainJobData *data);
+
+virDomainJobType
+virDomainJobStatusToType(virDomainJobStatus status);
diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build
index 70801c0820..ec11ec0cd8 100644
--- a/src/hypervisor/meson.build
+++ b/src/hypervisor/meson.build
@@ -3,6 +3,7 @@ hypervisor_sources = [
   'domain_driver.c',
   'virclosecallbacks.c',
   'virhostdev.c',
+  'domain_job.c',
 ]
 
 stateful_driver_source_files += files(hypervisor_sources)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index ba3462d849..d648059e16 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1565,6 +1565,13 @@ virDomainDriverParseBlkioDeviceStr;
 virDomainDriverSetupPersistentDefBlkioParams;
 
 
+# hypervisor/domain_job.h
+virDomainJobDataCopy;
+virDomainJobDataFree;
+virDomainJobDataInit;
+virDomainJobStatusToType;
+
+
 # hypervisor/virclosecallbacks.h
 virCloseCallbacksGet;
 virCloseCallbacksGetConn;
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 304a0d5a4f..081c4d023f 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -555,7 +555,7 @@ qemuBackupBeginPullExportDisks(virDomainObj *vm,
 
 void
 qemuBackupJobTerminate(virDomainObj *vm,
-                       qemuDomainJobStatus jobstatus)
+                       virDomainJobStatus jobstatus)
 
 {
     qemuDomainObjPrivate *priv = vm->privateData;
@@ -583,7 +583,7 @@ qemuBackupJobTerminate(virDomainObj *vm,
         !(priv->backup->apiFlags & VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL) &&
         (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL ||
          (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH &&
-          jobstatus != QEMU_DOMAIN_JOB_STATUS_COMPLETED))) {
+          jobstatus != VIR_DOMAIN_JOB_STATUS_COMPLETED))) {
 
         uid_t uid;
         gid_t gid;
@@ -600,15 +600,19 @@ qemuBackupJobTerminate(virDomainObj *vm,
     }
 
     if (priv->job.current) {
+        qemuDomainJobDataPrivate *privData = NULL;
+
         qemuDomainJobInfoUpdateTime(priv->job.current);
 
-        g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-        priv->job.completed = qemuDomainJobInfoCopy(priv->job.current);
+        g_clear_pointer(&priv->job.completed, virDomainJobDataFree);
+        priv->job.completed = virDomainJobDataCopy(priv->job.current);
+
+        privData = priv->job.completed->privateData;
 
-        priv->job.completed->stats.backup.total = priv->backup->push_total;
-        priv->job.completed->stats.backup.transferred = priv->backup->push_transferred;
-        priv->job.completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
-        priv->job.completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;
+        privData->stats.backup.total = priv->backup->push_total;
+        privData->stats.backup.transferred = priv->backup->push_transferred;
+        privData->stats.backup.tmp_used = priv->backup->pull_tmp_used;
+        privData->stats.backup.tmp_total = priv->backup->pull_tmp_total;
 
         priv->job.completed->status = jobstatus;
         priv->job.completed->errmsg = g_strdup(priv->backup->errmsg);
@@ -687,7 +691,7 @@ qemuBackupJobCancelBlockjobs(virDomainObj *vm,
     }
 
     if (terminatebackup && !has_active)
-        qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED);
+        qemuBackupJobTerminate(vm, VIR_DOMAIN_JOB_STATUS_CANCELED);
 }
 
 
@@ -742,6 +746,7 @@ qemuBackupBegin(virDomainObj *vm,
                 unsigned int flags)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
+    qemuDomainJobDataPrivate *privData = priv->job.current->privateData;
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
     g_autoptr(virDomainBackupDef) def = NULL;
     g_autofree char *suffix = NULL;
@@ -795,7 +800,7 @@ qemuBackupBegin(virDomainObj *vm,
     qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
                                       JOB_MASK(QEMU_JOB_SUSPEND) |
                                       JOB_MASK(QEMU_JOB_MODIFY)));
-    priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP;
+    privData->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP;
 
     if (!virDomainObjIsActive(vm)) {
         virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
@@ -985,7 +990,7 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm,
     bool has_cancelling = false;
     bool has_cancelled = false;
     bool has_failed = false;
-    qemuDomainJobStatus jobstatus = QEMU_DOMAIN_JOB_STATUS_COMPLETED;
+    virDomainJobStatus jobstatus = VIR_DOMAIN_JOB_STATUS_COMPLETED;
     virDomainBackupDef *backup = priv->backup;
     size_t i;
 
@@ -1082,9 +1087,9 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm,
         /* all sub-jobs have stopped */
 
         if (has_failed)
-            jobstatus = QEMU_DOMAIN_JOB_STATUS_FAILED;
+            jobstatus = VIR_DOMAIN_JOB_STATUS_FAILED;
         else if (has_cancelled && backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH)
-            jobstatus = QEMU_DOMAIN_JOB_STATUS_CANCELED;
+            jobstatus = VIR_DOMAIN_JOB_STATUS_CANCELED;
 
         qemuBackupJobTerminate(vm, jobstatus);
     }
@@ -1135,9 +1140,10 @@ qemuBackupGetJobInfoStatsUpdateOne(virDomainObj *vm,
 int
 qemuBackupGetJobInfoStats(virQEMUDriver *driver,
                           virDomainObj *vm,
-                          qemuDomainJobInfo *jobInfo)
+                          virDomainJobData *jobData)
 {
-    qemuDomainBackupStats *stats = &jobInfo->stats.backup;
+    qemuDomainJobDataPrivate *privJob = jobData->privateData;
+    qemuDomainBackupStats *stats = &privJob->stats.backup;
     qemuDomainObjPrivate *priv = vm->privateData;
     qemuMonitorJobInfo **blockjobs = NULL;
     size_t nblockjobs = 0;
@@ -1151,10 +1157,10 @@ qemuBackupGetJobInfoStats(virQEMUDriver *driver,
         return -1;
     }
 
-    if (qemuDomainJobInfoUpdateTime(jobInfo) < 0)
+    if (qemuDomainJobInfoUpdateTime(jobData) < 0)
         return -1;
 
-    jobInfo->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
+    jobData->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
 
     qemuDomainObjEnterMonitor(driver, vm);
 
diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h
index ebb3154516..de4dff9357 100644
--- a/src/qemu/qemu_backup.h
+++ b/src/qemu/qemu_backup.h
@@ -45,12 +45,12 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm,
 
 void
 qemuBackupJobTerminate(virDomainObj *vm,
-                       qemuDomainJobStatus jobstatus);
+                       virDomainJobStatus jobstatus);
 
 int
 qemuBackupGetJobInfoStats(virQEMUDriver *driver,
                           virDomainObj *vm,
-                          qemuDomainJobInfo *jobInfo);
+                          virDomainJobData *jobData);
 
 /* exported for testing */
 int
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 1ecde5af86..1d9bec2cfd 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -63,6 +63,38 @@ VIR_ENUM_IMPL(qemuDomainAsyncJob,
               "backup",
 );
 
+static void *
+qemuJobDataAllocPrivateData(void)
+{
+    return g_new0(qemuDomainJobDataPrivate, 1);
+}
+
+
+static void *
+qemuJobDataCopyPrivateData(void *data)
+{
+    qemuDomainJobDataPrivate *ret = g_new0(qemuDomainJobDataPrivate, 1);
+
+    memcpy(ret, data, sizeof(qemuDomainJobDataPrivate));
+
+    return ret;
+}
+
+
+static void
+qemuJobDataFreePrivateData(void *data)
+{
+    g_free(data);
+}
+
+
+virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks = {
+    .allocPrivateData = qemuJobDataAllocPrivateData,
+    .copyPrivateData = qemuJobDataCopyPrivateData,
+    .freePrivateData = qemuJobDataFreePrivateData,
+};
+
+
 const char *
 qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
                                 int phase G_GNUC_UNUSED)
@@ -116,26 +148,6 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
 }
 
 
-void
-qemuDomainJobInfoFree(qemuDomainJobInfo *info)
-{
-    g_free(info->errmsg);
-    g_free(info);
-}
-
-
-qemuDomainJobInfo *
-qemuDomainJobInfoCopy(qemuDomainJobInfo *info)
-{
-    qemuDomainJobInfo *ret = g_new0(qemuDomainJobInfo, 1);
-
-    memcpy(ret, info, sizeof(*info));
-
-    ret->errmsg = g_strdup(info->errmsg);
-
-    return ret;
-}
-
 void
 qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                 virDomainObj *vm)
@@ -149,7 +161,7 @@ qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
     if (!priv->job.completed)
         return;
 
-    if (qemuDomainJobInfoToParams(priv->job.completed, &type,
+    if (qemuDomainJobDataToParams(priv->job.completed, &type,
                                   &params, &nparams) < 0) {
         VIR_WARN("Could not get stats for completed job; domain %s",
                  vm->def->name);
@@ -216,7 +228,7 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObj *job)
     job->mask = QEMU_JOB_DEFAULT_MASK;
     job->abortJob = false;
     VIR_FREE(job->error);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
+    g_clear_pointer(&job->current, virDomainJobDataFree);
     job->cb->resetJobPrivate(job->privateData);
     job->apiFlags = 0;
 }
@@ -254,8 +266,8 @@ qemuDomainObjClearJob(qemuDomainJobObj *job)
     qemuDomainObjResetJob(job);
     qemuDomainObjResetAsyncJob(job);
     g_clear_pointer(&job->privateData, job->cb->freeJobPrivate);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
-    g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
+    g_clear_pointer(&job->current, virDomainJobDataFree);
+    g_clear_pointer(&job->completed, virDomainJobDataFree);
     virCondDestroy(&job->cond);
     virCondDestroy(&job->asyncCond);
 }
@@ -268,111 +280,87 @@ qemuDomainTrackJob(qemuDomainJob job)
 
 
 int
-qemuDomainJobInfoUpdateTime(qemuDomainJobInfo *jobInfo)
+qemuDomainJobInfoUpdateTime(virDomainJobData *jobData)
 {
     unsigned long long now;
 
-    if (!jobInfo->started)
+    if (!jobData->started)
         return 0;
 
     if (virTimeMillisNow(&now) < 0)
         return -1;
 
-    if (now < jobInfo->started) {
+    if (now < jobData->started) {
         VIR_WARN("Async job starts in the future");
-        jobInfo->started = 0;
+        jobData->started = 0;
         return 0;
     }
 
-    jobInfo->timeElapsed = now - jobInfo->started;
+    jobData->timeElapsed = now - jobData->started;
     return 0;
 }
 
 int
-qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfo *jobInfo)
+qemuDomainJobInfoUpdateDowntime(virDomainJobData *jobData)
 {
     unsigned long long now;
+    qemuDomainJobDataPrivate *priv = jobData->privateData;
 
-    if (!jobInfo->stopped)
+    if (!jobData->stopped)
         return 0;
 
     if (virTimeMillisNow(&now) < 0)
         return -1;
 
-    if (now < jobInfo->stopped) {
+    if (now < jobData->stopped) {
         VIR_WARN("Guest's CPUs stopped in the future");
-        jobInfo->stopped = 0;
+        jobData->stopped = 0;
         return 0;
     }
 
-    jobInfo->stats.mig.downtime = now - jobInfo->stopped;
-    jobInfo->stats.mig.downtime_set = true;
+    priv->stats.mig.downtime = now - jobData->stopped;
+    priv->stats.mig.downtime_set = true;
     return 0;
 }
 
-static virDomainJobType
-qemuDomainJobStatusToType(qemuDomainJobStatus status)
-{
-    switch (status) {
-    case QEMU_DOMAIN_JOB_STATUS_NONE:
-        break;
-
-    case QEMU_DOMAIN_JOB_STATUS_ACTIVE:
-    case QEMU_DOMAIN_JOB_STATUS_MIGRATING:
-    case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED:
-    case QEMU_DOMAIN_JOB_STATUS_POSTCOPY:
-    case QEMU_DOMAIN_JOB_STATUS_PAUSED:
-        return VIR_DOMAIN_JOB_UNBOUNDED;
-
-    case QEMU_DOMAIN_JOB_STATUS_COMPLETED:
-        return VIR_DOMAIN_JOB_COMPLETED;
-
-    case QEMU_DOMAIN_JOB_STATUS_FAILED:
-        return VIR_DOMAIN_JOB_FAILED;
-
-    case QEMU_DOMAIN_JOB_STATUS_CANCELED:
-        return VIR_DOMAIN_JOB_CANCELLED;
-    }
-
-    return VIR_DOMAIN_JOB_NONE;
-}
 
 int
-qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo,
+qemuDomainJobInfoToInfo(virDomainJobData *jobData,
                         virDomainJobInfoPtr info)
 {
-    info->type = qemuDomainJobStatusToType(jobInfo->status);
-    info->timeElapsed = jobInfo->timeElapsed;
+    qemuDomainJobDataPrivate *priv = jobData->privateData;
+    info->type = virDomainJobStatusToType(jobData->status);
+    info->timeElapsed = jobData->timeElapsed;
 
-    switch (jobInfo->statsType) {
+    switch (priv->statsType) {
     case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
-        info->fileTotal = jobInfo->stats.mig.disk_total +
-                          jobInfo->mirrorStats.total;
-        info->fileRemaining = jobInfo->stats.mig.disk_remaining +
-                              (jobInfo->mirrorStats.total -
-                               jobInfo->mirrorStats.transferred);
-        info->fileProcessed = jobInfo->stats.mig.disk_transferred +
-                              jobInfo->mirrorStats.transferred;
+        info->memTotal = priv->stats.mig.ram_total;
+        info->memRemaining = priv->stats.mig.ram_remaining;
+        info->memProcessed = priv->stats.mig.ram_transferred;
+        info->fileTotal = priv->stats.mig.disk_total +
+                          priv->mirrorStats.total;
+        info->fileRemaining = priv->stats.mig.disk_remaining +
+                              (priv->mirrorStats.total -
+                               priv->mirrorStats.transferred);
+        info->fileProcessed = priv->stats.mig.disk_transferred +
+                              priv->mirrorStats.transferred;
         break;
 
     case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
+        info->memTotal = priv->stats.mig.ram_total;
+        info->memRemaining = priv->stats.mig.ram_remaining;
+        info->memProcessed = priv->stats.mig.ram_transferred;
         break;
 
     case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
-        info->memTotal = jobInfo->stats.dump.total;
-        info->memProcessed = jobInfo->stats.dump.completed;
+        info->memTotal = priv->stats.dump.total;
+        info->memProcessed = priv->stats.dump.completed;
         info->memRemaining = info->memTotal - info->memProcessed;
         break;
 
     case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
-        info->fileTotal = jobInfo->stats.backup.total;
-        info->fileProcessed = jobInfo->stats.backup.transferred;
+        info->fileTotal = priv->stats.backup.total;
+        info->fileProcessed = priv->stats.backup.transferred;
         info->fileRemaining = info->fileTotal - info->fileProcessed;
         break;
 
@@ -389,13 +377,14 @@ qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo,
 
 
 static int
-qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *jobInfo,
+qemuDomainMigrationJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; - qemuDomainMirrorStats *mirrorStats =3D &jobInfo->mirrorStats; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuMonitorMigrationStats *stats =3D &priv->stats.mig; + qemuDomainMirrorStats *mirrorStats =3D &priv->mirrorStats; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; int npar =3D 0; @@ -404,19 +393,19 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, =20 if (virTypedParamsAddInt(&par, &npar, &maxpar, VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) + jobData->operation) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto error; =20 - if (jobInfo->timeDeltaSet && - jobInfo->timeElapsed > jobInfo->timeDelta && + if (jobData->timeDeltaSet && + jobData->timeElapsed > jobData->timeDelta && virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED_NET, - jobInfo->timeElapsed - jobInfo->timeDelta)= < 0) + jobData->timeElapsed - jobData->timeDelta)= < 0) goto error; =20 if (stats->downtime_set && @@ -426,11 +415,11 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, goto error; =20 if (stats->downtime_set && - jobInfo->timeDeltaSet && - stats->downtime > jobInfo->timeDelta && + jobData->timeDeltaSet && + stats->downtime > jobData->timeDelta && virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_DOWNTIME_NET, - stats->downtime - jobInfo->timeDelta) < 0) + stats->downtime - jobData->timeDelta) < 0) goto error; =20 if (stats->setup_time_set && @@ -505,7 +494,7 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *j= obInfo, =20 /* The remaining stats are disk, mirror, or migration specific * so if this is a SAVEDUMP, we can just skip them */ - if (jobInfo->statsType =3D=3D 
QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) + if (priv->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) goto done; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, @@ -554,7 +543,7 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *j= obInfo, goto error; =20 done: - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D virDomainJobStatusToType(jobData->status); *params =3D par; *nparams =3D npar; return 0; @@ -566,24 +555,25 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, =20 =20 static int -qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainDumpJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuMonitorDumpStats *stats =3D &priv->stats.dump; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; int npar =3D 0; =20 if (virTypedParamsAddInt(&par, &npar, &maxpar, VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) + jobData->operation) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, @@ -597,7 +587,7 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobInf= o, stats->total - stats->completed) < 0) goto error; =20 - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D virDomainJobStatusToType(jobData->status); *params =3D par; *nparams =3D npar; return 0; @@ -609,19 +599,20 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobI= nfo, =20 =20 static int -qemuDomainBackupJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainBackupJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + 
qemuDomainBackupStats *stats =3D &priv->stats.backup; g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); =20 - if (virTypedParamListAddInt(par, jobInfo->operation, + if (virTypedParamListAddInt(par, jobData->operation, VIR_DOMAIN_JOB_OPERATION) < 0) return -1; =20 - if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, + if (virTypedParamListAddULLong(par, jobData->timeElapsed, VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) return -1; =20 @@ -649,38 +640,40 @@ qemuDomainBackupJobInfoToParams(qemuDomainJobInfo *jo= bInfo, return -1; } =20 - if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && + if (jobData->status !=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && virTypedParamListAddBoolean(par, - jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, + jobData->status =3D=3D VIR_DOMAIN_JOB_= STATUS_COMPLETED, VIR_DOMAIN_JOB_SUCCESS) < 0) return -1; =20 - if (jobInfo->errmsg && - virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) + if (jobData->errmsg && + virTypedParamListAddString(par, jobData->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) return -1; =20 *nparams =3D virTypedParamListStealParams(par, params); - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D virDomainJobStatusToType(jobData->status); return 0; } =20 =20 int -qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - switch (jobInfo->statsType) { + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + + switch (priv->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); + return qemuDomainMigrationJobDataToParams(jobData, type, params, n= params); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); + return qemuDomainDumpJobDataToParams(jobData, type, params, 
nparam= s); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); + return qemuDomainBackupJobDataToParams(jobData, type, params, npar= ams); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -688,7 +681,7 @@ qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, break; =20 default: - virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); + virReportEnumRangeError(qemuDomainJobStatsType, priv->statsType); break; } =20 @@ -895,8 +888,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, qemuDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivate= DataCallbacks); + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.asyncJob =3D asyncJob; priv->job.asyncOwner =3D virThreadSelfID(); priv->job.asyncOwnerAPI =3D g_strdup(virThreadJobGet()); diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index f904bd49c2..5de82ee016 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -20,6 +20,7 @@ =20 #include #include "qemu_monitor.h" +#include "domain_job.h" =20 #define JOB_MASK(job) (job =3D=3D 0 ? 
0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ @@ -79,17 +80,6 @@ typedef enum { } qemuDomainAsyncJob; VIR_ENUM_DECL(qemuDomainAsyncJob); =20 -typedef enum { - QEMU_DOMAIN_JOB_STATUS_NONE =3D 0, - QEMU_DOMAIN_JOB_STATUS_ACTIVE, - QEMU_DOMAIN_JOB_STATUS_MIGRATING, - QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_PAUSED, - QEMU_DOMAIN_JOB_STATUS_POSTCOPY, - QEMU_DOMAIN_JOB_STATUS_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_FAILED, - QEMU_DOMAIN_JOB_STATUS_CANCELED, -} qemuDomainJobStatus; =20 typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_NONE =3D 0, @@ -114,24 +104,8 @@ struct _qemuDomainBackupStats { unsigned long long tmp_total; }; =20 -typedef struct _qemuDomainJobInfo qemuDomainJobInfo; -struct _qemuDomainJobInfo { - qemuDomainJobStatus status; - virDomainJobOperation operation; - unsigned long long started; /* When the async job started */ - unsigned long long stopped; /* When the domain's CPUs were stopped */ - unsigned long long sent; /* When the source sent status info to the - destination (only for migrations). */ - unsigned long long received; /* When the destination host received sta= tus - info from the source (migrations only)= . */ - /* Computed values */ - unsigned long long timeElapsed; - long long timeDelta; /* delta =3D received - sent, i.e., the difference - between the source and the destination time pl= us - the time between the end of Perform phase on t= he - source and the beginning of Finish phase on the - destination. 
*/ - bool timeDeltaSet; +typedef struct _qemuDomainJobDataPrivate qemuDomainJobDataPrivate; +struct _qemuDomainJobDataPrivate { /* Raw values from QEMU */ qemuDomainJobStatsType statsType; union { @@ -140,17 +114,9 @@ struct _qemuDomainJobInfo { qemuDomainBackupStats backup; } stats; qemuDomainMirrorStats mirrorStats; - - char *errmsg; /* optional error message for failed completed jobs */ }; -void -qemuDomainJobInfoFree(qemuDomainJobInfo *info); - -G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); - -qemuDomainJobInfo * -qemuDomainJobInfoCopy(qemuDomainJobInfo *info); +extern virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks; typedef struct _qemuDomainJobObj qemuDomainJobObj; @@ -198,8 +164,8 @@ struct _qemuDomainJobObj { unsigned long long asyncStarted; /* When the current async job started */ int phase; /* Job phase (mainly for migrations) */ unsigned long long mask; /* Jobs allowed during async job */ - qemuDomainJobInfo *current; /* async job progress data */ - qemuDomainJobInfo *completed; /* statistics data of a recently completed job */ + virDomainJobData *current; /* async job progress data */ + virDomainJobData *completed; /* statistics data of a recently completed job */ bool abortJob; /* abort of the job requested */ char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the async job */ @@ -256,14 +222,14 @@ void qemuDomainObjDiscardAsyncJob(virQEMUDriver *driver, virDomainObj *obj); void qemuDomainObjReleaseAsyncJob(virDomainObj *obj); -int qemuDomainJobInfoUpdateTime(qemuDomainJobInfo *jobInfo) +int qemuDomainJobInfoUpdateTime(virDomainJobData *jobData) ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfo *jobInfo) +int qemuDomainJobInfoUpdateDowntime(virDomainJobData *jobData) ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo, +int qemuDomainJobInfoToInfo(virDomainJobData
*jobData, virDomainJobInfoPtr info) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -int qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, +int qemuDomainJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 83cc7a04ea..800db34a4b 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2630,6 +2630,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, int ret = -1; virObjectEvent *event = NULL; qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privJobCurrent = priv->job.current->privateData; virQEMUSaveData *data = NULL; g_autoptr(qemuDomainSaveCookie) cookie = NULL; @@ -2646,7 +2647,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, goto endjob; } - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + privJobCurrent->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; /* Pause */ if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -2939,6 +2940,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm) { qemuDomainObjPrivate *priv = vm->privateData; qemuDomainJobPrivate *jobPriv = priv->job.privateData; + qemuDomainJobDataPrivate *privJobCurrent = priv->job.current->privateData; VIR_DEBUG("Waiting for dump completion"); while (!jobPriv->dumpCompleted && !priv->job.abortJob) { @@ -2946,7 +2948,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm) return -1; } - if (priv->job.current->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) { + if (privJobCurrent->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) { if (priv->job.error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), @@ -2985,10 +2987,13 @@ qemuDumpToFd(virQEMUDriver *driver, if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, fd) < 0) return -1; - if (detach) - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; - else -
g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree); + if (detach) { + qemuDomainJobDataPrivate *privStats = priv->job.current->privateData; + + privStats->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; + } else { + g_clear_pointer(&priv->job.current, virDomainJobDataFree); + } if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; @@ -3123,6 +3128,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriver *driver = dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv = NULL; + qemuDomainJobDataPrivate *privJobCurrent = NULL; bool resume = false, paused = false; int ret = -1; virObjectEvent *event = NULL; @@ -3147,7 +3153,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; priv = vm->privateData; - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + privJobCurrent = priv->job.current->privateData; + privJobCurrent->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued.
*/ @@ -12409,28 +12416,30 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn, static int qemuDomainGetJobInfoMigrationStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privStats = jobData->privateData; + bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT); - if (jobInfo->status == QEMU_DOMAIN_JOB_STATUS_ACTIVE || - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_MIGRATING || - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED || - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) { + if (jobData->status == VIR_DOMAIN_JOB_STATUS_ACTIVE || + jobData->status == VIR_DOMAIN_JOB_STATUS_MIGRATING || + jobData->status == VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED || + jobData->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY) { if (events && - jobInfo->status != QEMU_DOMAIN_JOB_STATUS_ACTIVE && + jobData->status != VIR_DOMAIN_JOB_STATUS_ACTIVE && qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_NONE, - jobInfo, NULL) < 0) + jobData, NULL) < 0) return -1; - if (jobInfo->status == QEMU_DOMAIN_JOB_STATUS_ACTIVE && - jobInfo->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION && + if (jobData->status == VIR_DOMAIN_JOB_STATUS_ACTIVE && + privStats->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION && qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_NONE, - jobInfo) < 0) + jobData) < 0) return -1; - if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + if (qemuDomainJobInfoUpdateTime(jobData) < 0) return -1; } @@ -12441,9 +12450,10 @@ qemuDomainGetJobInfoMigrationStats(virQEMUDriver *driver, static int qemuDomainGetJobInfoDumpStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privJob =
jobData->privateData; qemuMonitorDumpStats stats = { 0 }; int rc; @@ -12456,33 +12466,33 @@ qemuDomainGetJobInfoDumpStats(virQEMUDriver *driver, if (rc < 0) return -1; - jobInfo->stats.dump = stats; + privJob->stats.dump = stats; - if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + if (qemuDomainJobInfoUpdateTime(jobData) < 0) return -1; - switch (jobInfo->stats.dump.status) { + switch (privJob->stats.dump.status) { case QEMU_MONITOR_DUMP_STATUS_NONE: case QEMU_MONITOR_DUMP_STATUS_FAILED: case QEMU_MONITOR_DUMP_STATUS_LAST: virReportError(VIR_ERR_OPERATION_FAILED, _("dump query failed, status=%d"), - jobInfo->stats.dump.status); + privJob->stats.dump.status); return -1; break; case QEMU_MONITOR_DUMP_STATUS_ACTIVE: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; + jobData->status = VIR_DOMAIN_JOB_STATUS_ACTIVE; VIR_DEBUG("dump active, bytes written='%llu' remaining='%llu'", - jobInfo->stats.dump.completed, - jobInfo->stats.dump.total - - jobInfo->stats.dump.completed); + privJob->stats.dump.completed, + privJob->stats.dump.total - + privJob->stats.dump.completed); break; case QEMU_MONITOR_DUMP_STATUS_COMPLETED: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + jobData->status = VIR_DOMAIN_JOB_STATUS_COMPLETED; VIR_DEBUG("dump completed, bytes written='%llu'", - jobInfo->stats.dump.completed); + privJob->stats.dump.completed); break; } @@ -12494,16 +12504,17 @@ static int qemuDomainGetJobStatsInternal(virQEMUDriver *driver, virDomainObj *vm, bool completed, - qemuDomainJobInfo **jobInfo) + virDomainJobData **jobData) { qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privStats = NULL; int ret = -1; - *jobInfo = NULL; + *jobData = NULL; if (completed) { if (priv->job.completed && !priv->job.current) - *jobInfo = qemuDomainJobInfoCopy(priv->job.completed); + *jobData = virDomainJobDataCopy(priv->job.completed); return 0; } @@ -12525,22 +12536,24 @@
qemuDomainGetJobStatsInternal(virQEMUDriver *driver, ret = 0; goto cleanup; } - *jobInfo = qemuDomainJobInfoCopy(priv->job.current); + *jobData = virDomainJobDataCopy(priv->job.current); + + privStats = (*jobData)->privateData; - switch ((*jobInfo)->statsType) { + switch (privStats->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - if (qemuDomainGetJobInfoMigrationStats(driver, vm, *jobInfo) < 0) + if (qemuDomainGetJobInfoMigrationStats(driver, vm, *jobData) < 0) goto cleanup; break; case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - if (qemuDomainGetJobInfoDumpStats(driver, vm, *jobInfo) < 0) + if (qemuDomainGetJobInfoDumpStats(driver, vm, *jobData) < 0) goto cleanup; break; case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - if (qemuBackupGetJobInfoStats(driver, vm, *jobInfo) < 0) + if (qemuBackupGetJobInfoStats(driver, vm, *jobData) < 0) goto cleanup; break; @@ -12561,7 +12574,7 @@ qemuDomainGetJobInfo(virDomainPtr dom, virDomainJobInfoPtr info) { virQEMUDriver *driver = dom->conn->privateData; - g_autoptr(qemuDomainJobInfo) jobInfo = NULL; + g_autoptr(virDomainJobData) jobData = NULL; virDomainObj *vm; int ret = -1; @@ -12573,16 +12586,16 @@ qemuDomainGetJobInfo(virDomainPtr dom, if (virDomainGetJobInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; - if (qemuDomainGetJobStatsInternal(driver, vm, false, &jobInfo) < 0) + if (qemuDomainGetJobStatsInternal(driver, vm, false, &jobData) < 0) goto cleanup; - if (!jobInfo || - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_NONE) { + if (!jobData || + jobData->status == VIR_DOMAIN_JOB_STATUS_NONE) { ret = 0; goto cleanup; } - ret = qemuDomainJobInfoToInfo(jobInfo, info); + ret = qemuDomainJobInfoToInfo(jobData, info); cleanup: virDomainObjEndAPI(&vm); @@ -12600,7 +12613,7 @@ qemuDomainGetJobStats(virDomainPtr dom, virQEMUDriver *driver = dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv; -
g_autoptr(qemuDomainJobInfo) jobInfo = NULL; + g_autoptr(virDomainJobData) jobData = NULL; bool completed = !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret = -1; @@ -12614,11 +12627,11 @@ qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; priv = vm->privateData; - if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobInfo) < 0) + if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobData) < 0) goto cleanup; - if (!jobInfo || - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_NONE) { + if (!jobData || + jobData->status == VIR_DOMAIN_JOB_STATUS_NONE) { *type = VIR_DOMAIN_JOB_NONE; *params = NULL; *nparams = 0; @@ -12626,10 +12639,10 @@ qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; } - ret = qemuDomainJobInfoToParams(jobInfo, type, params, nparams); + ret = qemuDomainJobDataToParams(jobData, type, params, nparams); if (completed && ret == 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_COMPLETED)) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); cleanup: virDomainObjEndAPI(&vm); @@ -12695,7 +12708,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) break; case QEMU_ASYNC_JOB_MIGRATION_OUT: - if ((priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY || + if ((priv->job.current->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY || (virDomainObjGetState(vm, &reason) == VIR_DOMAIN_PAUSED && reason == VIR_DOMAIN_PAUSED_POSTCOPY))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 2635ef1162..7957b79fc2 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1200,7 +1200,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver, return -1; if (priv->job.abortJob) { - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; + priv->job.current->status = VIR_DOMAIN_JOB_STATUS_CANCELED;
virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -1623,35 +1623,37 @@ qemuMigrationSrcWaitForSpice(virDomainObj *vm) static void -qemuMigrationUpdateJobType(qemuDomainJobInfo *jobInfo) +qemuMigrationUpdateJobType(virDomainJobData *jobData) { - switch ((qemuMonitorMigrationStatus) jobInfo->stats.mig.status) { + qemuDomainJobDataPrivate *priv = jobData->privateData; + + switch ((qemuMonitorMigrationStatus) priv->stats.mig.status) { case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_POSTCOPY; + jobData->status = VIR_DOMAIN_JOB_STATUS_POSTCOPY; break; case QEMU_MONITOR_MIGRATION_STATUS_COMPLETED: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED; + jobData->status = VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED; break; case QEMU_MONITOR_MIGRATION_STATUS_INACTIVE: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_NONE; + jobData->status = VIR_DOMAIN_JOB_STATUS_NONE; break; case QEMU_MONITOR_MIGRATION_STATUS_ERROR: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED; break; case QEMU_MONITOR_MIGRATION_STATUS_CANCELLED: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobData->status = VIR_DOMAIN_JOB_STATUS_CANCELED; break; case QEMU_MONITOR_MIGRATION_STATUS_PRE_SWITCHOVER: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_PAUSED; + jobData->status = VIR_DOMAIN_JOB_STATUS_PAUSED; break; case QEMU_MONITOR_MIGRATION_STATUS_DEVICE: - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_MIGRATING; + jobData->status = VIR_DOMAIN_JOB_STATUS_MIGRATING; break; case QEMU_MONITOR_MIGRATION_STATUS_SETUP: @@ -1668,11 +1670,12 @@ int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo, + virDomainJobData *jobData, char **error) { qemuDomainObjPrivate *priv = vm->privateData;
qemuMonitorMigrationStats stats; + qemuDomainJobDataPrivate *privJob = jobData->privateData; int rv; if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) @@ -1684,7 +1687,7 @@ qemuMigrationAnyFetchStats(virQEMUDriver *driver, if (rv < 0) return -1; - jobInfo->stats.mig = stats; + privJob->stats.mig = stats; return 0; } @@ -1725,41 +1728,42 @@ qemuMigrationJobCheckStatus(virQEMUDriver *driver, qemuDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv = vm->privateData; - qemuDomainJobInfo *jobInfo = priv->job.current; + virDomainJobData *jobData = priv->job.current; + qemuDomainJobDataPrivate *privJob = jobData->privateData; g_autofree char *error = NULL; bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT); if (!events || - jobInfo->stats.mig.status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) { - if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobInfo, &error) < 0) + privJob->stats.mig.status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) { + if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, &error) < 0) return -1; } - qemuMigrationUpdateJobType(jobInfo); + qemuMigrationUpdateJobType(jobData); - switch (jobInfo->status) { - case QEMU_DOMAIN_JOB_STATUS_NONE: + switch (jobData->status) { + case VIR_DOMAIN_JOB_STATUS_NONE: virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"), qemuMigrationJobName(vm), _("is not active")); return -1; - case QEMU_DOMAIN_JOB_STATUS_FAILED: + case VIR_DOMAIN_JOB_STATUS_FAILED: virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"), qemuMigrationJobName(vm), error ?
error : _("unexpectedly failed")); return -1; - case QEMU_DOMAIN_JOB_STATUS_CANCELED: + case VIR_DOMAIN_JOB_STATUS_CANCELED: virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuMigrationJobName(vm), _("canceled by client")); return -1; - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: + case VIR_DOMAIN_JOB_STATUS_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_ACTIVE: + case VIR_DOMAIN_JOB_STATUS_MIGRATING: + case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_POSTCOPY: + case VIR_DOMAIN_JOB_STATUS_PAUSED: break; } @@ -1790,7 +1794,7 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, unsigned int flags) { qemuDomainObjPrivate *priv = vm->privateData; - qemuDomainJobInfo *jobInfo = priv->job.current; + virDomainJobData *jobData = priv->job.current; int pauseReason; if (qemuMigrationJobCheckStatus(driver, vm, asyncJob) < 0) @@ -1820,7 +1824,7 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, * wait again for the real end of the migration. */ if (flags & QEMU_MIGRATION_COMPLETED_PRE_SWITCHOVER && - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_PAUSED) { + jobData->status == VIR_DOMAIN_JOB_STATUS_PAUSED) { VIR_DEBUG("Migration paused before switchover"); return 1; } @@ -1830,38 +1834,38 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, * will continue waiting until the migrate state changes to completed.
*/ if (flags & QEMU_MIGRATION_COMPLETED_POSTCOPY && - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) { + jobData->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY) { VIR_DEBUG("Migration switched to post-copy"); return 1; } - if (jobInfo->status == QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) + if (jobData->status == VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) return 1; else return 0; error: - switch (jobInfo->status) { - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: + switch (jobData->status) { + case VIR_DOMAIN_JOB_STATUS_MIGRATING: + case VIR_DOMAIN_JOB_STATUS_POSTCOPY: + case VIR_DOMAIN_JOB_STATUS_PAUSED: /* The migration was aborted by us rather than QEMU itself. */ - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED; return -2; - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED: /* Something failed after QEMU already finished the migration. */ - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED; return -1; - case QEMU_DOMAIN_JOB_STATUS_FAILED: - case QEMU_DOMAIN_JOB_STATUS_CANCELED: + case VIR_DOMAIN_JOB_STATUS_FAILED: + case VIR_DOMAIN_JOB_STATUS_CANCELED: /* QEMU aborted the migration. */ return -1; - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_NONE: + case VIR_DOMAIN_JOB_STATUS_ACTIVE: + case VIR_DOMAIN_JOB_STATUS_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_NONE: /* Impossible.
*/ break; } @@ -1881,11 +1885,11 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver, unsigned int flags) { qemuDomainObjPrivate *priv = vm->privateData; - qemuDomainJobInfo *jobInfo = priv->job.current; + virDomainJobData *jobData = priv->job.current; bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT); int rv; - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_MIGRATING; + jobData->status = VIR_DOMAIN_JOB_STATUS_MIGRATING; while ((rv = qemuMigrationAnyCompleted(driver, vm, asyncJob, dconn, flags)) != 1) { @@ -1895,7 +1899,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver, if (events) { if (virDomainObjWait(vm) < 0) { if (virDomainObjIsActive(vm)) - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED; return -2; } } else { @@ -1909,17 +1913,17 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver, } if (events) - ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobInfo, NULL)); + ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, NULL)); - qemuDomainJobInfoUpdateTime(jobInfo); - qemuDomainJobInfoUpdateDowntime(jobInfo); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); - priv->job.completed = qemuDomainJobInfoCopy(jobInfo); - priv->job.completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + qemuDomainJobInfoUpdateTime(jobData); + qemuDomainJobInfoUpdateDowntime(jobData); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + priv->job.completed = virDomainJobDataCopy(jobData); + priv->job.completed->status = VIR_DOMAIN_JOB_STATUS_COMPLETED; if (asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT && - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) - jobInfo->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + jobData->status == VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) + jobData->status = VIR_DOMAIN_JOB_STATUS_COMPLETED; return 0; } @@ -3385,7 +3389,7 @@
qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, virObjectEvent *event; qemuDomainObjPrivate *priv = vm->privateData; qemuDomainJobPrivate *jobPriv = priv->job.privateData; - qemuDomainJobInfo *jobInfo = NULL; + virDomainJobData *jobData = NULL; VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, " "flags=0x%x, retcode=%d", @@ -3405,13 +3409,15 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, return -1; if (retcode == 0) - jobInfo = priv->job.completed; + jobData = priv->job.completed; else - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); /* Update times with the values sent by the destination daemon */ - if (mig->jobInfo && jobInfo) { + if (mig->jobData && jobData) { int reason; + qemuDomainJobDataPrivate *privJob = jobData->privateData; + qemuDomainJobDataPrivate *privMigJob = mig->jobData->privateData; /* We need to refresh migration statistics after a completed post-copy * migration since priv->job.completed contains obsolete data from the @@ -3420,14 +3426,14 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, if (virDomainObjGetState(vm, &reason) == VIR_DOMAIN_PAUSED && reason == VIR_DOMAIN_PAUSED_POSTCOPY && qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - jobInfo, NULL) < 0) + jobData, NULL) < 0) VIR_WARN("Could not refresh migration statistics"); - qemuDomainJobInfoUpdateTime(jobInfo); - jobInfo->timeDeltaSet = mig->jobInfo->timeDeltaSet; - jobInfo->timeDelta = mig->jobInfo->timeDelta; - jobInfo->stats.mig.downtime_set = mig->jobInfo->stats.mig.downtime_set; - jobInfo->stats.mig.downtime = mig->jobInfo->stats.mig.downtime; + qemuDomainJobInfoUpdateTime(jobData); + jobData->timeDeltaSet = mig->jobData->timeDeltaSet; + jobData->timeDelta = mig->jobData->timeDelta; + privJob->stats.mig.downtime_set = privMigJob->stats.mig.downtime_set; +
privJob->stats.mig.downtime = privMigJob->stats.mig.downtime; } if (flags & VIR_MIGRATE_OFFLINE) @@ -4197,7 +4203,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed * priv->job.abortJob will not change */ - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; + priv->job.current->status = VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -4312,7 +4318,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. */ - if (priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_PAUSED) { + if (priv->job.current->status == VIR_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(driver, vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWITCHOVER, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -4373,7 +4379,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (virDomainObjIsActive(vm)) { if (cancel && - priv->job.current->status != QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED && + priv->job.current->status != VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED && qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) { qemuMonitorMigrateCancel(priv->mon); @@ -4388,8 +4394,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, qemuMigrationSrcCancelRemoveTempBitmaps(vm, QEMU_ASYNC_JOB_MIGRATION_OUT); - if (priv->job.current->status != QEMU_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + if (priv->job.current->status != VIR_DOMAIN_JOB_STATUS_CANCELED) + priv->job.current->status = VIR_DOMAIN_JOB_STATUS_FAILED; } if (iothread) @@ -5625,7 +5631,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, unsigned short port; unsigned long long timeReceived = 0; virObjectEvent *event; -
qemuDomainJobInfo *jobInfo = NULL; + virDomainJobData *jobData = NULL; bool inPostCopy = false; bool doKill = true; @@ -5649,7 +5655,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, : QEMU_MIGRATION_PHASE_FINISH2); qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); cookie_flags = QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_STATS | @@ -5741,7 +5747,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, goto endjob; } - if (priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) + if (priv->job.current->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY) inPostCopy = true; if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -5777,17 +5783,17 @@ qemuMigrationDstFinish(virQEMUDriver *driver, doKill = false; } - if (mig->jobInfo) { - jobInfo = mig->jobInfo; - mig->jobInfo = NULL; + if (mig->jobData) { + jobData = mig->jobData; + mig->jobData = NULL; - if (jobInfo->sent && timeReceived) { - jobInfo->timeDelta = timeReceived - jobInfo->sent; - jobInfo->received = timeReceived; - jobInfo->timeDeltaSet = true; + if (jobData->sent && timeReceived) { + jobData->timeDelta = timeReceived - jobData->sent; + jobData->received = timeReceived; + jobData->timeDeltaSet = true; } - qemuDomainJobInfoUpdateTime(jobInfo); - qemuDomainJobInfoUpdateDowntime(jobInfo); + qemuDomainJobInfoUpdateTime(jobData); + qemuDomainJobInfoUpdateDowntime(jobData); } if (inPostCopy) { @@ -5852,10 +5858,12 @@ qemuMigrationDstFinish(virQEMUDriver *driver, } if (dom) { - if (jobInfo) { - priv->job.completed = g_steal_pointer(&jobInfo); - priv->job.completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; - priv->job.completed->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + if (jobData) { + qemuDomainJobDataPrivate *privJob = jobData->privateData; + + priv->job.completed =
g_steal_pointer(&jobData); + priv->job.completed->status = VIR_DOMAIN_JOB_STATUS_COMPLETED; + privJob->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; } if (qemuMigrationCookieFormat(mig, driver, vm, @@ -5868,7 +5876,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, * is obsolete anyway. */ if (inPostCopy) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); } qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, @@ -5879,7 +5887,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, qemuDomainRemoveInactiveJob(driver, vm); cleanup: - g_clear_pointer(&jobInfo, qemuDomainJobInfoFree); + g_clear_pointer(&jobData, virDomainJobDataFree); virPortAllocatorRelease(port); if (priv->mon) qemuMonitorSetDomainLog(priv->mon, NULL, NULL, NULL); @@ -6097,6 +6105,7 @@ qemuMigrationJobStart(virQEMUDriver *driver, unsigned long apiFlags) { qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privJob = priv->job.current->privateData; virDomainJobOperation op; unsigned long long mask; @@ -6113,7 +6122,7 @@ qemuMigrationJobStart(virQEMUDriver *driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + privJob->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; qemuDomainObjSetAsyncJobMask(vm, mask); return 0; @@ -6233,13 +6242,14 @@ int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { size_t i; qemuDomainObjPrivate *priv = vm->privateData; + qemuDomainJobDataPrivate *privJob = jobData->privateData; bool nbd = false; g_autoptr(GHashTable) blockinfo = NULL; - qemuDomainMirrorStats *stats = &jobInfo->mirrorStats; + qemuDomainMirrorStats *stats = &privJob->mirrorStats; for (i = 0; i < vm->def->ndisks; i++) {
virDomainDiskDef *disk = vm->def->disks[i]; diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index b233358a51..6b169f73c7 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -221,7 +221,7 @@ int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo, + virDomainJobData *jobData, char **error); int @@ -258,4 +258,4 @@ int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo); + virDomainJobData *jobData); diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_cookie.c index bffab7c13d..d1654b59c5 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -166,7 +166,7 @@ qemuMigrationCookieFree(qemuMigrationCookie *mig) g_free(mig->name); g_free(mig->lockState); g_free(mig->lockDriver); - g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); + g_clear_pointer(&mig->jobData, virDomainJobDataFree); virCPUDefFree(mig->cpu); qemuMigrationCookieCapsFree(mig->caps); if (mig->blockDirtyBitmaps) @@ -531,8 +531,8 @@ qemuMigrationCookieAddStatistics(qemuMigrationCookie *mig, if (!priv->job.completed) return 0; - g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); - mig->jobInfo = qemuDomainJobInfoCopy(priv->job.completed); + g_clear_pointer(&mig->jobData, virDomainJobDataFree); + mig->jobData = virDomainJobDataCopy(priv->job.completed); mig->flags |= QEMU_MIGRATION_COOKIE_STATS; @@ -632,22 +632,23 @@ qemuMigrationCookieNetworkXMLFormat(virBuffer *buf, static void qemuMigrationCookieStatisticsXMLFormat(virBuffer *buf, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { - qemuMonitorMigrationStats *stats = &jobInfo->stats.mig; + qemuDomainJobDataPrivate *priv = jobData->privateData; + qemuMonitorMigrationStats *stats = &priv->stats.mig; virBufferAddLit(buf, "<statistics>\n"); virBufferAdjustIndent(buf,
2); - virBufferAsprintf(buf, "<started>%llu</started>\n", jobInfo->started); - virBufferAsprintf(buf, "<stopped>%llu</stopped>\n", jobInfo->stopped); - virBufferAsprintf(buf, "<sent>%llu</sent>\n", jobInfo->sent); - if (jobInfo->timeDeltaSet) - virBufferAsprintf(buf, "<delta>%lld</delta>\n", jobInfo->timeDelta); + virBufferAsprintf(buf, "<started>%llu</started>\n", jobData->started); + virBufferAsprintf(buf, "<stopped>%llu</stopped>\n", jobData->stopped); + virBufferAsprintf(buf, "<sent>%llu</sent>\n", jobData->sent); + if (jobData->timeDeltaSet) + virBufferAsprintf(buf, "<delta>%lld</delta>\n", jobData->timeDelta); virBufferAsprintf(buf, "<%1$s>%2$llu</%1$s>\n", VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed); + jobData->timeElapsed); if (stats->downtime_set) virBufferAsprintf(buf, "<%1$s>%2$llu</%1$s>\n", VIR_DOMAIN_JOB_DOWNTIME, @@ -884,8 +885,8 @@ qemuMigrationCookieXMLFormat(virQEMUDriver *driver, if ((mig->flags & QEMU_MIGRATION_COOKIE_NBD) && mig->nbd) qemuMigrationCookieNBDXMLFormat(mig->nbd, buf); - if (mig->flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo) - qemuMigrationCookieStatisticsXMLFormat(buf, mig->jobInfo); + if (mig->flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData) + qemuMigrationCookieStatisticsXMLFormat(buf, mig->jobData); if (mig->flags & QEMU_MIGRATION_COOKIE_CPU && mig->cpu) virCPUDefFormatBufFull(buf, mig->cpu, NULL); @@ -1031,29 +1032,30 @@ qemuMigrationCookieNBDXMLParse(xmlXPathContextPtr ctxt) } -static qemuDomainJobInfo * +static virDomainJobData * qemuMigrationCookieStatisticsXMLParse(xmlXPathContextPtr ctxt) { - qemuDomainJobInfo *jobInfo = NULL; + virDomainJobData *jobData = NULL; qemuMonitorMigrationStats *stats; + qemuDomainJobDataPrivate *priv = NULL; VIR_XPATH_NODE_AUTORESTORE(ctxt) if (!(ctxt->node = virXPathNode("./statistics", ctxt))) return NULL; - jobInfo = g_new0(qemuDomainJobInfo, 1); + jobData = virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + priv = jobData->privateData; + stats = &priv->stats.mig; + jobData->status = VIR_DOMAIN_JOB_STATUS_COMPLETED; - stats = &jobInfo->stats.mig;
- jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; - - virXPathULongLong("string(./started[1])", ctxt, &jobInfo->started); - virXPathULongLong("string(./stopped[1])", ctxt, &jobInfo->stopped); - virXPathULongLong("string(./sent[1])", ctxt, &jobInfo->sent); - if (virXPathLongLong("string(./delta[1])", ctxt, &jobInfo->timeDelta) = =3D=3D 0) - jobInfo->timeDeltaSet =3D true; + virXPathULongLong("string(./started[1])", ctxt, &jobData->started); + virXPathULongLong("string(./stopped[1])", ctxt, &jobData->stopped); + virXPathULongLong("string(./sent[1])", ctxt, &jobData->sent); + if (virXPathLongLong("string(./delta[1])", ctxt, &jobData->timeDelta) = =3D=3D 0) + jobData->timeDeltaSet =3D true; =20 virXPathULongLong("string(./" VIR_DOMAIN_JOB_TIME_ELAPSED "[1])", - ctxt, &jobInfo->timeElapsed); + ctxt, &jobData->timeElapsed); =20 if (virXPathULongLong("string(./" VIR_DOMAIN_JOB_DOWNTIME "[1])", ctxt, &stats->downtime) =3D=3D 0) @@ -1113,7 +1115,7 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContext= Ptr ctxt) virXPathInt("string(./" VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE "[1])", ctxt, &stats->cpu_throttle_percentage); =20 - return jobInfo; + return jobData; } =20 =20 @@ -1385,7 +1387,7 @@ qemuMigrationCookieXMLParse(qemuMigrationCookie *mig, =20 if (flags & QEMU_MIGRATION_COOKIE_STATS && virXPathBoolean("boolean(./statistics)", ctxt) && - (!(mig->jobInfo =3D qemuMigrationCookieStatisticsXMLParse(ctxt)))) + (!(mig->jobData =3D qemuMigrationCookieStatisticsXMLParse(ctxt)))) return -1; =20 if (flags & QEMU_MIGRATION_COOKIE_CPU && @@ -1546,8 +1548,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, } } =20 - if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo && priv->job.c= urrent) - mig->jobInfo->operation =3D priv->job.current->operation; + if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && priv->job.c= urrent) + mig->jobData->operation =3D priv->job.current->operation; =20 return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_migration_cookie.h 
b/src/qemu/qemu_migration_coo= kie.h index 1726e5f2da..d9e1d949a8 100644 --- a/src/qemu/qemu_migration_cookie.h +++ b/src/qemu/qemu_migration_cookie.h @@ -162,7 +162,7 @@ struct _qemuMigrationCookie { qemuMigrationCookieNBD *nbd; =20 /* If (flags & QEMU_MIGRATION_COOKIE_STATS) */ - qemuDomainJobInfo *jobInfo; + virDomainJobData *jobData; =20 /* If flags & QEMU_MIGRATION_COOKIE_CPU */ virCPUDef *cpu; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 7ff4dc1835..fb7a04139a 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -650,7 +650,7 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PO= STCOPY) + if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POS= TCOPY) reason =3D VIR_DOMAIN_PAUSED_POSTCOPY; else reason =3D VIR_DOMAIN_PAUSED_MIGRATION; @@ -1544,6 +1544,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, void *opaque) { qemuDomainObjPrivate *priv; + qemuDomainJobDataPrivate *privJob =3D NULL; virQEMUDriver *driver =3D opaque; virObjectEvent *event =3D NULL; int reason; @@ -1560,7 +1561,9 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, goto cleanup; } =20 - priv->job.current->stats.mig.status =3D status; + privJob =3D priv->job.current->privateData; + + privJob->stats.mig.status =3D status; virDomainObjBroadcast(vm); =20 if (status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY && @@ -1622,6 +1625,7 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_GNU= C_UNUSED, { qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; + qemuDomainJobDataPrivate *privJobCurrent =3D NULL; =20 virObjectLock(vm); =20 @@ -1630,18 +1634,19 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_G= NUC_UNUSED, =20 priv =3D vm->privateData; jobPriv =3D priv->job.privateData; + 
privJobCurrent =3D priv->job.current->privateData; if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } jobPriv->dumpCompleted =3D true; - priv->job.current->stats.dump =3D *stats; + privJobCurrent->stats.dump =3D *stats; priv->job.error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { priv->job.error =3D g_strdup(virGetLastErrorMessage()); - priv->job.current->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_= FAILED; + privJobCurrent->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_FAI= LED; } =20 virDomainObjBroadcast(vm); @@ -3592,6 +3597,7 @@ qemuProcessRecoverJob(virQEMUDriver *driver, unsigned int *stopFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privDataJobCurrent =3D NULL; virDomainState state; int reason; unsigned long long now; @@ -3659,10 +3665,12 @@ qemuProcessRecoverJob(virQEMUDriver *driver, /* We reset the job parameters for backup so that the job will look * active. 
This is possible because we are able to recover the sta= te * of blockjobs and also the backup job allows all sub-job types */ - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); + priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivateData= Callbacks); + privDataJobCurrent =3D priv->job.current->privateData; + priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + privDataJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKU= P; + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.current->started =3D now; break; =20 @@ -8280,7 +8288,7 @@ void qemuProcessStop(virQEMUDriver *driver, =20 /* clean up a possible backup job */ if (priv->backup) - qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED); + qemuBackupJobTerminate(vm, VIR_DOMAIN_JOB_STATUS_CANCELED); =20 /* Do this explicitly after vm->pid is reset so that security drivers = don't * try to enter the domain's namespace which is non-existent by now as= qemu diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 1887c70708..6f9aedace7 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1414,11 +1414,13 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *dri= ver, =20 /* do the memory snapshot if necessary */ if (memory) { + qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->pr= ivateData; + /* check if migration is possible */ if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDU= MP; + privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* allow the migration job to be cancelled or the domain to be pau= sed */ qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | --=20 2.34.1 From nobody Thu May 2 06:04:38 2024 Delivered-To: importer@patchew.org Received-SPF: pass 
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 2/3] qemu: make separate function for setting statsType of privateData
Date: Thu, 20 Jan 2022 17:59:49 +0100
Message-Id: <4e1734e80e8e769c66d78f1a3511099613c7c678.1642697880.git.khanicov@redhat.com> Content-Type: text/plain; charset="utf-8" In almost every case where we touch the job's private data, statsType is the only field we set, so it seems unnecessary to pull privateData out of the current / completed job every time just for this one assignment. This patch keeps the code cleaner by removing variables that were used only once.
Signed-off-by: Kristina Hanicova --- src/qemu/qemu_backup.c | 4 ++-- src/qemu/qemu_domain.c | 8 ++++++++ src/qemu/qemu_domain.h | 3 +++ src/qemu/qemu_driver.c | 14 ++++++-------- src/qemu/qemu_migration.c | 9 ++++----- src/qemu/qemu_process.c | 5 ++--- src/qemu/qemu_snapshot.c | 5 ++--- 7 files changed, 27 insertions(+), 21 deletions(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 081c4d023f..23d3f44dd8 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -746,7 +746,6 @@ qemuBackupBegin(virDomainObj *vm, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privData =3D priv->job.current->privateData; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(priv->dr= iver); g_autoptr(virDomainBackupDef) def =3D NULL; g_autofree char *suffix =3D NULL; @@ -800,7 +799,8 @@ qemuBackupBegin(virDomainObj *vm, qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | JOB_MASK(QEMU_JOB_SUSPEND) | JOB_MASK(QEMU_JOB_MODIFY))); - privData->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP); =20 if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index a8401bac30..62e67b438f 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -81,6 +81,14 @@ VIR_LOG_INIT("qemu.qemu_domain"); =20 =20 +void +qemuDomainJobPrivateSetStatsType(qemuDomainJobDataPrivate *privData, + qemuDomainJobStatsType type) +{ + privData->statsType =3D type; +} + + static void * qemuJobAllocPrivate(void) { diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index e5046367e3..7363ff6444 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -488,6 +488,9 @@ void qemuDomainObjStopWorker(virDomainObj *dom); =20 virDomainObj *qemuDomainObjFromDomain(virDomainPtr domain); =20 +void 
qemuDomainJobPrivateSetStatsType(qemuDomainJobDataPrivate *privData, + qemuDomainJobStatsType type); + qemuDomainSaveCookie *qemuDomainSaveCookieNew(virDomainObj *vm); =20 void qemuDomainEventFlush(int timer, void *opaque); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 800db34a4b..7f7684f1cc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2630,7 +2630,6 @@ qemuDomainSaveInternal(virQEMUDriver *driver, int ret =3D -1; virObjectEvent *event =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->privat= eData; virQEMUSaveData *data =3D NULL; g_autoptr(qemuDomainSaveCookie) cookie =3D NULL; =20 @@ -2647,7 +2646,8 @@ qemuDomainSaveInternal(virQEMUDriver *driver, goto endjob; } =20 - privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Pause */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { @@ -2988,9 +2988,8 @@ qemuDumpToFd(virQEMUDriver *driver, return -1; =20 if (detach) { - qemuDomainJobDataPrivate *privStats =3D priv->job.current->private= Data; - - privStats->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUM= P); } else { g_clear_pointer(&priv->job.current, virDomainJobDataFree); } @@ -3128,7 +3127,6 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriver *driver =3D dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv =3D NULL; - qemuDomainJobDataPrivate *privJobCurrent =3D NULL; bool resume =3D false, paused =3D false; int ret =3D -1; virObjectEvent *event =3D NULL; @@ -3153,8 +3151,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; =20 priv =3D vm->privateData; - privJobCurrent =3D priv->job.current->privateData; - privJobCurrent->statsType =3D 
QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued. */ diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 7957b79fc2..9e4fd67e2a 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -5859,11 +5859,10 @@ qemuMigrationDstFinish(virQEMUDriver *driver, =20 if (dom) { if (jobData) { - qemuDomainJobDataPrivate *privJob =3D jobData->privateData; - priv->job.completed =3D g_steal_pointer(&jobData); priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETE= D; - privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + qemuDomainJobPrivateSetStatsType(jobData->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_MI= GRATION); } =20 if (qemuMigrationCookieFormat(mig, driver, vm, @@ -6105,7 +6104,6 @@ qemuMigrationJobStart(virQEMUDriver *driver, unsigned long apiFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privJob =3D priv->job.current->privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -6122,7 +6120,8 @@ qemuMigrationJobStart(virQEMUDriver *driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; =20 - privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); =20 qemuDomainObjSetAsyncJobMask(vm, mask); return 0; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index fb7a04139a..39872e6c12 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3597,7 +3597,6 @@ qemuProcessRecoverJob(virQEMUDriver *driver, unsigned int *stopFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privDataJobCurrent =3D NULL; virDomainState state; int reason; unsigned long long now; @@ -3666,10 
+3665,10 @@ qemuProcessRecoverJob(virQEMUDriver *driver, * active. This is possible because we are able to recover the sta= te * of blockjobs and also the backup job allows all sub-job types */ priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivateData= Callbacks); - privDataJobCurrent =3D priv->job.current->privateData; =20 + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP= ); priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; - privDataJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKU= P; priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.current->started =3D now; break; diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 6f9aedace7..359a7a32ea 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1414,13 +1414,12 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *dri= ver, =20 /* do the memory snapshot if necessary */ if (memory) { - qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->pr= ivateData; - /* check if migration is possible */ if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobPrivateSetStatsType(priv->job.current->privateData, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDU= MP); =20 /* allow the migration job to be cancelled or the domain to be pau= sed */ qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | --=20 2.34.1 From nobody Thu May 2 06:04:38 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 170.10.129.124 as permitted sender) client-ip=170.10.129.124; envelope-from=libvir-list-bounces@redhat.com; helo=us-smtp-delivery-124.mimecast.com; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; 
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 3/3] libxl: use virDomainJobData instead of virDomainJobInfo
Date: Thu, 20 Jan 2022 17:59:50 +0100
Message-Id: <85c8a36d6733e7928973666a33bcd0aefcb00d27.1642697881.git.khanicov@redhat.com>
Content-Type: text/plain; charset="utf-8" This transition will make it easier to generalize jobs in the future, as they will always use virDomainJobData, while virDomainJobInfo will only be used in the public API. Signed-off-by: Kristina Hanicova Reviewed-by: Jiri Denemark --- src/libxl/libxl_domain.c | 10 +++++----- src/libxl/libxl_domain.h | 3 ++- src/libxl/libxl_driver.c | 14 +++++++++----- 3 files changed, 16 insertions(+), 11 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index feca60f7d2..8f2feb328b 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -60,7 +60,7 @@ libxlDomainObjInitJob(libxlDomainObjPrivate *priv) if (virCondInit(&priv->job.cond) < 0) return -1; =20 - priv->job.current =3D g_new0(virDomainJobInfo, 1); + priv->job.current =3D virDomainJobDataInit(NULL); =20 return 0; } @@ -78,7 +78,7 @@ static void libxlDomainObjFreeJob(libxlDomainObjPrivate *priv) { ignore_value(virCondDestroy(&priv->job.cond)); - VIR_FREE(priv->job.current); + virDomainJobDataFree(priv->job.current); } =20 /* Give up waiting for mutex after 30 seconds */ @@ -119,7 +119,7 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNU= C_UNUSED, priv->job.active =3D job; priv->job.owner =3D virThreadSelfID(); priv->job.started =3D now; - priv->job.current->type =3D VIR_DOMAIN_JOB_UNBOUNDED; + priv->job.current->jobType =3D VIR_DOMAIN_JOB_UNBOUNDED; =20 return 0; =20 @@ -168,7 +168,7 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_= UNUSED, int libxlDomainJobUpdateTime(struct libxlDomainJobObj
*job) { - virDomainJobInfoPtr jobInfo =3D job->current; + virDomainJobData *jobData =3D job->current; unsigned long long now; =20 if (!job->started) @@ -182,7 +182,7 @@ libxlDomainJobUpdateTime(struct libxlDomainJobObj *job) return 0; } =20 - jobInfo->timeElapsed =3D now - job->started; + jobData->timeElapsed =3D now - job->started; return 0; } =20 diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index 981bfc2bca..475e4a6933 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -26,6 +26,7 @@ #include "libxl_conf.h" #include "virchrdev.h" #include "virenum.h" +#include "domain_job.h" =20 /* Only 1 job is allowed at any time * A job includes *all* libxl.so api, even those just querying @@ -46,7 +47,7 @@ struct libxlDomainJobObj { enum libxlDomainJob active; /* Currently running job */ int owner; /* Thread which set current job */ unsigned long long started; /* When the job started */ - virDomainJobInfoPtr current; /* Statistics for the current job = */ + virDomainJobData *current; /* Statistics for the current job */ }; =20 typedef struct _libxlDomainObjPrivate libxlDomainObjPrivate; diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 2d9385654c..03d542299b 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -5212,7 +5212,11 @@ libxlDomainGetJobInfo(virDomainPtr dom, if (libxlDomainJobUpdateTime(&priv->job) < 0) goto cleanup; =20 - memcpy(info, priv->job.current, sizeof(virDomainJobInfo)); + /* setting only these two attributes is enough because libxl never sets + * anything else */ + memset(info, 0, sizeof(*info)); + info->type =3D priv->job.current->jobType; + info->timeElapsed =3D priv->job.current->timeElapsed; ret =3D 0; =20 cleanup: @@ -5229,7 +5233,7 @@ libxlDomainGetJobStats(virDomainPtr dom, { libxlDomainObjPrivate *priv; virDomainObj *vm; - virDomainJobInfoPtr jobInfo; + virDomainJobData *jobData; int ret =3D -1; int maxparams =3D 0; =20 @@ -5243,7 +5247,7 @@ 
libxlDomainGetJobStats(virDomainPtr dom, goto cleanup; =20 priv =3D vm->privateData; - jobInfo =3D priv->job.current; + jobData =3D priv->job.current; if (!priv->job.active) { *type =3D VIR_DOMAIN_JOB_NONE; *params =3D NULL; @@ -5260,10 +5264,10 @@ libxlDomainGetJobStats(virDomainPtr dom, =20 if (virTypedParamsAddULLong(params, nparams, &maxparams, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto cleanup; =20 - *type =3D jobInfo->type; + *type =3D jobData->jobType; ret =3D 0; =20 cleanup: --=20 2.34.1