From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 1/3] qemu: use generalized virDomainJobData instead of qemuDomainJobInfo
Date: Fri, 11 Feb 2022 14:49:05 +0100
Message-Id: <4328229ba5b1f081f82ce09632279ec5f80023fc.1644582510.git.khanicov@redhat.com>

This patch includes:

* introducing new files: src/hypervisor/domain_job.c and
  src/hypervisor/domain_job.h
* a new struct virDomainJobData, which is almost the same as
  qemuDomainJobInfo - the only differences are that the QEMU-specific job
  stats move into qemuDomainJobDataPrivate and that jobType is added
  (possibly more attributes in the future if needed)
* moving qemuDomainJobStatus to domain_job.h and renaming it to
  virDomainJobStatus
* moving and renaming qemuDomainJobStatusToType
* adding the callback struct virDomainJobDataPrivateDataCallbacks, which
  takes care of allocating, copying and freeing the private data of
  virDomainJobData
* adding virDomainJobDataPrivateDataCallbacks implementations for the QEMU
  hypervisor
* adding 'public' functions (shared between the different hypervisors)
  that take care of init, copy and free of virDomainJobData
* renaming every occurrence of qemuDomainJobInfo *info to
  virDomainJobData *data

Signed-off-by: Kristina Hanicova
Reviewed-by: Jiri Denemark
---
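Note for reviewers, not part of the commit: the hunks below first add the
generic virDomainJobData API and then convert the QEMU driver to it. As a
minimal sketch of the intended calling convention - the exampleJobStats
struct and the example* names are made up for illustration, only the
virDomainJobData* functions and virDomainJobDataPrivateDataCallbacks come
from this series - a hypervisor driver would wire up its private stats
roughly like this:

/* Sketch only: hypothetical driver-side usage of the new API. */
#include <string.h>

#include "domain_job.h"

typedef struct _exampleJobStats {
    unsigned long long bytesProcessed;  /* hypothetical driver-specific counter */
} exampleJobStats;

static void *
exampleJobStatsAlloc(void)
{
    return g_new0(exampleJobStats, 1);
}

static void *
exampleJobStatsCopy(void *data)
{
    exampleJobStats *ret = g_new0(exampleJobStats, 1);
    memcpy(ret, data, sizeof(*ret));
    return ret;
}

static void
exampleJobStatsFree(void *data)
{
    g_free(data);
}

static virDomainJobDataPrivateDataCallbacks exampleJobDataPrivateCb = {
    .allocPrivateData = exampleJobStatsAlloc,
    .copyPrivateData = exampleJobStatsCopy,
    .freePrivateData = exampleJobStatsFree,
};

static void
exampleStartJob(void)
{
    /* Allocates the generic struct plus the driver's private block. */
    virDomainJobData *data = virDomainJobDataInit(&exampleJobDataPrivateCb);
    exampleJobStats *stats = data->privateData;

    data->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
    stats->bytesProcessed = 0;

    /* virDomainJobDataCopy() duplicates the private block through
     * copyPrivateData(); virDomainJobDataFree() releases it through
     * freePrivateData(). */
    virDomainJobDataFree(virDomainJobDataCopy(data));
    virDomainJobDataFree(data);
}

The qemuJobDataPrivateDataCallbacks table added to qemu_domainjob.c below
follows exactly this pattern for the QEMU-specific stats/mirrorStats data.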
 src/hypervisor/domain_job.c      |  78 +++++++++++
 src/hypervisor/domain_job.h      |  72 ++++++++++
 src/hypervisor/meson.build       |   1 +
 src/libvirt_private.syms         |   7 +
 src/qemu/qemu_backup.c           |  42 +++---
 src/qemu/qemu_backup.h           |   4 +-
 src/qemu/qemu_domainjob.c        | 227 +++++++++++++++----------------
 src/qemu/qemu_domainjob.h        |  54 ++------
 src/qemu/qemu_driver.c           | 113 ++++++++-------
 src/qemu/qemu_migration.c        | 190 ++++++++++++++------------
 src/qemu/qemu_migration.h        |   4 +-
 src/qemu/qemu_migration_cookie.c |  60 ++++----
 src/qemu/qemu_migration_cookie.h |   2 +-
 src/qemu/qemu_process.c          |  24 ++--
 src/qemu/qemu_snapshot.c         |   4 +-
 15 files changed, 520 insertions(+), 362 deletions(-)
 create mode 100644 src/hypervisor/domain_job.c
 create mode 100644 src/hypervisor/domain_job.h

diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
new file mode 100644
index 0000000000..9ac8a6d544
--- /dev/null
+++ b/src/hypervisor/domain_job.c
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2022 Red Hat, Inc.
+ * SPDX-License-Identifier: LGPL-2.1-or-later
+ */
+
+#include <config.h>
+#include <string.h>
+
+#include "domain_job.h"
+
+
+virDomainJobData *
+virDomainJobDataInit(virDomainJobDataPrivateDataCallbacks *cb)
+{
+    virDomainJobData *ret = g_new0(virDomainJobData, 1);
+
+    ret->privateDataCb = cb;
+
+    if (ret->privateDataCb)
+        ret->privateData = ret->privateDataCb->allocPrivateData();
+
+    return ret;
+}
+
+virDomainJobData *
+virDomainJobDataCopy(virDomainJobData *data)
+{
+    virDomainJobData *ret = g_new0(virDomainJobData, 1);
+
+    memcpy(ret, data, sizeof(*data));
+
+    if (ret->privateDataCb)
+        ret->privateData = data->privateDataCb->copyPrivateData(data->privateData);
+
+    ret->errmsg = g_strdup(data->errmsg);
+
+    return ret;
+}
+
+void
+virDomainJobDataFree(virDomainJobData *data)
+{
+    if (!data)
+        return;
+
+    if (data->privateDataCb)
+        data->privateDataCb->freePrivateData(data->privateData);
+
+    g_free(data->errmsg);
+    g_free(data);
+}
+
+virDomainJobType
+virDomainJobStatusToType(virDomainJobStatus status)
+{
+    switch (status) {
+    case VIR_DOMAIN_JOB_STATUS_NONE:
+        break;
+
+    case VIR_DOMAIN_JOB_STATUS_ACTIVE:
+    case VIR_DOMAIN_JOB_STATUS_MIGRATING:
+    case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED:
+    case VIR_DOMAIN_JOB_STATUS_POSTCOPY:
+    case VIR_DOMAIN_JOB_STATUS_PAUSED:
+        return VIR_DOMAIN_JOB_UNBOUNDED;
+
+    case VIR_DOMAIN_JOB_STATUS_COMPLETED:
+        return VIR_DOMAIN_JOB_COMPLETED;
+
+    case VIR_DOMAIN_JOB_STATUS_FAILED:
+        return VIR_DOMAIN_JOB_FAILED;
+
+    case VIR_DOMAIN_JOB_STATUS_CANCELED:
+        return VIR_DOMAIN_JOB_CANCELLED;
+    }
+
+    return VIR_DOMAIN_JOB_NONE;
+}
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
new file mode 100644
index 0000000000..257ef067e4
--- /dev/null
+++ b/src/hypervisor/domain_job.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2022 Red Hat, Inc.
+ * SPDX-License-Identifier: LGPL-2.1-or-later
+ */
+
+#pragma once
+
+#include "internal.h"
+
+typedef enum {
+    VIR_DOMAIN_JOB_STATUS_NONE = 0,
+    VIR_DOMAIN_JOB_STATUS_ACTIVE,
+    VIR_DOMAIN_JOB_STATUS_MIGRATING,
+    VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED,
+    VIR_DOMAIN_JOB_STATUS_PAUSED,
+    VIR_DOMAIN_JOB_STATUS_POSTCOPY,
+    VIR_DOMAIN_JOB_STATUS_COMPLETED,
+    VIR_DOMAIN_JOB_STATUS_FAILED,
+    VIR_DOMAIN_JOB_STATUS_CANCELED,
+} virDomainJobStatus;
+
+typedef void *(*virDomainJobDataPrivateDataAlloc) (void);
+typedef void *(*virDomainJobDataPrivateDataCopy) (void *);
+typedef void (*virDomainJobDataPrivateDataFree) (void *);
+
+typedef struct _virDomainJobDataPrivateDataCallbacks virDomainJobDataPrivateDataCallbacks;
+struct _virDomainJobDataPrivateDataCallbacks {
+    virDomainJobDataPrivateDataAlloc allocPrivateData;
+    virDomainJobDataPrivateDataCopy copyPrivateData;
+    virDomainJobDataPrivateDataFree freePrivateData;
+};
+
+typedef struct _virDomainJobData virDomainJobData;
+struct _virDomainJobData {
+    virDomainJobType jobType;
+
+    virDomainJobStatus status;
+    virDomainJobOperation operation;
+    unsigned long long started; /* When the async job started */
+    unsigned long long stopped; /* When the domain's CPUs were stopped */
+    unsigned long long sent; /* When the source sent status info to the
+                                destination (only for migrations). */
+    unsigned long long received; /* When the destination host received status
+                                    info from the source (migrations only). */
+    /* Computed values */
+    unsigned long long timeElapsed;
+    long long timeDelta; /* delta = received - sent, i.e., the difference between
+                            the source and the destination time plus the time
+                            between the end of Perform phase on the source and
+                            the beginning of Finish phase on the destination. */
+    bool timeDeltaSet;
+
+    char *errmsg; /* optional error message for failed completed jobs */
+
+    void *privateData; /* private data of hypervisors */
+    virDomainJobDataPrivateDataCallbacks *privateDataCb; /* callbacks of private data, hypervisor based */
+};
+
+
+virDomainJobData *
+virDomainJobDataInit(virDomainJobDataPrivateDataCallbacks *cb);
+
+void
+virDomainJobDataFree(virDomainJobData *data);
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainJobData, virDomainJobDataFree);
+
+virDomainJobData *
+virDomainJobDataCopy(virDomainJobData *data);
+
+virDomainJobType
+virDomainJobStatusToType(virDomainJobStatus status);
diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build
index 70801c0820..ec11ec0cd8 100644
--- a/src/hypervisor/meson.build
+++ b/src/hypervisor/meson.build
@@ -3,6 +3,7 @@ hypervisor_sources = [
   'domain_driver.c',
   'virclosecallbacks.c',
   'virhostdev.c',
+  'domain_job.c',
 ]
 
 stateful_driver_source_files += files(hypervisor_sources)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 398cc79ee3..2380f02b88 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1577,6 +1577,13 @@ virDomainDriverParseBlkioDeviceStr;
 virDomainDriverSetupPersistentDefBlkioParams;
 
 
+# hypervisor/domain_job.h
+virDomainJobDataCopy;
+virDomainJobDataFree;
+virDomainJobDataInit;
+virDomainJobStatusToType;
+
+
 # hypervisor/virclosecallbacks.h
 virCloseCallbacksGet;
 virCloseCallbacksGetConn;
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 1f7ab55eca..b0c81261de 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -555,7 +555,7 @@ qemuBackupBeginPullExportDisks(virDomainObj *vm,
 
 void
 qemuBackupJobTerminate(virDomainObj *vm,
-                       qemuDomainJobStatus jobstatus)
+                       virDomainJobStatus jobstatus)
 
 {
     qemuDomainObjPrivate *priv = vm->privateData;
@@ -583,7 +583,7 @@ qemuBackupJobTerminate(virDomainObj *vm,
         !(priv->backup->apiFlags & VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL) &&
         (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL ||
          (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH &&
-          jobstatus != QEMU_DOMAIN_JOB_STATUS_COMPLETED))) {
+          jobstatus != VIR_DOMAIN_JOB_STATUS_COMPLETED))) {
 
         uid_t uid;
         gid_t gid;
@@ -600,15 +600,19 @@ qemuBackupJobTerminate(virDomainObj *vm,
     }
 
     if (priv->job.current) {
-        qemuDomainJobInfoUpdateTime(priv->job.current);
+        qemuDomainJobDataPrivate *privData = NULL;
 
-        g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-        priv->job.completed = qemuDomainJobInfoCopy(priv->job.current);
+        qemuDomainJobDataUpdateTime(priv->job.current);
 
-        priv->job.completed->stats.backup.total = priv->backup->push_total;
-        priv->job.completed->stats.backup.transferred = priv->backup->push_transferred;
-        priv->job.completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
-        priv->job.completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;
+        g_clear_pointer(&priv->job.completed, virDomainJobDataFree);
+        priv->job.completed = virDomainJobDataCopy(priv->job.current);
+
+        privData = priv->job.completed->privateData;
+
+        privData->stats.backup.total = priv->backup->push_total;
+        privData->stats.backup.transferred =
priv->backup->push_transfer= red; + privData->stats.backup.tmp_used =3D priv->backup->pull_tmp_used; + privData->stats.backup.tmp_total =3D priv->backup->pull_tmp_total; =20 priv->job.completed->status =3D jobstatus; priv->job.completed->errmsg =3D g_strdup(priv->backup->errmsg); @@ -686,7 +690,7 @@ qemuBackupJobCancelBlockjobs(virDomainObj *vm, } =20 if (terminatebackup && !has_active) - qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED); + qemuBackupJobTerminate(vm, VIR_DOMAIN_JOB_STATUS_CANCELED); } =20 =20 @@ -741,6 +745,7 @@ qemuBackupBegin(virDomainObj *vm, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privData =3D priv->job.current->privateData; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(priv->dr= iver); g_autoptr(virDomainBackupDef) def =3D NULL; g_autofree char *suffix =3D NULL; @@ -794,7 +799,7 @@ qemuBackupBegin(virDomainObj *vm, qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | JOB_MASK(QEMU_JOB_SUSPEND) | JOB_MASK(QEMU_JOB_MODIFY))); - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + privData->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; =20 if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", @@ -984,7 +989,7 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm, bool has_cancelling =3D false; bool has_cancelled =3D false; bool has_failed =3D false; - qemuDomainJobStatus jobstatus =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + virDomainJobStatus jobstatus =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; virDomainBackupDef *backup =3D priv->backup; size_t i; =20 @@ -1081,9 +1086,9 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm, /* all sub-jobs have stopped */ =20 if (has_failed) - jobstatus =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + jobstatus =3D VIR_DOMAIN_JOB_STATUS_FAILED; else if (has_cancelled && backup->type =3D=3D VIR_DOMAIN_BACKUP_TY= PE_PUSH) - jobstatus =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobstatus =3D VIR_DOMAIN_JOB_STATUS_CANCELED; =20 qemuBackupJobTerminate(vm, jobstatus); } @@ -1134,9 +1139,10 @@ qemuBackupGetJobInfoStatsUpdateOne(virDomainObj *vm, int qemuBackupGetJobInfoStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; + qemuDomainBackupStats *stats =3D &privJob->stats.backup; qemuDomainObjPrivate *priv =3D vm->privateData; qemuMonitorJobInfo **blockjobs =3D NULL; size_t nblockjobs =3D 0; @@ -1150,10 +1156,10 @@ qemuBackupGetJobInfoStats(virQEMUDriver *driver, return -1; } =20 - if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + if (qemuDomainJobDataUpdateTime(jobData) < 0) return -1; =20 - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; =20 qemuDomainObjEnterMonitor(driver, vm); =20 diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h index ebb3154516..de4dff9357 100644 --- a/src/qemu/qemu_backup.h +++ b/src/qemu/qemu_backup.h @@ -45,12 +45,12 @@ qemuBackupNotifyBlockjobEnd(virDomainObj *vm, =20 void qemuBackupJobTerminate(virDomainObj *vm, - qemuDomainJobStatus jobstatus); + virDomainJobStatus jobstatus); =20 int qemuBackupGetJobInfoStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo); + virDomainJobData *jobData); =20 /* exported for testing */ int diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 1ecde5af86..baa52dd986 100644 --- a/src/qemu/qemu_domainjob.c +++ 
b/src/qemu/qemu_domainjob.c @@ -63,6 +63,38 @@ VIR_ENUM_IMPL(qemuDomainAsyncJob, "backup", ); =20 +static void * +qemuJobDataAllocPrivateData(void) +{ + return g_new0(qemuDomainJobDataPrivate, 1); +} + + +static void * +qemuJobDataCopyPrivateData(void *data) +{ + qemuDomainJobDataPrivate *ret =3D g_new0(qemuDomainJobDataPrivate, 1); + + memcpy(ret, data, sizeof(qemuDomainJobDataPrivate)); + + return ret; +} + + +static void +qemuJobDataFreePrivateData(void *data) +{ + g_free(data); +} + + +virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks =3D { + .allocPrivateData =3D qemuJobDataAllocPrivateData, + .copyPrivateData =3D qemuJobDataCopyPrivateData, + .freePrivateData =3D qemuJobDataFreePrivateData, +}; + + const char * qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, int phase G_GNUC_UNUSED) @@ -116,26 +148,6 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob j= ob, } =20 =20 -void -qemuDomainJobInfoFree(qemuDomainJobInfo *info) -{ - g_free(info->errmsg); - g_free(info); -} - - -qemuDomainJobInfo * -qemuDomainJobInfoCopy(qemuDomainJobInfo *info) -{ - qemuDomainJobInfo *ret =3D g_new0(qemuDomainJobInfo, 1); - - memcpy(ret, info, sizeof(*info)); - - ret->errmsg =3D g_strdup(info->errmsg); - - return ret; -} - void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, virDomainObj *vm) @@ -149,7 +161,7 @@ qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, if (!priv->job.completed) return; =20 - if (qemuDomainJobInfoToParams(priv->job.completed, &type, + if (qemuDomainJobDataToParams(priv->job.completed, &type, ¶ms, &nparams) < 0) { VIR_WARN("Could not get stats for completed job; domain %s", vm->def->name); @@ -216,7 +228,7 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObj *job) job->mask =3D QEMU_JOB_DEFAULT_MASK; job->abortJob =3D false; VIR_FREE(job->error); - g_clear_pointer(&job->current, qemuDomainJobInfoFree); + g_clear_pointer(&job->current, virDomainJobDataFree); job->cb->resetJobPrivate(job->privateData); job->apiFlags =3D 0; } @@ -254,8 +266,8 @@ qemuDomainObjClearJob(qemuDomainJobObj *job) qemuDomainObjResetJob(job); qemuDomainObjResetAsyncJob(job); g_clear_pointer(&job->privateData, job->cb->freeJobPrivate); - g_clear_pointer(&job->current, qemuDomainJobInfoFree); - g_clear_pointer(&job->completed, qemuDomainJobInfoFree); + g_clear_pointer(&job->current, virDomainJobDataFree); + g_clear_pointer(&job->completed, virDomainJobDataFree); virCondDestroy(&job->cond); virCondDestroy(&job->asyncCond); } @@ -268,111 +280,87 @@ qemuDomainTrackJob(qemuDomainJob job) =20 =20 int -qemuDomainJobInfoUpdateTime(qemuDomainJobInfo *jobInfo) +qemuDomainJobDataUpdateTime(virDomainJobData *jobData) { unsigned long long now; =20 - if (!jobInfo->started) + if (!jobData->started) return 0; =20 if (virTimeMillisNow(&now) < 0) return -1; =20 - if (now < jobInfo->started) { + if (now < jobData->started) { VIR_WARN("Async job starts in the future"); - jobInfo->started =3D 0; + jobData->started =3D 0; return 0; } =20 - jobInfo->timeElapsed =3D now - jobInfo->started; + jobData->timeElapsed =3D now - jobData->started; return 0; } =20 int -qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfo *jobInfo) +qemuDomainJobDataUpdateDowntime(virDomainJobData *jobData) { unsigned long long now; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; =20 - if (!jobInfo->stopped) + if (!jobData->stopped) return 0; =20 if (virTimeMillisNow(&now) < 0) return -1; =20 - if (now < jobInfo->stopped) { + if (now < jobData->stopped) { VIR_WARN("Guest's CPUs stopped in the future"); - 
jobInfo->stopped =3D 0; + jobData->stopped =3D 0; return 0; } =20 - jobInfo->stats.mig.downtime =3D now - jobInfo->stopped; - jobInfo->stats.mig.downtime_set =3D true; + priv->stats.mig.downtime =3D now - jobData->stopped; + priv->stats.mig.downtime_set =3D true; return 0; } =20 -static virDomainJobType -qemuDomainJobStatusToType(qemuDomainJobStatus status) -{ - switch (status) { - case QEMU_DOMAIN_JOB_STATUS_NONE: - break; - - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: - return VIR_DOMAIN_JOB_UNBOUNDED; - - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - return VIR_DOMAIN_JOB_COMPLETED; - - case QEMU_DOMAIN_JOB_STATUS_FAILED: - return VIR_DOMAIN_JOB_FAILED; - - case QEMU_DOMAIN_JOB_STATUS_CANCELED: - return VIR_DOMAIN_JOB_CANCELLED; - } - - return VIR_DOMAIN_JOB_NONE; -} =20 int -qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo, +qemuDomainJobDataToInfo(virDomainJobData *jobData, virDomainJobInfoPtr info) { - info->type =3D qemuDomainJobStatusToType(jobInfo->status); - info->timeElapsed =3D jobInfo->timeElapsed; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + info->type =3D virDomainJobStatusToType(jobData->status); + info->timeElapsed =3D jobData->timeElapsed; =20 - switch (jobInfo->statsType) { + switch (priv->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; - info->fileTotal =3D jobInfo->stats.mig.disk_total + - jobInfo->mirrorStats.total; - info->fileRemaining =3D jobInfo->stats.mig.disk_remaining + - (jobInfo->mirrorStats.total - - jobInfo->mirrorStats.transferred); - info->fileProcessed =3D jobInfo->stats.mig.disk_transferred + - jobInfo->mirrorStats.transferred; + info->memTotal =3D priv->stats.mig.ram_total; + info->memRemaining =3D priv->stats.mig.ram_remaining; + info->memProcessed =3D priv->stats.mig.ram_transferred; + info->fileTotal =3D priv->stats.mig.disk_total + + priv->mirrorStats.total; + info->fileRemaining =3D priv->stats.mig.disk_remaining + + (priv->mirrorStats.total - + priv->mirrorStats.transferred); + info->fileProcessed =3D priv->stats.mig.disk_transferred + + priv->mirrorStats.transferred; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + info->memTotal =3D priv->stats.mig.ram_total; + info->memRemaining =3D priv->stats.mig.ram_remaining; + info->memProcessed =3D priv->stats.mig.ram_transferred; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - info->memTotal =3D jobInfo->stats.dump.total; - info->memProcessed =3D jobInfo->stats.dump.completed; + info->memTotal =3D priv->stats.dump.total; + info->memProcessed =3D priv->stats.dump.completed; info->memRemaining =3D info->memTotal - info->memProcessed; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - info->fileTotal =3D jobInfo->stats.backup.total; - info->fileProcessed =3D jobInfo->stats.backup.transferred; + info->fileTotal =3D priv->stats.backup.total; + info->fileProcessed =3D priv->stats.backup.transferred; info->fileRemaining =3D info->fileTotal - info->fileProcessed; break; =20 @@ -389,13 +377,14 @@ qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo, =20 =20 static int 
-qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainMigrationJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; - qemuDomainMirrorStats *mirrorStats =3D &jobInfo->mirrorStats; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuMonitorMigrationStats *stats =3D &priv->stats.mig; + qemuDomainMirrorStats *mirrorStats =3D &priv->mirrorStats; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; int npar =3D 0; @@ -404,19 +393,19 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, =20 if (virTypedParamsAddInt(&par, &npar, &maxpar, VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) + jobData->operation) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto error; =20 - if (jobInfo->timeDeltaSet && - jobInfo->timeElapsed > jobInfo->timeDelta && + if (jobData->timeDeltaSet && + jobData->timeElapsed > jobData->timeDelta && virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED_NET, - jobInfo->timeElapsed - jobInfo->timeDelta)= < 0) + jobData->timeElapsed - jobData->timeDelta)= < 0) goto error; =20 if (stats->downtime_set && @@ -426,11 +415,11 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, goto error; =20 if (stats->downtime_set && - jobInfo->timeDeltaSet && - stats->downtime > jobInfo->timeDelta && + jobData->timeDeltaSet && + stats->downtime > jobData->timeDelta && virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_DOWNTIME_NET, - stats->downtime - jobInfo->timeDelta) < 0) + stats->downtime - jobData->timeDelta) < 0) goto error; =20 if (stats->setup_time_set && @@ -505,7 +494,7 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *j= obInfo, =20 /* The remaining stats are disk, mirror, or migration specific * so if this is a SAVEDUMP, we can just skip them */ - if (jobInfo->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) + if (priv->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) goto done; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, @@ -554,7 +543,7 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo *j= obInfo, goto error; =20 done: - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D virDomainJobStatusToType(jobData->status); *params =3D par; *nparams =3D npar; return 0; @@ -566,24 +555,25 @@ qemuDomainMigrationJobInfoToParams(qemuDomainJobInfo = *jobInfo, =20 =20 static int -qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainDumpJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuMonitorDumpStats *stats =3D &priv->stats.dump; virTypedParameterPtr par =3D NULL; int maxpar =3D 0; int npar =3D 0; =20 if (virTypedParamsAddInt(&par, &npar, &maxpar, VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) + jobData->operation) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto error; =20 if (virTypedParamsAddULLong(&par, &npar, &maxpar, @@ -597,7 +587,7 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobInf= o, stats->total - stats->completed) < 0) goto error; =20 - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D 
virDomainJobStatusToType(jobData->status); *params =3D par; *nparams =3D npar; return 0; @@ -609,19 +599,20 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfo *jobI= nfo, =20 =20 static int -qemuDomainBackupJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainBackupJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuDomainBackupStats *stats =3D &priv->stats.backup; g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); =20 - if (virTypedParamListAddInt(par, jobInfo->operation, + if (virTypedParamListAddInt(par, jobData->operation, VIR_DOMAIN_JOB_OPERATION) < 0) return -1; =20 - if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, + if (virTypedParamListAddULLong(par, jobData->timeElapsed, VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) return -1; =20 @@ -649,38 +640,40 @@ qemuDomainBackupJobInfoToParams(qemuDomainJobInfo *jo= bInfo, return -1; } =20 - if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && + if (jobData->status !=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && virTypedParamListAddBoolean(par, - jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, + jobData->status =3D=3D VIR_DOMAIN_JOB_= STATUS_COMPLETED, VIR_DOMAIN_JOB_SUCCESS) < 0) return -1; =20 - if (jobInfo->errmsg && - virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) + if (jobData->errmsg && + virTypedParamListAddString(par, jobData->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) return -1; =20 *nparams =3D virTypedParamListStealParams(par, params); - *type =3D qemuDomainJobStatusToType(jobInfo->status); + *type =3D virDomainJobStatusToType(jobData->status); return 0; } =20 =20 int -qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, +qemuDomainJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) { - switch (jobInfo->statsType) { + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + + switch (priv->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); + return qemuDomainMigrationJobDataToParams(jobData, type, params, n= params); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); + return qemuDomainDumpJobDataToParams(jobData, type, params, nparam= s); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); + return qemuDomainBackupJobDataToParams(jobData, type, params, npar= ams); =20 case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -688,7 +681,7 @@ qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, break; =20 default: - virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); + virReportEnumRangeError(qemuDomainJobStatsType, priv->statsType); break; } =20 @@ -895,8 +888,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, qemuDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivate= DataCallbacks); + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.asyncJob =3D asyncJob; priv->job.asyncOwner =3D virThreadSelfID(); 
priv->job.asyncOwnerAPI =3D g_strdup(virThreadJobGet()); diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index f904bd49c2..36acf401dd 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -20,6 +20,7 @@ =20 #include #include "qemu_monitor.h" +#include "domain_job.h" =20 #define JOB_MASK(job) (job =3D=3D 0 ? 0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ @@ -79,17 +80,6 @@ typedef enum { } qemuDomainAsyncJob; VIR_ENUM_DECL(qemuDomainAsyncJob); =20 -typedef enum { - QEMU_DOMAIN_JOB_STATUS_NONE =3D 0, - QEMU_DOMAIN_JOB_STATUS_ACTIVE, - QEMU_DOMAIN_JOB_STATUS_MIGRATING, - QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_PAUSED, - QEMU_DOMAIN_JOB_STATUS_POSTCOPY, - QEMU_DOMAIN_JOB_STATUS_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_FAILED, - QEMU_DOMAIN_JOB_STATUS_CANCELED, -} qemuDomainJobStatus; =20 typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_NONE =3D 0, @@ -114,24 +104,8 @@ struct _qemuDomainBackupStats { unsigned long long tmp_total; }; =20 -typedef struct _qemuDomainJobInfo qemuDomainJobInfo; -struct _qemuDomainJobInfo { - qemuDomainJobStatus status; - virDomainJobOperation operation; - unsigned long long started; /* When the async job started */ - unsigned long long stopped; /* When the domain's CPUs were stopped */ - unsigned long long sent; /* When the source sent status info to the - destination (only for migrations). */ - unsigned long long received; /* When the destination host received sta= tus - info from the source (migrations only)= . */ - /* Computed values */ - unsigned long long timeElapsed; - long long timeDelta; /* delta =3D received - sent, i.e., the difference - between the source and the destination time pl= us - the time between the end of Perform phase on t= he - source and the beginning of Finish phase on the - destination. 
*/ - bool timeDeltaSet; +typedef struct _qemuDomainJobDataPrivate qemuDomainJobDataPrivate; +struct _qemuDomainJobDataPrivate { /* Raw values from QEMU */ qemuDomainJobStatsType statsType; union { @@ -140,17 +114,9 @@ struct _qemuDomainJobInfo { qemuDomainBackupStats backup; } stats; qemuDomainMirrorStats mirrorStats; - - char *errmsg; /* optional error message for failed completed jobs */ }; =20 -void -qemuDomainJobInfoFree(qemuDomainJobInfo *info); - -G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); - -qemuDomainJobInfo * -qemuDomainJobInfoCopy(qemuDomainJobInfo *info); +extern virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallback= s; =20 typedef struct _qemuDomainJobObj qemuDomainJobObj; =20 @@ -198,8 +164,8 @@ struct _qemuDomainJobObj { unsigned long long asyncStarted; /* When the current async job star= ted */ int phase; /* Job phase (mainly for migration= s) */ unsigned long long mask; /* Jobs allowed during async job */ - qemuDomainJobInfo *current; /* async job progress data */ - qemuDomainJobInfo *completed; /* statistics data of a recently com= pleted job */ + virDomainJobData *current; /* async job progress data */ + virDomainJobData *completed; /* statistics data of a recently comp= leted job */ bool abortJob; /* abort of the job requested */ char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ @@ -256,14 +222,14 @@ void qemuDomainObjDiscardAsyncJob(virQEMUDriver *driv= er, virDomainObj *obj); void qemuDomainObjReleaseAsyncJob(virDomainObj *obj); =20 -int qemuDomainJobInfoUpdateTime(qemuDomainJobInfo *jobInfo) +int qemuDomainJobDataUpdateTime(virDomainJobData *jobData) ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfo *jobInfo) +int qemuDomainJobDataUpdateDowntime(virDomainJobData *jobData) ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoToInfo(qemuDomainJobInfo *jobInfo, +int qemuDomainJobDataToInfo(virDomainJobData *jobData, virDomainJobInfoPtr info) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -int qemuDomainJobInfoToParams(qemuDomainJobInfo *jobInfo, +int qemuDomainJobDataToParams(virDomainJobData *jobData, int *type, virTypedParameterPtr *params, int *nparams) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9279eaf811..52a8931656 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2637,6 +2637,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, int ret =3D -1; virObjectEvent *event =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->privat= eData; virQEMUSaveData *data =3D NULL; g_autoptr(qemuDomainSaveCookie) cookie =3D NULL; =20 @@ -2653,7 +2654,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, goto endjob; } =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* Pause */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { @@ -2946,6 +2947,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm) { qemuDomainObjPrivate *priv =3D vm->privateData; qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->privat= eData; =20 VIR_DEBUG("Waiting for dump completion"); while (!jobPriv->dumpCompleted && !priv->job.abortJob) { @@ -2953,7 +2955,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm) return -1; } =20 - if (priv->job.current->stats.dump.status =3D=3D 
QEMU_MONITOR_DUMP_STAT= US_FAILED) { + if (privJobCurrent->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STATUS_= FAILED) { if (priv->job.error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), @@ -2964,7 +2966,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm) =20 return -1; } - qemuDomainJobInfoUpdateTime(priv->job.current); + qemuDomainJobDataUpdateTime(priv->job.current); =20 return 0; } @@ -2992,10 +2994,13 @@ qemuDumpToFd(virQEMUDriver *driver, if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, fd) = < 0) return -1; =20 - if (detach) - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUM= P; - else - g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree); + if (detach) { + qemuDomainJobDataPrivate *privStats =3D priv->job.current->private= Data; + + privStats->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; + } else { + g_clear_pointer(&priv->job.current, virDomainJobDataFree); + } =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; @@ -3130,6 +3135,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriver *driver =3D dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv =3D NULL; + qemuDomainJobDataPrivate *privJobCurrent =3D NULL; bool resume =3D false, paused =3D false; int ret =3D -1; virObjectEvent *event =3D NULL; @@ -3154,7 +3160,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; =20 priv =3D vm->privateData; - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + privJobCurrent =3D priv->job.current->privateData; + privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued. */ @@ -12423,28 +12430,30 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr co= nn, static int qemuDomainGetJobInfoMigrationStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privStats =3D jobData->privateData; + bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); =20 - if (jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE || - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_MIGRATING || - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED || - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY) { + if (jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_ACTIVE || + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_MIGRATING || + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED = || + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) { if (events && - jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && + jobData->status !=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_NONE, - jobInfo, NULL) < 0) + jobData, NULL) < 0) return -1; =20 - if (jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && - jobInfo->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION= && + if (jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && + privStats->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATI= ON && qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_NO= NE, - jobInfo) < 0) + jobData) < 0) return -1; =20 - if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + if (qemuDomainJobDataUpdateTime(jobData) < 0) return -1; } =20 @@ -12455,9 +12464,10 @@ qemuDomainGetJobInfoMigrationStats(virQEMUDriver *= driver, static int 
qemuDomainGetJobInfoDumpStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; qemuMonitorDumpStats stats =3D { 0 }; int rc; =20 @@ -12470,33 +12480,33 @@ qemuDomainGetJobInfoDumpStats(virQEMUDriver *driv= er, if (rc < 0) return -1; =20 - jobInfo->stats.dump =3D stats; + privJob->stats.dump =3D stats; =20 - if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + if (qemuDomainJobDataUpdateTime(jobData) < 0) return -1; =20 - switch (jobInfo->stats.dump.status) { + switch (privJob->stats.dump.status) { case QEMU_MONITOR_DUMP_STATUS_NONE: case QEMU_MONITOR_DUMP_STATUS_FAILED: case QEMU_MONITOR_DUMP_STATUS_LAST: virReportError(VIR_ERR_OPERATION_FAILED, _("dump query failed, status=3D%d"), - jobInfo->stats.dump.status); + privJob->stats.dump.status); return -1; break; =20 case QEMU_MONITOR_DUMP_STATUS_ACTIVE: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; VIR_DEBUG("dump active, bytes written=3D'%llu' remaining=3D'%llu'", - jobInfo->stats.dump.completed, - jobInfo->stats.dump.total - - jobInfo->stats.dump.completed); + privJob->stats.dump.completed, + privJob->stats.dump.total - + privJob->stats.dump.completed); break; =20 case QEMU_MONITOR_DUMP_STATUS_COMPLETED: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; VIR_DEBUG("dump completed, bytes written=3D'%llu'", - jobInfo->stats.dump.completed); + privJob->stats.dump.completed); break; } =20 @@ -12508,16 +12518,17 @@ static int qemuDomainGetJobStatsInternal(virQEMUDriver *driver, virDomainObj *vm, bool completed, - qemuDomainJobInfo **jobInfo) + virDomainJobData **jobData) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privStats =3D NULL; int ret =3D -1; =20 - *jobInfo =3D NULL; + *jobData =3D NULL; =20 if (completed) { if (priv->job.completed && !priv->job.current) - *jobInfo =3D qemuDomainJobInfoCopy(priv->job.completed); + *jobData =3D virDomainJobDataCopy(priv->job.completed); =20 return 0; } @@ -12539,22 +12550,24 @@ qemuDomainGetJobStatsInternal(virQEMUDriver *driv= er, ret =3D 0; goto cleanup; } - *jobInfo =3D qemuDomainJobInfoCopy(priv->job.current); + *jobData =3D virDomainJobDataCopy(priv->job.current); + + privStats =3D (*jobData)->privateData; =20 - switch ((*jobInfo)->statsType) { + switch (privStats->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - if (qemuDomainGetJobInfoMigrationStats(driver, vm, *jobInfo) < 0) + if (qemuDomainGetJobInfoMigrationStats(driver, vm, *jobData) < 0) goto cleanup; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - if (qemuDomainGetJobInfoDumpStats(driver, vm, *jobInfo) < 0) + if (qemuDomainGetJobInfoDumpStats(driver, vm, *jobData) < 0) goto cleanup; break; =20 case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - if (qemuBackupGetJobInfoStats(driver, vm, *jobInfo) < 0) + if (qemuBackupGetJobInfoStats(driver, vm, *jobData) < 0) goto cleanup; break; =20 @@ -12575,7 +12588,7 @@ qemuDomainGetJobInfo(virDomainPtr dom, virDomainJobInfoPtr info) { virQEMUDriver *driver =3D dom->conn->privateData; - g_autoptr(qemuDomainJobInfo) jobInfo =3D NULL; + g_autoptr(virDomainJobData) jobData =3D NULL; virDomainObj *vm; int ret =3D -1; =20 @@ -12587,16 +12600,16 @@ qemuDomainGetJobInfo(virDomainPtr dom, if (virDomainGetJobInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 
- if (qemuDomainGetJobStatsInternal(driver, vm, false, &jobInfo) < 0) + if (qemuDomainGetJobStatsInternal(driver, vm, false, &jobData) < 0) goto cleanup; =20 - if (!jobInfo || - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_NONE) { + if (!jobData || + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_NONE) { ret =3D 0; goto cleanup; } =20 - ret =3D qemuDomainJobInfoToInfo(jobInfo, info); + ret =3D qemuDomainJobDataToInfo(jobData, info); =20 cleanup: virDomainObjEndAPI(&vm); @@ -12614,7 +12627,7 @@ qemuDomainGetJobStats(virDomainPtr dom, virQEMUDriver *driver =3D dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv; - g_autoptr(qemuDomainJobInfo) jobInfo =3D NULL; + g_autoptr(virDomainJobData) jobData =3D NULL; bool completed =3D !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret =3D -1; =20 @@ -12628,11 +12641,11 @@ qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; =20 priv =3D vm->privateData; - if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobInfo) < 0) + if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobData) < 0) goto cleanup; =20 - if (!jobInfo || - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_NONE) { + if (!jobData || + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_NONE) { *type =3D VIR_DOMAIN_JOB_NONE; *params =3D NULL; *nparams =3D 0; @@ -12640,10 +12653,10 @@ qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; } =20 - ret =3D qemuDomainJobInfoToParams(jobInfo, type, params, nparams); + ret =3D qemuDomainJobDataToParams(jobData, type, params, nparams); =20 if (completed && ret =3D=3D 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_C= OMPLETED)) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); =20 cleanup: virDomainObjEndAPI(&vm); @@ -12709,7 +12722,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) break; =20 case QEMU_ASYNC_JOB_MIGRATION_OUT: - if ((priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTC= OPY || + if ((priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCO= PY || (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index a1e6a8d1b1..a48350be74 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1199,7 +1199,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver, return -1; =20 if (priv->job.abortJob) { - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJ= ob), _("canceled by client")); @@ -1622,35 +1622,37 @@ qemuMigrationSrcWaitForSpice(virDomainObj *vm) =20 =20 static void -qemuMigrationUpdateJobType(qemuDomainJobInfo *jobInfo) +qemuMigrationUpdateJobType(virDomainJobData *jobData) { - switch ((qemuMonitorMigrationStatus) jobInfo->stats.mig.status) { + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + + switch ((qemuMonitorMigrationStatus) priv->stats.mig.status) { case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_POSTCOPY; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_COMPLETED: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_INACTIVE: - 
jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_NONE; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_NONE; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_ERROR: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_CANCELLED: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_PRE_SWITCHOVER: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_PAUSED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_PAUSED; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_DEVICE: - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_MIGRATING; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_MIGRATING; break; =20 case QEMU_MONITOR_MIGRATION_STATUS_SETUP: @@ -1667,11 +1669,12 @@ int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo, + virDomainJobData *jobData, char **error) { qemuDomainObjPrivate *priv =3D vm->privateData; qemuMonitorMigrationStats stats; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; int rv; =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) @@ -1683,7 +1686,7 @@ qemuMigrationAnyFetchStats(virQEMUDriver *driver, if (rv < 0) return -1; =20 - jobInfo->stats.mig =3D stats; + privJob->stats.mig =3D stats; =20 return 0; } @@ -1724,41 +1727,42 @@ qemuMigrationJobCheckStatus(virQEMUDriver *driver, qemuDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobInfo *jobInfo =3D priv->job.current; + virDomainJobData *jobData =3D priv->job.current; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; g_autofree char *error =3D NULL; bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); =20 if (!events || - jobInfo->stats.mig.status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_ERR= OR) { - if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobInfo, &err= or) < 0) + privJob->stats.mig.status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_ERR= OR) { + if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, &err= or) < 0) return -1; } =20 - qemuMigrationUpdateJobType(jobInfo); + qemuMigrationUpdateJobType(jobData); =20 - switch (jobInfo->status) { - case QEMU_DOMAIN_JOB_STATUS_NONE: + switch (jobData->status) { + case VIR_DOMAIN_JOB_STATUS_NONE: virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"), qemuMigrationJobName(vm), _("is not active")); return -1; =20 - case QEMU_DOMAIN_JOB_STATUS_FAILED: + case VIR_DOMAIN_JOB_STATUS_FAILED: virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"), qemuMigrationJobName(vm), error ? 
error : _("unexpectedly failed")); return -1; =20 - case QEMU_DOMAIN_JOB_STATUS_CANCELED: + case VIR_DOMAIN_JOB_STATUS_CANCELED: virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuMigrationJobName(vm), _("canceled by client")); return -1; =20 - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: + case VIR_DOMAIN_JOB_STATUS_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_ACTIVE: + case VIR_DOMAIN_JOB_STATUS_MIGRATING: + case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_POSTCOPY: + case VIR_DOMAIN_JOB_STATUS_PAUSED: break; } =20 @@ -1789,7 +1793,7 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobInfo *jobInfo =3D priv->job.current; + virDomainJobData *jobData =3D priv->job.current; int pauseReason; =20 if (qemuMigrationJobCheckStatus(driver, vm, asyncJob) < 0) @@ -1819,7 +1823,7 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, * wait again for the real end of the migration. */ if (flags & QEMU_MIGRATION_COMPLETED_PRE_SWITCHOVER && - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PAUSED) { + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { VIR_DEBUG("Migration paused before switchover"); return 1; } @@ -1829,38 +1833,38 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, * will continue waiting until the migrate state changes to completed. */ if (flags & QEMU_MIGRATION_COMPLETED_POSTCOPY && - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY) { + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) { VIR_DEBUG("Migration switched to post-copy"); return 1; } =20 - if (jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) + if (jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) return 1; else return 0; =20 error: - switch (jobInfo->status) { - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: + switch (jobData->status) { + case VIR_DOMAIN_JOB_STATUS_MIGRATING: + case VIR_DOMAIN_JOB_STATUS_POSTCOPY: + case VIR_DOMAIN_JOB_STATUS_PAUSED: /* The migration was aborted by us rather than QEMU itself. */ - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; return -2; =20 - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED: /* Something failed after QEMU already finished the migration. */ - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; return -1; =20 - case QEMU_DOMAIN_JOB_STATUS_FAILED: - case QEMU_DOMAIN_JOB_STATUS_CANCELED: + case VIR_DOMAIN_JOB_STATUS_FAILED: + case VIR_DOMAIN_JOB_STATUS_CANCELED: /* QEMU aborted the migration. */ return -1; =20 - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_NONE: + case VIR_DOMAIN_JOB_STATUS_ACTIVE: + case VIR_DOMAIN_JOB_STATUS_COMPLETED: + case VIR_DOMAIN_JOB_STATUS_NONE: /* Impossible. 
*/ break; } @@ -1880,11 +1884,11 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *dr= iver, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobInfo *jobInfo =3D priv->job.current; + virDomainJobData *jobData =3D priv->job.current; bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); int rv; =20 - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_MIGRATING; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_MIGRATING; =20 while ((rv =3D qemuMigrationAnyCompleted(driver, vm, asyncJob, dconn, flags)) !=3D 1) { @@ -1894,7 +1898,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driv= er, if (events) { if (virDomainObjWait(vm) < 0) { if (virDomainObjIsActive(vm)) - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; return -2; } } else { @@ -1908,17 +1912,17 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *dr= iver, } =20 if (events) - ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobI= nfo, NULL)); + ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobD= ata, NULL)); =20 - qemuDomainJobInfoUpdateTime(jobInfo); - qemuDomainJobInfoUpdateDowntime(jobInfo); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); - priv->job.completed =3D qemuDomainJobInfoCopy(jobInfo); - priv->job.completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + qemuDomainJobDataUpdateTime(jobData); + qemuDomainJobDataUpdateDowntime(jobData); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + priv->job.completed =3D virDomainJobDataCopy(jobData); + priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 if (asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT && - jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) + jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 return 0; } @@ -3383,7 +3387,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, virObjectEvent *event; qemuDomainObjPrivate *priv =3D vm->privateData; qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; - qemuDomainJobInfo *jobInfo =3D NULL; + virDomainJobData *jobData =3D NULL; =20 VIR_DEBUG("driver=3D%p, vm=3D%p, cookiein=3D%s, cookieinlen=3D%d, " "flags=3D0x%x, retcode=3D%d", @@ -3403,13 +3407,15 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, return -1; =20 if (retcode =3D=3D 0) - jobInfo =3D priv->job.completed; + jobData =3D priv->job.completed; else - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); =20 /* Update times with the values sent by the destination daemon */ - if (mig->jobInfo && jobInfo) { + if (mig->jobData && jobData) { int reason; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; + qemuDomainJobDataPrivate *privMigJob =3D mig->jobData->privateData; =20 /* We need to refresh migration statistics after a completed post-= copy * migration since priv->job.completed contains obsolete data from= the @@ -3418,14 +3424,14 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, if (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY && qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_MIGRATIO= N_OUT, - jobInfo, NULL) < 0) + jobData, NULL) < 0) VIR_WARN("Could not refresh migration statistics"); =20 - qemuDomainJobInfoUpdateTime(jobInfo); - jobInfo->timeDeltaSet =3D mig->jobInfo->timeDeltaSet; - 
jobInfo->timeDelta =3D mig->jobInfo->timeDelta; - jobInfo->stats.mig.downtime_set =3D mig->jobInfo->stats.mig.downti= me_set; - jobInfo->stats.mig.downtime =3D mig->jobInfo->stats.mig.downtime; + qemuDomainJobDataUpdateTime(jobData); + jobData->timeDeltaSet =3D mig->jobData->timeDeltaSet; + jobData->timeDelta =3D mig->jobData->timeDelta; + privJob->stats.mig.downtime_set =3D privMigJob->stats.mig.downtime= _set; + privJob->stats.mig.downtime =3D privMigJob->stats.mig.downtime; } =20 if (flags & VIR_MIGRATE_OFFLINE) @@ -4194,7 +4200,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed * priv->job.abortJob will not change */ - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -4309,7 +4315,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. */ - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PAUSED) { + if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(driver, vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWI= TCHOVER, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -4339,8 +4345,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 if (priv->job.completed) { priv->job.completed->stopped =3D priv->job.current->stopped; - qemuDomainJobInfoUpdateTime(priv->job.completed); - qemuDomainJobInfoUpdateDowntime(priv->job.completed); + qemuDomainJobDataUpdateTime(priv->job.completed); + qemuDomainJobDataUpdateDowntime(priv->job.completed); ignore_value(virTimeMillisNow(&priv->job.completed->sent)); } =20 @@ -4370,7 +4376,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 if (virDomainObjIsActive(vm)) { if (cancel && - priv->job.current->status !=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COM= PLETED && + priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISO= R_COMPLETED && qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { qemuMonitorMigrateCancel(priv->mon); @@ -4385,8 +4391,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 qemuMigrationSrcCancelRemoveTempBitmaps(vm, QEMU_ASYNC_JOB_MIGRATI= ON_OUT); =20 - if (priv->job.current->status !=3D QEMU_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + if (priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; } =20 if (iothread) @@ -5620,7 +5626,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, unsigned short port; unsigned long long timeReceived =3D 0; virObjectEvent *event; - qemuDomainJobInfo *jobInfo =3D NULL; + virDomainJobData *jobData =3D NULL; bool inPostCopy =3D false; bool doKill =3D true; =20 @@ -5644,7 +5650,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, : QEMU_MIGRATION_PHASE_FINISH2); =20 qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); =20 cookie_flags =3D QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_STATS | @@ -5736,7 +5742,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, goto endjob; } =20 - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY) + if (priv->job.current->status =3D=3D 
VIR_DOMAIN_JOB_STATUS_POSTCOPY) inPostCopy =3D true; =20 if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -5772,16 +5778,16 @@ qemuMigrationDstFinish(virQEMUDriver *driver, doKill =3D false; } =20 - if (mig->jobInfo) { - jobInfo =3D g_steal_pointer(&mig->jobInfo); + if (mig->jobData) { + jobData =3D g_steal_pointer(&mig->jobData); =20 - if (jobInfo->sent && timeReceived) { - jobInfo->timeDelta =3D timeReceived - jobInfo->sent; - jobInfo->received =3D timeReceived; - jobInfo->timeDeltaSet =3D true; + if (jobData->sent && timeReceived) { + jobData->timeDelta =3D timeReceived - jobData->sent; + jobData->received =3D timeReceived; + jobData->timeDeltaSet =3D true; } - qemuDomainJobInfoUpdateTime(jobInfo); - qemuDomainJobInfoUpdateDowntime(jobInfo); + qemuDomainJobDataUpdateTime(jobData); + qemuDomainJobDataUpdateDowntime(jobData); } =20 if (inPostCopy) { @@ -5846,10 +5852,12 @@ qemuMigrationDstFinish(virQEMUDriver *driver, } =20 if (dom) { - if (jobInfo) { - priv->job.completed =3D g_steal_pointer(&jobInfo); - priv->job.completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLET= ED; - priv->job.completed->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_= MIGRATION; + if (jobData) { + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; + + priv->job.completed =3D g_steal_pointer(&jobData); + priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETE= D; + privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; } =20 if (qemuMigrationCookieFormat(mig, driver, vm, @@ -5862,7 +5870,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, * is obsolete anyway. */ if (inPostCopy) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&priv->job.completed, virDomainJobDataFree); } =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, @@ -5873,7 +5881,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, qemuDomainRemoveInactiveJob(driver, vm); =20 cleanup: - g_clear_pointer(&jobInfo, qemuDomainJobInfoFree); + g_clear_pointer(&jobData, virDomainJobDataFree); virPortAllocatorRelease(port); if (priv->mon) qemuMonitorSetDomainLog(priv->mon, NULL, NULL, NULL); @@ -6091,6 +6099,7 @@ qemuMigrationJobStart(virQEMUDriver *driver, unsigned long apiFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privJob =3D priv->job.current->privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -6107,7 +6116,7 @@ qemuMigrationJobStart(virQEMUDriver *driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; =20 qemuDomainObjSetAsyncJobMask(vm, mask); return 0; @@ -6227,13 +6236,14 @@ int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { size_t i; qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privJob =3D jobData->privateData; bool nbd =3D false; g_autoptr(GHashTable) blockinfo =3D NULL; - qemuDomainMirrorStats *stats =3D &jobInfo->mirrorStats; + qemuDomainMirrorStats *stats =3D &privJob->mirrorStats; =20 for (i =3D 0; i < vm->def->ndisks; i++) { virDomainDiskDef *disk =3D vm->def->disks[i]; diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index b233358a51..6b169f73c7 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -221,7 +221,7 @@ int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj 
*vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo, + virDomainJobData *jobData, char **error); =20 int @@ -258,4 +258,4 @@ int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, qemuDomainAsyncJob asyncJob, - qemuDomainJobInfo *jobInfo); + virDomainJobData *jobData); diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_coo= kie.c index ba05a5a07f..0b51676c24 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -166,7 +166,7 @@ qemuMigrationCookieFree(qemuMigrationCookie *mig) g_free(mig->name); g_free(mig->lockState); g_free(mig->lockDriver); - g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); + g_clear_pointer(&mig->jobData, virDomainJobDataFree); virCPUDefFree(mig->cpu); qemuMigrationCookieCapsFree(mig->caps); if (mig->blockDirtyBitmaps) @@ -531,8 +531,8 @@ qemuMigrationCookieAddStatistics(qemuMigrationCookie *m= ig, if (!priv->job.completed) return 0; =20 - g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); - mig->jobInfo =3D qemuDomainJobInfoCopy(priv->job.completed); + g_clear_pointer(&mig->jobData, virDomainJobDataFree); + mig->jobData =3D virDomainJobDataCopy(priv->job.completed); =20 mig->flags |=3D QEMU_MIGRATION_COOKIE_STATS; =20 @@ -632,22 +632,23 @@ qemuMigrationCookieNetworkXMLFormat(virBuffer *buf, =20 static void qemuMigrationCookieStatisticsXMLFormat(virBuffer *buf, - qemuDomainJobInfo *jobInfo) + virDomainJobData *jobData) { - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; + qemuDomainJobDataPrivate *priv =3D jobData->privateData; + qemuMonitorMigrationStats *stats =3D &priv->stats.mig; =20 virBufferAddLit(buf, "\n"); virBufferAdjustIndent(buf, 2); =20 - virBufferAsprintf(buf, "%llu\n", jobInfo->started); - virBufferAsprintf(buf, "%llu\n", jobInfo->stopped); - virBufferAsprintf(buf, "%llu\n", jobInfo->sent); - if (jobInfo->timeDeltaSet) - virBufferAsprintf(buf, "%lld\n", jobInfo->timeDelta= ); + virBufferAsprintf(buf, "%llu\n", jobData->started); + virBufferAsprintf(buf, "%llu\n", jobData->stopped); + virBufferAsprintf(buf, "%llu\n", jobData->sent); + if (jobData->timeDeltaSet) + virBufferAsprintf(buf, "%lld\n", jobData->timeDelta= ); =20 virBufferAsprintf(buf, "<%1$s>%2$llu\n", VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed); + jobData->timeElapsed); if (stats->downtime_set) virBufferAsprintf(buf, "<%1$s>%2$llu\n", VIR_DOMAIN_JOB_DOWNTIME, @@ -884,8 +885,8 @@ qemuMigrationCookieXMLFormat(virQEMUDriver *driver, if ((mig->flags & QEMU_MIGRATION_COOKIE_NBD) && mig->nbd) qemuMigrationCookieNBDXMLFormat(mig->nbd, buf); =20 - if (mig->flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo) - qemuMigrationCookieStatisticsXMLFormat(buf, mig->jobInfo); + if (mig->flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData) + qemuMigrationCookieStatisticsXMLFormat(buf, mig->jobData); =20 if (mig->flags & QEMU_MIGRATION_COOKIE_CPU && mig->cpu) virCPUDefFormatBufFull(buf, mig->cpu, NULL); @@ -1031,29 +1032,30 @@ qemuMigrationCookieNBDXMLParse(xmlXPathContextPtr c= txt) } =20 =20 -static qemuDomainJobInfo * +static virDomainJobData * qemuMigrationCookieStatisticsXMLParse(xmlXPathContextPtr ctxt) { - qemuDomainJobInfo *jobInfo =3D NULL; + virDomainJobData *jobData =3D NULL; qemuMonitorMigrationStats *stats; + qemuDomainJobDataPrivate *priv =3D NULL; VIR_XPATH_NODE_AUTORESTORE(ctxt) =20 if (!(ctxt->node =3D virXPathNode("./statistics", ctxt))) return NULL; =20 - jobInfo =3D g_new0(qemuDomainJobInfo, 1); + jobData =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + priv =3D 
jobData->privateData; + stats =3D &priv->stats.mig; + jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 - stats =3D &jobInfo->stats.mig; - jobInfo->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; - - virXPathULongLong("string(./started[1])", ctxt, &jobInfo->started); - virXPathULongLong("string(./stopped[1])", ctxt, &jobInfo->stopped); - virXPathULongLong("string(./sent[1])", ctxt, &jobInfo->sent); - if (virXPathLongLong("string(./delta[1])", ctxt, &jobInfo->timeDelta) = =3D=3D 0) - jobInfo->timeDeltaSet =3D true; + virXPathULongLong("string(./started[1])", ctxt, &jobData->started); + virXPathULongLong("string(./stopped[1])", ctxt, &jobData->stopped); + virXPathULongLong("string(./sent[1])", ctxt, &jobData->sent); + if (virXPathLongLong("string(./delta[1])", ctxt, &jobData->timeDelta) = =3D=3D 0) + jobData->timeDeltaSet =3D true; =20 virXPathULongLong("string(./" VIR_DOMAIN_JOB_TIME_ELAPSED "[1])", - ctxt, &jobInfo->timeElapsed); + ctxt, &jobData->timeElapsed); =20 if (virXPathULongLong("string(./" VIR_DOMAIN_JOB_DOWNTIME "[1])", ctxt, &stats->downtime) =3D=3D 0) @@ -1113,7 +1115,7 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContext= Ptr ctxt) virXPathInt("string(./" VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE "[1])", ctxt, &stats->cpu_throttle_percentage); =20 - return jobInfo; + return jobData; } =20 =20 @@ -1385,7 +1387,7 @@ qemuMigrationCookieXMLParse(qemuMigrationCookie *mig, =20 if (flags & QEMU_MIGRATION_COOKIE_STATS && virXPathBoolean("boolean(./statistics)", ctxt) && - (!(mig->jobInfo =3D qemuMigrationCookieStatisticsXMLParse(ctxt)))) + (!(mig->jobData =3D qemuMigrationCookieStatisticsXMLParse(ctxt)))) return -1; =20 if (flags & QEMU_MIGRATION_COOKIE_CPU && @@ -1546,8 +1548,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, } } =20 - if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo && priv->job.c= urrent) - mig->jobInfo->operation =3D priv->job.current->operation; + if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && priv->job.c= urrent) + mig->jobData->operation =3D priv->job.current->operation; =20 return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_migration_cookie.h b/src/qemu/qemu_migration_coo= kie.h index 1726e5f2da..d9e1d949a8 100644 --- a/src/qemu/qemu_migration_cookie.h +++ b/src/qemu/qemu_migration_cookie.h @@ -162,7 +162,7 @@ struct _qemuMigrationCookie { qemuMigrationCookieNBD *nbd; =20 /* If (flags & QEMU_MIGRATION_COOKIE_STATS) */ - qemuDomainJobInfo *jobInfo; + virDomainJobData *jobData; =20 /* If flags & QEMU_MIGRATION_COOKIE_CPU */ virCPUDef *cpu; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 8fccf6b760..4b921c1e35 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -651,7 +651,7 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PO= STCOPY) + if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POS= TCOPY) reason =3D VIR_DOMAIN_PAUSED_POSTCOPY; else reason =3D VIR_DOMAIN_PAUSED_MIGRATION; @@ -1545,6 +1545,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, void *opaque) { qemuDomainObjPrivate *priv; + qemuDomainJobDataPrivate *privJob =3D NULL; virQEMUDriver *driver =3D opaque; virObjectEvent *event =3D NULL; int reason; @@ -1561,7 +1562,9 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, goto cleanup; } =20 - 
priv->job.current->stats.mig.status =3D status; + privJob =3D priv->job.current->privateData; + + privJob->stats.mig.status =3D status; virDomainObjBroadcast(vm); =20 if (status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY && @@ -1623,6 +1626,7 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_GNU= C_UNUSED, { qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; + qemuDomainJobDataPrivate *privJobCurrent =3D NULL; =20 virObjectLock(vm); =20 @@ -1631,18 +1635,19 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_G= NUC_UNUSED, =20 priv =3D vm->privateData; jobPriv =3D priv->job.privateData; + privJobCurrent =3D priv->job.current->privateData; if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } jobPriv->dumpCompleted =3D true; - priv->job.current->stats.dump =3D *stats; + privJobCurrent->stats.dump =3D *stats; priv->job.error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { priv->job.error =3D g_strdup(virGetLastErrorMessage()); - priv->job.current->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_= FAILED; + privJobCurrent->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_FAI= LED; } =20 virDomainObjBroadcast(vm); @@ -3592,6 +3597,7 @@ qemuProcessRecoverJob(virQEMUDriver *driver, unsigned int *stopFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; + qemuDomainJobDataPrivate *privDataJobCurrent =3D NULL; virDomainState state; int reason; unsigned long long now; @@ -3659,10 +3665,12 @@ qemuProcessRecoverJob(virQEMUDriver *driver, /* We reset the job parameters for backup so that the job will look * active. This is possible because we are able to recover the sta= te * of blockjobs and also the backup job allows all sub-job types */ - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); + priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivateData= Callbacks); + privDataJobCurrent =3D priv->job.current->privateData; + priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + privDataJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKU= P; + priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.current->started =3D now; break; =20 @@ -8304,7 +8312,7 @@ void qemuProcessStop(virQEMUDriver *driver, =20 /* clean up a possible backup job */ if (priv->backup) - qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED); + qemuBackupJobTerminate(vm, VIR_DOMAIN_JOB_STATUS_CANCELED); =20 /* Do this explicitly after vm->pid is reset so that security drivers = don't * try to enter the domain's namespace which is non-existent by now as= qemu diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index a99f1246e0..98080ac5f8 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1414,11 +1414,13 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *dri= ver, =20 /* do the memory snapshot if necessary */ if (memory) { + qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->pr= ivateData; + /* check if migration is possible */ if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDU= MP; + privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* allow the migration job to be cancelled or the domain to be pau= sed */ 
qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | --=20 2.34.1

From nobody Tue May 14 13:40:26 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 2/3] qemu: make separate function for setting statsType of privateData
Date: Fri, 11 Feb 2022 14:49:06 +0100
Message-Id: <46038357d97d9bb4ef7668d23692238add2d067d.1644582510.git.khanicov@redhat.com>
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

In almost every place where we touch the job's private data, the only field
we need to set is statsType, so it seems unnecessary to pull privateData out
of the current / completed job just for this one thing every time. This
patch keeps the code cleaner by avoiding variables that are used only once.
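[Editorial note, not part of the patch: a self-contained toy sketch of the pattern this change settles on. The struct definitions below are trimmed stand-ins for the real virDomainJobData and qemuDomainJobDataPrivate, keeping only the fields needed for the demo; the helper itself mirrors the one added in qemu_domainjob.c.]

/* Toy model, not libvirt code: trimmed stand-ins for virDomainJobData and
 * qemuDomainJobDataPrivate, just enough to show the setter pattern. */
#include <stdio.h>
#include <stdlib.h>

typedef enum {
    QEMU_DOMAIN_JOB_STATS_TYPE_NONE = 0,
    QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION,
    QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP,
    QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP,
    QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP,
} qemuDomainJobStatsType;

typedef struct {
    void *privateData;                 /* hypervisor-specific payload */
} virDomainJobData;

typedef struct {
    qemuDomainJobStatsType statsType;  /* qemu-only job statistics kind */
} qemuDomainJobDataPrivate;

/* The helper introduced by this patch: the only place that needs to know
 * that privateData really points at qemuDomainJobDataPrivate. */
static void
qemuDomainJobSetStatsType(virDomainJobData *jobData,
                          qemuDomainJobStatsType type)
{
    qemuDomainJobDataPrivate *privData = jobData->privateData;

    privData->statsType = type;
}

int main(void)
{
    virDomainJobData job = { .privateData = calloc(1, sizeof(qemuDomainJobDataPrivate)) };

    /* Call sites no longer pull privateData into a variable used once. */
    qemuDomainJobSetStatsType(&job, QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION);

    printf("statsType = %d\n",
           ((qemuDomainJobDataPrivate *)job.privateData)->statsType);
    free(job.privateData);
    return 0;
}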
Signed-off-by: Kristina Hanicova Reviewed-by: Jiri Denemark --- src/qemu/qemu_backup.c | 4 ++-- src/qemu/qemu_domainjob.c | 10 ++++++++++ src/qemu/qemu_domainjob.h | 3 +++ src/qemu/qemu_driver.c | 14 ++++++-------- src/qemu/qemu_migration.c | 9 ++++----- src/qemu/qemu_process.c | 5 ++--- src/qemu/qemu_snapshot.c | 5 ++--- 7 files changed, 29 insertions(+), 21 deletions(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index b0c81261de..2471242e60 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -745,7 +745,6 @@ qemuBackupBegin(virDomainObj *vm, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privData =3D priv->job.current->privateData; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(priv->dr= iver); g_autoptr(virDomainBackupDef) def =3D NULL; g_autofree char *suffix =3D NULL; @@ -799,7 +798,8 @@ qemuBackupBegin(virDomainObj *vm, qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | JOB_MASK(QEMU_JOB_SUSPEND) | JOB_MASK(QEMU_JOB_MODIFY))); - privData->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP); =20 if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index baa52dd986..3e73eba4ed 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -95,6 +95,16 @@ virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateD= ataCallbacks =3D { }; =20 =20 +void +qemuDomainJobSetStatsType(virDomainJobData *jobData, + qemuDomainJobStatsType type) +{ + qemuDomainJobDataPrivate *privData =3D jobData->privateData; + + privData->statsType =3D type; +} + + const char * qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, int phase G_GNUC_UNUSED) diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 36acf401dd..a078e62a1f 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -174,6 +174,9 @@ struct _qemuDomainJobObj { qemuDomainObjPrivateJobCallbacks *cb; }; =20 +void qemuDomainJobSetStatsType(virDomainJobData *jobData, + qemuDomainJobStatsType type); + const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, int phase); int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 52a8931656..4f3d4bab3e 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2637,7 +2637,6 @@ qemuDomainSaveInternal(virQEMUDriver *driver, int ret =3D -1; virObjectEvent *event =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->privat= eData; virQEMUSaveData *data =3D NULL; g_autoptr(qemuDomainSaveCookie) cookie =3D NULL; =20 @@ -2654,7 +2653,8 @@ qemuDomainSaveInternal(virQEMUDriver *driver, goto endjob; } =20 - privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Pause */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { @@ -2995,9 +2995,8 @@ qemuDumpToFd(virQEMUDriver *driver, return -1; =20 if (detach) { - qemuDomainJobDataPrivate *privStats =3D priv->job.current->private= Data; - - privStats->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP); } else { g_clear_pointer(&priv->job.current, virDomainJobDataFree); } @@ -3135,7 
+3134,6 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriver *driver =3D dom->conn->privateData; virDomainObj *vm; qemuDomainObjPrivate *priv =3D NULL; - qemuDomainJobDataPrivate *privJobCurrent =3D NULL; bool resume =3D false, paused =3D false; int ret =3D -1; virObjectEvent *event =3D NULL; @@ -3160,8 +3158,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; =20 priv =3D vm->privateData; - privJobCurrent =3D priv->job.current->privateData; - privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued. */ diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index a48350be74..fea5e71f4d 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -5853,11 +5853,10 @@ qemuMigrationDstFinish(virQEMUDriver *driver, =20 if (dom) { if (jobData) { - qemuDomainJobDataPrivate *privJob =3D jobData->privateData; - priv->job.completed =3D g_steal_pointer(&jobData); priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETE= D; - privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + qemuDomainJobSetStatsType(jobData, + QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION= ); } =20 if (qemuMigrationCookieFormat(mig, driver, vm, @@ -6099,7 +6098,6 @@ qemuMigrationJobStart(virQEMUDriver *driver, unsigned long apiFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privJob =3D priv->job.current->privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -6116,7 +6114,8 @@ qemuMigrationJobStart(virQEMUDriver *driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; =20 - privJob->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); =20 qemuDomainObjSetAsyncJobMask(vm, mask); return 0; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 4b921c1e35..a1d4299347 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3597,7 +3597,6 @@ qemuProcessRecoverJob(virQEMUDriver *driver, unsigned int *stopFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobDataPrivate *privDataJobCurrent =3D NULL; virDomainState state; int reason; unsigned long long now; @@ -3666,10 +3665,10 @@ qemuProcessRecoverJob(virQEMUDriver *driver, * active. 
This is possible because we are able to recover the sta= te * of blockjobs and also the backup job allows all sub-job types */ priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivateData= Callbacks); - privDataJobCurrent =3D priv->job.current->privateData; =20 + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP); priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; - privDataJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKU= P; priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_ACTIVE; priv->job.current->started =3D now; break; diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 98080ac5f8..cb2a7eb739 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1414,13 +1414,12 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *dri= ver, =20 /* do the memory snapshot if necessary */ if (memory) { - qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->pr= ivateData; - /* check if migration is possible */ if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - privJobCurrent->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + qemuDomainJobSetStatsType(priv->job.current, + QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* allow the migration job to be cancelled or the domain to be pau= sed */ qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | --=20 2.34.1

From nobody Tue May 14 13:40:26 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 3/3] libxl: use virDomainJobData instead of virDomainJobInfo
Date: Fri, 11 Feb 2022 14:49:07 +0100
Message-Id:
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

This transition will make it easier to generalize jobs in the future, as
they will always use virDomainJobData and virDomainJobInfo will only be
used in the public API.
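[Editorial note, not part of the patch: a self-contained toy sketch of the internal/public split the commit message refers to. The structs are trimmed stand-ins for the internal virDomainJobData and the public virDomainJobInfo, and jobDataToInfo() is a made-up helper; in the patch the equivalent copy happens inline in libxlDomainGetJobInfo(), which only fills type and timeElapsed because libxl never sets anything else.]

/* Toy model, not libvirt code: the driver tracks job state in an internal
 * virDomainJobData and converts to the public virDomainJobInfo only at the
 * API boundary. */
#include <stdio.h>
#include <string.h>

typedef struct {                        /* public API struct (stand-in) */
    int type;
    unsigned long long timeElapsed;     /* ms */
} virDomainJobInfo;

typedef struct {                        /* internal struct (stand-in) */
    int jobType;                        /* a VIR_DOMAIN_JOB_* value */
    unsigned long long timeElapsed;     /* ms */
    void *privateData;                  /* hypervisor-specific, unused by libxl */
} virDomainJobData;

/* Copy only what libxl actually fills in; zero everything else. */
static void
jobDataToInfo(const virDomainJobData *data, virDomainJobInfo *info)
{
    memset(info, 0, sizeof(*info));
    info->type = data->jobType;
    info->timeElapsed = data->timeElapsed;
}

int main(void)
{
    virDomainJobData data = { .jobType = 2 /* e.g. VIR_DOMAIN_JOB_UNBOUNDED */,
                              .timeElapsed = 1500 };
    virDomainJobInfo info;

    jobDataToInfo(&data, &info);
    printf("type=%d elapsed=%llu ms\n", info.type, info.timeElapsed);
    return 0;
}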
Signed-off-by: Kristina Hanicova Reviewed-by: Jiri Denemark --- src/libxl/libxl_domain.c | 10 +++++----- src/libxl/libxl_domain.h | 3 ++- src/libxl/libxl_driver.c | 14 +++++++++----- 3 files changed, 16 insertions(+), 11 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index b995f20a64..c91e531a9a 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -60,7 +60,7 @@ libxlDomainObjInitJob(libxlDomainObjPrivate *priv) if (virCondInit(&priv->job.cond) < 0) return -1; =20 - priv->job.current =3D g_new0(virDomainJobInfo, 1); + priv->job.current =3D virDomainJobDataInit(NULL); =20 return 0; } @@ -78,7 +78,7 @@ static void libxlDomainObjFreeJob(libxlDomainObjPrivate *priv) { ignore_value(virCondDestroy(&priv->job.cond)); - VIR_FREE(priv->job.current); + virDomainJobDataFree(priv->job.current); } =20 /* Give up waiting for mutex after 30 seconds */ @@ -119,7 +119,7 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNU= C_UNUSED, priv->job.active =3D job; priv->job.owner =3D virThreadSelfID(); priv->job.started =3D now; - priv->job.current->type =3D VIR_DOMAIN_JOB_UNBOUNDED; + priv->job.current->jobType =3D VIR_DOMAIN_JOB_UNBOUNDED; =20 return 0; =20 @@ -168,7 +168,7 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_= UNUSED, int libxlDomainJobUpdateTime(struct libxlDomainJobObj *job) { - virDomainJobInfoPtr jobInfo =3D job->current; + virDomainJobData *jobData =3D job->current; unsigned long long now; =20 if (!job->started) @@ -182,7 +182,7 @@ libxlDomainJobUpdateTime(struct libxlDomainJobObj *job) return 0; } =20 - jobInfo->timeElapsed =3D now - job->started; + jobData->timeElapsed =3D now - job->started; return 0; } =20 diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index 981bfc2bca..475e4a6933 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -26,6 +26,7 @@ #include "libxl_conf.h" #include "virchrdev.h" #include "virenum.h" +#include "domain_job.h" =20 /* Only 1 job is allowed at any time * A job includes *all* libxl.so api, even those just querying @@ -46,7 +47,7 @@ struct libxlDomainJobObj { enum libxlDomainJob active; /* Currently running job */ int owner; /* Thread which set current job */ unsigned long long started; /* When the job started */ - virDomainJobInfoPtr current; /* Statistics for the current job = */ + virDomainJobData *current; /* Statistics for the current job */ }; =20 typedef struct _libxlDomainObjPrivate libxlDomainObjPrivate; diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 97965aaf1d..4c61d330ed 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -5235,7 +5235,11 @@ libxlDomainGetJobInfo(virDomainPtr dom, if (libxlDomainJobUpdateTime(&priv->job) < 0) goto cleanup; =20 - memcpy(info, priv->job.current, sizeof(virDomainJobInfo)); + /* setting only these two attributes is enough because libxl never sets + * anything else */ + memset(info, 0, sizeof(*info)); + info->type =3D priv->job.current->jobType; + info->timeElapsed =3D priv->job.current->timeElapsed; ret =3D 0; =20 cleanup: @@ -5252,7 +5256,7 @@ libxlDomainGetJobStats(virDomainPtr dom, { libxlDomainObjPrivate *priv; virDomainObj *vm; - virDomainJobInfoPtr jobInfo; + virDomainJobData *jobData; int ret =3D -1; int maxparams =3D 0; =20 @@ -5266,7 +5270,7 @@ libxlDomainGetJobStats(virDomainPtr dom, goto cleanup; =20 priv =3D vm->privateData; - jobInfo =3D priv->job.current; + jobData =3D priv->job.current; if (!priv->job.active) { *type =3D VIR_DOMAIN_JOB_NONE; *params =3D NULL; @@ 
-5283,10 +5287,10 @@ libxlDomainGetJobStats(virDomainPtr dom, =20 if (virTypedParamsAddULLong(params, nparams, &maxparams, VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) + jobData->timeElapsed) < 0) goto cleanup; =20 - *type =3D jobInfo->type; + *type =3D jobData->jobType; ret =3D 0; =20 cleanup: --=20 2.34.1