From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 1/5] move jobs enums QEMU_X into hypervisor as VIR_X
Date: Thu, 24 Mar 2022 16:32:42 +0100

These enums are essentially the same and are always sorted in the same
order in every hypervisor that has jobs. They can be generalized by
using the QEMU enums as the base, since they are the most extensive.

Signed-off-by: Kristina Hanicova
Reviewed-by: Michal Privoznik
---
 src/hypervisor/domain_job.c      |  32 +++
 src/hypervisor/domain_job.h      |  52 +++++
 src/hypervisor/meson.build       |   1 +
 src/libvirt_private.syms         |   5 +
 src/libxl/libxl_domain.c         |   1 +
 src/qemu/MIGRATION.txt           |   6 +-
 src/qemu/THREADS.txt             |  16 +-
 src/qemu/qemu_backup.c           |  22 +-
 src/qemu/qemu_block.c            |  20 +-
 src/qemu/qemu_block.h            |  12 +-
 src/qemu/qemu_blockjob.c         |  32 +--
 src/qemu/qemu_checkpoint.c       |  18 +-
 src/qemu/qemu_domain.c           |  26 +--
 src/qemu/qemu_domain.h           |   4 +-
 src/qemu/qemu_domainjob.c        | 236 ++++++++++------------
 src/qemu/qemu_domainjob.h        |  85 ++------
 src/qemu/qemu_driver.c           | 332 +++++++++++++++----------------
 src/qemu/qemu_hotplug.c          |  50 ++---
 src/qemu/qemu_hotplug.h          |  10 +-
 src/qemu/qemu_migration.c        | 218 ++++++++++----------
 src/qemu/qemu_migration.h        |   8 +-
 src/qemu/qemu_migration_params.c |   4 +-
 src/qemu/qemu_process.c          | 188 ++++++++--------
 src/qemu/qemu_process.h          |  22 +-
 src/qemu/qemu_saveimage.c        |   4 +-
 src/qemu/qemu_saveimage.h        |   4 +-
 src/qemu/qemu_snapshot.c         |  56 +++---
 src/qemu/qemu_snapshot.h        |   2 +-
 28 files changed, 739 insertions(+), 727 deletions(-)

diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index 9ac8a6d544..ff4e008cb5 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -9,6 +9,38 @@
 #include "domain_job.h"
 
 
+VIR_ENUM_IMPL(virDomainJob,
+              VIR_JOB_LAST,
+              "none",
+              "query",
+              "destroy",
+              "suspend",
+              "modify",
+              "abort",
+              "migration operation",
+              "none",   /* async job is never stored in job.active */
+              "async nested",
+);
+
+VIR_ENUM_IMPL(virDomainAgentJob,
+              VIR_AGENT_JOB_LAST,
+              "none",
+              "query",
+              "modify",
+);
+
+VIR_ENUM_IMPL(virDomainAsyncJob,
+              VIR_ASYNC_JOB_LAST,
+              "none",
+              "migration out",
+              "migration in",
+              "save",
+              "dump",
+              "snapshot",
+              "start",
+              "backup",
+);
+
 virDomainJobData *
 virDomainJobDataInit(virDomainJobDataPrivateDataCallbacks *cb)
 {
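
For readers not familiar with libvirt's enum helpers: the VIR_ENUM_IMPL()
blocks above pair with VIR_ENUM_DECL() in the header to generate
virDomainJobTypeToString()/virDomainJobTypeFromString() and friends. The
standalone sketch below only approximates what those generated helpers do;
the hand-rolled table and names are illustrative, not libvirt's generated
code.

    /* Standalone approximation of the enum <-> string helpers that
     * VIR_ENUM_DECL()/VIR_ENUM_IMPL() generate (real macros live in
     * src/util/virenum.h). */
    #include <stdio.h>
    #include <string.h>

    typedef enum {
        VIR_ASYNC_JOB_NONE = 0,
        VIR_ASYNC_JOB_MIGRATION_OUT,
        VIR_ASYNC_JOB_MIGRATION_IN,
        VIR_ASYNC_JOB_LAST
    } virDomainAsyncJobDemo;

    static const char *names[VIR_ASYNC_JOB_LAST] = {
        "none", "migration out", "migration in",
    };

    static const char *toString(virDomainAsyncJobDemo job)
    {
        /* enum value -> stable string used in status XML and logs */
        return (job >= 0 && job < VIR_ASYNC_JOB_LAST) ? names[job] : NULL;
    }

    static int fromString(const char *name)
    {
        /* string -> enum value; -1 for an unknown name */
        int i;
        for (i = 0; i < VIR_ASYNC_JOB_LAST; i++)
            if (strcmp(names[i], name) == 0)
                return i;
        return -1;
    }

    int main(void)
    {
        printf("%s\n", toString(VIR_ASYNC_JOB_MIGRATION_OUT)); /* migration out */
        printf("%d\n", fromString("migration in"));            /* 2 */
        return 0;
    }
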
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 257ef067e4..b9d1107580 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -6,6 +6,58 @@
 #pragma once
 
 #include "internal.h"
+#include "virenum.h"
+
+/* Only 1 job is allowed at any time
+ * A job includes *all* monitor commands, even those just querying
+ * information, not merely actions */
+typedef enum {
+    VIR_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
+    VIR_JOB_QUERY,         /* Doesn't change any state */
+    VIR_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
+    VIR_JOB_SUSPEND,       /* Suspends (stops vCPUs) the domain */
+    VIR_JOB_MODIFY,        /* May change state */
+    VIR_JOB_ABORT,         /* Abort current async job */
+    VIR_JOB_MIGRATION_OP,  /* Operation influencing outgoing migration */
+
+    /* The following two items must always be the last items before JOB_LAST */
+    VIR_JOB_ASYNC,         /* Asynchronous job */
+    VIR_JOB_ASYNC_NESTED,  /* Normal job within an async job */
+
+    VIR_JOB_LAST
+} virDomainJob;
+VIR_ENUM_DECL(virDomainJob);
+
+
+/* Currently only QEMU driver uses agent jobs */
+typedef enum {
+    VIR_AGENT_JOB_NONE = 0,    /* No agent job. */
+    VIR_AGENT_JOB_QUERY,       /* Does not change state of domain */
+    VIR_AGENT_JOB_MODIFY,      /* May change state of domain */
+
+    VIR_AGENT_JOB_LAST
+} virDomainAgentJob;
+VIR_ENUM_DECL(virDomainAgentJob);
+
+
+/* Async job consists of a series of jobs that may change state. Independent
+ * jobs that do not change state (and possibly others if explicitly allowed by
+ * current async job) are allowed to be run even if async job is active.
+ * Currently supported by QEMU only. */
+typedef enum {
+    VIR_ASYNC_JOB_NONE = 0,
+    VIR_ASYNC_JOB_MIGRATION_OUT,
+    VIR_ASYNC_JOB_MIGRATION_IN,
+    VIR_ASYNC_JOB_SAVE,
+    VIR_ASYNC_JOB_DUMP,
+    VIR_ASYNC_JOB_SNAPSHOT,
+    VIR_ASYNC_JOB_START,
+    VIR_ASYNC_JOB_BACKUP,
+
+    VIR_ASYNC_JOB_LAST
+} virDomainAsyncJob;
+VIR_ENUM_DECL(virDomainAsyncJob);
+
 
 typedef enum {
     VIR_DOMAIN_JOB_STATUS_NONE = 0,
diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build
index ec11ec0cd8..7532f30ee2 100644
--- a/src/hypervisor/meson.build
+++ b/src/hypervisor/meson.build
@@ -19,6 +19,7 @@ hypervisor_lib = static_library(
   ],
   include_directories: [
     conf_inc_dir,
+    util_inc_dir,
   ],
 )
 
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 03697d81a8..8a3e5f7f7c 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1577,10 +1577,15 @@ virDomainDriverSetupPersistentDefBlkioParams;
 
 
 # hypervisor/domain_job.h
+virDomainAgentJobTypeToString;
+virDomainAsyncJobTypeFromString;
+virDomainAsyncJobTypeToString;
 virDomainJobDataCopy;
 virDomainJobDataFree;
 virDomainJobDataInit;
 virDomainJobStatusToType;
+virDomainJobTypeFromString;
+virDomainJobTypeToString;
 
 
 # hypervisor/virclosecallbacks.h
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index d33e3811d1..2501f6b848 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -38,6 +38,7 @@
 #include "xen_common.h"
 #include "driver.h"
 #include "domain_validate.h"
+#include "domain_job.h"
 
 #define VIR_FROM_THIS VIR_FROM_LIBXL
 
diff --git a/src/qemu/MIGRATION.txt b/src/qemu/MIGRATION.txt
index e861fd001e..b75fe62788 100644
--- a/src/qemu/MIGRATION.txt
+++ b/src/qemu/MIGRATION.txt
@@ -73,14 +73,14 @@ The sequence of calling qemuMigrationJob* helper methods is as follows:
 
 - The first API of a migration protocol (Prepare or Perform/Begin depending
   on migration type and version) has to start migration job and keep it active:
 
-      qemuMigrationJobStart(driver, vm, QEMU_JOB_MIGRATION_{IN,OUT});
+      qemuMigrationJobStart(driver, vm, VIR_JOB_MIGRATION_{IN,OUT});
       qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*);
       ...do work...
       qemuMigrationJobContinue(vm);
 
 - All consequent phases except for the last one have to keep the job active:
 
-      if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT}))
+      if (!qemuMigrationJobIsActive(vm, VIR_JOB_MIGRATION_{IN,OUT}))
           return;
       qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*);
       ...do work...
@@ -88,7 +88,7 @@ The sequence of calling qemuMigrationJob* helper methods is as follows:
 
 - The last migration phase finally finishes the migration job:
 
-      if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT}))
+      if (!qemuMigrationJobIsActive(vm, VIR_JOB_MIGRATION_{IN,OUT}))
           return;
       qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*);
       ...do work...
diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt
index 30cf3ce210..b5f54f203c 100644
--- a/src/qemu/THREADS.txt
+++ b/src/qemu/THREADS.txt
@@ -186,7 +186,7 @@ To acquire the QEMU monitor lock as part of an asynchronous job
 
     These functions are for use inside an asynchronous job; the caller
     must check for a return of -1 (VM not running, so nothing to exit).
-    Helper functions may also call this with QEMU_ASYNC_JOB_NONE when
+    Helper functions may also call this with VIR_ASYNC_JOB_NONE when
     used from a sync job (such as when first starting a domain).
 
 
@@ -220,7 +220,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
+     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
 
      ...do work...
 
@@ -236,7 +236,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
+     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
 
      ...do prep work...
 
@@ -259,7 +259,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAgentJob(obj, QEMU_AGENT_JOB_TYPE);
+     qemuDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);
 
      ...do prep work...
 
@@ -283,13 +283,13 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
+     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
      qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
 
      ...do prep work...
 
      if (qemuDomainObjEnterMonitorAsync(driver, obj,
-                                        QEMU_ASYNC_JOB_TYPE) < 0) {
+                                        VIR_ASYNC_JOB_TYPE) < 0) {
          /* domain died in the meantime */
          goto error;
      }
@@ -298,7 +298,7 @@ Design patterns
 
      while (!finished) {
          if (qemuDomainObjEnterMonitorAsync(driver, obj,
-                                            QEMU_ASYNC_JOB_TYPE) < 0) {
+                                            VIR_ASYNC_JOB_TYPE) < 0) {
              /* domain died in the meantime */
              goto error;
          }
@@ -323,7 +323,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
+     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
 
      ...do prep work...
 
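
The THREADS.txt patterns above boil down to a per-domain "one job at a
time" lock. Below is a minimal standalone sketch of that acquire/release
discipline using plain pthreads and none of libvirt's types; the timeout,
job kinds and nesting of the real qemuDomainObjBeginJob() are omitted.

    /* Simplified model of qemuDomainObjBeginJob()/qemuDomainObjEndJob():
     * one active job per domain object, later callers block on a
     * condition variable until the job is released. */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        int active;              /* 0 == VIR_JOB_NONE, i.e. "free" */
    } demoJobObj;

    static void beginJob(demoJobObj *obj, int job)
    {
        pthread_mutex_lock(&obj->lock);
        while (obj->active != 0)          /* somebody else owns the job */
            pthread_cond_wait(&obj->cond, &obj->lock);
        obj->active = job;
        pthread_mutex_unlock(&obj->lock);
    }

    static void endJob(demoJobObj *obj)
    {
        pthread_mutex_lock(&obj->lock);
        obj->active = 0;
        pthread_cond_signal(&obj->cond);  /* wake up one waiter */
        pthread_mutex_unlock(&obj->lock);
    }

    int main(void)
    {
        demoJobObj obj = { PTHREAD_MUTEX_INITIALIZER,
                           PTHREAD_COND_INITIALIZER, 0 };

        beginJob(&obj, 4 /* VIR_JOB_MODIFY */);
        printf("job acquired\n");
        /* ...monitor work would happen here... */
        endJob(&obj);
        printf("job released\n");
        return 0;
    }
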
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index f31b840617..5d24155628 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -466,10 +466,10 @@ qemuBackupDiskPrepareOneStorage(virDomainObj *vm,
 
         if (qemuBlockStorageSourceCreate(vm, dd->store, dd->backingStore, NULL,
                                          dd->crdata->srcdata[0],
-                                         QEMU_ASYNC_JOB_BACKUP) < 0)
+                                         VIR_ASYNC_JOB_BACKUP) < 0)
             return -1;
     } else {
-        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0)
+        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, VIR_ASYNC_JOB_BACKUP) < 0)
             return -1;
 
         rc = qemuBlockStorageSourceAttachApply(priv->mon, dd->crdata->srcdata[0]);
@@ -622,7 +622,7 @@ qemuBackupJobTerminate(virDomainObj *vm,
 
     g_clear_pointer(&priv->backup, virDomainBackupDefFree);
 
-    if (priv->job.asyncJob == QEMU_ASYNC_JOB_BACKUP)
+    if (priv->job.asyncJob == VIR_ASYNC_JOB_BACKUP)
         qemuDomainObjEndAsyncJob(vm);
 }
 
@@ -791,13 +791,13 @@ qemuBackupBegin(virDomainObj *vm,
      * infrastructure for async jobs. We'll allow standard modify-type jobs
      * as the interlocking of conflicting operations is handled on the block
      * job level */
-    if (qemuDomainObjBeginAsyncJob(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP,
+    if (qemuDomainObjBeginAsyncJob(priv->driver, vm, VIR_ASYNC_JOB_BACKUP,
                                    VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0)
         return -1;
 
     qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
-                                      JOB_MASK(QEMU_JOB_SUSPEND) |
-                                      JOB_MASK(QEMU_JOB_MODIFY)));
+                                      JOB_MASK(VIR_JOB_SUSPEND) |
+                                      JOB_MASK(VIR_JOB_MODIFY)));
     qemuDomainJobSetStatsType(priv->job.current,
                               QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP);
 
@@ -856,7 +856,7 @@ qemuBackupBegin(virDomainObj *vm,
         goto endjob;
     }
 
-    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_BACKUP)))
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_BACKUP)))
         goto endjob;
 
     if ((ndd = qemuBackupDiskPrepareData(vm, def, blockNamedNodeData, actions,
@@ -874,7 +874,7 @@ qemuBackupBegin(virDomainObj *vm,
 
     priv->backup = g_steal_pointer(&def);
 
-    if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0)
+    if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, VIR_ASYNC_JOB_BACKUP) < 0)
         goto endjob;
 
     if (pull) {
@@ -910,7 +910,7 @@ qemuBackupBegin(virDomainObj *vm,
     }
 
     if (pull) {
-        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0)
+        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, VIR_ASYNC_JOB_BACKUP) < 0)
             goto endjob;
         /* note that if the export fails we've already created the checkpoint
          * and we will not delete it */
@@ -918,7 +918,7 @@ qemuBackupBegin(virDomainObj *vm,
         qemuDomainObjExitMonitor(vm);
 
         if (rc < 0) {
-            qemuBackupJobCancelBlockjobs(vm, priv->backup, false, QEMU_ASYNC_JOB_BACKUP);
+            qemuBackupJobCancelBlockjobs(vm, priv->backup, false, VIR_ASYNC_JOB_BACKUP);
             goto endjob;
         }
     }
@@ -932,7 +932,7 @@ qemuBackupBegin(virDomainObj *vm,
         qemuCheckpointRollbackMetadata(vm, chk);
 
     if (!job_started && (nbd_running || tlsAlias || tlsSecretAlias) &&
-        qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) == 0) {
+        qemuDomainObjEnterMonitorAsync(priv->driver, vm, VIR_ASYNC_JOB_BACKUP) == 0) {
         if (nbd_running)
             ignore_value(qemuMonitorNBDServerStop(priv->mon));
         if (tlsAlias)
diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index f70b6d3e63..3d961c8b39 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -308,7 +308,7 @@ qemuBlockDiskDetectNodes(virDomainDiskDef *disk,
 int
 qemuBlockNodeNamesDetect(virQEMUDriver *driver,
                          virDomainObj *vm,
-                         qemuDomainAsyncJob asyncJob)
+                         virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     g_autoptr(GHashTable) disktable = NULL;
@@ -2120,7 +2120,7 @@ qemuBlockStorageSourceChainDetach(qemuMonitor *mon,
 int
 qemuBlockStorageSourceDetachOneBlockdev(virQEMUDriver *driver,
                                         virDomainObj *vm,
-                                        qemuDomainAsyncJob asyncJob,
+                                        virDomainAsyncJob asyncJob,
                                         virStorageSource *src)
 {
     int ret;
@@ -2694,7 +2694,7 @@ qemuBlockStorageSourceCreateGeneric(virDomainObj *vm,
                                     virStorageSource *src,
                                     virStorageSource *chain,
                                     bool storageCreate,
-                                    qemuDomainAsyncJob asyncJob)
+                                    virDomainAsyncJob asyncJob)
 {
     g_autoptr(virJSONValue) props = createProps;
     qemuDomainObjPrivate *priv = vm->privateData;
@@ -2749,7 +2749,7 @@ static int
 qemuBlockStorageSourceCreateStorage(virDomainObj *vm,
                                     virStorageSource *src,
                                     virStorageSource *chain,
-                                    qemuDomainAsyncJob asyncJob)
+                                    virDomainAsyncJob asyncJob)
 {
     int actualType = virStorageSourceGetActualType(src);
     g_autoptr(virJSONValue) createstorageprops = NULL;
@@ -2786,7 +2786,7 @@ qemuBlockStorageSourceCreateFormat(virDomainObj *vm,
                                    virStorageSource *src,
                                    virStorageSource *backingStore,
                                    virStorageSource *chain,
-                                   qemuDomainAsyncJob asyncJob)
+                                   virDomainAsyncJob asyncJob)
 {
     g_autoptr(virJSONValue) createformatprops = NULL;
     int ret;
@@ -2836,7 +2836,7 @@ qemuBlockStorageSourceCreate(virDomainObj *vm,
                              virStorageSource *backingStore,
                              virStorageSource *chain,
                              qemuBlockStorageSourceAttachData *data,
-                             qemuDomainAsyncJob asyncJob)
+                             virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     int ret = -1;
@@ -3020,7 +3020,7 @@ qemuBlockNamedNodeDataGetBitmapByName(GHashTable *blockNamedNodeData,
 
 GHashTable *
 qemuBlockGetNamedNodeData(virDomainObj *vm,
-                          qemuDomainAsyncJob asyncJob)
+                          virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virQEMUDriver *driver = priv->driver;
@@ -3372,7 +3372,7 @@ qemuBlockReopenFormatMon(qemuMonitor *mon,
 static int
 qemuBlockReopenFormat(virDomainObj *vm,
                       virStorageSource *src,
-                      qemuDomainAsyncJob asyncJob)
+                      virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virQEMUDriver *driver = priv->driver;
@@ -3413,7 +3413,7 @@ qemuBlockReopenFormat(virDomainObj *vm,
 int
 qemuBlockReopenReadWrite(virDomainObj *vm,
                          virStorageSource *src,
-                         qemuDomainAsyncJob asyncJob)
+                         virDomainAsyncJob asyncJob)
 {
     if (!src->readonly)
         return 0;
@@ -3442,7 +3442,7 @@ qemuBlockReopenReadWrite(virDomainObj *vm,
 int
 qemuBlockReopenReadOnly(virDomainObj *vm,
                         virStorageSource *src,
-                        qemuDomainAsyncJob asyncJob)
+                        virDomainAsyncJob asyncJob)
 {
     if (src->readonly)
         return 0;
diff --git a/src/qemu/qemu_block.h b/src/qemu/qemu_block.h
index 184a549d5c..8eafb8482a 100644
--- a/src/qemu/qemu_block.h
+++ b/src/qemu/qemu_block.h
@@ -47,7 +47,7 @@ qemuBlockNodeNameGetBackingChain(virJSONValue *namednodesdata,
 int
 qemuBlockNodeNamesDetect(virQEMUDriver *driver,
                          virDomainObj *vm,
-                         qemuDomainAsyncJob asyncJob);
+                         virDomainAsyncJob asyncJob);
 
 GHashTable *
 qemuBlockGetNodeData(virJSONValue *data);
@@ -143,7 +143,7 @@ qemuBlockStorageSourceAttachRollback(qemuMonitor *mon,
 int
 qemuBlockStorageSourceDetachOneBlockdev(virQEMUDriver *driver,
                                         virDomainObj *vm,
-                                        qemuDomainAsyncJob asyncJob,
+                                        virDomainAsyncJob asyncJob,
                                         virStorageSource *src);
 
 struct _qemuBlockStorageSourceChainData {
@@ -213,7 +213,7 @@ qemuBlockStorageSourceCreate(virDomainObj *vm,
                              virStorageSource *backingStore,
                              virStorageSource *chain,
                              qemuBlockStorageSourceAttachData *data,
-                             qemuDomainAsyncJob asyncJob);
+                             virDomainAsyncJob asyncJob);
 
 int
 qemuBlockStorageSourceCreateDetectSize(GHashTable *blockNamedNodeData,
@@ -233,7 +233,7 @@ qemuBlockNamedNodeDataGetBitmapByName(GHashTable *blockNamedNodeData,
 
 GHashTable *
 qemuBlockGetNamedNodeData(virDomainObj *vm,
-                          qemuDomainAsyncJob asyncJob);
+                          virDomainAsyncJob asyncJob);
 
 int
 qemuBlockGetBitmapMergeActions(virStorageSource *topsrc,
@@ -272,11 +272,11 @@ qemuBlockReopenFormatMon(qemuMonitor *mon,
 int
 qemuBlockReopenReadWrite(virDomainObj *vm,
                          virStorageSource *src,
-                         qemuDomainAsyncJob asyncJob);
+                         virDomainAsyncJob asyncJob);
 int
 qemuBlockReopenReadOnly(virDomainObj *vm,
                         virStorageSource *src,
-                        qemuDomainAsyncJob asyncJob);
+                        virDomainAsyncJob asyncJob);
 
 bool
 qemuBlockStorageSourceNeedsStorageSliceLayer(const virStorageSource *src);
diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 87f8ae7b52..8c2205118f 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -565,7 +565,7 @@ qemuBlockJobRefreshJobs(virQEMUDriver *driver,
         job->reconnected = true;
 
         if (job->newstate != -1)
-            qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE);
+            qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
         /* 'job' may be invalid after this update */
     }
 
@@ -839,7 +839,7 @@ qemuBlockJobEventProcessLegacy(virQEMUDriver *driver,
 static void
 qemuBlockJobEventProcessConcludedRemoveChain(virQEMUDriver *driver,
                                              virDomainObj *vm,
-                                             qemuDomainAsyncJob asyncJob,
+                                             virDomainAsyncJob asyncJob,
                                              virStorageSource *chain)
 {
     g_autoptr(qemuBlockStorageSourceChainData) data = NULL;
@@ -942,7 +942,7 @@ qemuBlockJobClearConfigChain(virDomainObj *vm,
 static int
 qemuBlockJobProcessEventCompletedPullBitmaps(virDomainObj *vm,
                                              qemuBlockJobData *job,
-                                             qemuDomainAsyncJob asyncJob)
+                                             virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     g_autoptr(GHashTable) blockNamedNodeData = NULL;
@@ -992,7 +992,7 @@ static void
 qemuBlockJobProcessEventCompletedPull(virQEMUDriver *driver,
                                       virDomainObj *vm,
                                       qemuBlockJobData *job,
-                                      qemuDomainAsyncJob asyncJob)
+                                      virDomainAsyncJob asyncJob)
 {
     virStorageSource *base = NULL;
     virStorageSource *baseparent = NULL;
@@ -1106,7 +1106,7 @@ qemuBlockJobDeleteImages(virQEMUDriver *driver,
 static int
 qemuBlockJobProcessEventCompletedCommitBitmaps(virDomainObj *vm,
                                                qemuBlockJobData *job,
-                                               qemuDomainAsyncJob asyncJob)
+                                               virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     g_autoptr(GHashTable) blockNamedNodeData = NULL;
@@ -1168,7 +1168,7 @@ static void
 qemuBlockJobProcessEventCompletedCommit(virQEMUDriver *driver,
                                         virDomainObj *vm,
                                         qemuBlockJobData *job,
-                                        qemuDomainAsyncJob asyncJob)
+                                        virDomainAsyncJob asyncJob)
 {
     virStorageSource *baseparent = NULL;
     virDomainDiskDef *cfgdisk = NULL;
@@ -1258,7 +1258,7 @@ static void
 qemuBlockJobProcessEventCompletedActiveCommit(virQEMUDriver *driver,
                                               virDomainObj *vm,
                                               qemuBlockJobData *job,
-                                              qemuDomainAsyncJob asyncJob)
+                                              virDomainAsyncJob asyncJob)
 {
     virStorageSource *baseparent = NULL;
     virDomainDiskDef *cfgdisk = NULL;
@@ -1329,7 +1329,7 @@ qemuBlockJobProcessEventCompletedActiveCommit(virQEMUDriver *driver,
 static int
 qemuBlockJobProcessEventCompletedCopyBitmaps(virDomainObj *vm,
                                              qemuBlockJobData *job,
-                                             qemuDomainAsyncJob asyncJob)
+                                             virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     g_autoptr(GHashTable) blockNamedNodeData = NULL;
@@ -1366,7 +1366,7 @@ static void
 qemuBlockJobProcessEventConcludedCopyPivot(virQEMUDriver *driver,
                                            virDomainObj *vm,
                                            qemuBlockJobData *job,
-                                           qemuDomainAsyncJob asyncJob)
+                                           virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     VIR_DEBUG("copy job '%s' on VM '%s' pivoted", job->name, vm->def->name);
@@ -1402,7 +1402,7 @@ static void
 qemuBlockJobProcessEventConcludedCopyAbort(virQEMUDriver *driver,
                                            virDomainObj *vm,
                                            qemuBlockJobData *job,
-                                           qemuDomainAsyncJob asyncJob)
+                                           virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
 
@@ -1438,7 +1438,7 @@ static void
 qemuBlockJobProcessEventFailedActiveCommit(virQEMUDriver *driver,
                                            virDomainObj *vm,
                                            qemuBlockJobData *job,
-                                           qemuDomainAsyncJob asyncJob)
+                                           virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virDomainDiskDef *disk = job->disk;
@@ -1470,7 +1470,7 @@ static void
 qemuBlockJobProcessEventConcludedCreate(virQEMUDriver *driver,
                                         virDomainObj *vm,
                                         qemuBlockJobData *job,
-                                        qemuDomainAsyncJob asyncJob)
+                                        virDomainAsyncJob asyncJob)
 {
     g_autoptr(qemuBlockStorageSourceAttachData) backend = NULL;
 
@@ -1511,7 +1511,7 @@ static void
 qemuBlockJobProcessEventConcludedBackup(virQEMUDriver *driver,
                                         virDomainObj *vm,
                                         qemuBlockJobData *job,
-                                        qemuDomainAsyncJob asyncJob,
+                                        virDomainAsyncJob asyncJob,
                                         qemuBlockjobState newstate,
                                         unsigned long long progressCurrent,
                                         unsigned long long progressTotal)
@@ -1547,7 +1547,7 @@ static void
 qemuBlockJobEventProcessConcludedTransition(qemuBlockJobData *job,
                                             virQEMUDriver *driver,
                                             virDomainObj *vm,
-                                            qemuDomainAsyncJob asyncJob,
+                                            virDomainAsyncJob asyncJob,
                                             unsigned long long progressCurrent,
                                             unsigned long long progressTotal)
 {
@@ -1607,7 +1607,7 @@ static void
 qemuBlockJobEventProcessConcluded(qemuBlockJobData *job,
                                   virQEMUDriver *driver,
                                   virDomainObj *vm,
-                                  qemuDomainAsyncJob asyncJob)
+                                  virDomainAsyncJob asyncJob)
 {
     qemuMonitorJobInfo **jobinfo = NULL;
     size_t njobinfo = 0;
@@ -1688,7 +1688,7 @@ static void
 qemuBlockJobEventProcess(virQEMUDriver *driver,
                          virDomainObj *vm,
                          qemuBlockJobData *job,
-                         qemuDomainAsyncJob asyncJob)
+                         virDomainAsyncJob asyncJob)
 
 {
     switch ((qemuBlockjobState) job->newstate) {
diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index 2a495dfe08..a933230335 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -192,7 +192,7 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
 
     actions = virJSONValueNewArray();
 
-    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_NONE)))
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         return -1;
 
     for (i = 0; i < chkdef->ndisks; i++) {
@@ -229,7 +229,7 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
             goto relabel;
 
         if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN) &&
-            qemuBlockReopenReadWrite(vm, src, QEMU_ASYNC_JOB_NONE) < 0)
+            qemuBlockReopenReadWrite(vm, src, VIR_ASYNC_JOB_NONE) < 0)
             goto relabel;
 
         relabelimages = g_slist_prepend(relabelimages, src);
@@ -244,7 +244,7 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
         virStorageSource *src = next->data;
 
         if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN))
-            ignore_value(qemuBlockReopenReadOnly(vm, src, QEMU_ASYNC_JOB_NONE));
+            ignore_value(qemuBlockReopenReadOnly(vm, src, VIR_ASYNC_JOB_NONE));
 
         ignore_value(qemuDomainStorageSourceAccessAllow(driver, vm, src,
                                                         true, false, false));
@@ -417,7 +417,7 @@ qemuCheckpointRedefineValidateBitmaps(virDomainObj *vm,
     if (virDomainObjCheckActive(vm) < 0)
         return -1;
 
-    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_NONE)))
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         return -1;
 
     for (i = 0; i < chkdef->ndisks; i++) {
@@ -607,7 +607,7 @@ qemuCheckpointCreateXML(virDomainPtr domain,
     /* Unlike snapshots, the RNG schema already ensured a sane filename. */
 
     /* We are going to modify the domain below. */
-    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+    if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         return NULL;
 
     if (redefine) {
@@ -658,13 +658,13 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm,
     size_t i;
     int ret = -1;
 
-    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+    if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         return -1;
 
     if (virDomainObjCheckActive(vm) < 0)
         goto endjob;
 
-    if (!(nodedataMerge = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_NONE)))
+    if (!(nodedataMerge = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         goto endjob;
 
     /* enumerate disks relevant for the checkpoint which are also present in the
@@ -741,7 +741,7 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm,
         goto endjob;
 
     /* now do a final refresh */
-    if (!(nodedataStats = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_NONE)))
+    if (!(nodedataStats = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         goto endjob;
 
     qemuDomainObjEnterMonitor(driver, vm);
@@ -852,7 +852,7 @@ qemuCheckpointDelete(virDomainObj *vm,
                   VIR_DOMAIN_CHECKPOINT_DELETE_METADATA_ONLY |
                   VIR_DOMAIN_CHECKPOINT_DELETE_CHILDREN_ONLY, -1);
 
-    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+    if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         return -1;
 
     if (!metadata_only) {
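
The qemu_backup.c hunk above widens the async job mask so that suspend and
modify jobs may run while the backup async job owns the domain. A
standalone sketch of that JOB_MASK() arithmetic, with the two macros
copied verbatim from qemu_domainjob.h:

    /* Demonstrates which sync jobs the backup async job admits.
     * JOB_MASK() and QEMU_JOB_DEFAULT_MASK are as in qemu_domainjob.h. */
    #include <stdio.h>

    typedef enum { VIR_JOB_NONE = 0, VIR_JOB_QUERY, VIR_JOB_DESTROY,
                   VIR_JOB_SUSPEND, VIR_JOB_MODIFY, VIR_JOB_ABORT } virDomainJob;

    #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
    #define QEMU_JOB_DEFAULT_MASK \
        (JOB_MASK(VIR_JOB_QUERY) | \
         JOB_MASK(VIR_JOB_DESTROY) | \
         JOB_MASK(VIR_JOB_ABORT))

    int main(void)
    {
        /* qemuBackupBegin() additionally allows suspend and modify jobs */
        unsigned int mask = QEMU_JOB_DEFAULT_MASK |
                            JOB_MASK(VIR_JOB_SUSPEND) |
                            JOB_MASK(VIR_JOB_MODIFY);

        printf("modify allowed during backup: %s\n",
               (mask & JOB_MASK(VIR_JOB_MODIFY)) ? "yes" : "no");  /* yes */
        printf("query allowed during backup: %s\n",
               (mask & JOB_MASK(VIR_JOB_QUERY)) ? "yes" : "no");   /* yes */
        return 0;
    }
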
" "Job %s owner %s (%llu)", - qemuDomainJobTypeToString(priv->job.active), + virDomainJobTypeToString(priv->job.active), priv->job.ownerAPI, priv->job.owner); } =20 @@ -5918,7 +5918,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj) if (!hasRefs) priv->mon =3D NULL; =20 - if (priv->job.active =3D=3D QEMU_JOB_ASYNC_NESTED) + if (priv->job.active =3D=3D VIR_JOB_ASYNC_NESTED) qemuDomainObjEndJob(obj); } =20 @@ -5926,7 +5926,7 @@ void qemuDomainObjEnterMonitor(virQEMUDriver *driver, virDomainObj *obj) { ignore_value(qemuDomainObjEnterMonitorInternal(driver, obj, - QEMU_ASYNC_JOB_NONE)); + VIR_ASYNC_JOB_NONE)); } =20 /* @@ -5935,7 +5935,7 @@ void qemuDomainObjEnterMonitor(virQEMUDriver *driver, * To be called immediately before any QEMU monitor API call. * Must have already either called qemuDomainObjBeginJob() * and checked that the VM is still active, with asyncJob of - * QEMU_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob, + * VIR_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob, * with the same asyncJob. * * Returns 0 if job was started, in which case this must be followed with @@ -5946,7 +5946,7 @@ void qemuDomainObjEnterMonitor(virQEMUDriver *driver, int qemuDomainObjEnterMonitorAsync(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { return qemuDomainObjEnterMonitorInternal(driver, obj, asyncJob); } @@ -7135,7 +7135,7 @@ qemuDomainRemoveInactiveLocked(virQEMUDriver *driver, * qemuDomainRemoveInactiveJob: * * Just like qemuDomainRemoveInactive but it tries to grab a - * QEMU_JOB_MODIFY first. Even though it doesn't succeed in + * VIR_JOB_MODIFY first. Even though it doesn't succeed in * grabbing the job the control carries with * qemuDomainRemoveInactive call. */ @@ -7145,7 +7145,7 @@ qemuDomainRemoveInactiveJob(virQEMUDriver *driver, { bool haveJob; =20 - haveJob =3D qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) >=3D 0; + haveJob =3D qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) >=3D 0; =20 qemuDomainRemoveInactive(driver, vm); =20 @@ -7166,7 +7166,7 @@ qemuDomainRemoveInactiveJobLocked(virQEMUDriver *driv= er, { bool haveJob; =20 - haveJob =3D qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) >=3D 0; + haveJob =3D qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) >=3D 0; =20 qemuDomainRemoveInactiveLocked(driver, vm); =20 @@ -10071,7 +10071,7 @@ qemuDomainVcpuPersistOrder(virDomainDef *def) int qemuDomainCheckMonitor(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int ret; diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index edafb585b3..a5d6705571 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -500,7 +500,7 @@ void qemuDomainObjExitMonitor(virDomainObj *obj) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); int qemuDomainObjEnterMonitorAsync(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; =20 =20 @@ -892,7 +892,7 @@ void qemuDomainVcpuPersistOrder(virDomainDef *def) =20 int qemuDomainCheckMonitor(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 bool qemuDomainSupportsVideoVga(const virDomainVideoDef *video, virQEMUCaps *qemuCaps); diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index cf1e093e22..71876fe6a3 100644 --- a/src/qemu/qemu_domainjob.c +++ 
b/src/qemu/qemu_domainjob.c @@ -31,38 +31,6 @@ =20 VIR_LOG_INIT("qemu.qemu_domainjob"); =20 -VIR_ENUM_IMPL(qemuDomainJob, - QEMU_JOB_LAST, - "none", - "query", - "destroy", - "suspend", - "modify", - "abort", - "migration operation", - "none", /* async job is never stored in job.active */ - "async nested", -); - -VIR_ENUM_IMPL(qemuDomainAgentJob, - QEMU_AGENT_JOB_LAST, - "none", - "query", - "modify", -); - -VIR_ENUM_IMPL(qemuDomainAsyncJob, - QEMU_ASYNC_JOB_LAST, - "none", - "migration out", - "migration in", - "save", - "dump", - "snapshot", - "start", - "backup", -); - static void * qemuJobDataAllocPrivateData(void) { @@ -106,22 +74,22 @@ qemuDomainJobSetStatsType(virDomainJobData *jobData, =20 =20 const char * -qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, +virDomainAsyncJobPhaseToString(virDomainAsyncJob job, int phase G_GNUC_UNUSED) { switch (job) { - case QEMU_ASYNC_JOB_MIGRATION_OUT: - case QEMU_ASYNC_JOB_MIGRATION_IN: + case VIR_ASYNC_JOB_MIGRATION_OUT: + case VIR_ASYNC_JOB_MIGRATION_IN: return qemuMigrationJobPhaseTypeToString(phase); =20 - case QEMU_ASYNC_JOB_SAVE: - case QEMU_ASYNC_JOB_DUMP: - case QEMU_ASYNC_JOB_SNAPSHOT: - case QEMU_ASYNC_JOB_START: - case QEMU_ASYNC_JOB_NONE: - case QEMU_ASYNC_JOB_BACKUP: + case VIR_ASYNC_JOB_SAVE: + case VIR_ASYNC_JOB_DUMP: + case VIR_ASYNC_JOB_SNAPSHOT: + case VIR_ASYNC_JOB_START: + case VIR_ASYNC_JOB_NONE: + case VIR_ASYNC_JOB_BACKUP: G_GNUC_FALLTHROUGH; - case QEMU_ASYNC_JOB_LAST: + case VIR_ASYNC_JOB_LAST: break; } =20 @@ -129,25 +97,25 @@ qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, } =20 int -qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, +virDomainAsyncJobPhaseFromString(virDomainAsyncJob job, const char *phase) { if (!phase) return 0; =20 switch (job) { - case QEMU_ASYNC_JOB_MIGRATION_OUT: - case QEMU_ASYNC_JOB_MIGRATION_IN: + case VIR_ASYNC_JOB_MIGRATION_OUT: + case VIR_ASYNC_JOB_MIGRATION_IN: return qemuMigrationJobPhaseTypeFromString(phase); =20 - case QEMU_ASYNC_JOB_SAVE: - case QEMU_ASYNC_JOB_DUMP: - case QEMU_ASYNC_JOB_SNAPSHOT: - case QEMU_ASYNC_JOB_START: - case QEMU_ASYNC_JOB_NONE: - case QEMU_ASYNC_JOB_BACKUP: + case VIR_ASYNC_JOB_SAVE: + case VIR_ASYNC_JOB_DUMP: + case VIR_ASYNC_JOB_SNAPSHOT: + case VIR_ASYNC_JOB_START: + case VIR_ASYNC_JOB_NONE: + case VIR_ASYNC_JOB_BACKUP: G_GNUC_FALLTHROUGH; - case QEMU_ASYNC_JOB_LAST: + case VIR_ASYNC_JOB_LAST: break; } =20 @@ -211,7 +179,7 @@ qemuDomainObjInitJob(qemuDomainJobObj *job, static void qemuDomainObjResetJob(qemuDomainJobObj *job) { - job->active =3D QEMU_JOB_NONE; + job->active =3D VIR_JOB_NONE; job->owner =3D 0; g_clear_pointer(&job->ownerAPI, g_free); job->started =3D 0; @@ -221,7 +189,7 @@ qemuDomainObjResetJob(qemuDomainJobObj *job) static void qemuDomainObjResetAgentJob(qemuDomainJobObj *job) { - job->agentActive =3D QEMU_AGENT_JOB_NONE; + job->agentActive =3D VIR_AGENT_JOB_NONE; job->agentOwner =3D 0; g_clear_pointer(&job->agentOwnerAPI, g_free); job->agentStarted =3D 0; @@ -231,7 +199,7 @@ qemuDomainObjResetAgentJob(qemuDomainJobObj *job) static void qemuDomainObjResetAsyncJob(qemuDomainJobObj *job) { - job->asyncJob =3D QEMU_ASYNC_JOB_NONE; + job->asyncJob =3D VIR_ASYNC_JOB_NONE; job->asyncOwner =3D 0; g_clear_pointer(&job->asyncOwnerAPI, g_free); job->asyncStarted =3D 0; @@ -286,7 +254,7 @@ qemuDomainObjClearJob(qemuDomainJobObj *job) } =20 bool -qemuDomainTrackJob(qemuDomainJob job) +qemuDomainTrackJob(virDomainJob job) { return (QEMU_DOMAIN_TRACK_JOBS & JOB_MASK(job)) !=3D 0; } @@ -713,14 +681,14 @@ 
qemuDomainObjSetJobPhase(virDomainObj *obj, return; =20 VIR_DEBUG("Setting '%s' phase to '%s'", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase)); + virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobPhaseToString(priv->job.asyncJob, phase)); =20 if (priv->job.asyncOwner =3D=3D 0) { priv->job.asyncOwnerAPI =3D g_strdup(virThreadJobGet()); } else if (me !=3D priv->job.asyncOwner) { VIR_WARN("'%s' async job is owned by thread %llu", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), priv->job.asyncOwner); } =20 @@ -738,7 +706,7 @@ qemuDomainObjSetAsyncJobMask(virDomainObj *obj, if (!priv->job.asyncJob) return; =20 - priv->job.mask =3D allowedJobs | JOB_MASK(QEMU_JOB_DESTROY); + priv->job.mask =3D allowedJobs | JOB_MASK(VIR_JOB_DESTROY); } =20 void @@ -746,7 +714,7 @@ qemuDomainObjDiscardAsyncJob(virDomainObj *obj) { qemuDomainObjPrivate *priv =3D obj->privateData; =20 - if (priv->job.active =3D=3D QEMU_JOB_ASYNC_NESTED) + if (priv->job.active =3D=3D VIR_JOB_ASYNC_NESTED) qemuDomainObjResetJob(&priv->job); qemuDomainObjResetAsyncJob(&priv->job); qemuDomainSaveStatus(obj); @@ -758,33 +726,33 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj) qemuDomainObjPrivate *priv =3D obj->privateData; =20 VIR_DEBUG("Releasing ownership of '%s' async job", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainAsyncJobTypeToString(priv->job.asyncJob)); =20 if (priv->job.asyncOwner !=3D virThreadSelfID()) { VIR_WARN("'%s' async job is owned by thread %llu", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), priv->job.asyncOwner); } priv->job.asyncOwner =3D 0; } =20 static bool -qemuDomainNestedJobAllowed(qemuDomainJobObj *jobs, qemuDomainJob newJob) +qemuDomainNestedJobAllowed(qemuDomainJobObj *jobs, virDomainJob newJob) { return !jobs->asyncJob || - newJob =3D=3D QEMU_JOB_NONE || + newJob =3D=3D VIR_JOB_NONE || (jobs->mask & JOB_MASK(newJob)) !=3D 0; } =20 static bool qemuDomainObjCanSetJob(qemuDomainJobObj *job, - qemuDomainJob newJob, - qemuDomainAgentJob newAgentJob) + virDomainJob newJob, + virDomainAgentJob newAgentJob) { - return ((newJob =3D=3D QEMU_JOB_NONE || - job->active =3D=3D QEMU_JOB_NONE) && - (newAgentJob =3D=3D QEMU_AGENT_JOB_NONE || - job->agentActive =3D=3D QEMU_AGENT_JOB_NONE)); + return ((newJob =3D=3D VIR_JOB_NONE || + job->active =3D=3D VIR_JOB_NONE) && + (newAgentJob =3D=3D VIR_AGENT_JOB_NONE || + job->agentActive =3D=3D VIR_AGENT_JOB_NONE)); } =20 /* Give up waiting for mutex after 30 seconds */ @@ -794,8 +762,8 @@ qemuDomainObjCanSetJob(qemuDomainJobObj *job, * qemuDomainObjBeginJobInternal: * @driver: qemu driver * @obj: domain object - * @job: qemuDomainJob to start - * @asyncJob: qemuDomainAsyncJob to start + * @job: virDomainJob to start + * @asyncJob: virDomainAsyncJob to start * @nowait: don't wait trying to acquire @job * * Acquires job for a domain object which must be locked before @@ -815,16 +783,16 @@ qemuDomainObjCanSetJob(qemuDomainJobObj *job, static int ATTRIBUTE_NONNULL(1) qemuDomainObjBeginJobInternal(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainJob job, - qemuDomainAgentJob agentJob, - qemuDomainAsyncJob asyncJob, + virDomainJob job, + virDomainAgentJob agentJob, + virDomainAsyncJob asyncJob, bool nowait) { qemuDomainObjPrivate *priv =3D obj->privateData; unsigned long long now; unsigned long long then; - bool nested =3D job =3D=3D 
QEMU_JOB_ASYNC_NESTED; - bool async =3D job =3D=3D QEMU_JOB_ASYNC; + bool nested =3D job =3D=3D VIR_JOB_ASYNC_NESTED; + bool async =3D job =3D=3D VIR_JOB_ASYNC; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); const char *blocker =3D NULL; const char *agentBlocker =3D NULL; @@ -837,13 +805,13 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, VIR_DEBUG("Starting job: API=3D%s job=3D%s agentJob=3D%s asyncJob=3D%s= " "(vm=3D%p name=3D%s, current job=3D%s agentJob=3D%s async=3D= %s)", NULLSTR(currentAPI), - qemuDomainJobTypeToString(job), - qemuDomainAgentJobTypeToString(agentJob), - qemuDomainAsyncJobTypeToString(asyncJob), + virDomainJobTypeToString(job), + virDomainAgentJobTypeToString(agentJob), + virDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name, - qemuDomainJobTypeToString(priv->job.active), - qemuDomainAgentJobTypeToString(priv->job.agentActive), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(priv->job.active), + virDomainAgentJobTypeToString(priv->job.agentActive), + virDomainAsyncJobTypeToString(priv->job.asyncJob)); =20 if (virTimeMillisNow(&now) < 0) return -1; @@ -852,7 +820,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, then =3D now + QEMU_JOB_WAIT_TIME; =20 retry: - if ((!async && job !=3D QEMU_JOB_DESTROY) && + if ((!async && job !=3D VIR_JOB_DESTROY) && cfg->maxQueuedJobs && priv->job.jobsQueued > cfg->maxQueuedJobs) { goto error; @@ -886,10 +854,10 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, if (job) { qemuDomainObjResetJob(&priv->job); =20 - if (job !=3D QEMU_JOB_ASYNC) { + if (job !=3D VIR_JOB_ASYNC) { VIR_DEBUG("Started job: %s (async=3D%s vm=3D%p name=3D%s)", - qemuDomainJobTypeToString(job), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainJobTypeToString(job), + virDomainAsyncJobTypeToString(priv->job.asyncJob), obj, obj->def->name); priv->job.active =3D job; priv->job.owner =3D virThreadSelfID(); @@ -897,7 +865,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, priv->job.started =3D now; } else { VIR_DEBUG("Started async job: %s (vm=3D%p name=3D%s)", - qemuDomainAsyncJobTypeToString(asyncJob), + virDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); priv->job.current =3D virDomainJobDataInit(&qemuJobDataPrivate= DataCallbacks); @@ -914,10 +882,10 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, qemuDomainObjResetAgentJob(&priv->job); =20 VIR_DEBUG("Started agent job: %s (vm=3D%p name=3D%s job=3D%s async= =3D%s)", - qemuDomainAgentJobTypeToString(agentJob), + virDomainAgentJobTypeToString(agentJob), obj, obj->def->name, - qemuDomainJobTypeToString(priv->job.active), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(priv->job.active), + virDomainAsyncJobTypeToString(priv->job.asyncJob)); priv->job.agentActive =3D agentJob; priv->job.agentOwner =3D virThreadSelfID(); priv->job.agentOwnerAPI =3D g_strdup(virThreadJobGet()); @@ -942,14 +910,14 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, "current job is (%s, %s, %s) " "owned by (%llu %s, %llu %s, %llu %s (flags=3D0x%lx)) " "for (%llus, %llus, %llus)", - qemuDomainJobTypeToString(job), - qemuDomainAgentJobTypeToString(agentJob), - qemuDomainAsyncJobTypeToString(asyncJob), + virDomainJobTypeToString(job), + virDomainAgentJobTypeToString(agentJob), + virDomainAsyncJobTypeToString(asyncJob), NULLSTR(currentAPI), obj->def->name, - qemuDomainJobTypeToString(priv->job.active), - 
qemuDomainAgentJobTypeToString(priv->job.agentActive), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainJobTypeToString(priv->job.active), + virDomainAgentJobTypeToString(priv->job.agentActive), + virDomainAsyncJobTypeToString(priv->job.asyncJob), priv->job.owner, NULLSTR(priv->job.ownerAPI), priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI), priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI), @@ -1032,11 +1000,11 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver, */ int qemuDomainObjBeginJob(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainJob job) + virDomainJob job) { if (qemuDomainObjBeginJobInternal(driver, obj, job, - QEMU_AGENT_JOB_NONE, - QEMU_ASYNC_JOB_NONE, false) < 0) + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false) < 0) return -1; return 0; } @@ -1051,23 +1019,23 @@ int qemuDomainObjBeginJob(virQEMUDriver *driver, int qemuDomainObjBeginAgentJob(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainAgentJob agentJob) + virDomainAgentJob agentJob) { - return qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_NONE, + return qemuDomainObjBeginJobInternal(driver, obj, VIR_JOB_NONE, agentJob, - QEMU_ASYNC_JOB_NONE, false); + VIR_ASYNC_JOB_NONE, false); } =20 int qemuDomainObjBeginAsyncJob(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainJobOperation operation, unsigned long apiFlags) { qemuDomainObjPrivate *priv; =20 - if (qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_ASYNC, - QEMU_AGENT_JOB_NONE, + if (qemuDomainObjBeginJobInternal(driver, obj, VIR_JOB_ASYNC, + VIR_AGENT_JOB_NONE, asyncJob, false) < 0) return -1; =20 @@ -1080,7 +1048,7 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriver *driver, int qemuDomainObjBeginNestedJob(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D obj->privateData; =20 @@ -1097,9 +1065,9 @@ qemuDomainObjBeginNestedJob(virQEMUDriver *driver, } =20 return qemuDomainObjBeginJobInternal(driver, obj, - QEMU_JOB_ASYNC_NESTED, - QEMU_AGENT_JOB_NONE, - QEMU_ASYNC_JOB_NONE, + VIR_JOB_ASYNC_NESTED, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false); } =20 @@ -1108,7 +1076,7 @@ qemuDomainObjBeginNestedJob(virQEMUDriver *driver, * * @driver: qemu driver * @obj: domain object - * @job: qemuDomainJob to start + * @job: virDomainJob to start * * Acquires job for a domain object which must be locked before * calling. 
If there's already a job running it returns @@ -1119,11 +1087,11 @@ qemuDomainObjBeginNestedJob(virQEMUDriver *driver, int qemuDomainObjBeginJobNowait(virQEMUDriver *driver, virDomainObj *obj, - qemuDomainJob job) + virDomainJob job) { return qemuDomainObjBeginJobInternal(driver, obj, job, - QEMU_AGENT_JOB_NONE, - QEMU_ASYNC_JOB_NONE, true); + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, true); } =20 /* @@ -1136,13 +1104,13 @@ void qemuDomainObjEndJob(virDomainObj *obj) { qemuDomainObjPrivate *priv =3D obj->privateData; - qemuDomainJob job =3D priv->job.active; + virDomainJob job =3D priv->job.active; =20 priv->job.jobsQueued--; =20 VIR_DEBUG("Stopping job: %s (async=3D%s vm=3D%p name=3D%s)", - qemuDomainJobTypeToString(job), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainJobTypeToString(job), + virDomainAsyncJobTypeToString(priv->job.asyncJob), obj, obj->def->name); =20 qemuDomainObjResetJob(&priv->job); @@ -1157,13 +1125,13 @@ void qemuDomainObjEndAgentJob(virDomainObj *obj) { qemuDomainObjPrivate *priv =3D obj->privateData; - qemuDomainAgentJob agentJob =3D priv->job.agentActive; + virDomainAgentJob agentJob =3D priv->job.agentActive; =20 priv->job.jobsQueued--; =20 VIR_DEBUG("Stopping agent job: %s (async=3D%s vm=3D%p name=3D%s)", - qemuDomainAgentJobTypeToString(agentJob), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAgentJobTypeToString(agentJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), obj, obj->def->name); =20 qemuDomainObjResetAgentJob(&priv->job); @@ -1180,7 +1148,7 @@ qemuDomainObjEndAsyncJob(virDomainObj *obj) priv->job.jobsQueued--; =20 VIR_DEBUG("Stopping async job: %s (vm=3D%p name=3D%s)", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), obj, obj->def->name); =20 qemuDomainObjResetAsyncJob(&priv->job); @@ -1194,7 +1162,7 @@ qemuDomainObjAbortAsyncJob(virDomainObj *obj) qemuDomainObjPrivate *priv =3D obj->privateData; =20 VIR_DEBUG("Requesting abort of async job: %s (vm=3D%p name=3D%s)", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), obj, obj->def->name); =20 priv->job.abortJob =3D true; @@ -1208,26 +1176,26 @@ qemuDomainObjPrivateXMLFormatJob(virBuffer *buf, qemuDomainObjPrivate *priv =3D vm->privateData; g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - qemuDomainJob job =3D priv->job.active; + virDomainJob job =3D priv->job.active; =20 if (!qemuDomainTrackJob(job)) - job =3D QEMU_JOB_NONE; + job =3D VIR_JOB_NONE; =20 - if (job =3D=3D QEMU_JOB_NONE && - priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) + if (job =3D=3D VIR_JOB_NONE && + priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) return 0; =20 virBufferAsprintf(&attrBuf, " type=3D'%s' async=3D'%s'", - qemuDomainJobTypeToString(job), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(job), + virDomainAsyncJobTypeToString(priv->job.asyncJob)); =20 if (priv->job.phase) { virBufferAsprintf(&attrBuf, " phase=3D'%s'", - qemuDomainAsyncJobPhaseToString(priv->job.asyncJ= ob, + virDomainAsyncJobPhaseToString(priv->job.asyncJo= b, priv->job.phase)= ); } =20 - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_NONE) + if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_NONE) virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", priv->job.apiFlags= ); =20 if (priv->job.cb && @@ -1255,7 +1223,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, if ((tmp =3D virXPathString("string(@type)", ctxt))) { int 
type; =20 - if ((type =3D qemuDomainJobTypeFromString(tmp)) < 0) { + if ((type =3D virDomainJobTypeFromString(tmp)) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Unknown job type %s"), tmp); return -1; @@ -1267,7 +1235,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, if ((tmp =3D virXPathString("string(@async)", ctxt))) { int async; =20 - if ((async =3D qemuDomainAsyncJobTypeFromString(tmp)) < 0) { + if ((async =3D virDomainAsyncJobTypeFromString(tmp)) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Unknown async job type %s"), tmp); return -1; @@ -1276,7 +1244,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, priv->job.asyncJob =3D async; =20 if ((tmp =3D virXPathString("string(@phase)", ctxt))) { - priv->job.phase =3D qemuDomainAsyncJobPhaseFromString(async, t= mp); + priv->job.phase =3D virDomainAsyncJobPhaseFromString(async, tm= p); if (priv->job.phase < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Unknown job phase %s"), tmp); diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index bec6e3a61c..6520b42c80 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -24,61 +24,14 @@ =20 #define JOB_MASK(job) (job =3D=3D 0 ? 0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ - (JOB_MASK(QEMU_JOB_QUERY) | \ - JOB_MASK(QEMU_JOB_DESTROY) | \ - JOB_MASK(QEMU_JOB_ABORT)) + (JOB_MASK(VIR_JOB_QUERY) | \ + JOB_MASK(VIR_JOB_DESTROY) | \ + JOB_MASK(VIR_JOB_ABORT)) =20 /* Jobs which have to be tracked in domain state XML. */ #define QEMU_DOMAIN_TRACK_JOBS \ - (JOB_MASK(QEMU_JOB_DESTROY) | \ - JOB_MASK(QEMU_JOB_ASYNC)) - -/* Only 1 job is allowed at any time - * A job includes *all* monitor commands, even those just querying - * information, not merely actions */ -typedef enum { - QEMU_JOB_NONE =3D 0, /* Always set to 0 for easy if (jobActive) condi= tions */ - QEMU_JOB_QUERY, /* Doesn't change any state */ - QEMU_JOB_DESTROY, /* Destroys the domain (cannot be masked out) = */ - QEMU_JOB_SUSPEND, /* Suspends (stops vCPUs) the domain */ - QEMU_JOB_MODIFY, /* May change state */ - QEMU_JOB_ABORT, /* Abort current async job */ - QEMU_JOB_MIGRATION_OP, /* Operation influencing outgoing migration */ - - /* The following two items must always be the last items before JOB_LA= ST */ - QEMU_JOB_ASYNC, /* Asynchronous job */ - QEMU_JOB_ASYNC_NESTED, /* Normal job within an async job */ - - QEMU_JOB_LAST -} qemuDomainJob; -VIR_ENUM_DECL(qemuDomainJob); - -typedef enum { - QEMU_AGENT_JOB_NONE =3D 0, /* No agent job. */ - QEMU_AGENT_JOB_QUERY, /* Does not change state of domain */ - QEMU_AGENT_JOB_MODIFY, /* May change state of domain */ - - QEMU_AGENT_JOB_LAST -} qemuDomainAgentJob; -VIR_ENUM_DECL(qemuDomainAgentJob); - -/* Async job consists of a series of jobs that may change state. Independe= nt - * jobs that do not change state (and possibly others if explicitly allowe= d by - * current async job) are allowed to be run even if async job is active. 
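
qemuDomainObjBeginJobInternal() above admits a nested sync job only when
the active async job's mask permits it. A standalone sketch of that
qemuDomainNestedJobAllowed() test, complementing the mask-building sketch
earlier; the struct is a simplified stand-in, illustrative only:

    /* Models the check that gates nested jobs against the async job's
     * allowed-jobs mask. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { VIR_JOB_NONE = 0, VIR_JOB_QUERY, VIR_JOB_DESTROY,
                   VIR_JOB_SUSPEND, VIR_JOB_MODIFY, VIR_JOB_ABORT } virDomainJob;

    #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))

    struct jobs { int asyncJob; unsigned int mask; };

    static bool
    nestedJobAllowed(struct jobs *jobs, virDomainJob newJob)
    {
        return !jobs->asyncJob ||            /* no async job: anything goes */
               newJob == VIR_JOB_NONE ||
               (jobs->mask & JOB_MASK(newJob)) != 0;
    }

    int main(void)
    {
        /* async job active with the default mask: query/destroy/abort only */
        struct jobs j = { 1, JOB_MASK(VIR_JOB_QUERY) |
                             JOB_MASK(VIR_JOB_DESTROY) |
                             JOB_MASK(VIR_JOB_ABORT) };

        printf("query:  %d\n", nestedJobAllowed(&j, VIR_JOB_QUERY));   /* 1 */
        printf("modify: %d\n", nestedJobAllowed(&j, VIR_JOB_MODIFY));  /* 0 */
        return 0;
    }
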
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index bec6e3a61c..6520b42c80 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -24,61 +24,14 @@
 
 #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
 #define QEMU_JOB_DEFAULT_MASK \
-    (JOB_MASK(QEMU_JOB_QUERY) | \
-     JOB_MASK(QEMU_JOB_DESTROY) | \
-     JOB_MASK(QEMU_JOB_ABORT))
+    (JOB_MASK(VIR_JOB_QUERY) | \
+     JOB_MASK(VIR_JOB_DESTROY) | \
+     JOB_MASK(VIR_JOB_ABORT))
 
 /* Jobs which have to be tracked in domain state XML. */
 #define QEMU_DOMAIN_TRACK_JOBS \
-    (JOB_MASK(QEMU_JOB_DESTROY) | \
-     JOB_MASK(QEMU_JOB_ASYNC))
-
-/* Only 1 job is allowed at any time
- * A job includes *all* monitor commands, even those just querying
- * information, not merely actions */
-typedef enum {
-    QEMU_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
-    QEMU_JOB_QUERY,         /* Doesn't change any state */
-    QEMU_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
-    QEMU_JOB_SUSPEND,       /* Suspends (stops vCPUs) the domain */
-    QEMU_JOB_MODIFY,        /* May change state */
-    QEMU_JOB_ABORT,         /* Abort current async job */
-    QEMU_JOB_MIGRATION_OP,  /* Operation influencing outgoing migration */
-
-    /* The following two items must always be the last items before JOB_LAST */
-    QEMU_JOB_ASYNC,         /* Asynchronous job */
-    QEMU_JOB_ASYNC_NESTED,  /* Normal job within an async job */
-
-    QEMU_JOB_LAST
-} qemuDomainJob;
-VIR_ENUM_DECL(qemuDomainJob);
-
-typedef enum {
-    QEMU_AGENT_JOB_NONE = 0,    /* No agent job. */
-    QEMU_AGENT_JOB_QUERY,       /* Does not change state of domain */
-    QEMU_AGENT_JOB_MODIFY,      /* May change state of domain */
-
-    QEMU_AGENT_JOB_LAST
-} qemuDomainAgentJob;
-VIR_ENUM_DECL(qemuDomainAgentJob);
-
-/* Async job consists of a series of jobs that may change state. Independent
- * jobs that do not change state (and possibly others if explicitly allowed by
- * current async job) are allowed to be run even if async job is active.
- */
-typedef enum {
-    QEMU_ASYNC_JOB_NONE = 0,
-    QEMU_ASYNC_JOB_MIGRATION_OUT,
-    QEMU_ASYNC_JOB_MIGRATION_IN,
-    QEMU_ASYNC_JOB_SAVE,
-    QEMU_ASYNC_JOB_DUMP,
-    QEMU_ASYNC_JOB_SNAPSHOT,
-    QEMU_ASYNC_JOB_START,
-    QEMU_ASYNC_JOB_BACKUP,
-
-    QEMU_ASYNC_JOB_LAST
-} qemuDomainAsyncJob;
-VIR_ENUM_DECL(qemuDomainAsyncJob);
+    (JOB_MASK(VIR_JOB_DESTROY) | \
+     JOB_MASK(VIR_JOB_ASYNC))
 
 
 typedef enum {
@@ -144,21 +97,21 @@ struct _qemuDomainJobObj {
 
     int jobsQueued;
 
-    /* The following members are for QEMU_JOB_* */
-    qemuDomainJob active;               /* Currently running job */
+    /* The following members are for VIR_JOB_* */
+    virDomainJob active;                /* Currently running job */
     unsigned long long owner;           /* Thread id which set current job */
     char *ownerAPI;                     /* The API which owns the job */
     unsigned long long started;         /* When the current job started */
 
-    /* The following members are for QEMU_AGENT_JOB_* */
-    qemuDomainAgentJob agentActive;     /* Currently running agent job */
+    /* The following members are for VIR_AGENT_JOB_* */
+    virDomainAgentJob agentActive;      /* Currently running agent job */
     unsigned long long agentOwner;      /* Thread id which set current agent job */
     char *agentOwnerAPI;                /* The API which owns the agent job */
     unsigned long long agentStarted;    /* When the current agent job started */
 
-    /* The following members are for QEMU_ASYNC_JOB_* */
+    /* The following members are for VIR_ASYNC_JOB_* */
     virCond asyncCond;                  /* Use to coordinate with async jobs */
-    qemuDomainAsyncJob asyncJob;        /* Currently active async job */
+    virDomainAsyncJob asyncJob;         /* Currently active async job */
     unsigned long long asyncOwner;      /* Thread which set current async job */
     char *asyncOwnerAPI;                /* The API which owns the async job */
     unsigned long long asyncStarted;    /* When the current async job started */
@@ -177,9 +130,9 @@ struct _qemuDomainJobObj {
 void qemuDomainJobSetStatsType(virDomainJobData *jobData,
                                qemuDomainJobStatsType type);
 
-const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
+const char *virDomainAsyncJobPhaseToString(virDomainAsyncJob job,
                                             int phase);
-int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
+int virDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
                                       const char *phase);
 
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
@@ -187,25 +140,25 @@ void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
 
 int qemuDomainObjBeginJob(virQEMUDriver *driver,
                           virDomainObj *obj,
-                          qemuDomainJob job)
+                          virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginAgentJob(virQEMUDriver *driver,
                                virDomainObj *obj,
-                               qemuDomainAgentJob agentJob)
+                               virDomainAgentJob agentJob)
     G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginAsyncJob(virQEMUDriver *driver,
                                virDomainObj *obj,
-                               qemuDomainAsyncJob asyncJob,
+                               virDomainAsyncJob asyncJob,
                                virDomainJobOperation operation,
                                unsigned long apiFlags)
     G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginNestedJob(virQEMUDriver *driver,
                                 virDomainObj *obj,
-                                qemuDomainAsyncJob asyncJob)
+                                virDomainAsyncJob asyncJob)
     G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginJobNowait(virQEMUDriver *driver,
                                 virDomainObj *obj,
-                                qemuDomainJob job)
+                                virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
 
 void qemuDomainObjEndJob(virDomainObj *obj);
@@ -235,7 +188,7 @@ int qemuDomainJobDataToParams(virDomainJobData *jobData,
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
     ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4);
 
-bool qemuDomainTrackJob(qemuDomainJob job);
+bool qemuDomainTrackJob(virDomainJob job);
 
 void qemuDomainObjClearJob(qemuDomainJobObj *job);
 G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(qemuDomainJobObj, qemuDomainObjClearJob);
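
The G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC() line kept at the end of the header
enables g_auto() declarations of qemuDomainJobObj. A sketch of the
mechanism with a hypothetical stand-in type (assumes GLib; not code from
this patch):

    /* Demonstrates GLib's auto-cleanup: the clear function runs when a
     * g_auto() variable goes out of scope. */
    #include <glib.h>

    typedef struct { char *ownerAPI; } qemuDomainJobObjDemo;  /* stand-in */

    static void
    qemuDomainObjClearJobDemo(qemuDomainJobObjDemo *job)
    {
        g_clear_pointer(&job->ownerAPI, g_free);
    }

    G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(qemuDomainJobObjDemo,
                                     qemuDomainObjClearJobDemo);

    static void
    demo(void)
    {
        g_auto(qemuDomainJobObjDemo) job = { g_strdup("virDomainSuspend") };
        /* job.ownerAPI is freed via qemuDomainObjClearJobDemo() on return */
    }

    int main(void)
    {
        demo();
        return 0;
    }
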
qemuDomainObjClearJob(qemuDomainJobObj *job); G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(qemuDomainJobObj, qemuDomainObjClearJob); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index b7e83c769a..77012eb527 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -157,7 +157,7 @@ static int qemuDomainObjStart(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, unsigned int flags, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 static int qemuDomainManagedSaveLoad(virDomainObj *vm, void *opaque); @@ -202,7 +202,7 @@ qemuAutostartDomain(virDomainObj *vm, } =20 if (qemuDomainObjStart(NULL, driver, vm, flags, - QEMU_ASYNC_JOB_START) < 0) { + VIR_ASYNC_JOB_START) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Failed to autostart VM '%s': %s"), vm->def->name, virGetLastErrorMessage()); @@ -1625,7 +1625,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr= conn, goto cleanup; } =20 - if (qemuProcessStart(conn, driver, vm, NULL, QEMU_ASYNC_JOB_START, + if (qemuProcessStart(conn, driver, vm, NULL, VIR_ASYNC_JOB_START, NULL, -1, NULL, NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { @@ -1679,15 +1679,15 @@ static int qemuDomainSuspend(virDomainPtr dom) =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_SUSPEND) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_SUSPEND) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) goto endjob; =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) reason =3D VIR_DOMAIN_PAUSED_MIGRATION; - else if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_SNAPSHOT) + else if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_SNAPSHOT) reason =3D VIR_DOMAIN_PAUSED_SNAPSHOT; else reason =3D VIR_DOMAIN_PAUSED_USER; @@ -1699,7 +1699,7 @@ static int qemuDomainSuspend(virDomainPtr dom) goto endjob; } if (state !=3D VIR_DOMAIN_PAUSED) { - if (qemuProcessStopCPUs(driver, vm, reason, QEMU_ASYNC_JOB_NONE) <= 0) + if (qemuProcessStopCPUs(driver, vm, reason, VIR_ASYNC_JOB_NONE) < = 0) goto endjob; } qemuDomainSaveStatus(vm); @@ -1729,7 +1729,7 @@ static int qemuDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1751,7 +1751,7 @@ static int qemuDomainResume(virDomainPtr dom) state =3D=3D VIR_DOMAIN_PAUSED) { if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_UNPAUSED, - QEMU_ASYNC_JOB_NONE) < 0) { + VIR_ASYNC_JOB_NONE) < 0) { if (virGetLastErrorCode() =3D=3D VIR_ERR_OK) virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("resume operation failed")); @@ -1782,7 +1782,7 @@ qemuDomainShutdownFlagsAgent(virQEMUDriver *driver, QEMU_AGENT_SHUTDOWN_POWERDOWN; =20 if (qemuDomainObjBeginAgentJob(driver, vm, - QEMU_AGENT_JOB_MODIFY) < 0) + VIR_AGENT_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_RUNNING) { @@ -1815,7 +1815,7 @@ qemuDomainShutdownFlagsMonitor(virQEMUDriver *driver, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_RUNNING) { @@ -1914,7 +1914,7 @@ qemuDomainRebootAgent(virQEMUDriver *driver, agentFlag =3D QEMU_AGENT_SHUTDOWN_POWERDOWN; =20 if (qemuDomainObjBeginAgentJob(driver, vm, - 
QEMU_AGENT_JOB_MODIFY) < 0) + VIR_AGENT_JOB_MODIFY) < 0) return -1; =20 if (!qemuDomainAgentAvailable(vm, agentForced)) @@ -1943,7 +1943,7 @@ qemuDomainRebootMonitor(virQEMUDriver *driver, int ret =3D -1; =20 if (qemuDomainObjBeginJob(driver, vm, - QEMU_JOB_MODIFY) < 0) + VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2032,7 +2032,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) if (virDomainResetEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2090,7 +2090,7 @@ qemuDomainDestroyFlags(virDomainPtr dom, reason =3D=3D VIR_DOMAIN_PAUSED_STARTING_UP && !priv->beingDestroyed); =20 - if (qemuProcessBeginStopJob(driver, vm, QEMU_JOB_DESTROY, + if (qemuProcessBeginStopJob(driver, vm, VIR_JOB_DESTROY, !(flags & VIR_DOMAIN_DESTROY_GRACEFUL)) < = 0) goto cleanup; =20 @@ -2107,11 +2107,11 @@ qemuDomainDestroyFlags(virDomainPtr dom, =20 qemuDomainSetFakeReboot(vm, false); =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; =20 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED, - QEMU_ASYNC_JOB_NONE, stopFlags); + VIR_ASYNC_JOB_NONE, stopFlags); event =3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, VIR_DOMAIN_EVENT_STOPPED_DESTROYED); @@ -2195,7 +2195,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom,= unsigned long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2338,7 +2338,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPt= r dom, int period, if (virDomainSetMemoryStatsPeriodEnsureACL(dom->conn, vm->def, flags) = < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2406,7 +2406,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, u= nsigned int flags) =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2465,7 +2465,7 @@ static int qemuDomainSendKey(virDomainPtr domain, if (virDomainSendKeyEnsureACL(domain->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2644,7 +2644,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_SAVE, + if (qemuDomainObjBeginAsyncJob(driver, vm, VIR_ASYNC_JOB_SAVE, VIR_DOMAIN_JOB_OPERATION_SAVE, flags) <= 0) goto cleanup; =20 @@ -2661,7 +2661,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { was_running =3D true; if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE, - QEMU_ASYNC_JOB_SAVE) < 0) + VIR_ASYNC_JOB_SAVE) < 0) goto endjob; =20 if 
(!virDomainObjIsActive(vm)) { @@ -2712,13 +2712,13 @@ qemuDomainSaveInternal(virQEMUDriver *driver, xml =3D NULL; =20 ret =3D qemuSaveImageCreate(driver, vm, path, data, compressor, - flags, QEMU_ASYNC_JOB_SAVE); + flags, VIR_ASYNC_JOB_SAVE); if (ret < 0) goto endjob; =20 /* Shut it down */ qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_SAVED, - QEMU_ASYNC_JOB_SAVE, 0); + VIR_ASYNC_JOB_SAVE, 0); virDomainAuditStop(vm, "saved"); event =3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPP= ED, VIR_DOMAIN_EVENT_STOPPED_SAV= ED); @@ -2729,7 +2729,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, virErrorPreserveLast(&save_err); if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_SAVE_CANCELED, - QEMU_ASYNC_JOB_SAVE) < 0) { + VIR_ASYNC_JOB_SAVE) < 0) { VIR_WARN("Unable to resume guest CPUs after save failure"); virObjectEventStateQueue(driver->domainEventState, virDomainEventLifecycleNewFromObj(vm, @@ -2977,7 +2977,7 @@ static int qemuDumpToFd(virQEMUDriver *driver, virDomainObj *vm, int fd, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, const char *dumpformat) { qemuDomainObjPrivate *priv =3D vm->privateData; @@ -3085,7 +3085,7 @@ doCoreDump(virQEMUDriver *driver, if (STREQ(memory_dump_format, "elf")) memory_dump_format =3D NULL; =20 - if (qemuDumpToFd(driver, vm, fd, QEMU_ASYNC_JOB_DUMP, + if (qemuDumpToFd(driver, vm, fd, VIR_ASYNC_JOB_DUMP, memory_dump_format) < 0) goto cleanup; } else { @@ -3100,7 +3100,7 @@ doCoreDump(virQEMUDriver *driver, goto cleanup; =20 if (qemuMigrationSrcToFile(driver, vm, fd, compressor, - QEMU_ASYNC_JOB_DUMP) < 0) + VIR_ASYNC_JOB_DUMP) < 0) goto cleanup; } =20 @@ -3150,7 +3150,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto cleanup; =20 if (qemuDomainObjBeginAsyncJob(driver, vm, - QEMU_ASYNC_JOB_DUMP, + VIR_ASYNC_JOB_DUMP, VIR_DOMAIN_JOB_OPERATION_DUMP, flags) < 0) goto cleanup; @@ -3170,7 +3170,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, if (!(flags & VIR_DUMP_LIVE) && virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_DUMP, - QEMU_ASYNC_JOB_DUMP) < 0) + VIR_ASYNC_JOB_DUMP) < 0) goto endjob; paused =3D true; =20 @@ -3189,7 +3189,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, endjob: if ((ret =3D=3D 0) && (flags & VIR_DUMP_CRASH)) { qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_CRASHED, - QEMU_ASYNC_JOB_DUMP, 0); + VIR_ASYNC_JOB_DUMP, 0); virDomainAuditStop(vm, "crashed"); event =3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, @@ -3205,7 +3205,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, if (resume && virDomainObjIsActive(vm)) { if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_UNPAUSED, - QEMU_ASYNC_JOB_DUMP) < 0) { + VIR_ASYNC_JOB_DUMP) < 0) { event =3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT= _SUSPENDED, VIR_DOMAIN_EVENT= _SUSPENDED_API_ERROR); @@ -3264,7 +3264,7 @@ qemuDomainScreenshot(virDomainPtr dom, if (virDomainScreenshotEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -3384,7 +3384,7 @@ processWatchdogEvent(virQEMUDriver *driver, switch (action) { case VIR_DOMAIN_WATCHDOG_ACTION_DUMP: if (qemuDomainObjBeginAsyncJob(driver, vm, - QEMU_ASYNC_JOB_DUMP, + VIR_ASYNC_JOB_DUMP, VIR_DOMAIN_JOB_OPERATION_DUMP, flags) < 0) { return; @@ -3401,7 +3401,7 @@ processWatchdogEvent(virQEMUDriver *driver, =20 ret =3D 
qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_UNPAUSED, - QEMU_ASYNC_JOB_DUMP); + VIR_ASYNC_JOB_DUMP); =20 if (ret < 0) virReportError(VIR_ERR_OPERATION_FAILED, @@ -3460,7 +3460,7 @@ processGuestPanicEvent(virQEMUDriver *driver, bool removeInactive =3D false; unsigned long flags =3D VIR_DUMP_MEMORY_ONLY; =20 - if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_DUMP, + if (qemuDomainObjBeginAsyncJob(driver, vm, VIR_ASYNC_JOB_DUMP, VIR_DOMAIN_JOB_OPERATION_DUMP, flags) <= 0) return; =20 @@ -3495,7 +3495,7 @@ processGuestPanicEvent(virQEMUDriver *driver, =20 case VIR_DOMAIN_LIFECYCLE_ACTION_DESTROY: qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_CRASHED, - QEMU_ASYNC_JOB_DUMP, 0); + VIR_ASYNC_JOB_DUMP, 0); event =3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, VIR_DOMAIN_EVENT_STOPPED= _CRASHED); @@ -3540,7 +3540,7 @@ processDeviceDeletedEvent(virQEMUDriver *driver, VIR_DEBUG("Removing device %s from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3777,7 +3777,7 @@ processNicRxFilterChangedEvent(virQEMUDriver *driver, "from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -3903,7 +3903,7 @@ processSerialChangedEvent(virQEMUDriver *driver, memset(&dev, 0, sizeof(dev)); } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3955,7 +3955,7 @@ processBlockJobEvent(virQEMUDriver *driver, virDomainDiskDef *disk; g_autoptr(qemuBlockJobData) job =3D NULL; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3977,7 +3977,7 @@ processBlockJobEvent(virQEMUDriver *driver, =20 job->newstate =3D status; =20 - qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE); + qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE); =20 endjob: qemuDomainObjEndJob(vm); @@ -3989,7 +3989,7 @@ processJobStatusChangeEvent(virQEMUDriver *driver, virDomainObj *vm, qemuBlockJobData *job) { - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3997,7 +3997,7 @@ processJobStatusChangeEvent(virQEMUDriver *driver, goto endjob; } =20 - qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE); + qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE); =20 endjob: qemuDomainObjEndJob(vm); @@ -4015,7 +4015,7 @@ processMonitorEOFEvent(virQEMUDriver *driver, unsigned int stopFlags =3D 0; virObjectEvent *event =3D NULL; =20 - if (qemuProcessBeginStopJob(driver, vm, QEMU_JOB_DESTROY, true) < 0) + if (qemuProcessBeginStopJob(driver, vm, VIR_JOB_DESTROY, true) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4032,7 +4032,7 @@ processMonitorEOFEvent(virQEMUDriver *driver, auditReason =3D "failed"; } =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; qemuMigrationDstErrorSave(driver, vm->def->name, qemuMonitorLastError(priv->mon)); @@ -4040,7 +4040,7 @@ processMonitorEOFEvent(virQEMUDriver *driver, =20 event 
=3D virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPP= ED, eventReason); - qemuProcessStop(driver, vm, stopReason, QEMU_ASYNC_JOB_NONE, stopFlags= ); + qemuProcessStop(driver, vm, stopReason, VIR_ASYNC_JOB_NONE, stopFlags); virDomainAuditStop(vm, auditReason); virObjectEventStateQueue(driver->domainEventState, event); =20 @@ -4131,7 +4131,7 @@ processMemoryDeviceSizeChange(virQEMUDriver *driver, virObjectEvent *event =3D NULL; unsigned long long balloon; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4347,10 +4347,10 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, =20 =20 if (useAgent) { - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) = < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) <= 0) goto cleanup; } else { - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; } =20 @@ -4487,7 +4487,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4613,7 +4613,7 @@ qemuDomainPinEmulator(virDomainPtr dom, if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4789,7 +4789,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned in= t flags) goto cleanup; =20 if (flags & VIR_DOMAIN_VCPU_GUEST) { - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) <= 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < = 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -4874,7 +4874,7 @@ qemuDomainGetIOThreadsLive(virQEMUDriver *driver, size_t i; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -5003,7 +5003,7 @@ qemuDomainPinIOThread(virDomainPtr dom, if (virDomainPinIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -5402,7 +5402,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -5830,7 +5830,7 @@ qemuDomainRestoreFlags(virConnectPtr conn, goto cleanup; =20 ret =3D qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, - false, reset_nvram, QEMU_ASYNC_JOB_START); + false, reset_nvram, VIR_ASYNC_JOB_START); =20 qemuProcessEndJob(vm); =20 @@ -6039,7 +6039,7 @@ qemuDomainObjRestore(virConnectPtr conn, bool start_paused, bool bypass_cache, bool reset_nvram, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(virDomainDef) def =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; @@ -6301,7 
+6301,7 @@ qemuDomainObjStart(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, unsigned int flags, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { int ret =3D -1; g_autofree char *managed_save =3D NULL; @@ -6413,7 +6413,7 @@ qemuDomainCreateWithFlags(virDomainPtr dom, unsigned = int flags) } =20 if (qemuDomainObjStart(dom->conn, driver, vm, flags, - QEMU_ASYNC_JOB_START) < 0) + VIR_ASYNC_JOB_START) < 0) goto endjob; =20 dom->id =3D vm->def->id; @@ -6550,7 +6550,7 @@ qemuDomainUndefineFlags(virDomainPtr dom, if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -6824,7 +6824,7 @@ qemuDomainAttachDeviceLive(virDomainObj *vm, } =20 if (ret =3D=3D 0) - ret =3D qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE= ); + ret =3D qemuDomainUpdateDeviceList(driver, vm, VIR_ASYNC_JOB_NONE); =20 return ret; } @@ -7811,7 +7811,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom, if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -7866,7 +7866,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr d= om, if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -7994,7 +7994,7 @@ qemuDomainDetachDeviceLiveAndConfig(virQEMUDriver *dr= iver, if ((rc =3D qemuDomainDetachDeviceLive(vm, dev, driver, false)) < = 0) goto cleanup; =20 - if (rc =3D=3D 0 && qemuDomainUpdateDeviceList(driver, vm, QEMU_ASY= NC_JOB_NONE) < 0) + if (rc =3D=3D 0 && qemuDomainUpdateDeviceList(driver, vm, VIR_ASYN= C_JOB_NONE) < 0) goto cleanup; =20 qemuDomainSaveStatus(vm); @@ -8067,7 +8067,7 @@ qemuDomainDetachDeviceAliasLiveAndConfig(virQEMUDrive= r *driver, if ((rc =3D qemuDomainDetachDeviceLive(vm, &dev, driver, true)) < = 0) return -1; =20 - if (rc =3D=3D 0 && qemuDomainUpdateDeviceList(driver, vm, QEMU_ASY= NC_JOB_NONE) < 0) + if (rc =3D=3D 0 && qemuDomainUpdateDeviceList(driver, vm, VIR_ASYN= C_JOB_NONE) < 0) return -1; } =20 @@ -8096,7 +8096,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom, if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8131,7 +8131,7 @@ qemuDomainDetachDeviceAlias(virDomainPtr dom, if (virDomainDetachDeviceAliasEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8204,7 +8204,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, autostart =3D (autostart !=3D 0); =20 if (vm->autostart !=3D autostart) { - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!(configFile =3D 
virDomainConfigFile(cfg->configDir, vm->def->= name))) @@ -8350,7 +8350,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -8524,7 +8524,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 /* QEMU and LXC implementation are identical */ @@ -8767,7 +8767,7 @@ qemuDomainSetNumaParameters(virDomainPtr dom, } } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -8987,7 +8987,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom, if (virDomainSetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9062,7 +9062,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom, if (virDomainGetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(def =3D virDomainObjGetOneDefState(vm, flags, &live))) @@ -9248,7 +9248,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr do= m, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9751,7 +9751,7 @@ qemuDomainBlockResize(virDomainPtr dom, if (virDomainBlockResizeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -9962,7 +9962,7 @@ qemuDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10020,7 +10020,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10182,7 +10182,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, if (virDomainSetInterfaceParametersEnsureACL(dom->conn, vm->def, flags= ) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -10491,7 +10491,7 @@ qemuDomainGetInterfaceParameters(virDomainPtr dom, return ret; } =20 -/* This functions assumes that job QEMU_JOB_QUERY is started by a caller */ +/* This function assumes that job VIR_JOB_QUERY is started by the caller */ static int qemuDomainMemoryStatsInternal(virQEMUDriver *driver, virDomainObj *vm,
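/*
 * [Illustration, not part of this patch] The comment above notes that
 * the VIR_JOB_QUERY job must be started by the caller. A minimal sketch
 * of that caller-side pattern, using qemuDomainObjBeginJob/EndJob as
 * declared in the qemu_domainjob.h hunk earlier in this patch; the
 * stats/nr_stats parameter types are assumed, since the hunk cuts off
 * the rest of the helper's signature.
 */
static int
exampleMemoryStatsCaller(virQEMUDriver *driver,
                         virDomainObj *vm,
                         virDomainMemoryStatPtr stats,
                         unsigned int nr_stats)
{
    int ret = -1;

    /* Take a read-only query job; fails if an incompatible job is active. */
    if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
        return -1;

    ret = qemuDomainMemoryStatsInternal(driver, vm, stats, nr_stats);

    /* Always release the job, whether the helper succeeded or not. */
    qemuDomainObjEndJob(vm);
    return ret;
}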
@@ -10547,7 +10547,7 @@ qemuDomainMemoryStats(virDomainPtr dom, if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 ret =3D qemuDomainMemoryStatsInternal(driver, vm, stats, nr_stats); @@ -10657,7 +10657,7 @@ qemuDomainMemoryPeek(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10934,7 +10934,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, if (virDomainGetBlockInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(disk =3D virDomainDiskByName(vm->def, path, false))) { @@ -12428,13 +12428,13 @@ qemuDomainGetJobInfoMigrationStats(virQEMUDriver = *driver, jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) { if (events && jobData->status !=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && - qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuMigrationAnyFetchStats(driver, vm, VIR_ASYNC_JOB_NONE, jobData, NULL) < 0) return -1; =20 if (jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_ACTIVE && privStats->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATI= ON && - qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_NO= NE, + qemuMigrationSrcFetchMirrorStats(driver, vm, VIR_ASYNC_JOB_NON= E, jobData) < 0) return -1; =20 @@ -12456,7 +12456,7 @@ qemuDomainGetJobInfoDumpStats(virQEMUDriver *driver, qemuMonitorDumpStats stats =3D { 0 }; int rc; =20 - if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_NONE) < = 0) + if (qemuDomainObjEnterMonitorAsync(driver, vm, VIR_ASYNC_JOB_NONE) < 0) return -1; =20 rc =3D qemuMonitorQueryDump(priv->mon, &stats); @@ -12518,14 +12518,14 @@ qemuDomainGetJobStatsInternal(virQEMUDriver *driv= er, return 0; } =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", _("migration statistics are available only on " "the source host")); return -1; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12680,7 +12680,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) if (virDomainAbortJobEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_ABORT) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_ABORT) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12689,24 +12689,24 @@ static int qemuDomainAbortJob(virDomainPtr dom) priv =3D vm->privateData; =20 switch (priv->job.asyncJob) { - case QEMU_ASYNC_JOB_NONE: + case VIR_ASYNC_JOB_NONE: virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("no job is active on the domain")); break; =20 - case QEMU_ASYNC_JOB_MIGRATION_IN: + case VIR_ASYNC_JOB_MIGRATION_IN: virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("cannot abort incoming migration;" " use virDomainDestroy instead")); break; =20 - case QEMU_ASYNC_JOB_START: + case VIR_ASYNC_JOB_START: virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("cannot abort VM start;" " use virDomainDestroy instead")); break; =20 - case QEMU_ASYNC_JOB_MIGRATION_OUT: + case VIR_ASYNC_JOB_MIGRATION_OUT: if 
((priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCO= PY || (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY))) { @@ -12718,11 +12718,11 @@ static int qemuDomainAbortJob(virDomainPtr dom) ret =3D qemuDomainAbortJobMigration(vm); break; =20 - case QEMU_ASYNC_JOB_SAVE: + case VIR_ASYNC_JOB_SAVE: ret =3D qemuDomainAbortJobMigration(vm); break; =20 - case QEMU_ASYNC_JOB_DUMP: + case VIR_ASYNC_JOB_DUMP: if (priv->job.apiFlags & VIR_DUMP_MEMORY_ONLY) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("cannot abort memory-only dump")); @@ -12732,18 +12732,18 @@ static int qemuDomainAbortJob(virDomainPtr dom) ret =3D qemuDomainAbortJobMigration(vm); break; =20 - case QEMU_ASYNC_JOB_SNAPSHOT: + case VIR_ASYNC_JOB_SNAPSHOT: ret =3D qemuDomainAbortJobMigration(vm); break; =20 - case QEMU_ASYNC_JOB_BACKUP: - qemuBackupJobCancelBlockjobs(vm, priv->backup, true, QEMU_ASYNC_JO= B_NONE); + case VIR_ASYNC_JOB_BACKUP: + qemuBackupJobCancelBlockjobs(vm, priv->backup, true, VIR_ASYNC_JOB= _NONE); ret =3D 0; break; =20 - case QEMU_ASYNC_JOB_LAST: + case VIR_ASYNC_JOB_LAST: default: - virReportEnumRangeError(qemuDomainAsyncJob, priv->job.asyncJob); + virReportEnumRangeError(virDomainAsyncJob, priv->job.asyncJob); break; } =20 @@ -12776,7 +12776,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, if (virDomainMigrateSetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12795,7 +12795,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, downtime) < 0) goto endjob; =20 - if (qemuMigrationParamsApply(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_NONE, migParams) < 0) goto endjob; } else { @@ -12836,13 +12836,13 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom, if (virDomainMigrateGetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) goto endjob; =20 - if (qemuMigrationParamsFetch(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsFetch(driver, vm, VIR_ASYNC_JOB_NONE, &migParams) < 0) goto endjob; =20 @@ -12890,7 +12890,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr d= om, if (virDomainMigrateGetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12906,7 +12906,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr d= om, } =20 if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_XBZRLE_CA= CHE_SIZE)) { - if (qemuMigrationParamsFetch(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsFetch(driver, vm, VIR_ASYNC_JOB_NONE, &migParams) < 0) goto endjob; =20 @@ -12952,7 +12952,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr d= om, if (virDomainMigrateSetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12977,7 +12977,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr d= 
om, cacheSize) < 0) goto endjob; =20 - if (qemuMigrationParamsApply(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_NONE, migParams) < 0) goto endjob; } else { @@ -13039,7 +13039,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13064,7 +13064,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, bandwidth * 1024 * 1024) < 0) goto endjob; =20 - if (qemuMigrationParamsApply(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_NONE, migParams) < 0) goto endjob; } else { @@ -13101,13 +13101,13 @@ qemuDomainMigrationGetPostcopyBandwidth(virQEMUDr= iver *driver, int rc; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) goto cleanup; =20 - if (qemuMigrationParamsFetch(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuMigrationParamsFetch(driver, vm, VIR_ASYNC_JOB_NONE, &migParams) < 0) goto cleanup; =20 @@ -13196,7 +13196,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, if (virDomainMigrateStartPostCopyEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13204,7 +13204,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, =20 priv =3D vm->privateData; =20 - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { + if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("post-copy can only be started while " "outgoing migration is in progress")); @@ -13941,7 +13941,7 @@ qemuDomainQemuMonitorCommandWithFiles(virDomainPtr = domain, if (virDomainQemuMonitorCommandWithFilesEnsureACL(domain->conn, vm->de= f) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14285,7 +14285,7 @@ qemuDomainBlockPullCommon(virDomainObj *vm, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14411,7 +14411,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, if (virDomainBlockJobAbortEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14458,13 +14458,13 @@ qemuDomainBlockJobAbort(virDomainPtr dom, qemuDomainSaveStatus(vm); =20 if (!async) { - qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE); + qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE); while (qemuBlockJobIsRunning(job)) { if (virDomainObjWait(vm) < 0) { ret =3D -1; goto endjob; } - qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE); + qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE); } =20 if (pivot && @@ -14486,7 +14486,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, =20 endjob: if (job && !async) - qemuBlockJobSyncEnd(vm, job, QEMU_ASYNC_JOB_NONE); + qemuBlockJobSyncEnd(vm, job, VIR_ASYNC_JOB_NONE); qemuDomainObjEndJob(vm); 
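/*
 * [Illustration, not part of this patch] How the job masks from the
 * qemu_domainjob.h hunk behave after the rename, assuming the enum
 * order shown in the removed qemuDomainJob enum (NONE=0, QUERY=1,
 * DESTROY=2, SUSPEND=3, MODIFY=4, ABORT=5), which virDomainJob keeps.
 * JOB_MASK(job) expands to (job == 0 ? 0 : 1 << (job - 1)).
 */
#include <assert.h>

static void
exampleDefaultJobMask(void)
{
    assert(JOB_MASK(VIR_JOB_NONE) == 0);          /* NONE has no bit */
    assert(JOB_MASK(VIR_JOB_QUERY) == (1 << 0));
    assert(JOB_MASK(VIR_JOB_ABORT) == (1 << 4));

    /* QEMU_JOB_DEFAULT_MASK admits query, destroy and abort jobs while
     * an async job is running: */
    assert(QEMU_JOB_DEFAULT_MASK == ((1 << 0) | (1 << 1) | (1 << 4)));

    /* A modify job is not in the default mask, so it has to wait: */
    assert(!(QEMU_JOB_DEFAULT_MASK & JOB_MASK(VIR_JOB_MODIFY)));
}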
=20 cleanup: @@ -14573,7 +14573,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, goto cleanup; =20 =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14643,7 +14643,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, if (virDomainBlockJobSetSpeedEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14844,7 +14844,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, return -1; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15030,7 +15030,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, goto endjob; } } else { - if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, QEM= U_ASYNC_JOB_NONE))) + if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, VIR= _ASYNC_JOB_NONE))) goto endjob; =20 if (qemuBlockStorageSourceCreateDetectSize(blockNamedNodeData, @@ -15069,7 +15069,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, =20 if (crdata && qemuBlockStorageSourceCreate(vm, mirror, mirrorBacking, mirror= ->backingStore, - crdata->srcdata[0], QEMU_ASYNC_JO= B_NONE) < 0) + crdata->srcdata[0], VIR_ASYNC_JOB= _NONE) < 0) goto endjob; } =20 @@ -15346,7 +15346,7 @@ qemuDomainBlockCommit(virDomainPtr dom, if (virDomainBlockCommitEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15586,7 +15586,7 @@ qemuDomainOpenGraphics(virDomainPtr dom, if (virDomainOpenGraphicsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15698,7 +15698,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, if (qemuSecurityClearSocketLabel(driver->securityManager, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; qemuDomainObjEnterMonitor(driver, vm); ret =3D qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphic= sfd", @@ -15953,7 +15953,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, =20 cfg =3D virQEMUDriverGetConfig(driver); =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 priv =3D vm->privateData; @@ -16234,7 +16234,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, if (virDomainGetBlockIoTuneEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 /* the API check guarantees that only one of the definitions will be s= et */ @@ -16378,7 +16378,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom, if (virDomainGetDiskErrorsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16453,7 +16453,7 @@ 
qemuDomainSetMetadata(virDomainPtr dom, if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 ret =3D virDomainObjSetMetadata(vm, type, metadata, key, uri, @@ -16576,7 +16576,7 @@ qemuDomainQueryWakeupSuspendSupport(virQEMUDriver *= driver, if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_CURRENT_MACHINE)) return -1; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) return -1; =20 if ((ret =3D virDomainObjCheckActive(vm)) < 0) @@ -16598,7 +16598,7 @@ qemuDomainPMSuspendAgent(virQEMUDriver *driver, qemuAgent *agent; int ret =3D -1; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16710,7 +16710,7 @@ qemuDomainPMWakeup(virDomainPtr dom, if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16766,7 +16766,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain, if (virDomainQemuAgentCommandEnsureACL(domain->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16861,7 +16861,7 @@ qemuDomainFSTrim(virDomainPtr dom, if (virDomainFSTrimEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -17032,7 +17032,7 @@ qemuDomainGetHostnameAgent(virQEMUDriver *driver, qemuAgent *agent; int ret =3D -1; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17064,7 +17064,7 @@ qemuDomainGetHostnameLease(virQEMUDriver *driver, size_t i, j; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17176,7 +17176,7 @@ qemuDomainGetTime(virDomainPtr dom, if (virDomainGetTimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17213,7 +17213,7 @@ qemuDomainSetTimeAgent(virQEMUDriver *driver, qemuAgent *agent; int ret =3D -1; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17270,7 +17270,7 @@ qemuDomainSetTime(virDomainPtr dom, if (qemuDomainSetTimeAgent(driver, vm, seconds, nseconds, rtcSync) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ 
-17315,7 +17315,7 @@ qemuDomainFSFreeze(virDomainPtr dom, if (virDomainFSFreezeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17356,7 +17356,7 @@ qemuDomainFSThaw(virDomainPtr dom, if (virDomainFSThawEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17896,7 +17896,7 @@ qemuDomainGetStatsVcpu(virQEMUDriver *driver, cpudelay =3D g_new0(unsigned long long, virDomainDefGetVcpus(dom->def)= ); =20 if (HAVE_JOB(privflags) && virDomainObjIsActive(dom) && - qemuDomainRefreshVcpuHalted(driver, dom, QEMU_ASYNC_JOB_NONE) < 0)= { + qemuDomainRefreshVcpuHalted(driver, dom, VIR_ASYNC_JOB_NONE) < 0) { /* it's ok to be silent and go ahead, because halted vcpu info * wasn't here from the beginning */ virResetLastError(); @@ -18802,9 +18802,9 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, int rv; =20 if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT) - rv =3D qemuDomainObjBeginJobNowait(driver, vm, QEMU_JOB_QU= ERY); + rv =3D qemuDomainObjBeginJobNowait(driver, vm, VIR_JOB_QUE= RY); else - rv =3D qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY); + rv =3D qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY); =20 if (rv =3D=3D 0) domflags |=3D QEMU_DOMAIN_STATS_HAVE_JOB; @@ -18876,7 +18876,7 @@ qemuDomainGetFSInfoAgent(virQEMUDriver *driver, qemuAgent *agent; =20 if (qemuDomainObjBeginAgentJob(driver, vm, - QEMU_AGENT_JOB_QUERY) < 0) + VIR_AGENT_JOB_QUERY) < 0) return ret; =20 if (virDomainObjCheckActive(vm) < 0) @@ -18986,7 +18986,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, if ((nfs =3D qemuDomainGetFSInfoAgent(driver, vm, &agentinfo)) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19037,7 +19037,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom, break; =20 case VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT: - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) <= 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < = 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -19089,7 +19089,7 @@ qemuDomainSetUserPassword(virDomainPtr dom, if (virDomainSetUserPasswordEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19276,7 +19276,7 @@ static int qemuDomainRename(virDomainPtr dom, if (virDomainRenameEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -19393,7 +19393,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom, if (virDomainGetGuestVcpusEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -19452,7 +19452,7 @@ 
qemuDomainSetGuestVcpus(virDomainPtr dom, if (virDomainSetGuestVcpusEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -19544,7 +19544,7 @@ qemuDomainSetVcpu(virDomainPtr dom, if (virDomainSetVcpuEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19603,7 +19603,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, if (virDomainSetBlockThresholdEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19626,7 +19626,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, =20 if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV) && !src->nodestorage && - qemuBlockNodeNamesDetect(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + qemuBlockNodeNamesDetect(driver, vm, VIR_ASYNC_JOB_NONE) < 0) goto endjob; =20 if (!src->nodestorage) { @@ -19794,7 +19794,7 @@ qemuDomainSetLifecycleAction(virDomainPtr dom, if (virDomainSetLifecycleActionEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19942,7 +19942,7 @@ qemuDomainGetSEVInfo(virQEMUDriver *driver, =20 virCheckFlags(VIR_TYPED_PARAM_STRING_OKAY, -1); =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) { @@ -20087,7 +20087,7 @@ qemuDomainSetLaunchSecurityState(virDomainPtr domai= n, else if (rc =3D=3D 1) hasSetaddr =3D true; =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20433,7 +20433,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, goto cleanup; =20 if (qemuDomainObjBeginAgentJob(driver, vm, - QEMU_AGENT_JOB_QUERY) < 0) + VIR_AGENT_JOB_QUERY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -20494,7 +20494,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, qemuDomainObjEndAgentJob(vm); =20 if (nfs > 0 || ndisks > 0) { - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20610,7 +20610,7 @@ qemuDomainAuthorizedSSHKeysGet(virDomainPtr dom, if (virDomainAuthorizedSshKeysGetEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) @@ -20651,7 +20651,7 @@ qemuDomainAuthorizedSSHKeysSet(virDomainPtr dom, if (virDomainAuthorizedSshKeysSetEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0) goto cleanup; =20 if (!qemuDomainAgentAvailable(vm, true)) 
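/*
 * [Illustration, not part of this patch] The agent-job variant of the
 * begin/end pattern with the renamed VIR_AGENT_JOB_* values, composed
 * only from calls visible in the hunks above; the actual guest-agent
 * interaction is elided.
 */
static int
exampleAgentQuery(virQEMUDriver *driver, virDomainObj *vm)
{
    int ret = -1;

    if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_QUERY) < 0)
        return -1;

    if (!qemuDomainAgentAvailable(vm, true))
        goto endjob;

    /* ... query the guest agent here ... */
    ret = 0;

 endjob:
    qemuDomainObjEndAgentJob(vm);
    return ret;
}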
@@ -20760,7 +20760,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) { diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c index 8ea95406c7..3d1bb1be2a 100644 --- a/src/qemu/qemu_hotplug.c +++ b/src/qemu/qemu_hotplug.c @@ -369,7 +369,7 @@ qemuDomainChangeMediaLegacy(virQEMUDriver *driver, int qemuHotplugAttachDBusVMState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; g_autoptr(virJSONValue) props =3D NULL; @@ -414,7 +414,7 @@ qemuHotplugAttachDBusVMState(virQEMUDriver *driver, int qemuHotplugRemoveDBusVMState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int ret; @@ -452,7 +452,7 @@ static int qemuHotplugAttachManagedPR(virQEMUDriver *driver, virDomainObj *vm, virStorageSource *src, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; g_autoptr(virJSONValue) props =3D NULL; @@ -502,7 +502,7 @@ qemuHotplugAttachManagedPR(virQEMUDriver *driver, static int qemuHotplugRemoveManagedPR(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; virErrorPtr orig_err; @@ -672,7 +672,7 @@ qemuDomainChangeEjectableMedia(virQEMUDriver *driver, if (qemuDomainStorageSourceChainAccessAllow(driver, vm, newsrc) < 0) goto cleanup; =20 - if (qemuHotplugAttachManagedPR(driver, vm, newsrc, QEMU_ASYNC_JOB_NONE= ) < 0) + if (qemuHotplugAttachManagedPR(driver, vm, newsrc, VIR_ASYNC_JOB_NONE)= < 0) goto cleanup; =20 if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV)) @@ -700,7 +700,7 @@ qemuDomainChangeEjectableMedia(virQEMUDriver *driver, =20 /* remove PR manager object if unneeded */ if (managedpr) - ignore_value(qemuHotplugRemoveManagedPR(driver, vm, QEMU_ASYNC_JOB= _NONE)); + ignore_value(qemuHotplugRemoveManagedPR(driver, vm, VIR_ASYNC_JOB_= NONE)); =20 /* revert old image do the disk definition */ if (oldsrc) @@ -714,7 +714,7 @@ static qemuSnapshotDiskContext * qemuDomainAttachDiskGenericTransient(virDomainObj *vm, virDomainDiskDef *disk, GHashTable *blockNamedNodeData, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(qemuSnapshotDiskContext) snapctxt =3D NULL; g_autoptr(virDomainSnapshotDiskDef) snapdiskdef =3D NULL; @@ -741,7 +741,7 @@ int qemuDomainAttachDiskGeneric(virQEMUDriver *driver, virDomainObj *vm, virDomainDiskDef *disk, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(qemuBlockStorageSourceChainData) data =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; @@ -1089,10 +1089,10 @@ qemuDomainAttachDeviceDiskLiveInternal(virQEMUDrive= r *driver, if (qemuDomainPrepareDiskSource(disk, priv, cfg) < 0) goto cleanup; =20 - if (qemuHotplugAttachManagedPR(driver, vm, disk->src, QEMU_ASYNC_JOB_N= ONE) < 0) + if (qemuHotplugAttachManagedPR(driver, vm, disk->src, VIR_ASYNC_JOB_NO= NE) < 0) goto cleanup; =20 - ret =3D qemuDomainAttachDiskGeneric(driver, vm, disk, QEMU_ASYNC_JOB_N= ONE); + ret =3D qemuDomainAttachDiskGeneric(driver, vm, disk, VIR_ASYNC_JOB_NO= NE); =20 virDomainAuditDisk(vm, NULL, disk->src, "attach", ret =3D=3D 0); =20 @@ -1113,7 +1113,7 @@ 
qemuDomainAttachDeviceDiskLiveInternal(virQEMUDriver = *driver, ignore_value(qemuDomainStorageSourceChainAccessRevoke(driver, = vm, disk->src)); =20 if (virStorageSourceChainHasManagedPR(disk->src)) - ignore_value(qemuHotplugRemoveManagedPR(driver, vm, QEMU_ASYNC= _JOB_NONE)); + ignore_value(qemuHotplugRemoveManagedPR(driver, vm, VIR_ASYNC_= JOB_NONE)); } qemuDomainSecretDiskDestroy(disk); =20 @@ -1774,7 +1774,7 @@ qemuDomainAttachHostPCIDevice(virQEMUDriver *driver, void qemuDomainDelTLSObjects(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, const char *secAlias, const char *tlsAlias) { @@ -1805,7 +1805,7 @@ qemuDomainDelTLSObjects(virQEMUDriver *driver, int qemuDomainAddTLSObjects(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virJSONValue **secProps, virJSONValue **tlsProps) { @@ -1907,7 +1907,7 @@ qemuDomainAddChardevTLSObjects(virQEMUDriver *driver, =20 dev->data.tcp.tlscreds =3D true; =20 - if (qemuDomainAddTLSObjects(driver, vm, QEMU_ASYNC_JOB_NONE, + if (qemuDomainAddTLSObjects(driver, vm, VIR_ASYNC_JOB_NONE, &secProps, &tlsProps) < 0) return -1; =20 @@ -2013,7 +2013,7 @@ int qemuDomainAttachRedirdevDevice(virQEMUDriver *dri= ver, ignore_value(qemuMonitorDetachCharDev(priv->mon, charAlias)); qemuDomainObjExitMonitor(vm); virErrorRestore(&orig_err); - qemuDomainDelTLSObjects(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuDomainDelTLSObjects(driver, vm, VIR_ASYNC_JOB_NONE, secAlias, tlsAlias); goto audit; } @@ -2308,7 +2308,7 @@ qemuDomainAttachChrDevice(virQEMUDriver *driver, qemuDomainObjExitMonitor(vm); virErrorRestore(&orig_err); =20 - qemuDomainDelTLSObjects(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuDomainDelTLSObjects(driver, vm, VIR_ASYNC_JOB_NONE, secAlias, tlsAlias); goto audit; } @@ -2414,7 +2414,7 @@ qemuDomainAttachRNGDevice(virQEMUDriver *driver, qemuDomainObjExitMonitor(vm); virErrorRestore(&orig_err); =20 - qemuDomainDelTLSObjects(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuDomainDelTLSObjects(driver, vm, VIR_ASYNC_JOB_NONE, secAlias, tlsAlias); goto audit; } @@ -2510,14 +2510,14 @@ qemuDomainAttachMemory(virQEMUDriver *driver, virObjectEventStateQueue(driver->domainEventState, event); =20 /* fix the balloon size */ - ignore_value(qemuProcessRefreshBalloonState(driver, vm, QEMU_ASYNC_JOB= _NONE)); + ignore_value(qemuProcessRefreshBalloonState(driver, vm, VIR_ASYNC_JOB_= NONE)); =20 /* mem is consumed by vm->def */ mem =3D NULL; =20 /* this step is best effort, removing the device would be so much trou= ble */ ignore_value(qemuDomainUpdateMemoryDeviceInfo(driver, vm, - QEMU_ASYNC_JOB_NONE)); + VIR_ASYNC_JOB_NONE)); =20 ret =3D 0; =20 @@ -4353,7 +4353,7 @@ qemuDomainChangeGraphics(virQEMUDriver *driver, VIR_DOMAIN_GRAPHICS_TYPE= _VNC, &dev->data.vnc.auth, cfg->vncPassword, - QEMU_ASYNC_JOB_NONE) < 0) + VIR_ASYNC_JOB_NONE) < 0) return -1; =20 /* Steal the new dev's char * reference */ @@ -4400,7 +4400,7 @@ qemuDomainChangeGraphics(virQEMUDriver *driver, VIR_DOMAIN_GRAPHICS_TYPE= _SPICE, &dev->data.spice.auth, cfg->spicePassword, - QEMU_ASYNC_JOB_NONE) < 0) + VIR_ASYNC_JOB_NONE) < 0) return -1; =20 /* Steal the new dev's char * reference */ @@ -4532,7 +4532,7 @@ qemuDomainRemoveDiskDevice(virQEMUDriver *driver, qemuDomainStorageSourceChainAccessRevoke(driver, vm, disk->src); =20 if (virStorageSourceChainHasManagedPR(disk->src) && - qemuHotplugRemoveManagedPR(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + qemuHotplugRemoveManagedPR(driver, vm, VIR_ASYNC_JOB_NONE) < 0) goto cleanup; =20 if 
(disk->transient) { @@ -4619,7 +4619,7 @@ qemuDomainRemoveMemoryDevice(virQEMUDriver *driver, virDomainMemoryDefFree(mem); =20 /* fix the balloon size */ - ignore_value(qemuProcessRefreshBalloonState(driver, vm, QEMU_ASYNC_JOB= _NONE)); + ignore_value(qemuProcessRefreshBalloonState(driver, vm, VIR_ASYNC_JOB_= NONE)); =20 /* decrease the mlock limit after memory unplug if necessary */ ignore_value(qemuDomainAdjustMaxMemLock(vm, false)); @@ -6296,7 +6296,7 @@ qemuDomainRemoveVcpu(virQEMUDriver *driver, virErrorPtr save_error =3D NULL; size_t i; =20 - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) = < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, VIR_ASYNC_JOB_NONE, false) <= 0) return -1; =20 /* validation requires us to set the expected state prior to calling i= t */ @@ -6441,7 +6441,7 @@ qemuDomainHotplugAddVcpu(virQEMUDriver *driver, /* start outputting of the new XML element to allow keeping unpluggabi= lity */ vm->def->individualvcpus =3D true; =20 - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) = < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, VIR_ASYNC_JOB_NONE, false) <= 0) return -1; =20 /* validation requires us to set the expected state prior to calling i= t */ diff --git a/src/qemu/qemu_hotplug.h b/src/qemu/qemu_hotplug.h index 19c07497b5..a0a9ae47e2 100644 --- a/src/qemu/qemu_hotplug.h +++ b/src/qemu/qemu_hotplug.h @@ -33,13 +33,13 @@ int qemuDomainChangeEjectableMedia(virQEMUDriver *drive= r, =20 void qemuDomainDelTLSObjects(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, const char *secAlias, const char *tlsAlias); =20 int qemuDomainAddTLSObjects(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virJSONValue **secProps, virJSONValue **tlsProps); =20 @@ -61,7 +61,7 @@ int qemuDomainAttachDeviceDiskLive(virQEMUDriver *driver, int qemuDomainAttachDiskGeneric(virQEMUDriver *driver, virDomainObj *vm, virDomainDiskDef *disk, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuDomainAttachNetDevice(virQEMUDriver *driver, virDomainObj *vm, @@ -164,11 +164,11 @@ unsigned long long qemuDomainGetUnplugTimeout(virDoma= inObj *vm) G_GNUC_NO_INLINE =20 int qemuHotplugAttachDBusVMState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuHotplugRemoveDBusVMState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuDomainChangeMemoryRequestedSize(virQEMUDriver *driver, virDomainObj *vm, diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index f109598fb4..43ffe2357a 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -84,7 +84,7 @@ VIR_ENUM_IMPL(qemuMigrationJobPhase, static int qemuMigrationJobStart(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob job, + virDomainAsyncJob job, unsigned long apiFlags) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; =20 @@ -104,7 +104,7 @@ qemuMigrationJobContinue(virDomainObj *obj) =20 static bool qemuMigrationJobIsActive(virDomainObj *vm, - qemuDomainAsyncJob job) + virDomainAsyncJob job) ATTRIBUTE_NONNULL(1); =20 static void @@ -149,7 +149,7 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *drive= r, virDomainObj *vm) /* we got here through some sort of failure; start the domain agai= n */ if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_MIGRATION_CANCELED, - 
QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) { + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) { /* Hm, we already know we are in error here. We don't want to * overwrite the previous error, though, so we just throw some= thing * to the logs and hope for the best */ @@ -501,7 +501,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriver *driver, } =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_IN) < = 0) + VIR_ASYNC_JOB_MIGRATION_IN) < 0) goto cleanup; =20 if (!server_started) { @@ -542,7 +542,7 @@ qemuMigrationDstStopNBDServer(virQEMUDriver *driver, return 0; =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + VIR_ASYNC_JOB_MIGRATION_IN) < 0) return -1; =20 if (qemuMonitorNBDServerStop(priv->mon) < 0) @@ -583,7 +583,7 @@ qemuMigrationNBDReportMirrorError(qemuBlockJobData *job, */ static int qemuMigrationSrcNBDStorageCopyReady(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { size_t i; size_t notReady =3D 0; @@ -638,7 +638,7 @@ qemuMigrationSrcNBDStorageCopyReady(virDomainObj *vm, */ static int qemuMigrationSrcNBDCopyCancelled(virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool abortMigration) { size_t i; @@ -722,7 +722,7 @@ qemuMigrationSrcNBDCopyCancelOne(virQEMUDriver *driver, virDomainDiskDef *disk, qemuBlockJobData *job, bool abortMigration, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int rv; @@ -772,7 +772,7 @@ static int qemuMigrationSrcNBDCopyCancel(virQEMUDriver *driver, virDomainObj *vm, bool abortMigration, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virConnectPtr dconn) { virErrorPtr err =3D NULL; @@ -855,7 +855,7 @@ qemuMigrationSrcNBDCopyCancel(virQEMUDriver *driver, =20 static int qemuMigrationSrcCancelRemoveTempBitmaps(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; virQEMUDriver *driver =3D priv->driver; @@ -952,7 +952,7 @@ qemuMigrationSrcNBDStorageCopyBlockdev(virQEMUDriver *d= river, return -1; =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) return -1; =20 mon_ret =3D qemuBlockStorageSourceAttachApply(qemuDomainGetMonitor(vm)= , data); @@ -1001,7 +1001,7 @@ qemuMigrationSrcNBDStorageCopyDriveMirror(virQEMUDriv= er *driver, } =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) return -1; =20 mon_ret =3D qemuMonitorDriveMirror(qemuDomainGetMonitor(vm), @@ -1199,14 +1199,14 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *drive= r, } } =20 - while ((rv =3D qemuMigrationSrcNBDStorageCopyReady(vm, QEMU_ASYNC_JOB_= MIGRATION_OUT)) !=3D 1) { + while ((rv =3D qemuMigrationSrcNBDStorageCopyReady(vm, VIR_ASYNC_JOB_M= IGRATION_OUT)) !=3D 1) { if (rv < 0) return -1; =20 if (priv->job.abortJob) { priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - qemuDomainAsyncJobTypeToString(priv->job.asyncJ= ob), + virDomainAsyncJobTypeToString(priv->job.asyncJo= b), _("canceled by client")); return -1; } @@ -1221,7 +1221,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver, return -1; } =20 - qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_MIGRATION_= OUT, + qemuMigrationSrcFetchMirrorStats(driver, vm, VIR_ASYNC_JOB_MIGRATION_O= UT, priv->job.current); return 0; } @@ -1599,7 +1599,7 @@ 
qemuMigrationAnyPostcopyFailed(virQEMUDriver *driver, if (state =3D=3D VIR_DOMAIN_RUNNING) { if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_POSTCOPY_FAILED, - QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + VIR_ASYNC_JOB_MIGRATION_IN) < 0) VIR_WARN("Unable to pause guest CPUs for %s", vm->def->name); } else { virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, @@ -1673,7 +1673,7 @@ qemuMigrationUpdateJobType(virDomainJobData *jobData) int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainJobData *jobData, char **error) { @@ -1703,23 +1703,23 @@ qemuMigrationJobName(virDomainObj *vm) qemuDomainObjPrivate *priv =3D vm->privateData; =20 switch (priv->job.asyncJob) { - case QEMU_ASYNC_JOB_MIGRATION_OUT: + case VIR_ASYNC_JOB_MIGRATION_OUT: return _("migration out job"); - case QEMU_ASYNC_JOB_SAVE: + case VIR_ASYNC_JOB_SAVE: return _("domain save job"); - case QEMU_ASYNC_JOB_DUMP: + case VIR_ASYNC_JOB_DUMP: return _("domain core dump job"); - case QEMU_ASYNC_JOB_NONE: + case VIR_ASYNC_JOB_NONE: return _("undefined"); - case QEMU_ASYNC_JOB_MIGRATION_IN: + case VIR_ASYNC_JOB_MIGRATION_IN: return _("migration in job"); - case QEMU_ASYNC_JOB_SNAPSHOT: + case VIR_ASYNC_JOB_SNAPSHOT: return _("snapshot job"); - case QEMU_ASYNC_JOB_START: + case VIR_ASYNC_JOB_START: return _("start job"); - case QEMU_ASYNC_JOB_BACKUP: + case VIR_ASYNC_JOB_BACKUP: return _("backup job"); - case QEMU_ASYNC_JOB_LAST: + case VIR_ASYNC_JOB_LAST: default: return _("job"); } @@ -1729,7 +1729,7 @@ qemuMigrationJobName(virDomainObj *vm) static int qemuMigrationJobCheckStatus(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; virDomainJobData *jobData =3D priv->job.current; @@ -1793,7 +1793,7 @@ enum qemuMigrationCompletedFlags { static int qemuMigrationAnyCompleted(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virConnectPtr dconn, unsigned int flags) { @@ -1884,7 +1884,7 @@ qemuMigrationAnyCompleted(virQEMUDriver *driver, static int qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virConnectPtr dconn, unsigned int flags) { @@ -1925,7 +1925,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driv= er, priv->job.completed =3D virDomainJobDataCopy(jobData); priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 - if (asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT && + if (asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT && jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 @@ -1936,7 +1936,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driv= er, static int qemuMigrationDstWaitForCompletion(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool postcopy) { qemuDomainObjPrivate *priv =3D vm->privateData; @@ -2046,7 +2046,7 @@ qemuMigrationSrcGraphicsRelocate(virQEMUDriver *drive= r, } =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT) =3D= =3D 0) { + VIR_ASYNC_JOB_MIGRATION_OUT) =3D=3D= 0) { qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; =20 ret =3D qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress, @@ -2139,7 +2139,7 @@ int qemuMigrationDstRun(virQEMUDriver *driver, virDomainObj *vm, const char *uri, - qemuDomainAsyncJob asyncJob) + 
virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int rv; @@ -2160,7 +2160,7 @@ qemuMigrationDstRun(virQEMUDriver *driver, if (rv < 0) return -1; =20 - if (asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) { + if (asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { /* qemuMigrationDstWaitForCompletion is called from the Finish pha= se */ return 0; } @@ -2189,11 +2189,11 @@ qemuMigrationSrcCleanup(virDomainObj *vm, =20 VIR_DEBUG("vm=3D%s, conn=3D%p, asyncJob=3D%s, phase=3D%s", vm->def->name, conn, - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, + virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobPhaseToString(priv->job.asyncJob, priv->job.phase)); =20 - if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_OUT)) + if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT)) return; =20 VIR_DEBUG("The connection which started outgoing migration of domain %= s" @@ -2210,7 +2210,7 @@ qemuMigrationSrcCleanup(virDomainObj *vm, VIR_WARN("Migration of domain %s finished but we don't know if the" " domain was successfully started on destination or not", vm->def->name); - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, jobPriv->migParams, priv->job.apiFlags); /* clear the job and let higher levels decide what to do */ qemuMigrationJobFinish(vm); @@ -2344,11 +2344,11 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver, cookieout, cookieoutlen, nmigrate_disks, migrate_disks, flags); =20 - /* Only set the phase if we are inside QEMU_ASYNC_JOB_MIGRATION_OUT. + /* Only set the phase if we are inside VIR_ASYNC_JOB_MIGRATION_OUT. * Otherwise we will start the async job later in the perform phase lo= sing * change protection. 
*/ - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_BEGIN3); =20 if (!qemuMigrationSrcIsAllowed(driver, vm, true, flags)) @@ -2505,7 +2505,7 @@ qemuMigrationSrcBegin(virConnectPtr conn, virQEMUDriver *driver =3D conn->privateData; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); char *xml =3D NULL; - qemuDomainAsyncJob asyncJob; + virDomainAsyncJob asyncJob; =20 if (cfg->migrateTLSForce && !(flags & VIR_MIGRATE_TUNNELLED) && @@ -2516,14 +2516,14 @@ qemuMigrationSrcBegin(virConnectPtr conn, } =20 if ((flags & VIR_MIGRATE_CHANGE_PROTECTION)) { - if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, flags) < 0) goto cleanup; - asyncJob =3D QEMU_ASYNC_JOB_MIGRATION_OUT; + asyncJob =3D VIR_ASYNC_JOB_MIGRATION_OUT; } else { - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; - asyncJob =3D QEMU_ASYNC_JOB_NONE; + asyncJob =3D VIR_ASYNC_JOB_NONE; } =20 qemuMigrationSrcStoreDomainState(vm); @@ -2583,13 +2583,13 @@ qemuMigrationDstPrepareCleanup(virQEMUDriver *drive= r, VIR_DEBUG("driver=3D%p, vm=3D%s, job=3D%s, asyncJob=3D%s", driver, vm->def->name, - qemuDomainJobTypeToString(priv->job.active), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(priv->job.active), + virDomainAsyncJobTypeToString(priv->job.asyncJob)); =20 virPortAllocatorRelease(priv->migrationPort); priv->migrationPort =3D 0; =20 - if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_IN)) + if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_IN)) return; qemuDomainObjDiscardAsyncJob(vm); } @@ -2694,7 +2694,7 @@ qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomain= Obj *vm, if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, mig->block= DirtyBitmaps) < 0) return -1; =20 - if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_= JOB_MIGRATION_IN))) + if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, VIR_ASYNC_J= OB_MIGRATION_IN))) return -1; =20 for (nextdisk =3D mig->blockDirtyBitmaps; nextdisk; nextdisk =3D nextd= isk->next) { @@ -2925,7 +2925,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, !!(flags & VIR_MIGRATE_NON_SHARED= _INC)) < 0) goto cleanup; =20 - if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, flags) < 0) goto cleanup; qemuMigrationJobSetPhase(vm, QEMU_MIGRATION_PHASE_PREPARE); @@ -2942,7 +2942,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, =20 startFlags =3D VIR_QEMU_PROCESS_START_AUTODESTROY; =20 - if (qemuProcessInit(driver, vm, mig->cpu, QEMU_ASYNC_JOB_MIGRATION_IN, + if (qemuProcessInit(driver, vm, mig->cpu, VIR_ASYNC_JOB_MIGRATION_IN, true, startFlags) < 0) goto stopjob; stopProcess =3D true; @@ -2958,7 +2958,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, if (qemuProcessPrepareHost(driver, vm, startFlags) < 0) goto stopjob; =20 - rv =3D qemuProcessLaunch(dconn, driver, vm, QEMU_ASYNC_JOB_MIGRATION_I= N, + rv =3D qemuProcessLaunch(dconn, driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, incoming, NULL, VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, startFlags); @@ -2987,7 +2987,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, fl= ags) < 0) goto stopjob; =20 - if 
(qemuMigrationParamsCheck(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, migParams, mig->caps->automatic) < 0) goto stopjob; =20 @@ -2995,7 +2995,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, * set the migration TLS parameters */ if (flags & VIR_MIGRATE_TLS) { if (qemuMigrationParamsEnableTLS(driver, vm, true, - QEMU_ASYNC_JOB_MIGRATION_IN, + VIR_ASYNC_JOB_MIGRATION_IN, &tlsAlias, NULL, migParams) < 0) goto stopjob; @@ -3004,7 +3004,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, goto stopjob; } =20 - if (qemuMigrationParamsApply(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, migParams) < 0) goto stopjob; =20 @@ -3042,10 +3042,10 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, =20 if (incoming->deferredURI && qemuMigrationDstRun(driver, vm, incoming->deferredURI, - QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + VIR_ASYNC_JOB_MIGRATION_IN) < 0) goto stopjob; =20 - if (qemuProcessFinishStartup(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, false, VIR_DOMAIN_PAUSED_MIGRATION) < 0) goto stopjob; =20 @@ -3110,7 +3110,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, return ret; =20 stopjob: - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, jobPriv->migParams, priv->job.apiFlags); =20 if (stopProcess) { @@ -3119,7 +3119,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, stopFlags |=3D VIR_QEMU_PROCESS_STOP_NO_RELABEL; virDomainAuditStart(vm, "migrated", false); qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, - QEMU_ASYNC_JOB_MIGRATION_IN, stopFlags); + VIR_ASYNC_JOB_MIGRATION_IN, stopFlags); } =20 qemuMigrationJobFinish(vm); @@ -3425,7 +3425,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, */ if (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY && - qemuMigrationAnyFetchStats(driver, vm, QEMU_ASYNC_JOB_MIGRATIO= N_OUT, + qemuMigrationAnyFetchStats(driver, vm, VIR_ASYNC_JOB_MIGRATION= _OUT, jobData, NULL) < 0) VIR_WARN("Could not refresh migration statistics"); =20 @@ -3448,7 +3448,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, qemuMigrationSrcWaitForSpice(vm); =20 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_MIGRATED, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, VIR_QEMU_PROCESS_STOP_MIGRATED); virDomainAuditStop(vm, "migrated"); =20 @@ -3465,7 +3465,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, =20 /* cancel any outstanding NBD jobs */ qemuMigrationSrcNBDCopyCancel(driver, vm, false, - QEMU_ASYNC_JOB_MIGRATION_OUT, NULL); + VIR_ASYNC_JOB_MIGRATION_OUT, NULL); =20 virErrorRestore(&orig_err); =20 @@ -3475,7 +3475,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, else qemuMigrationSrcRestoreDomainState(driver, vm); =20 - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, jobPriv->migParams, priv->job.apiFlags); =20 qemuDomainSaveStatus(vm); @@ -3496,7 +3496,7 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver, g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); int ret =3D -1; =20 - if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_OUT)) + if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT)) goto cleanup; =20 if (cancelled) @@ -3816,7 +3816,7 @@ static int 
qemuMigrationSrcContinue(virQEMUDriver *driver, virDomainObj *vm, qemuMonitorMigrationStatus status, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int ret; @@ -3841,10 +3841,10 @@ qemuMigrationSetDBusVMState(virQEMUDriver *driver, if (priv->dbusVMStateIds) { int rv; =20 - if (qemuHotplugAttachDBusVMState(driver, vm, QEMU_ASYNC_JOB_NONE) = < 0) + if (qemuHotplugAttachDBusVMState(driver, vm, VIR_ASYNC_JOB_NONE) <= 0) return -1; =20 - if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_NONE= ) < 0) + if (qemuDomainObjEnterMonitorAsync(driver, vm, VIR_ASYNC_JOB_NONE)= < 0) return -1; =20 rv =3D qemuMonitorSetDBusVMStateIdList(priv->mon, priv->dbusVMStat= eIds); @@ -3853,7 +3853,7 @@ qemuMigrationSetDBusVMState(virQEMUDriver *driver, =20 return rv; } else { - if (qemuHotplugRemoveDBusVMState(driver, vm, QEMU_ASYNC_JOB_NONE) = < 0) + if (qemuHotplugRemoveDBusVMState(driver, vm, VIR_ASYNC_JOB_NONE) <= 0) return -1; } =20 @@ -3888,7 +3888,7 @@ qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(virD= omainObj *vm, GSList *nextdisk; int rc; =20 - if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_= JOB_MIGRATION_OUT))) + if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, VIR_ASYNC_J= OB_MIGRATION_OUT))) return -1; =20 for (nextdisk =3D mig->blockDirtyBitmaps; nextdisk; nextdisk =3D nextd= isk->next) { @@ -3944,7 +3944,7 @@ qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(virD= omainObj *vm, } } =20 - if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATIO= N_OUT) < 0) + if (qemuDomainObjEnterMonitorAsync(driver, vm, VIR_ASYNC_JOB_MIGRATION= _OUT) < 0) return -1; =20 rc =3D qemuMonitorTransaction(priv->mon, &actions); @@ -4107,7 +4107,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, qemuMigrationSrcRunPrepareBlockDirtyBitmaps(vm, mig, migParams, fl= ags) < 0) goto error; =20 - if (qemuMigrationParamsCheck(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + if (qemuMigrationParamsCheck(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, migParams, mig->caps->automatic) < 0) goto error; =20 @@ -4121,7 +4121,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, hostname =3D spec->dest.host.name; =20 if (qemuMigrationParamsEnableTLS(driver, vm, false, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, &tlsAlias, hostname, migParams) < 0) goto error; @@ -4135,7 +4135,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, priv->migMaxBandwidth * 1024 * 1024) < 0) goto error; =20 - if (qemuMigrationParamsApply(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + if (qemuMigrationParamsApply(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, migParams) < 0) goto error; =20 @@ -4188,12 +4188,12 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (!(flags & VIR_MIGRATE_LIVE) && virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_MIGRATION, - QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) goto error; } =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) goto error; =20 if (priv->job.abortJob) { @@ -4202,7 +4202,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, * priv->job.abortJob will not change */ priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); goto exit_monitor; } @@ -4284,7 +4284,7 @@ 
qemuMigrationSrcRun(virQEMUDriver *driver, waitFlags |=3D QEMU_MIGRATION_COMPLETED_POSTCOPY; =20 rc =3D qemuMigrationSrcWaitForCompletion(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, dconn, waitFlags); if (rc =3D=3D -2) { goto error; @@ -4307,7 +4307,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 if (mig->nbd && qemuMigrationSrcNBDCopyCancel(driver, vm, false, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, dconn) < 0) goto error; =20 @@ -4318,13 +4318,13 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(driver, vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWI= TCHOVER, - QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + VIR_ASYNC_JOB_MIGRATION_OUT) < 0) goto error; =20 waitFlags ^=3D QEMU_MIGRATION_COMPLETED_PRE_SWITCHOVER; =20 rc =3D qemuMigrationSrcWaitForCompletion(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OU= T, + VIR_ASYNC_JOB_MIGRATION_OUT, dconn, waitFlags); if (rc =3D=3D -2) { goto error; @@ -4378,7 +4378,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (cancel && priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISO= R_COMPLETED && qemuDomainObjEnterMonitorAsync(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { + VIR_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { qemuMonitorMigrateCancel(priv->mon); qemuDomainObjExitMonitor(vm); } @@ -4386,10 +4386,10 @@ qemuMigrationSrcRun(virQEMUDriver *driver, /* cancel any outstanding NBD jobs */ if (mig && mig->nbd) qemuMigrationSrcNBDCopyCancel(driver, vm, true, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, dconn); =20 - qemuMigrationSrcCancelRemoveTempBitmaps(vm, QEMU_ASYNC_JOB_MIGRATI= ON_OUT); + qemuMigrationSrcCancelRemoveTempBitmaps(vm, VIR_ASYNC_JOB_MIGRATIO= N_OUT); =20 if (priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; @@ -5274,7 +5274,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, qemuDomainObjPrivate *priv =3D vm->privateData; qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; =20 - if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, flags) < 0) goto cleanup; =20 @@ -5314,7 +5314,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, */ if (!v3proto) { qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_MIGRATED, - QEMU_ASYNC_JOB_MIGRATION_OUT, + VIR_ASYNC_JOB_MIGRATION_OUT, VIR_QEMU_PROCESS_STOP_MIGRATED); virDomainAuditStop(vm, "migrated"); event =3D virDomainEventLifecycleNewFromObj(vm, @@ -5330,7 +5330,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, * here */ if (!v3proto && ret < 0) - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, jobPriv->migParams, priv->job.apiFlags); =20 qemuMigrationSrcRestoreDomainState(driver, vm); @@ -5378,10 +5378,10 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, =20 /* If we didn't start the job in the begin phase, start it now. 
*/ if (!(flags & VIR_MIGRATE_CHANGE_PROTECTION)) { - if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + if (qemuMigrationJobStart(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, flags) < 0) return ret; - } else if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_OUT)= ) { + } else if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT))= { return ret; } =20 @@ -5407,7 +5407,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, =20 endjob: if (ret < 0) { - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_OUT, jobPriv->migParams, priv->job.apiFlags); qemuMigrationJobFinish(vm); } else { @@ -5637,7 +5637,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, port =3D priv->migrationPort; priv->migrationPort =3D 0; =20 - if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_IN)) { + if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_IN)) { qemuMigrationDstErrorReport(driver, vm->def->name); goto cleanup; } @@ -5673,7 +5673,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, /* Check for a possible error on the monitor in case Finish was ca= lled * earlier than monitor EOF handler got a chance to process the er= ror */ - qemuDomainCheckMonitor(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN); + qemuDomainCheckMonitor(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN); goto endjob; } =20 @@ -5694,7 +5694,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, goto endjob; =20 if (qemuRefreshVirtioChannelState(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + VIR_ASYNC_JOB_MIGRATION_IN) < 0) goto endjob; =20 if (qemuConnectAgent(driver, vm) < 0) @@ -5723,7 +5723,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, * before starting guest CPUs. */ if (qemuMigrationDstWaitForCompletion(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_IN, + VIR_ASYNC_JOB_MIGRATION_IN, !!(flags & VIR_MIGRATE_POSTCOPY)= ) < 0) { /* There's not much we can do for v2 protocol since the * original domain on the source host is already gone. @@ -5734,7 +5734,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, =20 /* Now that the state data was transferred we can refresh the actual s= tate * of the devices */ - if (qemuProcessRefreshState(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN) <= 0) { + if (qemuProcessRefreshState(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN) < = 0) { /* Similarly to the case above v2 protocol will not be able to rec= over * from this. Let's ignore this and perhaps stuff will not break. = */ if (v3proto) @@ -5752,7 +5752,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, if (qemuProcessStartCPUs(driver, vm, inPostCopy ? 
VIR_DOMAIN_RUNNING_POSTCOPY : VIR_DOMAIN_RUNNING_MIGRATED, - QEMU_ASYNC_JOB_MIGRATION_IN) < 0) { + VIR_ASYNC_JOB_MIGRATION_IN) < 0) { if (virGetLastErrorCode() =3D=3D VIR_ERR_OK) virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("resume operation failed")); @@ -5791,7 +5791,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, =20 if (inPostCopy) { if (qemuMigrationDstWaitForCompletion(driver, vm, - QEMU_ASYNC_JOB_MIGRATION_IN, + VIR_ASYNC_JOB_MIGRATION_IN, false) < 0) { goto endjob; } @@ -5838,7 +5838,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, virDomainObjIsActive(vm)) { if (doKill) { qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, - QEMU_ASYNC_JOB_MIGRATION_IN, + VIR_ASYNC_JOB_MIGRATION_IN, VIR_QEMU_PROCESS_STOP_MIGRATED); virDomainAuditStop(vm, "failed"); event =3D virDomainEventLifecycleNewFromObj(vm, @@ -5871,7 +5871,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, g_clear_pointer(&priv->job.completed, virDomainJobDataFree); } =20 - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, jobPriv->migParams, priv->job.apiFlags); =20 qemuMigrationJobFinish(vm); @@ -5901,7 +5901,7 @@ int qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, int fd, virCommand *compressor, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; bool bwParam =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PA= RAM_BANDWIDTH); @@ -6080,10 +6080,10 @@ qemuMigrationSrcCancel(virQEMUDriver *driver, =20 if (storage && qemuMigrationSrcNBDCopyCancel(driver, vm, true, - QEMU_ASYNC_JOB_NONE, NULL) < 0) + VIR_ASYNC_JOB_NONE, NULL) < 0) return -1; =20 - if (qemuMigrationSrcCancelRemoveTempBitmaps(vm, QEMU_ASYNC_JOB_NONE) <= 0) + if (qemuMigrationSrcCancelRemoveTempBitmaps(vm, VIR_ASYNC_JOB_NONE) < = 0) return -1; =20 return 0; @@ -6093,21 +6093,21 @@ qemuMigrationSrcCancel(virQEMUDriver *driver, static int qemuMigrationJobStart(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob job, + virDomainAsyncJob job, unsigned long apiFlags) { qemuDomainObjPrivate *priv =3D vm->privateData; virDomainJobOperation op; unsigned long long mask; =20 - if (job =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) { + if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { op =3D VIR_DOMAIN_JOB_OPERATION_MIGRATION_IN; - mask =3D QEMU_JOB_NONE; + mask =3D VIR_JOB_NONE; } else { op =3D VIR_DOMAIN_JOB_OPERATION_MIGRATION_OUT; mask =3D QEMU_JOB_DEFAULT_MASK | - JOB_MASK(QEMU_JOB_SUSPEND) | - JOB_MASK(QEMU_JOB_MIGRATION_OP); + JOB_MASK(VIR_JOB_SUSPEND) | + JOB_MASK(VIR_JOB_MIGRATION_OP); } =20 if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) @@ -6151,14 +6151,14 @@ qemuMigrationJobContinue(virDomainObj *vm) =20 static bool qemuMigrationJobIsActive(virDomainObj *vm, - qemuDomainAsyncJob job) + virDomainAsyncJob job) { qemuDomainObjPrivate *priv =3D vm->privateData; =20 if (priv->job.asyncJob !=3D job) { const char *msg; =20 - if (job =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) + if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) msg =3D _("domain '%s' is not processing incoming migration"); else msg =3D _("domain '%s' is not being migrated"); @@ -6231,7 +6231,7 @@ qemuMigrationDstErrorReport(virQEMUDriver *driver, int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainJobData *jobData) { size_t i; diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index 6b169f73c7..a8afa66119 100644 --- 
a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -210,7 +210,7 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, int fd, virCommand *compressor, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; =20 int @@ -220,7 +220,7 @@ qemuMigrationSrcCancel(virQEMUDriver *driver, int qemuMigrationAnyFetchStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainJobData *jobData, char **error); =20 @@ -248,7 +248,7 @@ int qemuMigrationDstRun(virQEMUDriver *driver, virDomainObj *vm, const char *uri, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 void qemuMigrationAnyPostcopyFailed(virQEMUDriver *driver, @@ -257,5 +257,5 @@ qemuMigrationAnyPostcopyFailed(virQEMUDriver *driver, int qemuMigrationSrcFetchMirrorStats(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainJobData *jobData); diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_par= ams.c index 39f84983bc..df2384b213 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -850,7 +850,7 @@ qemuMigrationParamsApply(virQEMUDriver *driver, if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; =20 - if (asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { + if (asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { if (!virBitmapIsAllClear(migParams->caps)) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Migration capabilities can only be set by " @@ -1165,7 +1165,7 @@ qemuMigrationParamsCheck(virQEMUDriver *driver, qemuMigrationParty party; size_t i; =20 - if (asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) + if (asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) party =3D QEMU_MIGRATION_SOURCE; else party =3D QEMU_MIGRATION_DESTINATION; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 1ed60917ea..189e4671d1 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -462,7 +462,7 @@ qemuProcessFakeReboot(void *opaque) =20 VIR_DEBUG("vm=3D%p", vm); virObjectLock(vm); - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -484,7 +484,7 @@ qemuProcessFakeReboot(void *opaque) =20 if (qemuProcessStartCPUs(driver, vm, reason, - QEMU_ASYNC_JOB_NONE) < 0) { + VIR_ASYNC_JOB_NONE) < 0) { if (virGetLastErrorCode() =3D=3D VIR_ERR_OK) virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("resume operation failed")); @@ -650,7 +650,7 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, * reveal it in domain state nor sent events */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POS= TCOPY) reason =3D VIR_DOMAIN_PAUSED_POSTCOPY; else @@ -1525,7 +1525,7 @@ qemuProcessHandleSpiceMigrated(qemuMonitor *mon G_GNU= C_UNUSED, =20 priv =3D vm->privateData; jobPriv =3D priv->job.privateData; - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { + if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { VIR_DEBUG("got SPICE_MIGRATE_COMPLETED event without a migration j= ob"); goto cleanup; } @@ -1557,7 +1557,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, 
qemuMonitorMigrationStatusTypeToString(status)); =20 priv =3D vm->privateData; - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } @@ -1568,7 +1568,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, virDomainObjBroadcast(vm); =20 if (status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY && - priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT && + priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_MIGRATION) { VIR_DEBUG("Correcting paused state reason for domain %s to %s", @@ -1603,7 +1603,7 @@ qemuProcessHandleMigrationPass(qemuMonitor *mon G_GNU= C_UNUSED, vm, vm->def->name, pass); =20 priv =3D vm->privateData; - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION_PASS event without a migration job"); goto cleanup; } @@ -1636,7 +1636,7 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_GNU= C_UNUSED, priv =3D vm->privateData; jobPriv =3D priv->job.privateData; privJobCurrent =3D priv->job.current->privateData; - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } @@ -1897,7 +1897,7 @@ qemuProcessMonitorLogFree(void *opaque) static int qemuProcessInitMonitor(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { int ret; =20 @@ -2190,7 +2190,7 @@ qemuProcessRefreshChannelVirtioState(virQEMUDriver *d= river, int qemuRefreshVirtioChannelState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; g_autoptr(GHashTable) info =3D NULL; @@ -2546,7 +2546,7 @@ qemuProcessInitCpuAffinity(virDomainObj *vm G_GNUC_UN= USED) static int qemuProcessSetLinkStates(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; virDomainDef *def =3D vm->def; @@ -3210,7 +3210,7 @@ qemuProcessPrepareMonitorChr(virDomainChrSourceDef *m= onConfig, int qemuProcessStartCPUs(virQEMUDriver *driver, virDomainObj *vm, virDomainRunningReason reason, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { int ret =3D -1; qemuDomainObjPrivate *priv =3D vm->privateData; @@ -3261,7 +3261,7 @@ qemuProcessStartCPUs(virQEMUDriver *driver, virDomain= Obj *vm, int qemuProcessStopCPUs(virQEMUDriver *driver, virDomainObj *vm, virDomainPausedReason reason, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { int ret =3D -1; qemuDomainObjPrivate *priv =3D vm->privateData; @@ -3471,7 +3471,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriver *driver, vm->def->name); if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_MIGRATED, - QEMU_ASYNC_JOB_NONE) < 0) { + VIR_ASYNC_JOB_NONE) < 0) { VIR_WARN("Could not resume domain %s", vm->def->name); } break; @@ -3489,7 +3489,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriver *driver, break; } =20 - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_NONE, jobPriv->migParams, job->apiFlags); return 0; } @@ -3579,13 +3579,13 @@ qemuProcessRecoverMigrationOut(virQEMUDriver *drive= r, reason =3D=3D VIR_DOMAIN_PAUSED_UNKNOWN)) { 
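
A note for readers following the switch conversions (qemuProcessRecoverJob just below): the enums keep their *_LAST sentinel, and the recovery switches list every enumerator with no default for real values, so extending virDomainJob or virDomainAsyncJob later makes -Wswitch point at each switch that still misses the new value. A standalone sketch of that pattern, using made-up names rather than anything from this patch:

    #include <stdio.h>

    /* illustrative stand-in for virDomainJob; EX_JOB_LAST is the
     * sentinel and never an actual job */
    typedef enum {
        EX_JOB_NONE = 0,
        EX_JOB_QUERY,
        EX_JOB_MODIFY,
        EX_JOB_LAST
    } exJob;

    static const char *
    exJobName(exJob job)
    {
        /* no default: every enumerator is spelled out, so -Wswitch
         * flags this function if exJob grows a new value */
        switch (job) {
        case EX_JOB_QUERY:
            return "query";
        case EX_JOB_MODIFY:
            return "modify";
        case EX_JOB_NONE:
        case EX_JOB_LAST:
            break;
        }
        return "none";
    }

    int main(void)
    {
        printf("%s\n", exJobName(EX_JOB_MODIFY)); /* prints "modify" */
        return 0;
    }
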
if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_MIGRATION_CANCELED, - QEMU_ASYNC_JOB_NONE) < 0) { + VIR_ASYNC_JOB_NONE) < 0) { VIR_WARN("Could not resume domain %s", vm->def->name); } } } =20 - qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE, + qemuMigrationParamsReset(driver, vm, VIR_ASYNC_JOB_NONE, jobPriv->migParams, job->apiFlags); return 0; } @@ -3604,21 +3604,21 @@ qemuProcessRecoverJob(virQEMUDriver *driver, state =3D virDomainObjGetState(vm, &reason); =20 switch (job->asyncJob) { - case QEMU_ASYNC_JOB_MIGRATION_OUT: + case VIR_ASYNC_JOB_MIGRATION_OUT: if (qemuProcessRecoverMigrationOut(driver, vm, job, state, reason, stopFlags) < 0) return -1; break; =20 - case QEMU_ASYNC_JOB_MIGRATION_IN: + case VIR_ASYNC_JOB_MIGRATION_IN: if (qemuProcessRecoverMigrationIn(driver, vm, job, state, reason) < 0) return -1; break; =20 - case QEMU_ASYNC_JOB_SAVE: - case QEMU_ASYNC_JOB_DUMP: - case QEMU_ASYNC_JOB_SNAPSHOT: + case VIR_ASYNC_JOB_SAVE: + case VIR_ASYNC_JOB_DUMP: + case VIR_ASYNC_JOB_SNAPSHOT: qemuDomainObjEnterMonitor(driver, vm); ignore_value(qemuMonitorMigrateCancel(priv->mon)); qemuDomainObjExitMonitor(vm); @@ -3627,39 +3627,39 @@ qemuProcessRecoverJob(virQEMUDriver *driver, * recovering an async job, this function is run at startup * and must resume things using sync monitor connections. */ if (state =3D=3D VIR_DOMAIN_PAUSED && - ((job->asyncJob =3D=3D QEMU_ASYNC_JOB_DUMP && + ((job->asyncJob =3D=3D VIR_ASYNC_JOB_DUMP && reason =3D=3D VIR_DOMAIN_PAUSED_DUMP) || - (job->asyncJob =3D=3D QEMU_ASYNC_JOB_SAVE && + (job->asyncJob =3D=3D VIR_ASYNC_JOB_SAVE && reason =3D=3D VIR_DOMAIN_PAUSED_SAVE) || - (job->asyncJob =3D=3D QEMU_ASYNC_JOB_SNAPSHOT && + (job->asyncJob =3D=3D VIR_ASYNC_JOB_SNAPSHOT && (reason =3D=3D VIR_DOMAIN_PAUSED_SNAPSHOT || reason =3D=3D VIR_DOMAIN_PAUSED_MIGRATION)) || reason =3D=3D VIR_DOMAIN_PAUSED_UNKNOWN)) { if (qemuProcessStartCPUs(driver, vm, VIR_DOMAIN_RUNNING_SAVE_CANCELED, - QEMU_ASYNC_JOB_NONE) < 0) { + VIR_ASYNC_JOB_NONE) < 0) { VIR_WARN("Could not resume domain '%s' after migration to= file", vm->def->name); } } break; =20 - case QEMU_ASYNC_JOB_START: + case VIR_ASYNC_JOB_START: /* Already handled in VIR_DOMAIN_PAUSED_STARTING_UP check. */ break; =20 - case QEMU_ASYNC_JOB_BACKUP: + case VIR_ASYNC_JOB_BACKUP: ignore_value(virTimeMillisNow(&now)); =20 /* Restore the config of the async job which is not persisted */ priv->job.jobsQueued++; - priv->job.asyncJob =3D QEMU_ASYNC_JOB_BACKUP; + priv->job.asyncJob =3D VIR_ASYNC_JOB_BACKUP; priv->job.asyncOwnerAPI =3D g_strdup(virThreadJobGet()); priv->job.asyncStarted =3D now; =20 qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | - JOB_MASK(QEMU_JOB_SUSPEND) | - JOB_MASK(QEMU_JOB_MODIFY))); + JOB_MASK(VIR_JOB_SUSPEND) | + JOB_MASK(VIR_JOB_MODIFY))); =20 /* We reset the job parameters for backup so that the job will look * active. This is possible because we are able to recover the sta= te @@ -3673,8 +3673,8 @@ qemuProcessRecoverJob(virQEMUDriver *driver, priv->job.current->started =3D now; break; =20 - case QEMU_ASYNC_JOB_NONE: - case QEMU_ASYNC_JOB_LAST: + case VIR_ASYNC_JOB_NONE: + case VIR_ASYNC_JOB_LAST: break; } =20 @@ -3686,32 +3686,32 @@ qemuProcessRecoverJob(virQEMUDriver *driver, * for the job to be properly tracked in domain state XML. 
*/ switch (job->active) { - case QEMU_JOB_QUERY: + case VIR_JOB_QUERY: /* harmless */ break; =20 - case QEMU_JOB_DESTROY: + case VIR_JOB_DESTROY: VIR_DEBUG("Domain %s should have already been destroyed", vm->def->name); return -1; =20 - case QEMU_JOB_SUSPEND: + case VIR_JOB_SUSPEND: /* mostly harmless */ break; =20 - case QEMU_JOB_MODIFY: + case VIR_JOB_MODIFY: /* XXX depending on the command we may be in an inconsistent state= and * we should probably fall back to "monitor error" state and refus= e to */ break; =20 - case QEMU_JOB_MIGRATION_OP: - case QEMU_JOB_ABORT: - case QEMU_JOB_ASYNC: - case QEMU_JOB_ASYNC_NESTED: + case VIR_JOB_MIGRATION_OP: + case VIR_JOB_ABORT: + case VIR_JOB_ASYNC: + case VIR_JOB_ASYNC_NESTED: /* async job was already handled above */ - case QEMU_JOB_NONE: - case QEMU_JOB_LAST: + case VIR_JOB_NONE: + case VIR_JOB_LAST: break; } =20 @@ -3727,7 +3727,7 @@ qemuProcessUpdateDevices(virQEMUDriver *driver, g_auto(GStrv) old =3D g_steal_pointer(&priv->qemuDevices); GStrv tmp; =20 - if (qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuDomainUpdateDeviceList(driver, vm, VIR_ASYNC_JOB_NONE) < 0) return -1; =20 if (!old) @@ -4250,7 +4250,7 @@ qemuProcessGetVCPUQOMPath(virDomainObj *vm) static int qemuProcessFetchGuestCPU(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virCPUData **enabled, virCPUData **disabled) { @@ -4358,7 +4358,7 @@ qemuProcessUpdateLiveGuestCPU(virDomainObj *vm, static int qemuProcessUpdateAndVerifyCPU(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(virCPUData) cpu =3D NULL; g_autoptr(virCPUData) disabled =3D NULL; @@ -4379,7 +4379,7 @@ qemuProcessUpdateAndVerifyCPU(virQEMUDriver *driver, static int qemuProcessFetchCPUDefinitions(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, virDomainCapsCPUModels **cpuModels) { qemuDomainObjPrivate *priv =3D vm->privateData; @@ -4403,7 +4403,7 @@ qemuProcessFetchCPUDefinitions(virQEMUDriver *driver, static int qemuProcessUpdateCPU(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(virCPUData) cpu =3D NULL; g_autoptr(virCPUData) disabled =3D NULL; @@ -4613,9 +4613,9 @@ qemuProcessIncomingDefNew(virQEMUCaps *qemuCaps, =20 =20 /* - * This function starts a new QEMU_ASYNC_JOB_START async job. The user is + * This function starts a new VIR_ASYNC_JOB_START async job. The user is * responsible for calling qemuProcessEndJob to stop this job and for pass= ing - * QEMU_ASYNC_JOB_START as @asyncJob argument to any function requiring th= is + * VIR_ASYNC_JOB_START as @asyncJob argument to any function requiring this * parameter between qemuProcessBeginJob and qemuProcessEndJob. 
*/ int @@ -4624,11 +4624,11 @@ qemuProcessBeginJob(virQEMUDriver *driver, virDomainJobOperation operation, unsigned long apiFlags) { - if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_START, + if (qemuDomainObjBeginAsyncJob(driver, vm, VIR_ASYNC_JOB_START, operation, apiFlags) < 0) return -1; =20 - qemuDomainObjSetAsyncJobMask(vm, QEMU_JOB_NONE); + qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE); return 0; } =20 @@ -5083,7 +5083,7 @@ qemuProcessSetupRawIO(virDomainObj *vm, static int qemuProcessSetupBalloon(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { unsigned long long balloon =3D vm->def->mem.cur_balloon; qemuDomainObjPrivate *priv =3D vm->privateData; @@ -5561,7 +5561,7 @@ int qemuProcessInit(virQEMUDriver *driver, virDomainObj *vm, virCPUDef *updatedCPU, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool migration, unsigned int flags) { @@ -5952,7 +5952,7 @@ qemuProcessVcpusSortOrder(const void *a, static int qemuProcessSetupHotpluggableVcpus(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { unsigned int maxvcpus =3D virDomainDefGetVcpusMax(vm->def); qemuDomainObjPrivate *priv =3D vm->privateData; @@ -7124,7 +7124,7 @@ qemuProcessGenID(virDomainObj *vm, static int qemuProcessSetupDiskThrottlingBlockdev(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; size_t i; @@ -7201,7 +7201,7 @@ qemuProcessEnablePerf(virDomainObj *vm) =20 static int qemuProcessSetupDisksTransientSnapshot(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(qemuSnapshotDiskContext) snapctxt =3D NULL; g_autoptr(GHashTable) blockNamedNodeData =3D NULL; @@ -7252,7 +7252,7 @@ qemuProcessSetupDisksTransientSnapshot(virDomainObj *= vm, =20 static int qemuProcessSetupDisksTransientHotplug(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; bool hasHotpluggedDisk =3D false; @@ -7292,7 +7292,7 @@ qemuProcessSetupDisksTransientHotplug(virDomainObj *v= m, =20 static int qemuProcessSetupDisksTransient(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; =20 @@ -7311,7 +7311,7 @@ qemuProcessSetupDisksTransient(virDomainObj *vm, =20 static int qemuProcessSetupLifecycleActions(virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int rc; @@ -7358,7 +7358,7 @@ int qemuProcessLaunch(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, qemuProcessIncomingDef *incoming, virDomainMomentObj *snapshot, virNetDevVPortProfileOp vmop, @@ -7721,7 +7721,7 @@ qemuProcessLaunch(virConnectPtr conn, int qemuProcessRefreshState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; =20 @@ -7756,7 +7756,7 @@ qemuProcessRefreshState(virQEMUDriver *driver, int qemuProcessFinishStartup(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool startCPUs, virDomainPausedReason pausedReason) { @@ -7794,7 +7794,7 @@ qemuProcessStart(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, virCPUDef *updatedCPU, - qemuDomainAsyncJob asyncJob, + 
virDomainAsyncJob asyncJob, const char *migrateFrom, int migrateFd, const char *migratePath, @@ -7814,7 +7814,7 @@ qemuProcessStart(virConnectPtr conn, "migrateFrom=3D%s migrateFd=3D%d migratePath=3D%s " "snapshot=3D%p vmop=3D%d flags=3D0x%x", conn, driver, vm, vm->def->name, vm->def->id, - qemuDomainAsyncJobTypeToString(asyncJob), + virDomainAsyncJobTypeToString(asyncJob), NULLSTR(migrateFrom), migrateFd, NULLSTR(migratePath), snapshot, vmop, flags); =20 @@ -7922,7 +7922,7 @@ qemuProcessCreatePretendCmdPrepare(virQEMUDriver *dri= ver, if (!migrateURI) flags |=3D VIR_QEMU_PROCESS_START_NEW; =20 - if (qemuProcessInit(driver, vm, NULL, QEMU_ASYNC_JOB_NONE, + if (qemuProcessInit(driver, vm, NULL, VIR_ASYNC_JOB_NONE, !!migrateURI, flags) < 0) return -1; =20 @@ -7993,7 +7993,7 @@ qemuProcessKill(virDomainObj *vm, unsigned int flags) int qemuProcessBeginStopJob(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJob job, + virDomainJob job, bool forceKill) { qemuDomainObjPrivate *priv =3D vm->privateData; @@ -8026,7 +8026,7 @@ qemuProcessBeginStopJob(virQEMUDriver *driver, void qemuProcessStop(virQEMUDriver *driver, virDomainObj *vm, virDomainShutoffReason reason, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, unsigned int flags) { int ret; @@ -8045,21 +8045,21 @@ void qemuProcessStop(virQEMUDriver *driver, vm, vm->def->name, vm->def->id, (long long)vm->pid, virDomainShutoffReasonTypeToString(reason), - qemuDomainAsyncJobTypeToString(asyncJob), + virDomainAsyncJobTypeToString(asyncJob), flags); =20 /* This method is routinely used in clean up paths. Disable error * reporting so we don't squash a legit error. */ virErrorPreserveLast(&orig_err); =20 - if (asyncJob !=3D QEMU_ASYNC_JOB_NONE) { + if (asyncJob !=3D VIR_ASYNC_JOB_NONE) { if (qemuDomainObjBeginNestedJob(driver, vm, asyncJob) < 0) goto cleanup; - } else if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_NONE && + } else if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_NONE && priv->job.asyncOwner =3D=3D virThreadSelfID() && - priv->job.active !=3D QEMU_JOB_ASYNC_NESTED) { + priv->job.active !=3D VIR_JOB_ASYNC_NESTED) { VIR_WARN("qemuProcessStop called without a nested job (async=3D%s)= ", - qemuDomainAsyncJobTypeToString(asyncJob)); + virDomainAsyncJobTypeToString(asyncJob)); } =20 if (!virDomainObjIsActive(vm)) { @@ -8368,7 +8368,7 @@ void qemuProcessStop(virQEMUDriver *driver, virDomainObjRemoveTransientDef(vm); =20 endjob: - if (asyncJob !=3D QEMU_ASYNC_JOB_NONE) + if (asyncJob !=3D VIR_ASYNC_JOB_NONE) qemuDomainObjEndJob(vm); =20 cleanup: @@ -8388,7 +8388,7 @@ qemuProcessAutoDestroy(virDomainObj *dom, =20 VIR_DEBUG("vm=3D%s, conn=3D%p", dom->def->name, conn); =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) + if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; =20 if (priv->job.asyncJob) { @@ -8399,11 +8399,11 @@ qemuProcessAutoDestroy(virDomainObj *dom, =20 VIR_DEBUG("Killing domain"); =20 - if (qemuProcessBeginStopJob(driver, dom, QEMU_JOB_DESTROY, true) < 0) + if (qemuProcessBeginStopJob(driver, dom, VIR_JOB_DESTROY, true) < 0) return; =20 qemuProcessStop(driver, dom, VIR_DOMAIN_SHUTOFF_DESTROYED, - QEMU_ASYNC_JOB_NONE, stopFlags); + VIR_ASYNC_JOB_NONE, stopFlags); =20 virDomainAuditStop(dom, "destroyed"); event =3D virDomainEventLifecycleNewFromObj(dom, @@ -8447,7 +8447,7 @@ bool qemuProcessAutoDestroyActive(virQEMUDriver *driv= er, int qemuProcessRefreshDisks(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { 
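
The qemuProcessStop() hunks above keep the existing discipline intact under the new names: a caller that is inside an async job passes that job, letting qemuProcessStop() open a VIR_JOB_ASYNC_NESTED job around its monitor access, while a caller with no async job passes VIR_ASYNC_JOB_NONE and must already hold a plain job itself (otherwise the VIR_WARN above fires). A distilled fragment of the two legal call shapes, assuming the internal qemu_process.h/qemu_domainjob.h APIs and with driver, vm and stopFlags in scope; an illustration, not part of the patch:

    /* either: no async job, take a plain job first, as
     * qemuProcessAutoDestroy does */
    if (qemuProcessBeginStopJob(driver, vm, VIR_JOB_DESTROY, true) < 0)
        return;
    qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED,
                    VIR_ASYNC_JOB_NONE, stopFlags);
    qemuDomainObjEndJob(vm);

    /* or: already inside an async job, hand it over so the nested
     * job gets created, as the migration paths do */
    qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
                    VIR_ASYNC_JOB_MIGRATION_IN, stopFlags);
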
qemuDomainObjPrivate *priv =3D vm->privateData; bool blockdev =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV); @@ -8498,7 +8498,7 @@ qemuProcessRefreshDisks(virQEMUDriver *driver, static int qemuProcessRefreshCPUMigratability(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; virDomainDef *def =3D vm->def; @@ -8559,7 +8559,7 @@ qemuProcessRefreshCPU(virQEMUDriver *driver, if (!vm->def->cpu) return 0; =20 - if (qemuProcessRefreshCPUMigratability(driver, vm, QEMU_ASYNC_JOB_NONE= ) < 0) + if (qemuProcessRefreshCPUMigratability(driver, vm, VIR_ASYNC_JOB_NONE)= < 0) return -1; =20 if (!(host =3D virQEMUDriverGetHostCPU(driver))) { @@ -8594,7 +8594,7 @@ qemuProcessRefreshCPU(virQEMUDriver *driver, if (virCPUUpdate(vm->def->os.arch, vm->def->cpu, cpu) < 0) return -1; =20 - if (qemuProcessUpdateCPU(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuProcessUpdateCPU(driver, vm, VIR_ASYNC_JOB_NONE) < 0) return -1; } else if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_E= XPANSION)) { /* We only try to fix CPUs when the libvirt/QEMU combo used to sta= rt @@ -8755,12 +8755,12 @@ qemuProcessReconnect(void *opaque) priv =3D obj->privateData; =20 qemuDomainObjRestoreJob(obj, &oldjob); - if (oldjob.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_IN) + if (oldjob.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; - if (oldjob.asyncJob =3D=3D QEMU_ASYNC_JOB_BACKUP && priv->backup) + if (oldjob.asyncJob =3D=3D VIR_ASYNC_JOB_BACKUP && priv->backup) priv->backup->apiFlags =3D oldjob.apiFlags; =20 - if (qemuDomainObjBeginJob(driver, obj, QEMU_JOB_MODIFY) < 0) + if (qemuDomainObjBeginJob(driver, obj, VIR_JOB_MODIFY) < 0) goto error; jobStarted =3D true; =20 @@ -8792,7 +8792,7 @@ qemuProcessReconnect(void *opaque) tryMonReconn =3D true; =20 /* XXX check PID liveliness & EXE path */ - if (qemuConnectMonitor(driver, obj, QEMU_ASYNC_JOB_NONE, retry, NULL) = < 0) + if (qemuConnectMonitor(driver, obj, VIR_ASYNC_JOB_NONE, retry, NULL) <= 0) goto error; =20 priv->machineName =3D qemuDomainGetMachineName(obj); @@ -8887,7 +8887,7 @@ qemuProcessReconnect(void *opaque) ignore_value(qemuSecurityCheckAllLabel(driver->securityManager, obj->def)); =20 - if (qemuDomainRefreshVcpuInfo(driver, obj, QEMU_ASYNC_JOB_NONE, true) = < 0) + if (qemuDomainRefreshVcpuInfo(driver, obj, VIR_ASYNC_JOB_NONE, true) <= 0) goto error; =20 qemuDomainVcpuPersistOrder(obj->def); @@ -8895,10 +8895,10 @@ qemuProcessReconnect(void *opaque) if (qemuProcessRefreshCPU(driver, obj) < 0) goto error; =20 - if (qemuDomainUpdateMemoryDeviceInfo(driver, obj, QEMU_ASYNC_JOB_NONE)= < 0) + if (qemuDomainUpdateMemoryDeviceInfo(driver, obj, VIR_ASYNC_JOB_NONE) = < 0) goto error; =20 - if (qemuProcessDetectIOThreadPIDs(driver, obj, QEMU_ASYNC_JOB_NONE) < = 0) + if (qemuProcessDetectIOThreadPIDs(driver, obj, VIR_ASYNC_JOB_NONE) < 0) goto error; =20 if (qemuSecurityReserveLabel(driver->securityManager, obj->def, obj->p= id) < 0) @@ -8908,7 +8908,7 @@ qemuProcessReconnect(void *opaque) =20 qemuProcessFiltersInstantiate(obj->def); =20 - if (qemuProcessRefreshDisks(driver, obj, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuProcessRefreshDisks(driver, obj, VIR_ASYNC_JOB_NONE) < 0) goto error; =20 /* At this point we've already checked that the startup of the VM was @@ -8922,16 +8922,16 @@ qemuProcessReconnect(void *opaque) } =20 if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV) && - qemuBlockNodeNamesDetect(driver, obj, 
QEMU_ASYNC_JOB_NONE) < 0) + qemuBlockNodeNamesDetect(driver, obj, VIR_ASYNC_JOB_NONE) < 0) goto error; =20 - if (qemuRefreshVirtioChannelState(driver, obj, QEMU_ASYNC_JOB_NONE) < = 0) + if (qemuRefreshVirtioChannelState(driver, obj, VIR_ASYNC_JOB_NONE) < 0) goto error; =20 /* If querying of guest's RTC failed, report error, but do not kill th= e domain. */ qemuRefreshRTC(driver, obj); =20 - if (qemuProcessRefreshBalloonState(driver, obj, QEMU_ASYNC_JOB_NONE) <= 0) + if (qemuProcessRefreshBalloonState(driver, obj, VIR_ASYNC_JOB_NONE) < = 0) goto error; =20 if (qemuProcessRecoverJob(driver, obj, &oldjob, &stopFlags) < 0) @@ -9030,7 +9030,7 @@ qemuProcessReconnect(void *opaque) * thread didn't have a chance to start playing with the domain yet * (it's all we can do anyway). */ - qemuProcessStop(driver, obj, state, QEMU_ASYNC_JOB_NONE, stopFlags= ); + qemuProcessStop(driver, obj, state, VIR_ASYNC_JOB_NONE, stopFlags); } goto cleanup; } @@ -9072,7 +9072,7 @@ qemuProcessReconnectHelper(virDomainObj *obj, * object. */ qemuProcessStop(src->driver, obj, VIR_DOMAIN_SHUTOFF_FAILED, - QEMU_ASYNC_JOB_NONE, 0); + VIR_ASYNC_JOB_NONE, 0); qemuDomainRemoveInactiveJobLocked(src->driver, obj); =20 virDomainObjEndAPI(&obj); diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h index 7e6f9f20e5..85c197714a 100644 --- a/src/qemu/qemu_process.h +++ b/src/qemu/qemu_process.h @@ -32,11 +32,11 @@ int qemuProcessPrepareMonitorChr(virDomainChrSourceDef = *monConfig, int qemuProcessStartCPUs(virQEMUDriver *driver, virDomainObj *vm, virDomainRunningReason reason, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); int qemuProcessStopCPUs(virQEMUDriver *driver, virDomainObj *vm, virDomainPausedReason reason, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuProcessBuildDestroyMemoryPaths(virQEMUDriver *driver, virDomainObj *vm, @@ -85,7 +85,7 @@ int qemuProcessStart(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, virCPUDef *updatedCPU, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, const char *migrateFrom, int stdin_fd, const char *stdin_path, @@ -107,7 +107,7 @@ virCommand *qemuProcessCreatePretendCmdBuild(virQEMUDri= ver *driver, int qemuProcessInit(virQEMUDriver *driver, virDomainObj *vm, virCPUDef *updatedCPU, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool migration, unsigned int flags); =20 @@ -132,7 +132,7 @@ int qemuProcessPrepareHost(virQEMUDriver *driver, int qemuProcessLaunch(virConnectPtr conn, virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, qemuProcessIncomingDef *incoming, virDomainMomentObj *snapshot, virNetDevVPortProfileOp vmop, @@ -140,13 +140,13 @@ int qemuProcessLaunch(virConnectPtr conn, =20 int qemuProcessFinishStartup(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, bool startCPUs, virDomainPausedReason pausedReason); =20 int qemuProcessRefreshState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 typedef enum { VIR_QEMU_PROCESS_STOP_MIGRATED =3D 1 << 0, @@ -155,12 +155,12 @@ typedef enum { =20 int qemuProcessBeginStopJob(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainJob job, + virDomainJob job, bool forceKill); void qemuProcessStop(virQEMUDriver *driver, virDomainObj *vm, virDomainShutoffReason reason, - qemuDomainAsyncJob asyncJob, + virDomainAsyncJob asyncJob, unsigned int flags); =20 typedef enum { @@ -200,7 +200,7 @@ int 
qemuProcessSetupIOThread(virDomainObj *vm, =20 int qemuRefreshVirtioChannelState(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuProcessRefreshBalloonState(virQEMUDriver *driver, virDomainObj *vm, @@ -208,7 +208,7 @@ int qemuProcessRefreshBalloonState(virQEMUDriver *drive= r, =20 int qemuProcessRefreshDisks(virQEMUDriver *driver, virDomainObj *vm, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int qemuProcessStartManagedPRDaemon(virDomainObj *vm) G_GNUC_NO_INLINE; =20 diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index c0139041eb..4fd4c5cfcd 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -259,7 +259,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, virQEMUSaveData *data, virCommand *compressor, unsigned int flags, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); bool needUnlink =3D false; @@ -578,7 +578,7 @@ qemuSaveImageStartVM(virConnectPtr conn, const char *path, bool start_paused, bool reset_nvram, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; int ret =3D -1; diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index a0daa4ad2b..391cd55ed0 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -68,7 +68,7 @@ qemuSaveImageStartVM(virConnectPtr conn, const char *path, bool start_paused, bool reset_nvram, - qemuDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob) ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6); =20 int @@ -97,7 +97,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, virQEMUSaveData *data, virCommand *compressor, unsigned int flags, - qemuDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob); =20 int virQEMUSaveDataWrite(virQEMUSaveData *data, diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 5333730df1..185fcb04a2 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -304,7 +304,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver, * domain. Thus we stop and start CPUs ourselves. 
      */
     if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
-                            QEMU_ASYNC_JOB_SNAPSHOT) < 0)
+                            VIR_ASYNC_JOB_SNAPSHOT) < 0)
         goto cleanup;

         resume = true;
@@ -316,7 +316,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
     }

     if (qemuDomainObjEnterMonitorAsync(driver, vm,
-                                       QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
+                                       VIR_ASYNC_JOB_SNAPSHOT) < 0) {
         resume = false;
         goto cleanup;
     }
@@ -333,7 +333,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
         event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
                                                   VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
         qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
-                        QEMU_ASYNC_JOB_SNAPSHOT, 0);
+                        VIR_ASYNC_JOB_SNAPSHOT, 0);
         virDomainAuditStop(vm, "from-snapshot");
         resume = false;
     }
@@ -342,7 +342,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
     if (resume && virDomainObjIsActive(vm) &&
         qemuProcessStartCPUs(driver, vm,
                              VIR_DOMAIN_RUNNING_UNPAUSED,
-                             QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
+                             VIR_ASYNC_JOB_SNAPSHOT) < 0) {
         event = virDomainEventLifecycleNewFromObj(vm,
                                                   VIR_DOMAIN_EVENT_SUSPENDED,
                                                   VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
@@ -863,7 +863,7 @@ static void
 qemuSnapshotDiskCleanup(qemuSnapshotDiskData *data,
                         size_t ndata,
                         virDomainObj *vm,
-                        qemuDomainAsyncJob asyncJob)
+                        virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virQEMUDriver *driver = priv->driver;
@@ -922,7 +922,7 @@ struct _qemuSnapshotDiskContext {

     /* needed for automatic cleanup of 'dd' */
     virDomainObj *vm;
-    qemuDomainAsyncJob asyncJob;
+    virDomainAsyncJob asyncJob;
 };

 typedef struct _qemuSnapshotDiskContext qemuSnapshotDiskContext;
@@ -931,7 +931,7 @@ typedef struct _qemuSnapshotDiskContext qemuSnapshotDiskContext;
 qemuSnapshotDiskContext *
 qemuSnapshotDiskContextNew(size_t ndisks,
                            virDomainObj *vm,
-                           qemuDomainAsyncJob asyncJob)
+                           virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virQEMUDriver *driver = priv->driver;
@@ -1008,7 +1008,7 @@ qemuSnapshotDiskPrepareOneBlockdev(virQEMUDriver *driver,
                                    virQEMUDriverConfig *cfg,
                                    bool reuse,
                                    GHashTable *blockNamedNodeData,
-                                   qemuDomainAsyncJob asyncJob)
+                                   virDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     g_autoptr(virStorageSource) terminator = NULL;
@@ -1165,7 +1165,7 @@ qemuSnapshotDiskPrepareActiveExternal(virDomainObj *vm,
                                       virDomainMomentObj *snap,
                                       bool reuse,
                                       GHashTable *blockNamedNodeData,
-                                      qemuDomainAsyncJob asyncJob)
+                                      virDomainAsyncJob asyncJob)
 {
     g_autoptr(qemuSnapshotDiskContext) snapctxt = NULL;
     size_t i;
@@ -1319,7 +1319,7 @@ qemuSnapshotCreateActiveExternalDisks(virDomainObj *vm,
                                       virDomainMomentObj *snap,
                                       GHashTable *blockNamedNodeData,
                                       unsigned int flags,
-                                      qemuDomainAsyncJob asyncJob)
+                                      virDomainAsyncJob asyncJob)
 {
     bool reuse = (flags & VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) != 0;
     g_autoptr(qemuSnapshotDiskContext) snapctxt = NULL;
@@ -1371,7 +1371,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE) {
         int frozen;

-        if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0)
+        if (qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) < 0)
             goto cleanup;

         if (virDomainObjCheckActive(vm) < 0) {
@@ -1405,7 +1405,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
      * when the user wants to manually snapshot some disks */
     if (((memory || has_manual) && !(flags & VIR_DOMAIN_SNAPSHOT_CREATE_LIVE))) {
         if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SNAPSHOT,
-                                QEMU_ASYNC_JOB_SNAPSHOT) < 0)
+                                VIR_ASYNC_JOB_SNAPSHOT) < 0)
             goto cleanup;

         if (!virDomainObjIsActive(vm)) {
@@ -1420,7 +1420,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
      * migration step as qemu deactivates bitmaps after migration so the result
      * would be wrong */
     if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV) &&
-        !(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_SNAPSHOT)))
+        !(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_SNAPSHOT)))
         goto cleanup;

     /* do the memory snapshot if necessary */
@@ -1434,8 +1434,8 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,

         /* allow the migration job to be cancelled or the domain to be paused */
         qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
-                                          JOB_MASK(QEMU_JOB_SUSPEND) |
-                                          JOB_MASK(QEMU_JOB_MIGRATION_OP)));
+                                          JOB_MASK(VIR_JOB_SUSPEND) |
+                                          JOB_MASK(VIR_JOB_MIGRATION_OP)));

         if ((compressed = qemuSaveImageGetCompressionProgram(cfg->snapshotImageFormat,
                                                              &compressor,
@@ -1458,7 +1458,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,

         if ((ret = qemuSaveImageCreate(driver, vm, snapdef->memorysnapshotfile,
                                        data, compressor, 0,
-                                       QEMU_ASYNC_JOB_SNAPSHOT)) < 0)
+                                       VIR_ASYNC_JOB_SNAPSHOT)) < 0)
             goto cleanup;

         /* the memory image was created, remove it on errors */
@@ -1473,7 +1473,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,

     if ((ret = qemuSnapshotCreateActiveExternalDisks(vm, snap,
                                                      blockNamedNodeData, flags,
-                                                     QEMU_ASYNC_JOB_SNAPSHOT)) < 0)
+                                                     VIR_ASYNC_JOB_SNAPSHOT)) < 0)
         goto cleanup;

     /* the snapshot is complete now */
@@ -1481,7 +1481,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
         event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
                                                   VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
         qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
-                        QEMU_ASYNC_JOB_SNAPSHOT, 0);
+                        VIR_ASYNC_JOB_SNAPSHOT, 0);
         virDomainAuditStop(vm, "from-snapshot");
         resume = false;
         thaw = false;
@@ -1503,7 +1503,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     if (resume && virDomainObjIsActive(vm) &&
         qemuProcessStartCPUs(driver, vm,
                              VIR_DOMAIN_RUNNING_UNPAUSED,
-                             QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
+                             VIR_ASYNC_JOB_SNAPSHOT) < 0) {
         event = virDomainEventLifecycleNewFromObj(vm,
                                                   VIR_DOMAIN_EVENT_SUSPENDED,
                                                   VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
@@ -1517,7 +1517,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     }

     if (thaw &&
-        qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) >= 0 &&
+        qemuDomainObjBeginAgentJob(driver, vm, VIR_AGENT_JOB_MODIFY) >= 0 &&
         virDomainObjIsActive(vm)) {
         /* report error only on an otherwise successful snapshot */
         if (qemuSnapshotFSThaw(vm, ret == 0) < 0)
@@ -1889,11 +1889,11 @@ qemuSnapshotCreateXML(virDomainPtr domain,
      * a regular job, so we need to set the job mask to disallow query as
      * 'savevm' blocks the monitor. External snapshot will then modify the
      * job mask appropriately.
      */
-    if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_SNAPSHOT,
+    if (qemuDomainObjBeginAsyncJob(driver, vm, VIR_ASYNC_JOB_SNAPSHOT,
                                    VIR_DOMAIN_JOB_OPERATION_SNAPSHOT, flags) < 0)
         return NULL;

-    qemuDomainObjSetAsyncJobMask(vm, QEMU_JOB_NONE);
+    qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE);

     if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE) {
         snapshot = qemuSnapshotRedefine(vm, domain, def, driver, cfg, flags);
@@ -2067,7 +2067,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
         /* Transitions 5, 6, 8, 9 */
         qemuProcessStop(driver, vm,
                         VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
-                        QEMU_ASYNC_JOB_START, 0);
+                        VIR_ASYNC_JOB_START, 0);
         virDomainAuditStop(vm, "from-snapshot");
         detail = VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT;
         event = virDomainEventLifecycleNewFromObj(vm,
@@ -2092,7 +2092,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,

     rc = qemuProcessStart(snapshot->domain->conn, driver, vm,
                           cookie ? cookie->cpu : NULL,
-                          QEMU_ASYNC_JOB_START, NULL, -1, NULL, snap,
+                          VIR_ASYNC_JOB_START, NULL, -1, NULL, snap,
                           VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                           start_flags);
     virDomainAuditStart(vm, "from-snapshot", rc >= 0);
@@ -2125,7 +2125,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
         }
         rc = qemuProcessStartCPUs(driver, vm,
                                   VIR_DOMAIN_RUNNING_FROM_SNAPSHOT,
-                                  QEMU_ASYNC_JOB_START);
+                                  VIR_ASYNC_JOB_START);
         if (rc < 0)
             return -1;
     }
@@ -2188,7 +2188,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm,
     if (virDomainObjIsActive(vm)) {
         /* Transitions 4, 7 */
         qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
-                        QEMU_ASYNC_JOB_START, 0);
+                        VIR_ASYNC_JOB_START, 0);
         virDomainAuditStop(vm, "from-snapshot");
         detail = VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT;
         event = virDomainEventLifecycleNewFromObj(vm,
@@ -2215,7 +2215,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm,
     start_flags |= paused ? VIR_QEMU_PROCESS_START_PAUSED : 0;

     rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL,
-                          QEMU_ASYNC_JOB_START, NULL, -1, NULL, NULL,
+                          VIR_ASYNC_JOB_START, NULL, -1, NULL, NULL,
                           VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                           start_flags);
     virDomainAuditStart(vm, "from-snapshot", rc >= 0);
@@ -2394,7 +2394,7 @@ qemuSnapshotDelete(virDomainObj *vm,
                   VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY |
                   VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY, -1);

-    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+    if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         return -1;

     if (!(snap = qemuSnapObjFromSnapshot(vm, snapshot)))
diff --git a/src/qemu/qemu_snapshot.h b/src/qemu/qemu_snapshot.h
index ad2bdb1114..0cc38c0039 100644
--- a/src/qemu/qemu_snapshot.h
+++ b/src/qemu/qemu_snapshot.h
@@ -61,7 +61,7 @@ typedef struct _qemuSnapshotDiskContext qemuSnapshotDiskContext;
 qemuSnapshotDiskContext *
 qemuSnapshotDiskContextNew(size_t ndisks,
                            virDomainObj *vm,
-                           qemuDomainAsyncJob asyncJob);
+                           virDomainAsyncJob asyncJob);

 void
 qemuSnapshotDiskContextCleanup(qemuSnapshotDiskContext *snapctxt);
-- 
2.35.1
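The hunks above only retype public entry points: qemuProcessStart(),
qemuProcessStop() and friends now take the hypervisor-agnostic
virDomainAsyncJob instead of the qemu-only qemuDomainAsyncJob. The value
itself is just a tag that a top-level operation picks once and threads
unchanged through every helper it calls. A minimal self-contained model of
that pattern (plain C; the demo* names are hypothetical stand-ins, not
libvirt API):

    #include <stdio.h>

    /* Hypothetical stand-in for virDomainAsyncJob. */
    typedef enum {
        DEMO_ASYNC_JOB_NONE = 0,
        DEMO_ASYNC_JOB_SNAPSHOT,
        DEMO_ASYNC_JOB_START,
    } DemoAsyncJob;

    /* Low-level helper: receives the tag chosen by its caller, so it
     * can log (or validate) which asynchronous operation it runs in. */
    static int
    demoStopCPUs(DemoAsyncJob asyncJob)
    {
        printf("stopping CPUs inside async job %d\n", asyncJob);
        return 0;
    }

    /* Top-level operation: picks the tag once, passes it down unchanged. */
    static int
    demoCreateSnapshot(void)
    {
        DemoAsyncJob job = DEMO_ASYNC_JOB_SNAPSHOT;
        return demoStopCPUs(job);
    }

    int main(void)
    {
        return demoCreateSnapshot();
    }

Because only the type name changes, every driver helper keeps the same
calling convention while becoming shareable across hypervisor drivers.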
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 2/5] qemu: move macros QEMU_X into hypervisor as VIR_X
Date: Thu, 24 Mar 2022 16:32:43 +0100

It makes sense to have these macros in the same file as the definitions
of the enums they operate on.

Signed-off-by: Kristina Hanicova
Reviewed-by: Michal Privoznik
---
 src/hypervisor/domain_job.h | 12 ++++++++++++
 src/qemu/qemu_backup.c      |  2 +-
 src/qemu/qemu_domainjob.c   |  4 ++--
 src/qemu/qemu_domainjob.h   | 11 -----------
 src/qemu/qemu_migration.c   |  2 +-
 src/qemu/qemu_process.c     |  4 ++--
 src/qemu/qemu_snapshot.c    |  4 ++--
 7 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index b9d1107580..4f165f730d 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -8,6 +8,18 @@
 #include "internal.h"
 #include "virenum.h"

+#define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
+#define VIR_JOB_DEFAULT_MASK \
+    (JOB_MASK(VIR_JOB_QUERY) | \
+     JOB_MASK(VIR_JOB_DESTROY) | \
+     JOB_MASK(VIR_JOB_ABORT))
+
+/* Jobs which have to be tracked in domain state XML. */
+#define VIR_DOMAIN_TRACK_JOBS \
+    (JOB_MASK(VIR_JOB_DESTROY) | \
+     JOB_MASK(VIR_JOB_ASYNC))
+
+
 /* Only 1 job is allowed at any time
  * A job includes *all* monitor commands, even those just querying
  * information, not merely actions */
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 5d24155628..427c090dd8 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -795,7 +795,7 @@ qemuBackupBegin(virDomainObj *vm,
                                    VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0)
         return -1;

-    qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
+    qemuDomainObjSetAsyncJobMask(vm, (VIR_JOB_DEFAULT_MASK |
                                       JOB_MASK(VIR_JOB_SUSPEND) |
                                       JOB_MASK(VIR_JOB_MODIFY)));
     qemuDomainJobSetStatsType(priv->job.current,
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 71876fe6a3..ab542865ca 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -204,7 +204,7 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObj *job)
     g_clear_pointer(&job->asyncOwnerAPI, g_free);
     job->asyncStarted = 0;
     job->phase = 0;
-    job->mask = QEMU_JOB_DEFAULT_MASK;
+    job->mask = VIR_JOB_DEFAULT_MASK;
     job->abortJob = false;
     VIR_FREE(job->error);
     g_clear_pointer(&job->current, virDomainJobDataFree);
@@ -256,7 +256,7 @@ qemuDomainObjClearJob(qemuDomainJobObj *job)
 bool
 qemuDomainTrackJob(virDomainJob job)
 {
-    return (QEMU_DOMAIN_TRACK_JOBS & JOB_MASK(job)) != 0;
+    return (VIR_DOMAIN_TRACK_JOBS & JOB_MASK(job)) != 0;
 }


diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 6520b42c80..faf47105a3 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -22,17 +22,6 @@
 #include "qemu_monitor.h"
 #include "domain_job.h"

-#define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
-#define QEMU_JOB_DEFAULT_MASK \
-    (JOB_MASK(VIR_JOB_QUERY) | \
-     JOB_MASK(VIR_JOB_DESTROY) | \
-     JOB_MASK(VIR_JOB_ABORT))
-
-/* Jobs which have to be tracked in domain state XML. */
-#define QEMU_DOMAIN_TRACK_JOBS \
-    (JOB_MASK(VIR_JOB_DESTROY) | \
-     JOB_MASK(VIR_JOB_ASYNC))
-

 typedef enum {
     QEMU_DOMAIN_JOB_STATS_TYPE_NONE = 0,
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 43ffe2357a..b591514533 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -6105,7 +6105,7 @@ qemuMigrationJobStart(virQEMUDriver *driver,
         mask = VIR_JOB_NONE;
     } else {
         op = VIR_DOMAIN_JOB_OPERATION_MIGRATION_OUT;
-        mask = QEMU_JOB_DEFAULT_MASK |
+        mask = VIR_JOB_DEFAULT_MASK |
                JOB_MASK(VIR_JOB_SUSPEND) |
                JOB_MASK(VIR_JOB_MIGRATION_OP);
     }
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 189e4671d1..bf514bc963 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3657,7 +3657,7 @@ qemuProcessRecoverJob(virQEMUDriver *driver,
     priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
     priv->job.asyncStarted = now;

-    qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
+    qemuDomainObjSetAsyncJobMask(vm, (VIR_JOB_DEFAULT_MASK |
                                       JOB_MASK(VIR_JOB_SUSPEND) |
                                       JOB_MASK(VIR_JOB_MODIFY)));

@@ -3682,7 +3682,7 @@ qemuProcessRecoverJob(virQEMUDriver *driver,
         return -1;

     /* In case any special handling is added for job type that has been ignored
-     * before, QEMU_DOMAIN_TRACK_JOBS (from qemu_domain.h) needs to be updated
+     * before, VIR_DOMAIN_TRACK_JOBS (from qemu_domain.h) needs to be updated
      * for the job to be properly tracked in domain state XML.
      */
     switch (job->active) {
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 185fcb04a2..4a05cddc54 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1433,7 +1433,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
                                   QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP);

         /* allow the migration job to be cancelled or the domain to be paused */
-        qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
+        qemuDomainObjSetAsyncJobMask(vm, (VIR_JOB_DEFAULT_MASK |
                                           JOB_MASK(VIR_JOB_SUSPEND) |
                                           JOB_MASK(VIR_JOB_MIGRATION_OP)));

@@ -1466,7 +1466,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
         memory_unlink = true;

         /* forbid any further manipulation */
-        qemuDomainObjSetAsyncJobMask(vm, QEMU_JOB_DEFAULT_MASK);
+        qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_DEFAULT_MASK);
     }

     /* the domain is now paused if a memory snapshot was requested */
-- 
2.35.1
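The JOB_MASK() macro this patch relocates turns a job enum value into a
single bit (job 0, i.e. VIR_JOB_NONE, maps to no bit), so a set of allowed
jobs fits into one integer and a membership test is a bitwise AND. A
standalone sketch of the arithmetic, using a hypothetical cut-down enum
rather than the real virDomainJob values:

    #include <stdio.h>

    /* Hypothetical subset of the job enum; what matters is that "none"
     * is 0 and real jobs start at 1, exactly as JOB_MASK() expects. */
    typedef enum {
        DEMO_JOB_NONE = 0,
        DEMO_JOB_QUERY,     /* 1 -> bit 0 */
        DEMO_JOB_DESTROY,   /* 2 -> bit 1 */
        DEMO_JOB_ABORT,     /* 3 -> bit 2 */
        DEMO_JOB_MODIFY,    /* 4 -> bit 3 */
    } DemoJob;

    #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))

    #define DEMO_DEFAULT_MASK \
        (JOB_MASK(DEMO_JOB_QUERY) | \
         JOB_MASK(DEMO_JOB_DESTROY) | \
         JOB_MASK(DEMO_JOB_ABORT))

    static int
    demoJobAllowed(unsigned int mask, DemoJob job)
    {
        return (mask & JOB_MASK(job)) != 0;
    }

    int main(void)
    {
        printf("default mask = 0x%x\n", DEMO_DEFAULT_MASK);  /* 0x7 */
        /* query is in the default mask, modify is not */
        printf("query allowed:  %d\n",
               demoJobAllowed(DEMO_DEFAULT_MASK, DEMO_JOB_QUERY));
        printf("modify allowed: %d\n",
               demoJobAllowed(DEMO_DEFAULT_MASK, DEMO_JOB_MODIFY));
        return 0;
    }

This is why an async job can simply widen its mask (as qemuBackupBegin()
does above) to let selected regular jobs run concurrently with it.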
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 3/5] lxc: use virDomainJob enum instead of virLXCDomainJob
Date: Thu, 24 Mar 2022 16:32:44 +0100

Signed-off-by: Kristina Hanicova
Reviewed-by: Michal Privoznik
---
 src/hypervisor/domain_job.h |  4 ++--
 src/lxc/lxc_domain.c        | 25 ++++++++-----------
 src/lxc/lxc_domain.h        | 19 +++-------------
 src/lxc/lxc_driver.c        | 46 ++++++++++++++++++-------------------
 4 files changed, 37 insertions(+), 57 deletions(-)

diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 4f165f730d..db8b8b1390 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -21,8 +21,8 @@


 /* Only 1 job is allowed at any time
- * A job includes *all* monitor commands, even those just querying
- * information, not merely actions */
+ * A job includes *all* monitor commands / hypervisor.so api,
+ * even those just querying information, not merely actions */
 typedef enum {
     VIR_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
     VIR_JOB_QUERY,         /* Doesn't change any state */
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index 85795d1805..fae56b35fb 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -31,17 +31,10 @@
 #include "virsystemd.h"
 #include "virinitctl.h"
 #include "domain_driver.h"
+#include "domain_job.h"

 #define VIR_FROM_THIS VIR_FROM_LXC

-VIR_ENUM_IMPL(virLXCDomainJob,
-              LXC_JOB_LAST,
-              "none",
-              "query",
-              "destroy",
-              "modify",
-);
-
 VIR_LOG_INIT("lxc.lxc_domain");

 static int
@@ -60,7 +53,7 @@ virLXCDomainObjResetJob(virLXCDomainObjPrivate *priv)
 {
     struct virLXCDomainJobObj *job = &priv->job;

-    job->active = LXC_JOB_NONE;
+    job->active = VIR_JOB_NONE;
     job->owner = 0;
 }

@@ -85,7 +78,7 @@ virLXCDomainObjFreeJob(virLXCDomainObjPrivate *priv)
 int
 virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,
                         virDomainObj *obj,
-                        enum virLXCDomainJob job)
+                        virDomainJob job)
 {
     virLXCDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
@@ -97,14 +90,14 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,

     while (priv->job.active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
-                  virLXCDomainJobTypeToString(job));
+                  virDomainJobTypeToString(job));
         if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
             goto error;
     }

     virLXCDomainObjResetJob(priv);

-    VIR_DEBUG("Starting job: %s", virLXCDomainJobTypeToString(job));
+    VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
     priv->job.active = job;
     priv->job.owner = virThreadSelfID();

@@ -113,9 +106,9 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,
 error:
     VIR_WARN("Cannot start job (%s) for domain %s;"
              " current job is (%s) owned by (%d)",
-             virLXCDomainJobTypeToString(job),
+             virDomainJobTypeToString(job),
              obj->def->name,
-             virLXCDomainJobTypeToString(priv->job.active),
+             virDomainJobTypeToString(priv->job.active),
             priv->job.owner);

     if (errno == ETIMEDOUT)
@@ -139,10 +132,10 @@ virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED,
                       virDomainObj *obj)
 {
     virLXCDomainObjPrivate *priv = obj->privateData;
-    enum virLXCDomainJob job = priv->job.active;
+    virDomainJob job = priv->job.active;

     VIR_DEBUG("Stopping job: %s",
-              virLXCDomainJobTypeToString(job));
+              virDomainJobTypeToString(job));

     virLXCDomainObjResetJob(priv);
     virCondSignal(&priv->job.cond);
diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h
index 766837bdf1..1c4cb8c14a 100644
--- a/src/lxc/lxc_domain.h
+++ b/src/lxc/lxc_domain.h
@@ -25,6 +25,7 @@
 #include "lxc_conf.h"
 #include "lxc_monitor.h"
 #include "virenum.h"
+#include "domain_job.h"


 typedef enum {
@@ -53,23 +54,9 @@ struct _lxcDomainDef {
 };


-/* Only 1 job is allowed at any time
- * A job includes *all* lxc.so api, even those just querying
- * information, not merely actions */
-
-enum virLXCDomainJob {
-    LXC_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
-    LXC_JOB_QUERY,         /* Doesn't change any state */
-    LXC_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
-    LXC_JOB_MODIFY,        /* May change state */
-    LXC_JOB_LAST
-};
-VIR_ENUM_DECL(virLXCDomainJob);
-
-
 struct virLXCDomainJobObj {
     virCond cond;                       /* Use to coordinate jobs */
-    enum virLXCDomainJob active;        /* Currently running job */
+    virDomainJob active;                /* Currently running job */
     int owner;                          /* Thread which set current job */
 };

@@ -96,7 +83,7 @@ extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig;
 int
 virLXCDomainObjBeginJob(virLXCDriver *driver,
                         virDomainObj *obj,
-                        enum virLXCDomainJob job)
+                        virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;

 void
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index ae6e328adb..e3c7d15a25 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -652,7 +652,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -766,7 +766,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom,
     if (virDomainSetMemoryParametersEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     /* QEMU and LXC implementation are identical */
@@ -983,7 +983,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom,
         goto cleanup;
     }

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjIsActive(vm)) {
@@ -1104,7 +1104,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn,
                                    NULL)))
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0) {
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) {
         if (!vm->persistent)
             virDomainObjListRemove(driver->domains, vm);
         goto cleanup;
@@ -1351,7 +1351,7 @@ lxcDomainDestroyFlags(virDomainPtr dom,
     if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_DESTROY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_DESTROY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1814,7 +1814,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom,
     if (!(caps = virLXCDriverGetCapabilities(driver, false)))
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -2033,7 +2033,7 @@ lxcDomainBlockStats(virDomainPtr dom,
     if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2116,7 +2116,7 @@ lxcDomainBlockStatsFlags(virDomainPtr dom,
     if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2252,7 +2252,7 @@ lxcDomainSetBlkioParameters(virDomainPtr dom,
     if (virDomainSetBlkioParametersEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -2394,7 +2394,7 @@ lxcDomainInterfaceStats(virDomainPtr dom,
     if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2454,7 +2454,7 @@ static int lxcDomainSetAutostart(virDomainPtr dom,
     if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!vm->persistent) {
@@ -2605,7 +2605,7 @@ static int lxcDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2655,7 +2655,7 @@ static int lxcDomainResume(virDomainPtr dom)
     if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2784,7 +2784,7 @@ lxcDomainSendProcessSignal(virDomainPtr dom,
     if (virDomainSendProcessSignalEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2869,7 +2869,7 @@ lxcDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2945,7 +2945,7 @@ lxcDomainReboot(virDomainPtr dom,
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4270,7 +4270,7 @@ static int lxcDomainAttachDeviceFlags(virDomainPtr dom,
     if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4375,7 +4375,7 @@ static int lxcDomainUpdateDeviceFlags(virDomainPtr dom,
     if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4438,7 +4438,7 @@ static int lxcDomainDetachDeviceFlags(virDomainPtr dom,
     if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4538,7 +4538,7 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
     if (virDomainLxcOpenNamespaceEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4621,7 +4621,7 @@ lxcDomainMemoryStats(virDomainPtr dom,
     if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4791,7 +4791,7 @@ lxcDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_MODIFY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
@@ -4897,7 +4897,7 @@ lxcDomainGetHostname(virDomainPtr dom,
     if (virDomainGetHostnameEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, LXC_JOB_QUERY) < 0)
+    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
-- 
2.35.1
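The LXC conversion above leaves the job discipline itself untouched: a
mutex-protected "active" field plus a condition variable, with a timed
wait before giving up ("Cannot start job ... owned by ..."). A rough
standalone model of that discipline in plain pthreads (hypothetical demo*
names; the real code uses virCond/virMutex and virCondWaitUntil):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    typedef enum {
        DEMO_JOB_NONE = 0,   /* always 0 so "while (job.active)" works */
        DEMO_JOB_QUERY,
        DEMO_JOB_MODIFY,
    } DemoJob;

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        DemoJob active;          /* currently running job */
        unsigned long owner;     /* thread which set the current job */
    } DemoJobObj;

    /* Wait up to 30s for the current job to finish, then take the slot. */
    static int
    demoBeginJob(DemoJobObj *obj, DemoJob job)
    {
        struct timespec then;
        clock_gettime(CLOCK_REALTIME, &then);
        then.tv_sec += 30;

        pthread_mutex_lock(&obj->lock);
        while (obj->active) {
            if (pthread_cond_timedwait(&obj->cond, &obj->lock,
                                       &then) == ETIMEDOUT) {
                pthread_mutex_unlock(&obj->lock);
                return -1;  /* caller reports the lock acquisition error */
            }
        }
        obj->active = job;
        obj->owner = (unsigned long)pthread_self();
        pthread_mutex_unlock(&obj->lock);
        return 0;
    }

    /* Release the slot and wake one waiter. */
    static void
    demoEndJob(DemoJobObj *obj)
    {
        pthread_mutex_lock(&obj->lock);
        obj->active = DEMO_JOB_NONE;
        obj->owner = 0;
        pthread_cond_signal(&obj->cond);
        pthread_mutex_unlock(&obj->lock);
    }

    int main(void)
    {
        DemoJobObj obj = { PTHREAD_MUTEX_INITIALIZER,
                           PTHREAD_COND_INITIALIZER, DEMO_JOB_NONE, 0 };
        if (demoBeginJob(&obj, DEMO_JOB_MODIFY) == 0) {
            puts("job acquired");
            demoEndJob(&obj);
        }
        return 0;
    }

Since every driver implements this same begin/end pair, only the enum
naming differed; that is what this series unifies.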
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 4/5] libxl: use virDomainJob enum instead of libxlDomainJob
Date: Thu, 24 Mar 2022 16:32:45 +0100
Message-Id: <6a4e114a365d2bd743d4e1b2f976c6a57115360b.1648135845.git.khanicov@redhat.com>

Signed-off-by: Kristina Hanicova
Reviewed-by: Michal Privoznik
---
 src/libxl/libxl_domain.c    | 28 ++++++++--------------
 src/libxl/libxl_domain.h    | 17 ++-----------
 src/libxl/libxl_driver.c    | 48 ++++++++++++++++++-------------------
 src/libxl/libxl_migration.c |  6 ++---
 4 files changed, 39 insertions(+), 60 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 2501f6b848..dbe44f4ffc 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -44,14 +44,6 @@

 VIR_LOG_INIT("libxl.libxl_domain");

-VIR_ENUM_IMPL(libxlDomainJob,
-              LIBXL_JOB_LAST,
-              "none",
-              "query",
-              "destroy",
-              "modify",
-);
-

 static int
 libxlDomainObjInitJob(libxlDomainObjPrivate *priv)
@@ -71,7 +63,7 @@ libxlDomainObjResetJob(libxlDomainObjPrivate *priv)
 {
     struct libxlDomainJobObj *job = &priv->job;

-    job->active = LIBXL_JOB_NONE;
+    job->active = VIR_JOB_NONE;
     job->owner = 0;
 }

@@ -97,7 +89,7 @@ libxlDomainObjFreeJob(libxlDomainObjPrivate *priv)
 int
 libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
                        virDomainObj *obj,
-                       enum libxlDomainJob job)
+                       virDomainJob job)
 {
     libxlDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
@@ -109,14 +101,14 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,

     while (priv->job.active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
-                  libxlDomainJobTypeToString(job));
+                  virDomainJobTypeToString(job));
         if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
             goto error;
     }

     libxlDomainObjResetJob(priv);

-    VIR_DEBUG("Starting job: %s", libxlDomainJobTypeToString(job));
+    VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
     priv->job.active = job;
     priv->job.owner = virThreadSelfID();
     priv->job.current->started = now;
@@ -127,9 +119,9 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
 error:
     VIR_WARN("Cannot start job (%s) for domain %s;"
              " current job is (%s) owned by (%d)",
-             libxlDomainJobTypeToString(job),
+             virDomainJobTypeToString(job),
              obj->def->name,
-             libxlDomainJobTypeToString(priv->job.active),
+             virDomainJobTypeToString(priv->job.active),
             priv->job.owner);

     if (errno == ETIMEDOUT)
@@ -157,10 +149,10 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
                      virDomainObj *obj)
 {
     libxlDomainObjPrivate *priv = obj->privateData;
-    enum libxlDomainJob job = priv->job.active;
+    virDomainJob job = priv->job.active;

     VIR_DEBUG("Stopping job: %s",
-              libxlDomainJobTypeToString(job));
+              virDomainJobTypeToString(job));

     libxlDomainObjResetJob(priv);
     virCondSignal(&priv->job.cond);
@@ -510,7 +502,7 @@ libxlDomainShutdownThread(void *opaque)
         goto cleanup;
     }

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (xl_reason == LIBXL_SHUTDOWN_REASON_POWEROFF) {
@@ -639,7 +631,7 @@ libxlDomainDeathThread(void *opaque)
         goto cleanup;
     }

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_DESTROYED);
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 157f480b93..aa15e0352f 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -28,23 +28,10 @@
 #include "virenum.h"
 #include "domain_job.h"

-/* Only 1 job is allowed at any time
- * A job includes *all* libxl.so api, even those just querying
- * information, not merely actions */
-enum libxlDomainJob {
-    LIBXL_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
-    LIBXL_JOB_QUERY,         /* Doesn't change any state */
-    LIBXL_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
-    LIBXL_JOB_MODIFY,        /* May change state */
-
-    LIBXL_JOB_LAST
-};
-VIR_ENUM_DECL(libxlDomainJob);
-

 struct libxlDomainJobObj {
     virCond cond;                       /* Use to coordinate jobs */
-    enum libxlDomainJob active;         /* Currently running job */
+    virDomainJob active;                /* Currently running job */
     int owner;                          /* Thread which set current job */
     virDomainJobData *current;          /* Statistics for the current job */
 };
@@ -76,7 +63,7 @@ libxlDomainObjPrivateInitCtx(virDomainObj *vm);
 int
 libxlDomainObjBeginJob(libxlDriverPrivate *driver,
                        virDomainObj *obj,
-                       enum libxlDomainJob job)
+                       virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;

 void
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 478ab3e941..01f281d0a5 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -329,7 +329,7 @@ libxlAutostartDomain(virDomainObj *vm,
     virObjectLock(vm);
     virResetLastError();

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (vm->autostart && !virDomainObjIsActive(vm) &&
@@ -1056,7 +1056,7 @@ libxlDomainCreateXML(virConnectPtr conn, const char *xml,
                                    NULL)))
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0) {
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) {
         if (!vm->persistent)
             virDomainObjListRemove(driver->domains, vm);
         goto cleanup;
@@ -1166,7 +1166,7 @@ libxlDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1219,7 +1219,7 @@ libxlDomainResume(virDomainPtr dom)
     if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1380,7 +1380,7 @@ libxlDomainDestroyFlags(virDomainPtr dom,
     if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1453,7 +1453,7 @@ libxlDomainPMSuspendForDuration(virDomainPtr dom,
     if (virDomainPMSuspendForDurationEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1506,7 +1506,7 @@ libxlDomainPMWakeup(virDomainPtr dom, unsigned int flags)
     if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PMSUSPENDED) {
@@ -1640,7 +1640,7 @@ libxlDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm, &flags,
@@ -1909,7 +1909,7 @@ libxlDomainSaveFlags(virDomainPtr dom, const char *to, const char *dxml,
     if (virDomainSaveFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -1974,7 +1974,7 @@ libxlDomainRestoreFlags(virConnectPtr conn, const char *from,
                                    NULL)))
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0) {
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) {
         if (!vm->persistent)
             virDomainObjListRemove(driver->domains, vm);
         goto cleanup;
@@ -2021,7 +2021,7 @@ libxlDomainCoreDump(virDomainPtr dom, const char *to, unsigned int flags)
     if (virDomainCoreDumpEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2110,7 +2110,7 @@ libxlDomainManagedSave(virDomainPtr dom, unsigned int flags)
     if (virDomainManagedSaveEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2255,7 +2255,7 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     if (virDomainSetVcpusFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_VCPU_LIVE)) {
@@ -2453,7 +2453,7 @@ libxlDomainPinVcpuFlags(virDomainPtr dom, unsigned int vcpu,
     if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm,
@@ -2784,7 +2784,7 @@ libxlDomainCreateWithFlags(virDomainPtr dom,
     if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjIsActive(vm)) {
@@ -4102,7 +4102,7 @@ libxlDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
     if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4196,7 +4196,7 @@ libxlDomainDetachDeviceFlags(virDomainPtr dom, const char *xml,
     if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4484,7 +4484,7 @@ libxlDomainSetAutostart(virDomainPtr dom, int autostart)
     if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!vm->persistent) {
@@ -4690,7 +4690,7 @@ libxlDomainSetSchedulerParametersFlags(virDomainPtr dom,
     if (virDomainSetSchedulerParametersFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -5007,7 +5007,7 @@ libxlDomainInterfaceStats(virDomainPtr dom,
     if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_QUERY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -5175,7 +5175,7 @@ libxlDomainMemoryStats(virDomainPtr dom,
     if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_QUERY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -5533,7 +5533,7 @@ libxlDomainBlockStats(virDomainPtr dom,
     if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_QUERY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -5583,7 +5583,7 @@ libxlDomainBlockStatsFlags(virDomainPtr dom,
     if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_QUERY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -6375,7 +6375,7 @@ libxlDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 6944c77eed..5bb8747890 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -386,7 +386,7 @@ libxlDomainMigrationSrcBegin(virConnectPtr conn,
      * terminated in the confirm phase. Errors in the begin or perform
      * phase will also terminate the job.
      */
-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!(mig = libxlMigrationCookieNew(vm)))
@@ -556,7 +556,7 @@ libxlDomainMigrationDstPrepareTunnel3(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto error;

     priv = vm->privateData;
@@ -665,7 +665,7 @@ libxlDomainMigrationDstPrepare(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
         goto error;

     priv = vm->privateData;
-- 
2.35.1
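Each of these per-driver conversions also drops a VIR_ENUM_IMPL() name
table (the libxl one is removed above; the ch conversion that follows does
the same), relying on the shared virDomainJobTypeToString() instead. What
such a table provides is a single value-to-name mapping. A minimal
standalone model of that idea (hypothetical demo* names, not the real
virenum machinery), with the name order matching the removed
none/query/destroy/modify tables:

    #include <stdio.h>

    typedef enum {
        DEMO_JOB_NONE = 0,
        DEMO_JOB_QUERY,
        DEMO_JOB_DESTROY,
        DEMO_JOB_MODIFY,
        DEMO_JOB_LAST
    } DemoJob;

    /* One table shared by every caller; the index is the enum value. */
    static const char *demoJobNames[DEMO_JOB_LAST] = {
        "none", "query", "destroy", "modify",
    };

    static const char *
    demoJobTypeToString(int job)
    {
        if (job < 0 || job >= DEMO_JOB_LAST)
            return "unknown";
        return demoJobNames[job];
    }

    int main(void)
    {
        printf("job: %s\n", demoJobTypeToString(DEMO_JOB_MODIFY));
        return 0;
    }

Keeping one shared table means a job name printed by any driver's debug
log refers to the same enum value, which the per-driver tables could not
guarantee once the enums diverged.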
From nobody Wed May 15 21:36:39 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 5/5] ch: use virDomainJob enum instead of virCHDomainJob
Date: Thu, 24 Mar 2022 16:32:46 +0100
Message-Id: <1432fe4d0db09b82fc5cf131ec57f11dcf1b1c1b.1648135845.git.khanicov@redhat.com>

Signed-off-by: Kristina Hanicova
Reviewed-by: Michal Privoznik
---
 src/ch/ch_domain.c | 24 ++++++++----------------
 src/ch/ch_domain.h | 18 +++---------------
 src/ch/ch_driver.c | 20 ++++++++++----------
 3 files changed, 21 insertions(+), 41 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index 25f581c1c3..bb489a74e3 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -31,14 +31,6 @@
 
 #define VIR_FROM_THIS VIR_FROM_CH
 
-VIR_ENUM_IMPL(virCHDomainJob,
-              CH_JOB_LAST,
-              "none",
-              "query",
-              "destroy",
-              "modify",
-);
-
 VIR_LOG_INIT("ch.ch_domain");
 
 static int
@@ -57,7 +49,7 @@ virCHDomainObjResetJob(virCHDomainObjPrivate *priv)
 {
     struct virCHDomainJobObj *job = &priv->job;
 
-    job->active = CH_JOB_NONE;
+    job->active = VIR_JOB_NONE;
     job->owner = 0;
 }
 
@@ -77,7 +69,7 @@ virCHDomainObjFreeJob(virCHDomainObjPrivate *priv)
  * Successful calls must be followed by EndJob eventually.
  */
 int
-virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
+virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
 {
     virCHDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
@@ -89,13 +81,13 @@ virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
 
     while (priv->job.active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
-                  virCHDomainJobTypeToString(job));
+                  virDomainJobTypeToString(job));
         if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0) {
             VIR_WARN("Cannot start job (%s) for domain %s;"
                      " current job is (%s) owned by (%d)",
-                     virCHDomainJobTypeToString(job),
+                     virDomainJobTypeToString(job),
                      obj->def->name,
-                     virCHDomainJobTypeToString(priv->job.active),
+                     virDomainJobTypeToString(priv->job.active),
                      priv->job.owner);
 
             if (errno == ETIMEDOUT)
@@ -110,7 +102,7 @@ virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
 
     virCHDomainObjResetJob(priv);
 
-    VIR_DEBUG("Starting job: %s", virCHDomainJobTypeToString(job));
+    VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
     priv->job.active = job;
     priv->job.owner = virThreadSelfID();
 
@@ -127,10 +119,10 @@ void
 virCHDomainObjEndJob(virDomainObj *obj)
 {
     virCHDomainObjPrivate *priv = obj->privateData;
-    enum virCHDomainJob job = priv->job.active;
+    virDomainJob job = priv->job.active;
 
     VIR_DEBUG("Stopping job: %s",
-              virCHDomainJobTypeToString(job));
+              virDomainJobTypeToString(job));
 
     virCHDomainObjResetJob(priv);
     virCondSignal(&priv->job.cond);
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index 11a20a874a..f75a08ec87 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -24,27 +24,15 @@
 #include "ch_monitor.h"
 #include "virchrdev.h"
 #include "vircgroup.h"
+#include "domain_job.h"
 
 /* Give up waiting for mutex after 30 seconds */
 #define CH_JOB_WAIT_TIME (1000ull * 30)
 
-/* Only 1 job is allowed at any time
- * A job includes *all* ch.so api, even those just querying
- * information, not merely actions */
-
-enum virCHDomainJob {
-    CH_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
-    CH_JOB_QUERY,         /* Doesn't change any state */
-    CH_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
-    CH_JOB_MODIFY,        /* May change state */
-    CH_JOB_LAST
-};
-VIR_ENUM_DECL(virCHDomainJob);
-
 
 struct virCHDomainJobObj {
     virCond cond;                /* Use to coordinate jobs */
-    enum virCHDomainJob active;  /* Currently running job */
+    virDomainJob active;         /* Currently running job */
     int owner;                   /* Thread which set current job */
 };
 
@@ -82,7 +70,7 @@ extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;
 
 int
-virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
+virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
 
 void
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index ac9298c0b5..2fe7aba9d0 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -224,7 +224,7 @@ chDomainCreateXML(virConnectPtr conn,
                                        NULL)))
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED) < 0)
@@ -258,7 +258,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
@@ -397,7 +397,7 @@ chDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -453,7 +453,7 @@ chDomainReboot(virDomainPtr dom, unsigned int flags)
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -502,7 +502,7 @@ chDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -547,7 +547,7 @@ chDomainResume(virDomainPtr dom)
     if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -601,7 +601,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_DESTROY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -1221,7 +1221,7 @@ chDomainPinVcpuFlags(virDomainPtr dom,
     if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1358,7 +1358,7 @@ chDomainPinEmulator(virDomainPtr dom,
     if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1629,7 +1629,7 @@ chDomainSetNumaParameters(virDomainPtr dom,
         }
     }
 
-    if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
-- 
2.35.1
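With this final patch, the converted ch entry points all reduce to one pattern expressed through the shared enum. A condensed sketch of that pattern, assuming a hypothetical chDomainExampleOp; the Begin/End helpers are the ones modified in the diff above:

static int
chDomainExampleOp(virDomainObj *vm)
{
    int ret = -1;

    /* VIR_JOB_MODIFY comes from the generalized domain_job.h enum and
     * marks a job that may change domain state; only one job may hold
     * the domain at a time. */
    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
        return -1;

    if (virDomainObjCheckActive(vm) < 0)
        goto endjob;

    /* ... work that may modify the domain ... */
    ret = 0;

 endjob:
    /* Resets priv->job and signals the job condition variable so a
     * waiting thread can proceed. */
    virCHDomainObjEndJob(vm);
    return ret;
}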